In the ever-evolving world of Deep Neural Networks (DNNs), the demand for computing power and storage has grown exponentially. Chiplet technology has emerged as a compelling way to meet these demands, offering the potential to boost performance, reduce power consumption, and enhance design flexibility. However, it brings challenges of its own, including elevated packaging costs and expensive die-to-die (D2D) interfaces. Addressing these challenges head-on, a collaborative research team from Tsinghua University, Xi’an Jiaotong University, IIISCT, and Shanghai AI Laboratory has introduced Gemini, a framework that aims to revolutionize the design of large-scale DNN chiplet accelerators.
Gemini shines with impressive results
In their recent paper titled “Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators,” the research team presents Gemini as a comprehensive solution. This innovative framework focuses on co-exploration of both architecture and mapping to push the boundaries of large-scale DNN chiplet accelerators. The results are nothing short of remarkable, with Gemini achieving an average performance improvement of 1.98× and a significant energy efficiency boost of 1.41× when compared to the state-of-the-art Simba architecture.
Key challenges in chiplet technology
Gemini’s development comes in response to two primary challenges of chiplet technology. On the architectural front, the key challenge lies in determining the optimal chiplet granularity. This requires striking a delicate balance between using numerous smaller chiplets to improve yield and opting for fewer, larger chiplets to control costs. In DNN mapping, challenges arise from the expansive scale enabled by chiplet technology and the associated costly D2D links.
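To see why granularity matters, consider a simple, illustrative yield calculation. The sketch below is not from the Gemini paper; it uses a standard Poisson defect-density yield model with assumed parameter values to show how splitting a large accelerator into smaller chiplets raises per-die yield, while the packaging and D2D costs that pull in the opposite direction are deliberately left out of the model.

```python
# Illustrative sketch (not from the Gemini paper): why smaller chiplets
# improve yield, using a simple Poisson defect-density yield model.
# All parameter values below are assumed for illustration only.
import math

def die_yield(area_mm2: float, defect_density_per_mm2: float = 0.001) -> float:
    """Poisson yield model: probability that a die of the given area has no defects."""
    return math.exp(-defect_density_per_mm2 * area_mm2)

total_logic_area = 800.0  # mm^2 of total accelerator logic (assumed value)

for num_chiplets in (1, 4, 16):
    area_per_chiplet = total_logic_area / num_chiplets
    y = die_yield(area_per_chiplet)
    # More chiplets -> smaller dies -> higher per-die yield,
    # but more D2D interfaces and higher packaging cost (not modeled here).
    print(f"{num_chiplets:2d} chiplet(s) of {area_per_chiplet:6.1f} mm^2 -> yield {y:.2%}")
```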
Gemini’s innovative solutions
To address these challenges, the research team introduces a layer-centric encoding method for representing layer-pipeline spatial mapping (LP SPM) schemes in many-core chiplet DNN inference accelerators. This encoding delineates the optimization space for LP mapping, revealing significant room for improvement. Building on the encoding and a highly configurable hardware template, Gemini forms a mapping and architecture co-exploration framework for large-scale DNN chiplet accelerators, comprising two components: the Mapping Engine and the Monetary Cost Evaluator.
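The article does not spell out the encoding itself, so the following is only a hypothetical sketch of what a layer-centric description of an LP SPM scheme might look like. The class and field names (LayerMapping, core_group, pipeline_stage, and so on) are illustrative assumptions, not the paper's actual encoding.

```python
# Hypothetical sketch of a layer-centric LP SPM encoding.
# Field names and structure are assumptions for illustration only;
# the paper's actual encoding may differ.
from dataclasses import dataclass, field

@dataclass
class LayerMapping:
    layer_id: int
    core_group: list[int]               # cores (possibly spanning chiplets) assigned to this layer
    pipeline_stage: int                  # position of the layer in the layer pipeline
    spatial_partition: tuple[int, int]   # e.g. how the layer's work is tiled across its cores

@dataclass
class LPSpatialMapping:
    """One candidate LP SPM scheme: a per-layer description of where and how each layer runs."""
    layers: list[LayerMapping] = field(default_factory=list)

    def d2d_traffic_layers(self, chiplet_of_core: dict[int, int]) -> list[int]:
        # Layers whose assigned cores span more than one chiplet
        # will incur costly die-to-die (D2D) communication.
        return [
            m.layer_id for m in self.layers
            if len({chiplet_of_core[c] for c in m.core_group}) > 1
        ]
```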
The Mapping Engine uses a Simulated Annealing (SA) algorithm with five specially designed operators to navigate the extensive space defined by the encoding method while automatically minimizing costly D2D communication. In parallel, the Monetary Cost Evaluator estimates the monetary cost of accelerators with varying architectural parameters.
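As a rough illustration of how such a search could proceed, here is a generic simulated-annealing skeleton in Python. This is not the paper's Mapping Engine: the operators, cost function, and cooling schedule are placeholders that a real implementation would replace with the five mapping operators and an evaluation model that accounts for D2D traffic.

```python
# Generic simulated-annealing skeleton, in the spirit of the Mapping Engine.
# The operators and cost function are placeholders, not the paper's.
import math
import random

def simulated_annealing(initial_mapping, operators, cost,
                        t_start=1.0, t_end=1e-3, alpha=0.95, iters_per_temp=100):
    """Minimize cost(mapping) by repeatedly applying random mutation operators."""
    current, current_cost = initial_mapping, cost(initial_mapping)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        for _ in range(iters_per_temp):
            op = random.choice(operators)       # e.g. one of the five mapping operators
            candidate = op(current)
            c = cost(candidate)                 # e.g. latency/energy including D2D traffic
            # Accept improvements always; accept worse moves with a temperature-dependent probability.
            if c < current_cost or random.random() < math.exp((current_cost - c) / t):
                current, current_cost = candidate, c
                if c < best_cost:
                    best, best_cost = candidate, c
        t *= alpha                              # geometric cooling schedule
    return best, best_cost
```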
In an empirical study comparing Gemini’s co-optimized architecture and mapping against the Simba architecture with Tangram SPM, Gemini achieves an average performance improvement of 1.98× and a 1.41× gain in energy efficiency across various DNNs and batch sizes, at the price of only a modest 14.3% increase in monetary cost.
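A quick back-of-the-envelope calculation, using only the figures reported above, puts those gains in perspective relative to the extra cost:

```python
# Back-of-the-envelope check using the reported figures:
# 1.98x performance and 1.41x energy efficiency vs. a 14.3% monetary-cost increase.
perf_gain = 1.98
energy_eff_gain = 1.41
cost_increase = 1.143  # +14.3%

print(f"Performance per dollar: {perf_gain / cost_increase:.2f}x")               # ~1.73x
print(f"Energy efficiency per dollar: {energy_eff_gain / cost_increase:.2f}x")   # ~1.23x
```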
Pioneering advancements
The significance of Gemini’s work lies in its pioneering approach to systematically defining the optimization space of LP SPM for DNN inference accelerators. Gemini stands out as the first framework to jointly explore the mapping and architecture optimization space for large-scale DNN chiplet accelerators, considering critical factors such as energy consumption, performance, and monetary cost.
A promising future for DNN inference accelerators
The research team concludes by emphasizing Gemini’s potential to facilitate designs that reuse a single chiplet across multiple DNN inference accelerators, opening new avenues for efficiency and innovation in this rapidly evolving field.
Gemini, the brainchild of a collaborative research effort, emerges as a game-changer in large-scale DNN chiplet accelerators. With impressive results, innovative solutions, and a pioneering spirit, Gemini is poised to reshape the landscape of deep neural network acceleration. As chiplet technology continues to evolve, Gemini’s contributions to enhanced performance, reduced power consumption, and improved design flexibility are bound to impact the field.
Source: https://www.cryptopolitan.com/gemini-a-breakthrough-in-large-scal/