2022 DesignCon shows the evolution of communication and memory with electronic chips

DesignCon is a trade show focused on electronic product design, electronic components, and the applications that drive demand for electronics; it has been held in Silicon Valley for decades. The in-person conference included three engaging keynotes. John Bowers, Fred Kavli Chair of Nanotechnology at UCSB, spoke about how photonics could be used in high-capacity co-packaged electronics. Laurence Moroney, Google’s Artificial Intelligence Lead, talked about practical applications for AI and machine learning. Jose Morey, a consultant for NASA, IBM, Hyperloop Transportation, and Liberty BioSecurity, gave an inspiring speech on the future of humanity in space, curing old age, and a future made possible by robots.

John Bowers showed the expected evolution of co-packaged optics and electronic chips for data center communication, as shown below. True co-packaging will require chip stacking and heterogeneous integration of various types of chips, including optical engines. The PIPES project, in which UCSB is involved, is building 10 Tbps link technology with an efficiency of 0.5 pJ/bit, drawing on technologies such as quantum dot lasers.
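As a quick sanity check on those targets (my own back-of-the-envelope arithmetic, not a figure from the talk), the 10 Tbps rate and 0.5 pJ/bit efficiency imply about 5 W of power per link:

```python
# Back-of-the-envelope power estimate for a PIPES-class link.
# The 10 Tbps rate and 0.5 pJ/bit efficiency are from the talk;
# the multiplication itself is just my own illustration.
bit_rate_bps = 10e12         # 10 Tbps link rate
energy_per_bit_j = 0.5e-12   # 0.5 pJ/bit target efficiency

link_power_w = bit_rate_bps * energy_per_bit_j
print(f"Link power: {link_power_w:.1f} W")  # -> Link power: 5.0 W
```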

Electronic products need memory and storage to function, and several sessions at DesignCon explored how storage and memory are evolving to meet the needs of current and future products. As shown in the image from the Rambus talk below, memory technology is evolving to provide more bandwidth, greater capacity, and new, more efficient and secure computer architectures, driven by new interconnects (e.g. CXL) and data center disaggregation.

Memory is a significant part of server costs and must be used efficiently to provide the best total cost of ownership (see figure below). CPUs, memory, and storage have different life cycles and must be replaced separately. This has led to the use of disaggregated resource pools, such as a memory pool built on CXL.
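To see why pooling can lower total cost of ownership, consider a toy model (my own sketch; all capacities and the headroom margin are made up for illustration): if every server carries enough DRAM for its own peak demand, much of that memory sits stranded, while a shared CXL pool only needs to cover average demand plus some headroom.

```python
# Toy model of dedicated vs. pooled memory provisioning.
# All numbers here are hypothetical, for illustration only.
servers = 100
peak_gib = 512    # assumed per-server peak demand
avg_gib = 256     # assumed per-server average demand

dedicated_total = servers * peak_gib          # every box sized for its peak
pooled_total = int(servers * avg_gib * 1.2)   # shared pool + 20% headroom

print(f"Dedicated DRAM: {dedicated_total} GiB")   # 51200 GiB
print(f"Pooled DRAM:    {pooled_total} GiB")      # 30720 GiB
print(f"Reduction:      {1 - pooled_total / dedicated_total:.0%}")  # 40%
```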

Furthermore, accessing data and moving it on and off chip is extremely costly in terms of energy (see below). This is leading system and data center designers to rethink architectures to emphasize data locality and minimize data movement.
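The scale of the disparity is easiest to see with rough per-operation energies. The figures below are order-of-magnitude values I have assumed for the sketch, not numbers from the presentation; the point is the ratio between computing on data and fetching it.

```python
# Illustrative energy cost of compute vs. data movement.
# All pJ values are assumed orders of magnitude, for illustration only.
energy_pj = {
    "arithmetic operation":   10,     # assumed on-chip compute cost
    "on-chip SRAM access":    50,     # assumed cache access cost
    "off-chip DRAM access": 2000,     # assumed external memory cost
}

compute = energy_pj["arithmetic operation"]
for op, pj in energy_pj.items():
    print(f"{op:22s} ~{pj:5d} pJ  ({pj / compute:.0f}x the compute cost)")
```

Under these assumptions, one off-chip access costs as much energy as hundreds of arithmetic operations, which is why keeping data local pays off.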

CXL enables memory disaggregation, with the near-term changes to memory access shown below. CXL offers memory bandwidth and capacity expansion through “remote memory,” which provides additional memory tiers that may include non-volatile memory.
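One way to picture the result is as a set of memory tiers that trade latency for expandable capacity. The sketch below is my own schematic; the tier names follow the article, but the latency figures are placeholder assumptions, not values from the talk.

```python
# Schematic memory tiers in a CXL-based system.
# Latency values are placeholder assumptions, order of magnitude only.
from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    approx_latency_ns: int   # assumed
    expandable: bool         # can capacity grow beyond the CPU socket?

tiers = [
    MemoryTier("Direct-attached DDR DRAM",         100, expandable=False),
    MemoryTier("CXL-attached 'remote memory'",     300, expandable=True),
    MemoryTier("CXL-attached non-volatile memory", 1000, expandable=True),
]

for t in tiers:
    print(f"{t.name:36s} ~{t.approx_latency_ns:4d} ns  expandable={t.expandable}")
```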

Conventional memory systems for artificial intelligence applications include on-chip memory (the highest bandwidth, but limited capacity), HBM (very high bandwidth and density, but high cost), and GDDR (a good compromise among bandwidth, energy efficiency, cost, and reliability).

Memory also plays an important role in edge computing, which can reduce power consumption by processing data close to where it is generated. While data centers play the leading role in ML training, edge computing plays an important role in ML inference. The following figure shows Rambus’ view of memory types for servers, ML training, and inference; the sweet spot for inference favors GDDR6. Accelerator cards appear to play an important role in edge AI computing and automotive applications.

Rambus also offers “root of trust” security solutions for automotive designs, to prevent hacking of vehicles that increasingly run on computer systems. Another of their talks covered advanced packaging options, including UCIe (a chiplet interconnect standard) and HBM solutions that are approaching 1 TB/s of bandwidth.

DesignCon 2022 covered electronic design and integration, including photonic communication for electronic chips. Rambus gave talks on the need to process data closer to where it is stored, and discussed the use of various xDDR and HBM memories for applications including edge AI training, edge inference, and ADAS.
