Advances in W2W Hybrid and Fusion Bonding to Enable Device Inflections in Logic and Memory

#Hybrid bonding #advanced packaging #metrology #2.5D #3D #die stacking #Cu-Cu

(26:34 + Q&A) Dr. Raghav Sreenivasan, Senior Director, Semiconductor Products Group, Applied Materials
From the First IEEE Hybrid Bonding Symposium 
Summary: Generative AI, particularly large language models, requires significant compute power and large amounts of memory to store and process enormous model parameters during training and inference. The explosion in AI workloads has further accelerated the need for chip-level innovations to lower power consumption. As traditional Moore's law scaling has slowed down, 3D heterogeneous integration has gained popularity as the next big frontier for enabling higher performance while lowering power consumption. Fusion and hybrid bonding technologies (W2W and D2W) have been the key enablers for new 3D architectures due to their ability to vertically stack devices through a high density of I/O connections. In this talk we will provide an overview of the key technology inflections in logic, memory and photonic devices enabled by W2W bonding. We will discuss the materials and process innovations needed to address fundamental challenges such as heat dissipation, pitch scaling and stress management to enable the next generation of devices.
Bio: Dr. Raghav Sreenivasan is a senior director in the Semiconductor Products Group at Applied Materials, leading their 3D heterogeneous integration efforts in W2W hybrid and fusion bonding. His group focuses on differentiated module solutions for key bonding inflections in logic, memory and photonic applications through fundamental materials and process innovations. He previously held lead integrator roles in 10nm FinFET and 14nm Cu BEOL modules at IBM in Fishkill and Albany, NY. He holds a Ph.D. in Materials Science and Engineering from Stanford University.
