Tesla is advancing its supercomputing capabilities with a massive new cluster being built at its Gigafactory Texas. CEO Elon Musk revealed plans for the center’s power consumption, highlighting the company’s ambitious goals for this year and beyond.
Musk announced on X that the supercomputer center is designed to require approximately 130MW of power and cooling this year, with plans to scale up to over 500MW within the next 18 months. He noted that about half of the chips in the cluster will be Tesla's own hardware, while the other half will consist of Nvidia or other manufacturers' hardware.
See also: Tesla Unveils AI5, Dismissing HW5 Name for Next-Generation Self-Driving Computer in 2025
Sizing for ~130MW of power & cooling this year, but will increase to >500MW over next 18 months or so.
Aiming for about half Tesla AI hardware, half Nvidia/other.
Play to win or don't play at all.
— Elon Musk (@elonmusk) June 20, 2024
The development of this supercomputing cluster coincides with the nearing completion of Tesla's South factory expansion, which will house the training center. The facility's substantial cooling requirements have been a topic of discussion within the Tesla community, with many observers noting the installation of large fans to aid in cooling.
In addition to the fans, the training center will incorporate four water tanks, extensive underground water lines, and a comprehensive cooling infrastructure on its second floor.
See also: Tesla Secures Approval to Test Full Self-Driving Systems in Shanghai, Eyes Expansion in China
Musk previously confirmed the existence of the supercomputing cluster, describing it as a “super dense, water-cooled supercomputer cluster.” He later estimated that Nvidia hardware would account for $3-$4 billion of Tesla’s AI-related expenditures this year, out of a total estimated at around $10 billion. Musk indicated that approximately half of these purchases would be for Tesla’s own AI inference computer and Dojo.
“Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo,” Musk explained. “For building the AI training superclusters, NVidia hardware is about 2/3 of the cost.”
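Taken together, the figures roughly add up (a back-of-the-envelope reading of Musk's numbers, not an official Tesla breakdown): if about half of the ~$10 billion is internal spending, roughly $5 billion remains for building the training superclusters, and two-thirds of that comes to about $3.3 billion, consistent with the $3-$4 billion estimate for Nvidia hardware.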