Elon Musk has disclosed further details about Tesla’s supercomputing cluster, Dojo, and outlined the company’s expected expenditure on Nvidia products in 2024.
The topic came up after reports that Tesla had redirected some of its Nvidia chip orders to X and xAI because, according to Musk, Tesla lacked the space to deploy them. Musk then took to X to detail Tesla’s purchases from Nvidia this year, estimating the spending at between $3 billion and $4 billion, and shared other updates on the company’s AI supercomputing needs.
Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo.
For building the AI training superclusters, NVidia hardware is about…
— Elon Musk (@elonmusk) June 4, 2024
Musk also mentioned that the south extension of Tesla’s Giga Texas is nearing completion and will house 50,000 Nvidia H100 GPUs used for Full Self-Driving (FSD) training.
Responding to a user’s question about whether Tesla could produce its Dojo supercomputers at sufficient volume to reduce its reliance on Nvidia hardware, Musk clarified that Tesla’s training compute is relatively small compared to its inference compute, especially as the fleet grows.
“Perhaps the best way to think about it is in terms of power consumption,” Musk explained. “When the Tesla fleet reaches 100M vehicles, peak power consumption of AI hardware in cars will be ~100GW. Training power consumption is probably <5GW. These are very rough guesses.”
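To put those figures in context, here is a quick back-of-envelope check using only the numbers Musk quotes; the per-vehicle figure is derived from his rough guesses rather than stated by him, so treat it as an illustration, not a Tesla specification.

```python
# Rough check of Musk's power estimates (all inputs are his own rough guesses).
fleet_size = 100_000_000        # 100M vehicles
fleet_peak_power_gw = 100       # ~100 GW peak AI inference power across the fleet
training_power_gw = 5           # <5 GW for training

# Implied peak draw of the in-car AI inference hardware (derived, not stated by Musk):
per_vehicle_kw = fleet_peak_power_gw * 1e6 / fleet_size   # convert GW to kW, then divide
print(f"~{per_vehicle_kw:.0f} kW of peak AI power per vehicle")

# Ratio of fleet-wide inference power to training power:
print(f"Inference-to-training power ratio: ~{fleet_peak_power_gw / training_power_gw:.0f}:1")
```

Under those assumptions, the estimate works out to roughly 1 kW of peak AI hardware power per vehicle, and a fleet-wide inference load about 20 times larger than the training load, which is the point Musk is making about where Tesla’s compute demand actually sits.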
While Musk calls it a “long shot,” he believes Tesla’s Dojo could eventually surpass Nvidia’s hardware.
“There is a path for Dojo to exceed Nvidia,” Musk stated. “It is a long shot, as I’ve said before, but success is one of the possible outcomes.”