
Elon Musk envisions Tesla as more than just an automaker; he wants it to be an AI-driven company. Key to this vision is Dojo, Tesla’s custom-built supercomputer designed to train its Full Self-Driving (FSD) neural networks. While FSD isn’t fully autonomous yet, Tesla believes more data, compute power, and training will help achieve full autonomy. Dojo plays a crucial role in this journey.

2019: First Mentions of Dojo

April 22 – During Tesla’s Autonomy Day, Musk teases Dojo, revealing it as a supercomputer to train AI for self-driving cars. He also mentions that all Tesla vehicles at the time have the hardware required for full autonomy, awaiting only software updates.

2020: Musk Begins the Dojo Roadshow

February 2 – Musk discusses Tesla’s growing fleet of connected vehicles and highlights Dojo’s capabilities, claiming it will process vast amounts of video data for neural networks.

August 14 – Musk reiterates Tesla’s plan for Dojo, describing it as “a beast” that will process vast amounts of video data. He predicts the first version of Dojo will be ready by August 2021.

December 31 – Musk notes that while Dojo isn’t strictly necessary, it will help make Tesla’s self-driving system far safer than a human driver.

2021: Tesla Officially Announces Dojo

August 19 – Tesla announces Dojo at its first AI Day, showcasing the D1 chip and detailing plans to use 3,000 of them to power the supercomputer.

October 12 – Tesla releases a whitepaper detailing Dojo Technology, outlining new binary floating-point arithmetic used for deep learning.
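As a rough illustration of what such low-precision formats involve, the sketch below decodes a generic 8-bit float with a configurable split between exponent and mantissa bits. The decode_fp8 function and the bit layouts shown are illustrative assumptions for this article, not the specific formats defined in Tesla’s whitepaper.

```python
# Illustrative sketch only (not Tesla's actual formats): decode an 8-bit float
# with a configurable split between exponent and mantissa bits, to show the
# range-vs-precision trade-off that low-precision training formats revolve around.
# Special values (inf/NaN) are omitted for brevity.

def decode_fp8(byte: int, exp_bits: int = 4, bias: int | None = None) -> float:
    """Interpret `byte` as sign (1 bit) | exponent (exp_bits) | mantissa (rest)."""
    assert 0 <= byte <= 0xFF and 1 <= exp_bits <= 6
    man_bits = 7 - exp_bits
    if bias is None:
        bias = (1 << (exp_bits - 1)) - 1  # conventional IEEE-style bias

    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = byte & ((1 << man_bits) - 1)

    if exponent == 0:  # subnormal: no implicit leading 1
        return sign * (mantissa / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1.0 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)


if __name__ == "__main__":
    # Same bytes under two splits: more exponent bits -> wider range, coarser steps.
    for e in (4, 5):
        values = [decode_fp8(b, exp_bits=e) for b in (0x38, 0x40, 0x7F)]
        print(f"exp_bits={e}:", values)
```

Shifting bits from the mantissa to the exponent widens the representable range at the cost of coarser steps; managing that trade-off is the point of configurable low-precision formats for neural-network training.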

2022: Tesla Reveals Dojo’s Progress


August 12 – Musk announces that Tesla will begin to “phase in Dojo,” reducing its reliance on GPUs.

September 30 – At Tesla’s second AI Day, the company reveals it has installed its first Dojo cabinet and tested it at a load of 2.2 megawatts. Tesla sets a target to complete a full ExaPOD cluster by Q1 2023.

2023: Dojo Becomes a ‘Long-Shot Bet’

April 19 – Musk calls Dojo a potential “order of magnitude improvement” for training costs and hints it could become a sellable service, similar to Amazon Web Services.

June 21 – Musk confirms Dojo is operational and running tasks in Tesla’s data centers. Tesla also projects its compute power will be among the top five globally by February 2024.

July 19 – Tesla reports that Dojo production has begun and says it plans to invest $1 billion in the project through 2024.

September 6 – Musk says that Dojo, alongside Nvidia hardware, will address Tesla’s AI training constraints, particularly managing the vast amounts of data coming from its vehicles.

2024: Scaling Up Dojo

January 24 – Musk acknowledges the risks and rewards of Dojo, mentioning future versions like Dojo 1.5 and Dojo 2.

January 26 – Tesla announces plans to invest $500 million in a Dojo supercomputer in Buffalo, but Musk downplays the amount, saying it’s a small fraction of the necessary investment.

April 30 – Tesla confirms production of the D2 tile for Dojo, which integrates the entire training tile onto a single wafer.

May 20 – Musk announces plans for a water-cooled supercomputer cluster in the rear extension of Giga Texas.

June 4 – Musk clarifies that thousands of Nvidia chips reserved for Tesla were diverted to X and xAI, mentioning Tesla’s plans to house 50,000 H100 GPUs for FSD training.

July 1 – Musk reveals that current Tesla vehicles may require hardware upgrades for next-gen AI models.

2025: From Dojo to Cortex

January 29 – Tesla’s Q4 2024 earnings call mentions Cortex, Tesla’s new AI supercluster, which comprises 50,000 Nvidia H100 GPUs. Dojo is not mentioned, but Tesla notes that Cortex is driving improvements in FSD V13, including improved safety and higher video input resolution.

Musk continues to pursue both Nvidia hardware and Dojo as Tesla scales up its AI infrastructure, citing $5 billion in accumulated AI-related capital expenditures and the $500 million earmarked for Dojo in Buffalo.

Conclusion

Dojo is a key project in Tesla’s pursuit of leadership in artificial intelligence and self-driving cars. Although in 2025 Tesla is focused on expanding the capabilities of its new Cortex supercomputer, Dojo remains an important element in training the neural networks behind Full Self-Driving. The project is ambitious, backed by large investments and promising prospects, but it also carries high risks. Tesla continues to develop both its own infrastructure and its collaboration with Nvidia, increasing computational power to process data and reach a higher level of autonomy.
