NVIDIA is arguably the leader in creating GPU hardware for AI, so it shouldn't come as a surprise that it's using AI to accelerate chip design and production. Staying ahead of the competition is only part of the picture; NVIDIA is now the world's sixth-largest company thanks to the rise of AI.
NVIDIA's custom LLM (Large Language Model) is called ChipNeMo, adapted from Meta's Llama 2, and is trained on the company's vast amounts of architectural data, documentation, and source code. ChipNeMo was first unveiled in late 2023 and is also used as a tool for training engineers via its chatbot functionality.
According to Bryan Catanzaro, NVIDIA's vice president of applied deep learning research, designing a new GPU takes close to 1,000 people working together, so an AI chatbot with access to all of that data is an invaluable tool. The question now becomes: how much - if any - of the design work for upcoming GPUs is being done by ChipNeMo?
According to a new report by Business Insider, NVIDIA hasn't commented on whether ChipNeMo is being used for GPU production. If the answer is 'not yet,' it's likely only a matter of time.
AI playing a direct or indirect role in hardware development is now a reality. Google plans to use its DeepMind AI system to design custom chips, while software companies like Synopsys are launching AI tools to boost engineers' productivity.
Designing chips for AI is the new arms race. Apple, Amazon, Microsoft, Google, Meta, AMD, and NVIDIA are all vying to develop the most powerful and efficient AI hardware. And there's a tremendous amount of silicon on the table: in 2024 alone, Meta - a single company - is set to stockpile upwards of 600,000 GPUs for AI.