Google is making a significant shift in how it designs its next-generation AI chips. Traditionally, the company has collaborated with Broadcom to develop its Tensor Processing Units (TPUs), specialized AI accelerator chips. However, recent reports indicate that Google may replace Broadcom with Taiwanese chip designer MediaTek as its partner for the upcoming seventh-generation TPUs. Even so, Google is not severing its ties with Broadcom entirely.
There are compelling reasons for Google to pursue a partnership with MediaTek. Notably, MediaTek has a strong relationship with TSMC, the world's leading chip foundry, which may allow it to offer Google a lower production cost per chip than Broadcom. Reports suggest that Google spent between $6 billion and $9 billion on TPUs last year, underscoring the financial stakes of any change in design partner.
Additionally, Google developed its TPU AI accelerators to reduce its dependence on Nvidia's GPUs, which are widely used for training AI models. By relying on TPUs optimized for its own AI workloads, Google is less exposed to Nvidia than other major players in the field, such as OpenAI and Meta Platforms, whose operations can be hindered when Nvidia supply runs short. For instance, OpenAI's CEO, Sam Altman, recently noted that the company had run out of Nvidia GPUs, delaying the release of its new GPT-4.5 model.
This incident underscores the importance of choosing the right hardware for AI applications. GPUs are preferred for AI because they can perform many computations in parallel across large volumes of data, a key requirement for AI workloads, whereas CPUs, which are optimized for sequential tasks, handle them less efficiently. Google's decision to partner with MediaTek could thus be a strategic move to strengthen its AI capabilities.