However, a technical document Apple released after the event revealed that Alphabet’s Google also plays a crucial role in Apple’s AI development. To build its foundational AI models, Apple engineers used a mix of hardware, including Apple’s own on-premises graphics processing units (GPUs) and Google’s tensor processing units (TPUs), which are available exclusively through Google’s cloud services.
Google has been developing TPUs for about a decade and has publicly discussed two variants of its fifth-generation chips, whose AI training performance is competitive with Nvidia’s H100 chips. At its annual developer conference this year, Google also announced that a sixth-generation TPU is on the way.
These processors are specifically designed for AI applications and model training, with Google providing a cloud computing hardware and software platform built around them.
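For readers curious what that platform looks like from the software side, the sketch below is a minimal JAX program of the kind typically run on Cloud TPUs. It is illustrative only: the matrix sizes and the `forward` function are assumptions for the example, not details drawn from Apple’s document.

```python
# Minimal sketch of TPU-backed computation in JAX (illustrative, not
# Apple's actual training code). On a Cloud TPU VM, jax.devices()
# reports the attached TPU cores; elsewhere JAX falls back to GPU/CPU.
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. a list of TpuDevice objects on a TPU VM

@jax.jit  # compile via XLA, which targets TPU, GPU, or CPU
def forward(w, x):
    # Hypothetical toy "model": one matrix multiply plus a nonlinearity.
    return jnp.tanh(x @ w)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (1024, 1024))  # illustrative weight matrix
x = jax.random.normal(key, (8, 1024))     # illustrative input batch
y = forward(w, x)  # executes on the TPU when one is available
print(y.shape)
```

The same program runs unchanged on GPUs or CPUs; XLA handles the hardware-specific compilation, which is part of what makes the TPU platform usable alongside other accelerators.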
Neither Apple nor Google immediately responded to requests for comment. Apple did not clarify how heavily it relied on Google’s chips and software compared with hardware from Nvidia or other AI vendors. Using Google’s chips generally requires purchasing access through its cloud division, much as customers buy computing time from Amazon Web Services (AWS) or Microsoft Azure.