Alphabet Inc.’s Google is reportedly in talks with Marvell Technology to develop two new chips designed to improve how artificial intelligence models are run.
Summary
- Google is in talks with Marvell to develop two AI-focused chips, including a memory processing unit and a next-generation TPU, to improve model efficiency.
- The push is part of Google’s effort to position its TPUs as an alternative to Nvidia GPUs, while expanding partnerships with Intel and Broadcom.
- The move comes alongside the launch of Gemma 4, as Google aligns its AI models and hardware stack amid intensifying competition in AI computing.
According to a report by The Information, citing people familiar with the matter, one of the proposed chips could be a memory processing unit built to work alongside Google’s tensor processing units, or TPUs. The second chip is expected to be a new TPU tailored specifically for running AI workloads more efficiently.
The move is part of Google’s effort to position its in-house chips as an alternative to Nvidia’s GPUs. TPU adoption has been contributing to Google Cloud revenue growth, as the company looks to show returns on its AI infrastructure spending.
The report added that Google plans to complete the design of the memory-focused chip by next year before moving to test production. At the same time, it has expanded partnerships with chipmakers such as Intel and Broadcom to support growing demand for AI infrastructure.
As Google steps up development of its AI accelerators, it could begin to challenge Nvidia’s long-standing lead in high-performance computing.
Nvidia, for its part, is advancing its own lineup of AI inference chips, including designs that incorporate technology from Groq. The entry of another large-scale competitor may intensify the race in AI hardware and reshape how companies source computing power for models.
Investors are likely to look for further clarity when Google reports its first-quarter results on April 29. The earnings release is expected to offer signals on cloud performance, advertising trends, and how aggressively the company plans to invest in AI and semiconductors in the coming quarters.
AI model advances support hardware push
Google’s latest chip discussions come as it continues to expand its AI model capabilities. Earlier this month, the company introduced Gemma 4, a new open model family built for advanced reasoning and agent-style workflows.
Gemma 4 is available in four sizes and is designed to handle multi-step logic and structured problem-solving more effectively. It has also delivered improved results in benchmarks tied to mathematics and instruction-following tasks.
The models include features such as native function calling, structured JSON outputs, and system-level instructions, allowing developers to build autonomous systems that can connect with APIs and external tools. They can also generate code offline, turning local machines into capable AI coding assistants.
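The function-calling pattern described above works by having the model emit a structured JSON payload naming a tool and its arguments, which the developer's code then parses and dispatches. The sketch below illustrates that dispatch loop; the tool name, schema, and JSON shape are assumptions for demonstration, not Gemma's documented format.

```python
import json

def get_weather(city: str) -> str:
    """A hypothetical local tool the model could invoke."""
    return f"Sunny in {city}"

# Registry mapping tool names to callables the agent may use.
TOOLS = {"get_weather": get_weather}

# A model with native function calling would emit a structured JSON
# payload like this instead of free-form text (shape is illustrative):
model_output = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

# Parse the model's structured output and dispatch to the matching tool.
call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # Sunny in Berlin
```

In practice, the result would be fed back to the model as a follow-up message so it can compose a final answer, which is what lets these models drive multi-step, API-connected workflows.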
Together, the model upgrades and chip development plans show how Google is aligning its software and hardware stack as competition in the AI space continues to intensify.