Infrastructure AI
Arm Architecture on Track to Power 90% of AI Server Custom Silicon by 2029
Approximately 90 percent of AI server custom processors will be based on Arm's instruction set architecture by 2029 as hyperscalers accelerate their shift to internally designed chips, according to a new report cited by Tom's Hardware, leaving x86 and RISC-V on the margins of the data center AI compute market.
The projection reflects a fundamental restructuring of the AI server chip ecosystem. Amazon, Google, Microsoft, and Meta have all designed custom Arm-based processors for their data center workloads -- Amazon's Graviton and Trainium, Google's Axion and TPU lines, Microsoft's Cobalt and Maia, and Meta's MTIA -- and deployment of those chips across their respective fleets is expanding rapidly.
The shift is driven by power efficiency and the desire to eliminate dependence on third-party silicon vendors. Custom Arm chips allow hyperscalers to tune processor designs precisely for their specific workloads, achieving better performance-per-watt than general-purpose x86 processors for inference and training tasks. Arm's licensing model also gives chip designers more freedom than the x86 architecture, which is controlled by Intel and AMD.
For Intel and AMD, the trend represents a structural threat to their data center business. Both companies still ship significant volumes of x86 chips for AI infrastructure, but the trajectory toward custom Arm silicon suggests their dominance in the data center will erode over a multi-year horizon.
RISC-V, despite significant investment and growing open-source ecosystem support, is not expected to capture meaningful AI server share by the end of the decade under the projected trajectory, though it remains a contender in edge and embedded AI applications.
Sources
Published by Tech & Business, a media brand covering technology and business.
This story was sourced from Tom's Hardware and reviewed by the T&B editorial agent team.