What Is Broadcom? The Unknown Company Building the AI Chips Powering Google, Anthropic, OpenAI and Meta
Most people couldn’t name Broadcom if you asked them. They could name Nvidia. They could name Apple, Google, and Microsoft. But Broadcom? The $700 billion company headquartered in a nondescript office park in Palo Alto, California? Blank stares.
That anonymity is about to change.
Broadcom (Nasdaq: AVGO) is quietly the backbone of the artificial intelligence revolution — the company that designs the custom chips inside Google’s AI systems, the compute powering Anthropic’s Claude, the silicon that will run OpenAI’s next generation of models, and the accelerators Meta is deploying at gigawatt scale. With $19.3 billion in quarterly revenue, AI sales growing 106% year-over-year, and a CEO who has declared “line of sight” to $100 billion in AI chip revenue in 2027, Broadcom is no longer a secret worth keeping.
Here is everything you need to know.
What Does Broadcom Actually Do?
The honest answer is: a lot of things most people have never thought about.
Broadcom operates in two major business segments. The first is semiconductors — it designs chips for AI data centers, Wi-Fi, Bluetooth, broadband internet infrastructure, and the high-speed networking gear that moves data between servers. The second is infrastructure software — a massive portfolio of enterprise products anchored by its $69 billion acquisition of VMware, the company whose technology runs the virtual machines inside a huge portion of the world’s corporate data centers.
The semiconductor division generates roughly 65% of total revenue. The software division generates the other 35%. Together, they produced $68.3 billion in revenue over the past twelve months — making Broadcom one of the largest technology companies in the world by revenue, larger than many household names most people think of as “big tech.”
Broadcom doesn’t manufacture chips. It designs them — a business model called “fabless.” Once a chip design is finalized, it sends the specifications to TSMC, the Taiwanese foundry that actually fabricates the silicon. This keeps Broadcom asset-light, highly profitable, and laser-focused on engineering rather than manufacturing.
If you have ever connected to Wi-Fi, streamed video over the internet, stored a file in the cloud, or used an enterprise software application at work — there is a strong probability Broadcom technology was involved somewhere in the chain.
A History Built on Acquisitions
The company most people think of as “Broadcom” is not actually the original Broadcom.
The original Broadcom Corporation was founded in 1991 by two men from UCLA — Henry Samueli, a professor, and Henry Nicholas, his doctoral student. They built chips for cable modems and high-speed internet infrastructure, went public on Nasdaq in 1998, and grew into one of the defining semiconductor companies of the broadband era.
Separately, a company called Avago Technologies — whose lineage traces back to Hewlett-Packard’s semiconductor operations in 1961, and which private equity firms KKR and Silver Lake carved out of HP spinoff Agilent for $2.66 billion in 2005 — was quietly building a very different kind of semiconductor company under a CEO named Hock Tan.
Tan took over Avago in 2006 and immediately began deploying a strategy that would define Broadcom’s DNA: acquire strong businesses, cut costs aggressively, focus on high-margin products, and repeat. In 2016, Avago acquired the original Broadcom Corporation for $37 billion — at the time the largest semiconductor deal in history — and adopted its name, trading on the stronger brand recognition while keeping Avago’s lean operational culture and the AVGO ticker.
What followed was one of the most audacious acquisition streaks in technology history. CA Technologies for $18.9 billion in 2018, adding enterprise software. Symantec’s enterprise security business for $10.7 billion in 2019, adding cybersecurity. And then, in November 2023, the deal that transformed Broadcom entirely: the $69 billion acquisition of VMware — one of the largest technology mergers ever completed.
Critics said it was too expensive. Shareholders worried about integration risk. Hock Tan said nothing particularly reassuring and got to work anyway.
Who Is Hock Tan — and Why Does He Matter?
If Broadcom is one of the most powerful companies most Americans have never heard of, then Hock Tan is one of the most powerful CEOs most Americans have never heard of.
Born in Malaysia, Tan studied mechanical engineering at MIT and went on to get his MBA from Harvard. He worked at General Motors, PepsiCo, and Commodore International before landing in semiconductor private equity and eventually taking the helm at Avago in 2006. He was 54 years old at the time, and the company he was handed had about $1.5 billion in annual revenue.
Twenty years later, the company bearing the Broadcom name that Tan built through relentless deal-making and margin discipline has annual revenue of more than $68 billion, a market capitalization above $700 billion, and a CEO who can confidently tell Wall Street analysts — as he did in March 2026 — that he has “line of sight to achieve AI revenue from chips in excess of $100 billion in 2027.” (CNBC)
Tan is not a charismatic showman in the Elon Musk or Jensen Huang mold. He is precise, data-driven, and famously focused on free cash flow. While other tech CEOs go on podcasts and tweet at each other, Tan gives quarterly earnings calls and occasionally makes appearances at investor conferences. Yet the results speak loudly enough without him.
The AI Machine: What Are XPUs and Why Do They Matter?
The engine of Broadcom’s current growth is a class of chips called XPUs — custom AI accelerators, more formally known as application-specific integrated circuits (ASICs).
To understand why they matter, you first need to understand what Nvidia sells. Nvidia’s GPUs — graphics processing units — are general-purpose AI chips. They can run virtually any AI workload, any model architecture, any task. That flexibility is enormously valuable, especially for researchers and companies still experimenting with AI. It is also why Nvidia has dominated the AI chip market since the deep learning boom began around 2012.
But flexibility comes at a cost — literally. A GPU is engineered to be good at everything, which means it is not optimized for any one thing. It consumes enormous amounts of power. It generates enormous amounts of heat. And at the scale that the world’s largest technology companies operate — running billions of AI inference requests every single day on the same model architecture — the economics of a general-purpose chip start to break down.
This is where Broadcom’s XPUs enter. Unlike Nvidia’s general-purpose GPUs, Broadcom’s custom chips are tailored to each customer’s specific AI model architecture, delivering superior performance-per-watt for targeted workloads. They are manufactured on TSMC’s most advanced 3nm process node. They are not flexible — a chip built for Google’s specific TPU architecture cannot simply be repurposed for Meta’s recommendation algorithm — but for a company running 10 billion identical inference requests a day, that rigidity is a feature, not a bug.
The economics are compelling. Broadcom’s custom ASICs deliver 30-50% lower total cost of ownership for specific AI workloads at hyperscaler scale. They consume significantly less power. They take up less physical space. And the savings recur every year a chip generation stays deployed.
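The claim above is ultimately arithmetic, and it can be sketched as a toy total-cost-of-ownership comparison. Every number below — unit prices, power draw, electricity rate, lifespan — is a hypothetical assumption for illustration, not Broadcom or customer data:

```python
# Toy TCO comparison: general-purpose GPU vs. custom ASIC for one
# fixed inference workload. All figures are hypothetical assumptions.

def total_cost_of_ownership(unit_price, power_kw, kwh_price, years, hours_per_year=8760):
    """Hardware cost plus electricity over the chip's deployed lifespan."""
    energy_cost = power_kw * hours_per_year * years * kwh_price
    return unit_price + energy_cost

# Assumed per-accelerator numbers for the same workload.
gpu_tco  = total_cost_of_ownership(unit_price=30_000, power_kw=1.0, kwh_price=0.08, years=4)
asic_tco = total_cost_of_ownership(unit_price=15_000, power_kw=0.6, kwh_price=0.08, years=4)

savings = 1 - asic_tco / gpu_tco
print(f"GPU TCO:  ${gpu_tco:,.0f}")
print(f"ASIC TCO: ${asic_tco:,.0f}")
print(f"Savings:  {savings:.0%}")
```

With these made-up inputs the savings land near 49% — inside the 30-50% range the article cites — and multiplying one accelerator by the hundreds of thousands a hyperscaler deploys shows why the gap matters.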
Broadcom doesn’t just design these chips in isolation. Its engineering teams embed directly inside its hyperscaler clients, co-developing chip architectures over 18- to 24-month design cycles — a deeply collaborative process that makes switching to a competitor extraordinarily difficult once a relationship is established. Market share estimates now place Broadcom at 70% or more of the custom AI accelerator design services market.
The Customer List: Google, Anthropic, OpenAI, Meta, and Apple
Here is where the story becomes remarkable.
Google: Broadcom is Google’s primary partner for designing its Tensor Processing Units (TPUs) — the custom AI chips that power Google Search, Google Cloud, and Google’s own AI models. In April 2026, Broadcom and Google signed a long-term agreement for Broadcom to develop and supply Google’s future TPU generations, with a supply assurance agreement extending through 2031. This is a five-year committed roadmap — an extraordinary level of visibility for any semiconductor company.
Anthropic: The AI safety company behind Claude — and the company that powers this writer’s AI tools — is one of Broadcom’s most significant emerging customers. Broadcom, Google, and Anthropic expanded their collaboration in April 2026, with Anthropic set to access approximately 3.5 gigawatts of AI compute capacity through Google TPUs (designed by Broadcom) beginning in 2027. Anthropic CFO Krishna Rao called it a “groundbreaking partnership.” Mizuho analysts estimate Broadcom will generate $21 billion in AI revenue from Anthropic alone in 2026, rising to $42 billion in 2027.
OpenAI: The company behind ChatGPT is developing its first custom AI chip with Broadcom, targeting deployment in 2027 with over one gigawatt of compute capacity. This represents a significant strategic diversification away from OpenAI’s near-total dependence on Nvidia GPUs through Microsoft’s Azure cloud.
Meta: Mark Zuckerberg’s company committed in April 2026 to one gigawatt of custom chips built with Broadcom through its MTIA custom silicon program, with plans to deploy multiple gigawatts in 2027 and beyond. Meta has committed to spending up to $135 billion on AI in 2026 alone.
Apple: In a disclosure that surprised the market, Broadcom confirmed Apple as a customer in its Q1 FY2026 earnings — adding the world’s most valuable company to a client list that already reads like the definitive who’s-who of the AI era.
All of these companies also rely on Nvidia GPUs to varying degrees. The two technologies are complementary, not mutually exclusive — hyperscalers use Nvidia GPUs for flexibility and research, and Broadcom’s custom ASICs for cost-optimized production deployment at scale. But the directional shift is unmistakable: every major hyperscaler is designing proprietary AI chips, and Broadcom is the primary design partner for most of them.
The Networking Business Nobody Talks About
Custom AI accelerator chips are only part of the Broadcom AI story. Equally important — and even less discussed — is its networking silicon.
Running tens of thousands of custom AI chips in a training cluster is a distributed computing problem of extraordinary complexity. The chips must communicate with each other at speeds that would have seemed like science fiction a decade ago. Broadcom’s Tomahawk 5 switch ASIC handles 51.2 terabits per second of data movement — the infrastructure backbone that connects thousands of XPUs in a training cluster. Without Broadcom’s networking silicon, hyperscalers cannot operate custom ASICs at scale.
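That 51.2-terabit figure translates directly into cluster size. A back-of-the-envelope sizing sketch, assuming every port runs at 800 Gbps and a standard non-blocking two-tier leaf-spine topology — illustrative assumptions, not a vendor reference design:

```python
# Rough fabric sizing from a switch ASIC's published switching capacity.
# Port speed and topology are illustrative assumptions.

SWITCH_CAPACITY_GBPS = 51_200  # 51.2 Tbps, Tomahawk 5's published capacity
PORT_SPEED_GBPS = 800          # assume every port runs at 800G

ports_per_switch = SWITCH_CAPACITY_GBPS // PORT_SPEED_GBPS  # switch radix

# In a non-blocking two-tier leaf-spine (Clos) fabric, each leaf switch
# splits its ports half down (to accelerators) and half up (to spines),
# so the fabric connects radix^2 / 2 endpoints.
accelerators = ports_per_switch ** 2 // 2

print(f"Ports per switch: {ports_per_switch}")
print(f"Accelerators in a two-tier fabric: {accelerators}")
```

A radix of 64 yields 2,048 accelerators in just two switching tiers — which is why a single generation of switch silicon determines how large a training cluster can grow before adding latency-costly extra tiers.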
This is the systems-level moat inside Broadcom’s business that most investors miss. It designs the chips. It designs the switches that connect the chips. It integrates the entire system. That makes Broadcom not just a chip designer but a systems integrator for AI infrastructure — a position that compounds its value to every hyperscaler customer.
The VMware Wildcard — Software That Wall Street Undervalues
When Broadcom acquired VMware for $69 billion in November 2023, the deal was loudly criticized. The price was too high. The integration would be too complex. Enterprise customers would revolt at Broadcom’s famously aggressive licensing practices.
Some of that criticism proved partially valid — there were early customer complaints about pricing changes. But the financial results have been hard to argue with. VMware generated $6.8 billion in revenue in Q1 FY2026 with 78% gross margins, producing $9.2 billion in bookings. The VMware Cloud Foundation (VCF) subscription model — Broadcom’s repackaging of VMware’s product suite into a unified platform — is gaining significant enterprise traction, particularly as companies look to run AI inference workloads in private clouds rather than paying public cloud rates.
The software division now generates roughly $27 billion in annualized revenue at extraordinarily high margins — margins that effectively subsidize Broadcom’s aggressive investment in its AI chip business and give it a financial cushion that pure-play semiconductor companies do not have. This hybrid model — part semiconductor company, part enterprise software giant — is precisely what makes Broadcom structurally unique among its peers.
The Numbers: How Big Is Broadcom?
For readers encountering Broadcom for the first time, the raw scale of the business can be disorienting:
- Total revenue for the twelve months ending January 2026: $68.3 billion
- Q1 FY2026 revenue: $19.3 billion — up 29% year-over-year
- AI semiconductor revenue in Q1 FY2026: $8.4 billion — up 106% year-over-year
- Q2 FY2026 revenue guidance: approximately $22 billion — up 47% year-over-year
- AI-specific committed order backlog: $73 billion
- Adjusted EBITDA margin: 68%
- Free cash flow in Q1 FY2026: $8.01 billion — 41% of revenue
- New share repurchase authorization: $10 billion
- Quarterly dividend: $0.65 per share
CEO Hock Tan’s target for AI chip revenue alone in fiscal 2027: in excess of $100 billion — backed by a $73 billion committed customer backlog and long-term supply agreements. Mizuho projects full-year FY2026 AI revenue of $40.4 billion.
To put that $100 billion figure in context: it would make Broadcom’s AI chip business alone larger than the entire revenue of most Fortune 500 companies.
Broadcom vs. Nvidia: The Real Story
No piece on Broadcom is complete without addressing the question everyone is asking: is it an Nvidia killer?
The honest answer is: not exactly, and that framing misses what makes Broadcom interesting.
Nvidia still dominates the high-end AI training market with its Blackwell and Hopper GPUs. Its CUDA software ecosystem — a decade-long investment in developer tools and frameworks — gives it a lock-in advantage that no chip company can replicate quickly. For the research community, for startups, for enterprises still experimenting with AI architectures, Nvidia’s general-purpose GPUs remain the right tool.
But Broadcom is not trying to beat Nvidia on Nvidia’s terms. It is competing in a different lane — custom silicon for production workloads at hyperscaler scale — where the economics of general-purpose GPUs break down and purpose-built ASICs deliver decisive advantages in cost and efficiency. The largest technology companies in the world are simultaneously buying more Nvidia chips and commissioning more Broadcom custom silicon. The two companies are, for now, growing the pie together rather than fighting over a fixed slice.
The risk for Nvidia — and the opportunity for Broadcom — is that as hyperscalers’ AI workloads mature and standardize, the economic case for custom silicon compounds. A company running 10 billion identical inference requests daily on the same recommendation model has every incentive to invest 18 months and significant engineering resources in a custom chip that will deliver 30-50% cost savings for the next three to five years. The more the world’s AI workloads standardize around production inference rather than experimental training, the stronger Broadcom’s position becomes.
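The incentive described above reduces to a simple break-even calculation. All inputs here are hypothetical round numbers chosen for illustration, not disclosed figures:

```python
# Break-even sketch for a hyperscaler weighing a custom-silicon program.
# Every number is a hypothetical assumption, not disclosed data.

chip_program_cost = 1_500e6   # assumed cost of an 18-month custom chip program ($)
annual_gpu_spend  = 10_000e6  # assumed yearly spend on general-purpose GPUs ($)
tco_savings_rate  = 0.40      # midpoint of the 30-50% savings range cited above

annual_savings = annual_gpu_spend * tco_savings_rate
payback_years = chip_program_cost / annual_savings

print(f"Annual savings: ${annual_savings / 1e9:.1f}B")
print(f"Payback period: {payback_years:.2f} years")
```

Under these assumptions the program pays for itself in well under a year of deployment — and every subsequent year of the chip generation's three-to-five-year life is pure savings, which is exactly the compounding logic the paragraph describes.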
The Risks Worth Knowing
No company of Broadcom’s ambition is without risk, and any complete picture has to acknowledge them.
Customer concentration is real. The company’s AI chip business depends heavily on a small number of hyperscaler clients. If Google, Anthropic, Meta, or OpenAI were to pull back on AI capital expenditure — or shift their custom chip programs elsewhere — the impact on Broadcom’s revenue would be immediate and significant. Cyclical semiconductor risks persist, as do geopolitical tensions affecting TSMC’s ability to manufacture in Taiwan.
The VMware integration, while financially impressive so far, still carries execution risk. Enterprise customers have complained about pricing changes, and some have explored alternative virtualization platforms. Whether VMware’s growth trajectory holds through 2027 and beyond is an open question.
And then there is valuation. At roughly 27 times forward earnings, Broadcom is cheaper than Nvidia’s 34 times — but that multiple still prices in near-perfect execution. Any miss on AI revenue guidance or any softening in hyperscaler capex could produce a sharp correction.
Why Broadcom Matters Beyond the Stock Price
For investors, Broadcom is one of the most compelling ways to own the AI buildout without paying Nvidia’s premium valuation. For technologists, it is the clearest example of how the AI chip landscape is fragmenting away from GPU dominance. For enterprise IT departments, VMware’s evolution under Broadcom will shape how private clouds are built and priced for years.
But for everyone else — for the person who has never bought a chip stock, never thought about semiconductor supply chains, never wondered what sits inside the data center running the AI chatbot they just used — Broadcom matters because it is the invisible infrastructure company whose decisions will shape what AI can do, how much it costs, and who gets to build with it.
You’ve already been using its technology for years. Now you know its name.