In Chip War: The Fight for the World’s Most Critical Technology (2022), historian Chris Miller delivers a comprehensive and urgent account of how the semiconductor has become the most critical – and contested – resource in the modern world. His central insight is that microchips are not merely the foundation of the digital economy but the decisive factor shaping military power, economic growth, and geopolitical dominance in the twenty-first century. By tracing the industry’s evolution from postwar American military innovation to today’s fragile, globally dispersed, and bottlenecked supply chain, Miller argues that control over chip design and manufacturing will determine which nations lead or lag in the emerging era of artificial intelligence, automation, and cyberwarfare.
When I first arrived in Silicon Valley in 2000, the semiconductor industry had already fallen out of favor with ambitious young graduates like me. Intel had, for all intents and purposes, cornered the market on logic chips, while “also-rans” such as AMD and Nvidia endured a seemingly never-ending cycle of stomach-churning stock drops. The money – and the future – clearly lay with internet consumer startups and enterprise software companies. Twenty-five years later, I’m still in Silicon Valley, but the landscape looks entirely different.
Creating cutting-edge semiconductors is perhaps the greatest technological challenge of the last one hundred years, nuclear power not excepted. In 1961, four transistors fit on a cutting-edge microchip. In 2025, Nvidia’s Blackwell/B100 AI chip reportedly packs 208 billion transistors. In one year, the global chip industry will produce more transistors than the combined quantity of all goods produced by all companies, in all other industries, in all of human history.
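As a back-of-the-envelope check (my own arithmetic, not Miller’s, taking the two figures above at face value), those data points alone imply a doubling rate remarkably close to the one Gordon Moore would later predict:

    import math

    # Implied doubling rate from the two figures above (illustrative values)
    t0, n0 = 1961, 4            # four transistors on a 1961 chip
    t1, n1 = 2025, 208e9        # ~208 billion transistors on Nvidia's Blackwell

    doublings = math.log2(n1 / n0)                 # ~35.6 doublings
    years_per_doubling = (t1 - t0) / doublings     # ~1.8 years per doubling

    print(f"{doublings:.1f} doublings in {t1 - t0} years "
          f"= one doubling every {years_per_doubling:.1f} years")

One doubling roughly every 1.8 years, sustained for six decades – strikingly close to the cadence Moore’s Law describes.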
Perhaps even more amazingly, this dizzying output is generated by a handful of global corporations. Two American firms – Synopsys and Cadence – control upwards of 75 percent of the world’s Electronic Design Automation (EDA) software required to design chips. Taiwan’s TSMC produces about a third of the world’s new computing power each year. Another American semiconductor juggernaut, Nvidia, commands over 90 percent market share in the design of the GPUs needed to train leading AI large language models. Two firms in South Korea – Samsung and Hynix – produce over half of the world’s DRAM memory chips. The Dutch firm ASML has a total monopoly on the extreme ultraviolet lithography machines required to make the world’s most sophisticated chips. There are so few companies at each stage of the supply chain because each step requires solving some of the most challenging technical feats of the modern era. Miller quips that “No other facet of the economy is so dependent on so few firms … OPEC’s 33 percent market share of world oil production looks unimpressive by comparison.”
A semiconductor is a material – usually silicon, but also germanium – that can conduct electricity under some conditions but not others, making it ideal for controlling electrical signals. This property allows engineers to build tiny switches called transistors, which are the basic building blocks of all modern electronics. The semiconductor transistor – the key breakthrough – was invented in 1947 at Bell Labs by John Bardeen, William Shockley, and Walter Brattain. In 1955, Shockley established Shockley Semiconductor in Mountain View, California, so he could be close to his ailing mother in Palo Alto. And, thus, Silicon Valley was born. In 1957, eight of Shockley’s young engineers – the “traitorous eight,” among them Robert Noyce and Gordon Moore – defected to found Fairchild Semiconductor. NASA’s Apollo program launched Fairchild from a small startup to a thousand-employee company with over $20 million in revenue in just two years.
A few years later and a couple of thousand miles away, Jack Kilby made one of the most important breakthroughs in the history of technology: he invented the integrated circuit (IC) in 1958 while working at Texas Instruments (TI) in Dallas, a company originally focused on seismic technology that helped oilmen determine where to drill. Kilby realized that all the main components – transistors, resistors, and capacitors – could be made from the same piece of semiconductor material and connected directly on a single chip, which made it possible to miniaturize and mass-produce electronic circuits, drastically lowering costs and increasing reliability. At almost the same time, Jay Lathrop, who had developed and patented a process called photolithography – using light to print circuits on silicon – brought the technique to TI. In 1964, TI won the contract to supply integrated circuits for the Minuteman II missile guidance system. Within a year, TI’s shipments to the Air Force accounted for over sixty percent of all dollars spent on chips to date. Overall, 95 percent of all chips produced went to military and space applications.
By the mid-1960s, Fairchild and Texas Instruments had emerged as the dominant players in the fledgling semiconductor industry, and they faced the urgent challenge of finding mass-market applications for their products. At that pivotal moment, Gordon Moore made what Miller calls “the greatest technological prediction of the century” – Moore’s Law: the number of transistors on a microchip would roughly double every two years, driving exponential growth in computing power. Advances in technology, combined with the shift to low-cost labor in East Asia, caused transistor prices to plummet while performance soared. Wages in Asia were roughly a tenth of those in the United States, and production was twice as fast. As a result, the price of Fairchild chips dropped almost overnight from $20 to $2 (by 2020 the cost of a transistor would drop to a millionth of the 1958 price). By 1968, the nascent computer industry was buying half of all chips produced.
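The parenthetical price claim implies an equally relentless exponential. A rough sketch of the arithmetic (mine, not Miller’s, and taking the millionfold figure at face value):

    import math

    # How often must the price of a transistor halve to fall a millionfold
    # between 1958 and 2020? (illustrative arithmetic only)
    price_drop = 1_000_000
    years = 2020 - 1958                          # 62 years

    halvings = math.log2(price_drop)             # ~19.9 halvings
    years_per_halving = years / halvings         # ~3.1 years per halving

    print(f"{halvings:.1f} halvings in {years} years "
          f"= one halving every {years_per_halving:.1f} years")

A halving roughly every three years – slower than the two-year doubling of transistor counts, but the same exponential logic that made chips cheap enough to put into everything.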
The United States completely dominated the first two decades of the semiconductor industry. The Soviets, by contrast, adopted a simple “copy it” mentality that left them perpetually behind American technological innovation. Their planned city for technological research, Zelenograd, was doomed to operate “like a poorly run outpost of Silicon Valley,” as Miller writes. Japan, on the other hand, strategically integrated into America’s semiconductor ecosystem as part of the broader Cold War strategy. Companies like Sony and Sharp licensed chip designs from Fairchild and TI for 3.5 to 4.5 percent of sales and built commercial products around them. Japanese electronics exports surged from $600 million in 1965 to $60 billion less than two decades later.
Defense applications still mattered, however. At the tail end of the Vietnam War, TI developed a simple laser-guided munitions system that used just a few transistors to control the wings of a standard 750-pound M-117 bomb, transforming it into a precision strike weapon. Semiconductors would go on to play a foundational role in America’s “offset strategy” against the Soviet Union, devised by defense thinkers like William Perry and Andrew Marshall. Simply put, the United States could not hope to match the Warsaw Pact in sheer numbers of tanks and planes, so it focused on building qualitatively superior weapons. While the U.S. military had lost the war in Vietnam, American semiconductors helped win the peace. The dramatic results of this technological edge were put on global display during the lightning-fast, American-led campaign against Saddam Hussein’s Soviet-supplied army in 1991.
Meanwhile, Intel was founded in 1968 by Gordon Moore and Bob Noyce, who had grown frustrated by the lack of stock options at Fairchild. Within two years, they launched the product that would make the company’s name: the dynamic random-access memory (DRAM) chip. Unlike specialized chips, DRAM could be mass-produced, and Intel bet that large-scale manufacturing would yield substantial economies of scale. Moore recognized that cheap, ubiquitous computing power could have transformative effects on society. “We are really the revolutionaries in the world today,” he declared in 1973, “not the kids with the long hair and beards who were wrecking the schools a few years ago.”
Moore was right about the revolutionary potential of the mass-produced integrated circuit, but he and American firms like Intel and TI were unprepared for stiff competition from Japanese upstarts such as NEC and Toshiba, which quickly produced memory chips that were more reliable while offering the same capabilities at the same price. A decade after pioneering DRAM chips, Intel had been pushed out of the market. By 1985, Japanese firms accounted for nearly half of the world’s capital expenditures on semiconductors, compared with less than a third from the United States. Meanwhile, American dominance in the semiconductor lithography equipment market, led by firms like GCA, collapsed – from 85 percent market share in 1978 to just 20 percent a decade later. By the end of the Reagan administration, an industry that the United States had virtually invented and once dominated was in disarray, despite its growing global importance.
American semiconductor CEOs created the Semiconductor Industry Association (SIA) to lobby Washington for political and economic support. In the 1980s, Intel’s Bob Noyce spent nearly half his time in meetings in the capital. Japanese government subsidies to Japan’s semiconductor industry – affordable because the United States bore most of the cost of Japan’s national defense – were viewed as a national security concern almost as serious as oil. The SIA successfully lobbied to lower the capital gains tax from 49 to 28 percent and to allow pension funds to invest in Silicon Valley venture capital, with far-reaching consequences. Noyce also spearheaded the creation of Sematech, a consortium of the Defense Department and industry designed to foster collaboration among U.S. firms the way Japanese industry did. Miller describes Sematech as “a strange hybrid, neither a company nor a university, nor a research lab. No one knew exactly what it was supposed to do.” Half of its funding went toward defending the American position in lithography – a mission that ultimately failed to save national champion GCA, which closed in 1993.
In 1989 – just as Japan’s economic tidal wave was cresting before sinking into a thirty-year stagnation – Sony co-founder Akio Morita co-authored, with a right-wing Japanese nationalist, a brazen and triumphant manifesto titled The Japan That Can Say No: Why Japan Will Be First Among Equals. The book argued, among other things, that Japan no longer needed to bow to American demands because it controlled the manufacture of semiconductors. According to Miller, the book provoked outrage in the United States not so much for its tone or allegations, but for its underlying truth. America’s strategic effort to build an anti-Soviet supply chain had largely succeeded – except that the principal beneficiary was Japan. “America’s strategy to turn Japan into a transistor salesman,” Miller writes, “seemed to have gone horribly wrong.”
The American economy surged through the 1990s and 2000s, fueled by a renaissance in the semiconductor industry. Intel, under its hard-driving new CEO – Hungarian Jewish émigré Andy Grove – made the bold decision to disrupt itself, abandoning the DRAM market it had pioneered and conceding it to Japanese competitors in order to focus on microprocessors for personal computers. The Japanese, however, would have little time to savor their victory. New challengers – South Korea’s Samsung and a Boise-based startup called Micron – emerged as credible, low-cost DRAM producers that steadily eroded Japan’s hard-won share of the DRAM market from 90 percent in the late 1980s to just 20 percent by 1998. “Japan’s seeming dominance,” Miller writes, “had been built on an unsustainable foundation of government-backed overinvestment.” Japanese banks kept lending, Japanese chipmakers kept building new fabs, and in the process they completely missed the microprocessor-driven PC revolution that powered America’s resurgence.
A pivotal moment in the evolution of the global semiconductor industry came in 1987 with the founding of TSMC, the world’s first pure-play semiconductor foundry. Conceived as a strategic initiative by the Taiwanese government, TSMC focused exclusively on manufacturing chips designed by other companies, allowing it to capture massive economies of scale. Intel’s Gordon Moore reportedly told founder Morris Chang, “Morris, you’ve had a lot of good ideas in your time, but this isn’t one of them.” Yet, as Miller observes, it was a “Gutenberg moment” – an innovation that revolutionized both the semiconductor industry and the world. By giving chip designers a dependable manufacturing partner, TSMC dramatically lowered startup costs and sparked the rise of dozens of inventive “fabless” design firms – the “authors” in Miller’s printing-press analogy. Singapore tried to replicate the model with Chartered Semiconductor in 1987, China did the same with SMIC (Semiconductor Manufacturing International Corporation) in 2000, and Samsung opened its first foundry in 2005. Those attempts at emulation met with mixed success, but American market share in chip production nonetheless declined from 37 percent in 1990 to 19 percent in 2000 and 13 percent in 2010, before leveling off at 12 percent in 2020.
The same year that Morris Chang founded TSMC, across the Taiwan Strait Ren Zhengfei founded Huawei, a Chinese telecommunications and technology company with close ties to the Chinese state. Its rise dovetailed with the “Four Modernizations” proclaimed by Deng Xiaoping, which in part sought to free China from dependence on foreign semiconductors and chipmaking. The company became highly controversial because of allegations that its equipment could enable Chinese government espionage, leading to restrictions and bans by several Western countries.
Meanwhile, the enormous cost and technical complexity of developing extreme ultraviolet (EUV) lithography – technology pioneered in America’s national labs and funded largely by Intel – drove many competitors out of business. Its development took over a decade and cost tens of billions of dollars; the lasers alone required almost half a million component parts each. Miller calls it “one of the biggest technological gambles of our time.” American firms like GCA and SVG (Silicon Valley Group, a descendant of lithography pioneer Perkin-Elmer) fell behind the Japanese stalwarts Nikon and Canon, which themselves ultimately could not keep pace with the Dutch firm ASML. ASML itself produces only about 15 percent of the components in its $350 million EUV machines, which Miller calls the most expensive mass-produced machine tool in history. Partnering closely with TSMC, ASML became the sole global supplier of EUV machines and an indispensable player in advanced chip production. On the surface, this appeared to mark a triumph of globalization, but as Miller notes, it was in fact a triumph of monopolization – and one dominated by non-American firms.
Just as the balance of power in chip manufacturing and expertise moved offshore, America’s national champion, Intel, stumbled badly. Addicted to the enormous revenue streams and healthy profit margins from its PC and server markets – which Miller calls Intel’s “castles,” protected by the “moat” of its proprietary x86 architecture – the company found the status quo simply too profitable to disrupt. Between 1990 and 2020, Intel harvested a quarter trillion dollars in profit. It spent over $10 billion a year on R&D throughout the 2010s, four times more than TSMC and three times more than the entire DARPA budget, but its increasingly MBA-minded leadership focused almost exclusively on maintaining margins, making it nearly impossible to pursue risky innovation, including its nascent foundry venture. It was the quintessential “innovator’s dilemma.” As Miller writes, “[Intel’s] leaders were simply more focused on engineering the company’s balance sheet than its transistors.” Consequently, Intel completely missed the mobile computing revolution, which embraced a more energy-efficient architecture – vital for battery life – known as RISC (Reduced Instruction Set Computing). The most successful commercial implementation of RISC would come from Arm (originally “Acorn RISC Machine”), whose processor designs became the foundation of nearly every smartphone in the world.
The United States has retained a dominant position across several key segments of the semiconductor industry well into the twenty-first century. American firms lead the world in semiconductor manufacturing equipment, with Applied Materials and Lam Research providing advanced tools for etching circuits into silicon wafers, and KLA producing precision machines to detect nanometer-scale defects on wafers and lithography masks. Another trio of American companies – Cadence, Synopsys, and Mentor – controls roughly three-quarters of the global market for Electronic Design Automation (EDA) software, the essential tools used to design chips. Meanwhile, Nvidia, AMD, and a still-breathing Intel continue to dominate high-performance chip design. Qualcomm, for its part, remains a global leader in the mobile and wireless chipsets that power most smartphones (Miller writes: “The company’s patents are so fundamental it’s impossible to make a cell phone without them.”), while Texas Instruments thrives as one of the world’s largest producers of analog and embedded processing chips used in everything from automobiles to industrial machinery. But most of these firms don’t fabricate their own chips – and they might have gone out of business if they had to.
American firms continue to struggle to manufacture cutting-edge chips. In 2009, GlobalFoundries (GF) was spun out of AMD and quickly became a major global foundry following its 2010 acquisition of Singapore’s Chartered Semiconductor, the country’s former national champion. However, in 2018, GF halted development of 7nm and smaller process nodes, effectively exiting the “bleeding-edge” semiconductor race dominated by TSMC and Samsung. Instead, it shifted focus to profitable, high-volume specialty manufacturing, producing mature-node chips for automotive, communications, and secure U.S. government applications – a pragmatic retreat that underscored America’s waning dominance in advanced fabrication.
An important distinction is that the smartphone supply chain looks very different from that of PCs. Smartphones contain a dense array of specialized chips: while the main processor may be designed by Apple itself, other vendors supply components for radio frequency, Wi-Fi, Bluetooth, camera image sensors, memory, motion sensing, and battery management, among others. These components are often designed in California and assembled into finished phones in China, but crucially, they are manufactured in Taiwan – the only place capable of producing many of these advanced parts at scale and with the required precision.
Miller devotes an entire section of the book to the rise and challenge posed by China. In short, he writes, the Chinese remain “staggeringly reliant on foreign products” when it comes to semiconductors. Less than ten percent of the world’s chips are manufactured in China – and none of them are high-value, leading-edge technology. The Chinese Communist Party is determined to change that. In 2017, Xi Jinping issued a rallying cry to the nation’s technology leaders in Beijing: “We must promote strong alliances and attack strategic passes in a coordinated manner… We must assault the fortifications of core technology research and development… we must concentrate the most powerful forces to act together, compose shock brigades and special forces to storm the passes.”
The centerpiece of this effort was Made in China 2025, a sweeping national plan launched in 2015 to transform the country from the “world’s factory” for low-cost goods into a global leader in advanced manufacturing. One of its primary goals was to reduce dependence on foreign chips – a dependence that cost China more than $260 billion in imports in 2017 alone, exceeding Saudi Arabia’s annual oil exports. The government poured tens of billions of dollars into domestic semiconductor initiatives, from state-backed national champions like Huawei – which Miller says poses more of a strategic challenge than a commercial one – and ambitious private startups, to covert investment funds like Tsinghua Unigroup and Canyon Bridge, which have invested billions in ways Miller says are “impossible to comprehend from the perspective of business logic” but are rather “a government-led effort to seize foreign chip firms.” The strategic goal, the author claims, is a Chinese “offset strategy” of its own to beat the United States on the future battlefield. Yet progress has fallen far short of expectations: the goal of reducing imported chips from 85 percent to 30 percent of domestic consumption by 2025 remains well beyond reach, underscoring the immense difficulty of catching up in one of the world’s most complex and capital-intensive industries.
Chinese behavior has undermined the two core pillars of post–Cold War American economic and foreign policy: faith in globalization and the belief that the United States could “win the race by running faster” – that is, out-innovate rather than contain its competitors. “American tech policy,” Miller writes, “was held hostage to the banalities about globalization that were easily seen to be false.” The Trump era marked a sharp break from this orthodoxy, as Washington began focusing on semiconductors with an intensity not seen since the late 1980s. Miller is notably critical of the Obama administration’s passivity and unexpectedly sympathetic to the Trump administration’s more combative, “red in tooth and claw” approach to Chinese competition.
Meanwhile, the world’s dependence on Taiwan has grown ever more perilous: as of 2022, the island produced only 11 percent of global memory chips and 37 percent of global logic chips, yet a staggering 99 percent of the GPUs essential for artificial intelligence. Against this backdrop, Miller concludes that “The United States is in desperate need of a new Andy Grove” – a leader capable of marrying strategic foresight with industrial strength to navigate a global landscape dominated by a handful of irreplaceable companies.
