明摆着 means plainly visible. 暗摆着, its opposite: placed in the dark. After Dark still starts with data, and uses analytical frameworks that assume rational actors to make inferences about what isn't visible: the possible dark matter hidden from plain sight.
The starting gun nobody heard
OpenAI kicked off America's AI race in late 2022. The GPU frenzy, the Nvidia stock run, the permeation of the zeitgeist. Two years later DeepSeek dropped, and the Western narrative absorbed it neatly: China has joined the race.
The chronology is backward. China has been running full-throttle since the spring of 2016, 10 years ago. The starting gun wasn't fired in San Francisco. It came from London.
By the time ChatGPT was launching to breathless commentary and a Mag7 capex arms race, China had already implemented and iterated a national AI program across two Five-Year Plans. It was drafted swiftly, in response to a single figure almost no one talks about in connection to the Chinese AI Cambrian explosion, even though he's the one who lit the flame.
If you think Chinese AI started with DeepSeek, you cannot understand Chinese AI.
The day Demis shocked China
In March 2016, Demis Hassabis and the DeepMind team defeated Lee Sedol at Go, a game thousands of years old and a point of civilizational pride in China. An estimated 280 million Chinese watched those matches. The single biggest Baidu search spike for 人工智能 (artificial intelligence) landed during the Lee Sedol matches.
The American response to DeepMind was mostly polite applause and a Netflix documentary. The warmest compliments came from one of the creators of Deep Blue, the machine that famously beat Kasparov at chess. But that was the emphasis: AlphaGo was seen as having solved board games. The following year, President Trump proposed cutting National Science Foundation funding for AI.
In China, they had experienced their equivalent of the Sputnik moment. Of course AI research was already underway well before AlphaGo; but it was not a national priority as such, and was seen more as a niche computer-science project than as civilizational infrastructure.
That all changed when Lee Sedol conceded match 3. Less than a month after AlphaGo's triumph, the People's Liberation Army (PLA) convened a top-level workshop titled A Summary of the Workshop on the Game between AlphaGo and Lee Sedol and the Intelligentization of Military Command and Decision-Making. Yes, it was serious.
On July 20, 2017, the State Council issued the 新一代人工智能发展规划, the New Generation Artificial Intelligence Development Plan: a staged 2020 → 2025 → 2030 blueprint covering basic research, industrial applications, military-civil fusion, talent pipelines, and ethical norms.
That's national strategy. Top to bottom. Five years before ChatGPT.
One country treated AlphaGo as a tech demo that beat the hardest board game ever; the other as if it had seen fire for the first time.
An archetype the Chinese admire
What landed in Beijing wasn't just a board-game victory. It was a demonstration of a specific kind of intelligence — one that taught itself by self-play, learned a value function over positions, and produced moves no human had ever seriously considered. Small data, big task. A crow, not a parrot. Hassabis didn't build a machine that mimicked. He built one that seemed to reason.
Then, as if to help the Chinese research community attain more funding over time, he kept going:
- AlphaFold (2020) collapsed a fifty-year problem in structural biology and became public scientific infrastructure within two years.
- GraphCast outperformed every deterministic numerical weather model on Earth.
- GNoME (2023) identified 2.2 million stable inorganic crystal structures in a single paper — an order of magnitude more than humanity had catalogued in all of prior science.
- The Nobel Prize in Chemistry, 2024, for the AlphaFold work.
This was not AI as a chatbot. This was a scientist delivering civilizational infrastructure on a first-principles basis, paper after paper.
The persona registered, too. Hassabis is quiet, scholarly, technical, mission-driven, contemptuous of hype. That is the cultural register of the Chinese 学者, the scholar-scientist whose authority comes from work rather than performance. Demis Hassabis pattern-matches to someone the system has been built to admire for two thousand years.
So to understand Chinese AI, you have to understand that their intellectual inspiration runs through Hassabis, not Altman.
In 2016, the Chinese state-scientific establishment looked at AlphaGo and recognised something the West dismissed: an existence proof of the kind of AI they had always thought AI should be. In response, the state apparatus pivoted, quietly realigning capital, institutional mandates, and engineering talent to domesticate the underlying architecture.
Two Engines, One Brain
This story has already happened in modern Chinese history.
In 1955, after a five-year fight with the FBI, Qian Xuesen flew home to China carrying American rocketry and a decade at JPL in his head. The state built an institution around him. What followed was 两弹一星 Two Bombs, One Satellite — an impossibly ambitious program for an impoverished country with a destroyed industrial base. It was widely considered impossible after the Soviets withdrew their support. And for good reason — China at that time was far poorer than most of sub-Saharan Africa and much of the rocket program was done in the rural west in places that started without running water or electricity.
But then it produced exactly what it set out to: the atom bomb in 1964, hydrogen bomb in 1967, satellite in 1970. One phrase, one program, one national push that turned the People's Republic into a strategic power despite simultaneous US and Soviet pressure.
It is taught proudly in Chinese schools to this day. It is the canonical Chinese name for generational state-plus-science executed against a great-power deadline. Imported catalyst, indigenous system, civilisational soul.
Zhu Songchun, China's patriarch of AGI research and director of BIGAI, has formally proposed to China's top political advisory body that AI research be elevated to the same level as Two Bombs One Satellite. He is very influential.
Which leaves a question. If AGI is the successor to Two Bombs, One Satellite, what is its institutional vehicle? My analytical conjecture: China doesn't need to create one. It already has a home.
两机 Two Engines is an existing top-level state program. It covers aero engines and heavy-duty gas turbines, written into the 13th and 14th Five-Year Plans, executed through the consolidation of forty-six fragmented engine units into the Aero Engine Corporation of China (AECC) in 2016. It's the closest thing China has to a strategic-tier, state-mobilised, generationally compounding scientific program.
What is Two Engines, and why does it matter?
The Two Engines convert chemical and nuclear energy into electricity, into thrust, into industrial work — through ultra-high temperature turbines. Aero engines and gas turbines.
For seventy years, 心脏病, literally heart disease, has been the standard Chinese figurative term for the country's inability to build a world-class jet engine. Chronic. The condition you live with, not the emergency you survive. China could build a bullet train, a stealth airframe, a third-generation nuclear reactor, a lunar sample-return mission — and then hang a Russian AL-31 off the back of its most advanced fighter because the domestic hot section couldn't survive a combat sortie. A country with chronic heart disease cannot run, no matter how strong the legs.
The reason it's hard isn't exotic engineering. It's metallurgy. A modern turbine blade operates at gas-path temperatures above the melting point of nickel itself. It survives because of single-crystal casting, internal cooling channels, ceramic thermal barrier coatings, and exotic alloying elements that slow atomic diffusion at temperature. Walking from a working second-generation alloy to a working third-generation alloy took the West fifteen years. Consensus expert opinion in 2020 was that China was thirty-plus years behind on the hot section. Most Western analysis still thinks that today.
My analysis leads me to append cognition to the program name for completeness: 两机一智 Two Engines, One Brain. The brain isn't a chatbot. It's the AI that searches for materials, optimises industrial cycles, runs the closed loops, and ships the next-generation turbine hot section the body can't ship without. Two Engines in plain sight, One Brain in the dark — and, I believe, executed together, written into the same plan.
The DeepSeek moment, revisited
On January 20, 2025, DeepSeek released R1. Western coverage absorbed it as a cost story: China caught up to OpenAI for $5 million. Scary. Recalibrate. The real story isn't about cost. It's about method.
OpenAI's o1 had launched four months earlier — the first publicly released LLM trained with large-scale reinforcement learning on reasoning. But o1 was closed: the blog post described outcomes, the method was a trade secret, the system card was unreproducible. R1 was the same kind of model, but the entire training pipeline was disclosed in a peer-reviewable paper. Algorithm pseudocode, reward functions, training dynamics, failure modes: all of it went directly into the public domain.
DeepSeek was first to publish, and in doing so, launched the post-training era. Within sixty days, every frontier lab on Earth had reproduced GRPO or a close variant. Within six months it was the field standard.
The cost story missed the point because cost wasn't the point. The point was the method. Train a model not to mimic correct text, but to be rewarded for being right. Replace the supervised teacher with a value function over outcomes. No human in the loop, except as the writer of the reward.
That isn't an LLM technique. That's the AlphaGo technique, applied to language. Reinforcement learning over a search space, with a verifier replacing the human grader, producing reasoning traces no human had taught the model to write.
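The core of the method can be sketched in a few lines: sample a group of completions for the same prompt, score each with a programmatic verifier, and normalize rewards within the group to get advantages (the "group-relative" part of GRPO). Everything below (the verifier, the toy stand-in for a policy, the numbers) is invented for illustration, not DeepSeek's actual pipeline:

```python
import random
import statistics

def verifier(question, answer):
    # Outcome reward: 1.0 if the final answer is exactly right, else 0.0.
    # No human grader in the loop, only the writer of this function.
    return 1.0 if answer == question["target"] else 0.0

def grpo_advantages(rewards):
    # GRPO's group-relative advantage: normalize each sample's reward
    # against the other samples drawn for the same prompt.
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against all-equal groups
    return [(r - mu) / sigma for r in rewards]

# Toy stand-in for a policy: guesses answers near the target.
random.seed(0)
question = {"prompt": "2+2", "target": 4}
samples = [random.choice([3, 4, 5]) for _ in range(8)]

rewards = [verifier(question, a) for a in samples]
advs = grpo_advantages(rewards)

# Correct answers receive positive advantage (reinforced); wrong ones negative.
for answer, adv in zip(samples, advs):
    print(answer, round(adv, 2))
```

In the full algorithm these advantages weight a clipped policy-gradient update on the model's token log-probabilities; the group normalization is what removes the need for a separately learned value network.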
After ChatGPT launched, Google panicked. Their first response was Bard, which flubbed its first public demo in February 2023 and wiped $100 billion off Alphabet's market cap in a single day. So the panic worsened. Two months later DeepMind was merged with Google Brain, and Hassabis became CEO of the combined entity. Gemini became his day job. The teacher had been derailed by his own employer. The AI community in China had no Sundar Pichai.
The LLM arena was a parrot fight. The Chinese arrived late and changed the game by adding the crow back in.
The Dark Matter of Chinese AI is materials
Why materials? Because that's where the AlphaFold-and-GNoME template fits cleanest, and because materials is where every twenty-first-century industrial bottleneck actually lives. The gas turbine hot section is a single-crystal-superalloy plus thermal-barrier-coating plus rhenium-hafnium-tantalum problem. Supercritical CO₂ power cycles are an alloy-corrosion problem. Fusion is a plasma-facing-materials problem. Battery density is a chemistry-search problem. Post-silicon compute is a 2D-materials problem. Every one of these is a high-dimensional combinatorial search over composition and processing — exactly the shape AlphaFold solved for proteins and GNoME extended to crystals. The lack of progress on these axes is exactly the cause of China's chronic heart disease.
AI4Mat (AI for advanced materials) by itself isn't enough. A model that proposes a million candidate alloys is useless if you can't make any of them and test any of them. This is where the West has, for thirty years, slowly walked from second-generation to third-generation superalloys — because most of that time wasn't science, it was the institutional friction between lab and foundry, foundry and OEM, OEM and certification.
China built something different. The same AECC consolidation that backstops Two Engines plus the existing pilot-scale infrastructure at Wuxi, Baowu, Anshan, and the AECC foundry network — the whole stack is purpose-engineered to compress the lab-to-qualified-part handoff. CAS Shenzhen's 19-agent autonomous-discovery system doesn't just generate candidates. It routes them, with pre-negotiated qualification protocols, into pilot-scale runs that would take years to set up in any Western jurisdiction.
Wuxi Jinye doesn't wait for peer review. It waits for simulation convergence, then pours metal.
This is the half even DeepMind structurally cannot buy. The Hassabis template plus a continental-scale validation surface, under one coordination umbrella. China is the only country currently holding both halves.
And the payoff has started to arrive.
Heart disease, on the mend
Five years of preparation. Then, beginning in 2020, a phase transition.
The CJ-2000 core engine achieves first ignition, reaching 100.6% of design speed. The C929's future power plant clears its first wall.
Per open-source defense reporting, AECC enters serial production of the WS-15 — a clean-sheet fifth-generation afterburning turbofan, 180 kN thrust class, designed around the DD6 single-crystal superalloy. The WS-15 rivals the F135 in the F-35 and exceeds the F119 in the F-22 on thermodynamic efficiency. Four months later, a J-20 prototype flies on twin WS-15s.
China's first fully domestic 300 MW F-class heavy-duty gas turbine ignites in Shanghai — eight years of development, 50,000 components, the civilian analogue of the WS-15 program, sharing the same hot-section metallurgy. Two months later, on December 26, two sixth-generation fighter prototypes — designated J-36 and J-50 in open-source defense reporting — fly on consecutive days from Chengdu and Shenyang.
The CJ-2000 completes its full-state high-altitude test bench campaign. Peak thrust 35.2 metric tons — the Trent 1000 / GEnx class. Cumulative operation over 3,000 hours. 12:1 bypass ratio, 3-metre fan, turbine inlet temperature 1700°C, 15% claimed fuel efficiency improvement over the GEnx. Single-crystal blade scrap rate has fallen from 30% to 7.8%.
The J-20A enters frontline PLAAF service with the WS-15 as the standard production engine. China ceases to have heart disease in its fifth-generation fighter for the first time in the program's history.
Per Chinese trade press, AECC reportedly completed ground and high-altitude bench tests of an adaptive-cycle engine — three-stream bypass architecture, mode-switching from cruise to combat in under 0.5 seconds, Mach-4-equivalent simulated conditions. The Chinese counterpart to GE's XA102 and Pratt's XA103. The next-generation engine — the one that will power what comes after J-36.
A paper lands in npj Computational Materials from Central South University: CSU-S1, a low-cost nickel-based single-crystal superalloy designed end-to-end by an NLP-and-ML pipeline that screened 340,000 virtual compositions. γ' solvus near 1300°C, 224.7-hour creep life at 1100°C / 137 MPa. Performance comparable to a Western third-generation reference alloy. Lower cost. The first peer-reviewed confirmation that AI4Mat is producing turbine-grade alloys in China, not just screening candidates.
Four engine classes — fifth-generation fighter, wide-body commercial, heavy-duty industrial, sixth-generation adaptive — moving from far behind to possible parity in five years, when consensus said it would take three decades. The 3,000-hour CJ-2000 cumulative test figure is the signal that matters most: hot-section creep fatigue was the historical Chinese failure mode, the wall every prior program died at. 3,000 hours of bench operation means the wall has been cleared — at least at the test stand.
To be clear, no one is claiming these specific engines were designed by AI. WS-15 development has been underway since the 1990s, and brute-force engineering could explain the breakthroughs. The papers are careful. What we know is that the tooling is in hand, the validation surface is hot, and the cycle time is collapsing. The next engine generation — the mid-2030s one — is almost certainly already running through AI4Mat pipelines at every Chinese lab that matters.
If it works, what breaks
If half of this thesis is what it looks like, the following things break before 2035:
- The single-crystal superalloy moat collapses. GE and Rolls-Royce lose their last structural advantage in commercial wide-body propulsion.
- Supercritical CO₂ cycles go from product category to platform, wrapping every thermal source on Earth in compact efficient conversion.
- Fusion compresses on the materials side.
- Battery chemistry iteration cements into a permanent lead.
- Hualong, Linglong, and HTR-PM quietly become best-in-class via metallurgy.
Every one of these arrives in the Western press as an engineering announcement; nobody covers it as an "AI story" because it's never announced that way. The AI layer that produced them stays invisible. Which is precisely how the Hassabis template was always going to work.
Close
The LLM race is loud. It is public. It is enormous business. That's the visible matter. But just like in our universe, there's a lot more dark matter than what we can see.
The heavier bet is the crow Hassabis showed them how to build — the one his own company had to set partly aside. It was placed in 2016, alongside AECC and the Two Engines plan, inside the same Five-Year Plan. It has been running for ten years. DeepSeek R1 was a visible flame. The wildfire is still burning where the Western floodlights don't reach.
The payoff will not arrive as a chatbot. It will arrive (perhaps has started arriving) as an alloy, a blade, a reactor vessel, a battery chemistry — delivered through a COMAC aircraft, AECC turbine, a CNNC reactor, or a CATL pack. And when it arrives, nobody is going to call it AI. They'll call it an engine. They'll call it a reactor. They'll call it a car.
Three signals that will tell you, over the next two years, whether this thesis is right or wrong. Each one is independently observable, technically specific, and mostly invisible unless you know what you're looking at.
The rhenium ratchet
Rhenium is one of the rarest stable elements that gets produced in industrial quantities at all. Annual global production is roughly 50–60 tonnes. For comparison: gold, ~3,000 tonnes; copper, ~22 million tonnes. Rhenium is only produced as a byproduct of molybdenum smelting, which is itself a byproduct of copper mining. You cannot decide to mine more rhenium. The supply curve is almost completely inelastic on a five-year horizon. Top producers: Chile, the United States, Poland, Kazakhstan. China produces almost none.
About 80% of all rhenium produced ends up in jet engines and rocket nozzles. Why? Because modern turbine blades operate at gas-path temperatures above the melting point of nickel. They survive because of single-crystal casting, internal cooling channels, ceramic thermal barrier coatings, and exotic alloying elements that slow atomic diffusion at temperature. Without those elements the blade creeps — atoms migrate along the metal's internal interfaces under the load of an engine spinning at 10,000 RPM, and the blade slowly elongates until it contacts the casing or fractures. Of every element on the periodic table, rhenium has the slowest diffusion rate in nickel. Add 3% by weight and you have roughly tripled the creep life of the blade at constant temperature, or you can run the engine 30°C hotter at constant life. That 30°C is the difference between an engine generation.
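The temperature-for-life exchange rate can be made concrete with the Larson-Miller parameter, the standard correlation engineers use to trade creep-rupture time against metal temperature. The baseline life, temperature, and constant below are illustrative round numbers, not data for any real alloy:

```python
import math

def larson_miller(T_kelvin, t_hours, C=20.0):
    # Larson-Miller parameter: temperature/time combinations with the
    # same LMP produce roughly the same creep damage.
    # C ~= 20 is the conventional constant for many alloys.
    return T_kelvin * (C + math.log10(t_hours))

def life_at_temperature(lmp, T_kelvin, C=20.0):
    # Invert the correlation to get rupture life (hours) at a new temperature.
    return 10 ** (lmp / T_kelvin - C)

# Illustrative baseline: a blade alloy good for 1,000 h at 1,100 C metal temperature.
T1 = 1100 + 273.15
lmp = larson_miller(T1, 1000.0)

# Run the same alloy 30 C hotter and the correlation says rupture life
# collapses by roughly a factor of three -- which is why an alloying
# addition that buys back that 30 C is worth an engine generation.
t_hotter = life_at_temperature(lmp, T1 + 30.0)
print(round(t_hotter))
```

This is the same exchange rate, run in reverse, as the claim above: 3% rhenium roughly triples creep life at constant temperature, or buys about 30 °C at constant life.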
Western superalloys are literally categorised by rhenium content. First generation: 0% Re (1980s, CMSX-2). Second generation: ~3% Re (1990s, CMSX-4, René N5, China's DD6). Third generation: ~6% Re (2000s, CMSX-10, René N6, China's DD9). Each generation took roughly fifteen years of trial-and-error metallurgy to walk to.
Every Chinese superalloy roadmap since 2018 has had "reduce rhenium content while preserving creep life" as an explicit national priority — because rhenium is China's biggest superalloy import vulnerability and the LEAP-1C cutoff in 2025 was the warning shot. Machine learning is the right tool for this specifically. A modern superalloy has eight to twelve alloying elements. The search space — composition, processing, microstructure, multi-property targets — is high-dimensional and non-linear. Brute-forcing it is hopeless. ML compresses the walk. The CSU-S1 paper (December 2025) screened 340,000 virtual compositions and identified a turbine-grade alloy with creep performance comparable to a Western reference, at lower cost. Lower cost almost certainly means lower rhenium loading, because rhenium is by far the most expensive ingredient by mass — roughly $4,000/kg versus nickel's $15/kg.
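A toy version of that screen shows the shape of the search: minimize cost subject to a creep constraint over a multi-element composition space. The composition ranges, the prices, and especially the three-term creep proxy below are invented stand-ins for illustration; the real pipeline uses trained property models, not a formula:

```python
import random

# Hypothetical ingredient prices in $/kg; rhenium dwarfs everything else.
COST = {"Ni": 15, "Al": 2, "Cr": 10, "Co": 35, "Ta": 300, "W": 40, "Re": 4000}

def random_composition(rng):
    # Draw a superalloy-shaped composition in wt%, nickel as balance.
    c = {"Al": rng.uniform(5, 7), "Cr": rng.uniform(2, 8),
         "Co": rng.uniform(3, 12), "Ta": rng.uniform(4, 9),
         "W": rng.uniform(4, 8), "Re": rng.uniform(0, 6)}
    c["Ni"] = 100.0 - sum(c.values())
    return c

def surrogate_creep(c):
    # Invented proxy: slow-diffusing refractory elements improve creep
    # resistance, with Re weighted heaviest (slowest diffusion in Ni).
    return 3.0 * c["Re"] + 1.0 * c["W"] + 0.8 * c["Ta"]

def cost_per_kg(c):
    return sum(COST[e] * c[e] / 100.0 for e in COST)

rng = random.Random(42)
pool = [random_composition(rng) for _ in range(100_000)]

# Screen: keep candidates that clear a creep threshold, then take the
# cheapest -- which in practice means the lowest viable rhenium loading.
viable = [c for c in pool if surrogate_creep(c) >= 20.0]
best = min(viable, key=cost_per_kg)
print(round(best["Re"], 2), round(cost_per_kg(best), 1))
```

Even this crude screen discovers that the cheap route to the creep target is maxing out tungsten and tantalum while paring rhenium to the minimum the constraint allows: the same logic, at toy scale, as a reduce-rhenium national roadmap.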
The pilot cycle speed
Inventing an alloy in a lab is the easy part. Qualifying it for production turbine blades in a flight engine is, in the West, a fifteen-year project. The reason is not chemistry. The reason is institutional friction — the handoffs between three stages of materials maturation are bespoke and unfunded by default.
Stage one is lab scale: a coupon, a button, a rod. Grams of material, ideal conditions, single-property tests. Stage two is 中试 pilot scale. Kilograms to tonnes, real foundry equipment, realistic process variation, multi-property qualification campaigns. Stage three is production: industrial volumes, certified suppliers, full creep-rupture and fatigue test campaigns, regulatory sign-off. Each stage takes years. The bottleneck is the handoff: lab people don't talk to foundry people, foundry people don't talk to OEMs, OEMs don't talk to certification authorities until late. Each interface gets rebuilt from scratch on every new alloy generation. There is no standing institutional pipeline.
The Western model has been to spin up project-specific consortia — GE plus Pratt plus Rolls plus a national lab plus a university — for each new alloy. That's why third-generation alloys took thirty years to walk from lab demonstration to qualified flight hardware. Most of that time was not science. It was institutional friction.
What China built, post-2016, is purpose-engineered to compress the handoff. The 揭榜挂帅 (open competition for state-funded projects) framework, plus AI4S (AI for Science, MOST's program), plus the AECC consolidation, plus the existing 中试 base infrastructure at Wuxi, Baowu, Anshan, and the AECC foundry network — that whole stack is a standing pipeline. The CAS Shenzhen 19-agent autonomous-discovery system doesn't just generate alloy candidates. It routes them through pre-negotiated qualification protocols into pilot-scale runs that would take years to set up in any Western jurisdiction.
The adaptive-cycle clock
Conventional jet engines have one bypass ratio — the share of intake air that flows around the core rather than through it. Low bypass means high thrust at high speed: good for fighters, terrible for fuel economy. High bypass means high efficiency at cruise: good for airliners, bad for combat acceleration. You cannot have both at once. Adaptive-cycle engines (ACE) introduce a third stream of bypass air with valves that let the engine change its bypass ratio in flight. Cruise economically at high bypass; fight at low bypass; switch in milliseconds. ACE is the propulsion technology that defines sixth-generation aircraft. It is the single biggest engine architectural shift since the high-bypass turbofan in the 1960s.
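The underlying trade follows from the classic Froude propulsive-efficiency relation: for a fixed core power, spreading that power over more air (higher bypass) lowers the jet velocity, which raises cruise efficiency but caps the flight speed the exhaust can push you past. The core power and mass flows below are illustrative round numbers, not any real engine's figures:

```python
def propulsive_efficiency(v_flight, v_jet):
    # Froude efficiency: best when jet velocity barely exceeds flight velocity.
    return 2.0 / (1.0 + v_jet / v_flight)

def jet_velocity(core_power_w, mdot_kg_s, v_flight):
    # Energy balance: core power accelerates the total (core + bypass)
    # airstream, P = 0.5 * mdot * (v_jet**2 - v_flight**2).
    return (v_flight**2 + 2.0 * core_power_w / mdot_kg_s) ** 0.5

V0 = 250.0   # subsonic cruise speed, m/s
P = 30e6     # illustrative core power, W

for mdot in (100.0, 600.0):  # low-bypass vs high-bypass total mass flow, kg/s
    vj = jet_velocity(P, mdot, V0)
    thrust = mdot * (vj - V0)  # momentum thrust
    print(f"mdot={mdot:.0f} kg/s  thrust={thrust / 1000:.0f} kN  "
          f"eta_p={propulsive_efficiency(V0, vj):.2f}")
```

At supersonic flight speeds the high-bypass stream's roughly 400 m/s jet would be slower than the aircraft itself, producing drag instead of thrust; an adaptive-cycle engine is a machine that moves mass flow between these two regimes in flight.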
The Western position is mature on paper, slow in execution. GE's XA102 and Pratt's XA103 are the two American programs. The earlier XA100, developed under the Adaptive Engine Transition Program for the F-35, was cancelled to save money. The current programs are funded but not flying. The F-47, America's sixth-generation fighter, is scheduled for first flight in 2028.
The Chinese position, as of late 2025: AECC reportedly completed ground and high-altitude bench testing of a triple-stream adaptive-cycle prototype in November 2025, with mode-switching demonstrated in under 0.5 seconds at Mach-4-equivalent simulated conditions. The J-36 — China's principal sixth-generation fighter prototype, three-engine tailless flying wing, attributed to Chengdu Aircraft Corporation — has been flying since December 26, 2024. As of early 2026, three prototypes have flown. Current J-36 prototypes are likely on WS-15 derivatives, not the new ACE. But the engine pipeline is moving in parallel with the airframe, and the gap between bench-test and flight-test for Chinese engines has been collapsing — five years for the WS-15, almost certainly less for what comes next.