The Team That Built Open-Source AI's Best Models Just Quit
At 12:11 AM Beijing time on March 4th, Junyang Lin posted a terse farewell to X: "me stepping down." Then: "bye my beloved qwen."
Lin wasn't just a researcher. He was the architect of Alibaba's Qwen model family - one of Alibaba's youngest-ever P10 employees and the person most responsible for making Alibaba's AI lab the world's most prolific open-source model shop. Within hours, most of his core team posted their own exits.
By 1:00 PM Beijing time, Alibaba CEO Wu Yongming had called an emergency all-hands. The meeting didn't produce any resolution on Lin's fate. But it signaled that Alibaba understood the magnitude of what was unraveling.
WHAT TRIGGERED IT
The proximate cause, according to multiple sources cited by Chinese tech publication 36Kr, was a corporate reorganization. A new researcher hired from Google's Gemini team was installed above Lin as head of Qwen. Lin, who built the thing from scratch, wasn't told he'd be reporting to someone else until it happened.
This is a pattern with a name in Silicon Valley: acqui-hire politics. You bring in a high-profile external name, restructure around them, and the original team - the people who know where the bodies are buried - walks. It happens at large companies trying to signal prestige. It rarely ends well.
Confirmed Departures
- Junyang Lin - Lead researcher, Qwen technical lead, one of Alibaba's youngest P10 employees
- Binyuan Hui - Led Qwen code development; principal developer of the Qwen-Coder series and of agent training, from pre-training through post-training
- Bowen Yu - Led post-training research, built the Qwen-Instruct series models
- Kaixin Li - Core contributor to the Qwen 3.5, VL (vision-language), and Coder tracks
- Multiple junior researchers - Names unconfirmed, departed the same day
WHY THIS MATTERS BEYOND ALIBABA
The Qwen team didn't just build competitive models. They built the model family that forced every Western lab to take open-source seriously again.
The scale of Qwen 3.5 - released in waves over the past three weeks - is almost absurd. It started with a 397-billion-parameter mixture-of-experts model on February 17th. Then came siblings at 122B, 35B, 27B, 9B, 4B, and 2B parameters. The 2B model weighs in at 4.57 gigabytes - a full multimodal reasoning model you can run on a phone.
The 27B and 35B variants are getting serious praise from developers running local coding agents. Not "good for its size" praise - just good, full stop. Competitive with models that cost real money to run in the cloud.
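For readers who want to poke at the small variants themselves, here is a minimal local-inference sketch using the standard Hugging Face transformers chat flow. The repo id "Qwen/Qwen3.5-2B-Instruct" is a hypothetical placeholder - substitute whatever identifier the team actually publishes; the rest is the library's ordinary loading and generation path.

```python
# Minimal local-inference sketch (hypothetical repo id; requires
# transformers + torch, plus accelerate for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-2B-Instruct"  # placeholder - not a confirmed name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # place layers on GPU/CPU as available
)

messages = [{"role": "user", "content": "Reverse a linked list in Python."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```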
Multiple Qwen team members told 36Kr that the team built its models with "far fewer resources than competitors." That phrase is worth sitting with. Qwen built the world's most capable open-source model family on a fraction of the compute budget that OpenAI, Google, or Anthropic deploy. Lin and his team were punching so far above their weight that Alibaba apparently decided they needed "real" leadership to scale it. The team responded by leaving.
THE SECOND-ORDER EFFECTS
The immediate question is where these researchers go. Lin posted a cryptic WeChat Moments message after the all-hands: "Brothers of Qwen, continue as originally planned, no problem." That's not a retraction of his resignation. It sounds like a team telling each other a plan is already in motion.
Possibility one: they join Western labs. Google, Anthropic, and Meta would each pay serious money to absorb the team that built models that run on a MacBook while competing with GPT-class systems. Lin's team knows how to extract maximum quality from minimum compute - a skill in short supply everywhere right now.
Possibility two: they start something new. Investors are funding the open-source AI startup wave aggressively. A founding team with Qwen's track record would have term sheets within a week. This could be the origin story of the next major AI lab.
Possibility three: Alibaba retains them with a counter-offer that includes undoing the re-org. The CEO's emergency appearance at the all-hands suggests this option hasn't been ruled out.
WHAT DOESN'T GET SAID ENOUGH
Open-source AI is structurally fragile. It depends on a handful of research teams willing to release their work publicly in a commercial environment that increasingly rewards secrecy. Qwen was the most visible exception to that trend - a lab at a major corporation that published everything and let the community build on it.
If Alibaba's new leadership decides that Qwen's open-source stance doesn't fit the restructured team's strategy, the models don't disappear. But the releases slow. The community forks diverge. The pipeline breaks. Thousands of developers running local agents on Qwen 3.5 are already asking whether the next generation of models will exist.
The Gemini hire who triggered this hasn't made any public statements. Alibaba hasn't confirmed the re-org. Lin's entire public statement was two short posts. But the shape of what happened is clear enough: a company looked at its best research team and decided it needed someone with a better-known resume. The team voted with its feet.
As of this morning, the situation is still fluid. But the open-source AI world is watching closely - and it's not optimistic.