Alibaba's Qwen team released Qwen3.6-27B on April 22 under Apache 2.0: a 27-billion-parameter dense model with a 262,144-token native context window (extensible to 1,010,000 tokens with YaRN) and a hybrid Gated DeltaNet/Gated Attention architecture. Despite a weight footprint of roughly 55 GB against the prior flagship's 807 GB, it outperforms the Qwen3.5-397B-A17B MoE across agentic coding and reasoning benchmarks, posting 77.2% on SWE-bench Verified, 86.2% on MMLU-Pro, and 87.8% on GPQA Diamond. Both BF16 and FP8 variants are available on Hugging Face and ModelScope.
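
For readers who want to kick the tires, here is a minimal loading sketch using Hugging Face `transformers`. Several details are assumptions, not confirmed by the release: the repo id `Qwen/Qwen3.6-27B` is a guess from the model name, and the YaRN extension is expressed through the standard `rope_scaling` config override, with a factor of 4.0 inferred from 1,010,000 / 262,144 ≈ 3.85. Check the actual model card before relying on either.

```python
# Minimal loading sketch. Assumptions (not from the release notes):
#   - repo id "Qwen/Qwen3.6-27B" (hypothetical; follows Qwen naming conventions)
#   - YaRN enabled via the standard transformers rope_scaling override
#   - scaling factor 4.0, inferred from 1,010,000 / 262,144 ≈ 3.85
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.6-27B"  # hypothetical repo id; verify on the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the BF16 variant; an FP8 repo is also published
    device_map="auto",
    # Optional: stretch the 262,144-token native window toward ~1M with YaRN.
    # Keyword overrides like this update the loaded config in transformers;
    # the exact keys/values here are an assumption for this model.
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 262144,
    },
)

# Standard chat-template generation loop.
messages = [{"role": "user", "content": "Summarize the repository's test suite."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Earlier Qwen model cards recommend enabling YaRN only when prompts actually exceed the native window, since static scaling can degrade short-context quality; the same caveat presumably applies here.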