
Google Jumps Back Into the Open Source AI Race With Gemma 4

2026/04/03 03:49

In brief

  • Google dropped Gemma 4, a family of open models under the Apache 2.0 license.
  • The four-model lineup spans phones to data centers, with the 31B model already ranked #3 globally among open models.
  • U.S. open-source AI gets a needed boost, as Gemma 4—backed by DeepMind—positions itself as the strongest American contender against DeepSeek, Qwen, and other Chinese leaders.

Google’s open AI ambitions got a lot more serious today. The company released Gemma 4, a family of four open-weight models built on the same research as Gemini 3, and licensed under Apache 2.0—a significant departure from the more restrictive terms on previous Gemma versions.

Developers have downloaded past Gemma generations over 400 million times, spawning more than 100,000 community variants. This release is the most ambitious one yet.

For the past year, the open-source AI leaderboard has been largely a Chinese affair. DeepSeek, Minimax, GLM and Qwen have dominated the top spots, leaving American alternatives scrambling for relevance. As Decrypt reported last year, Chinese open models went from barely 1.2% of global open-model usage in late 2024 to roughly 30% by the end of 2025, with Alibaba’s Qwen even overtaking Meta’s Llama as the most-used self-hosted model worldwide.

Meta’s Llama used to be the default choice for developers who wanted a capable, locally runnable model. That reputation has eroded—Llama’s Meta-controlled license raised questions about its true open-source status, and its performance slipped behind the Chinese competition. The Allen Institute’s OLMo family tried to fill the gap but failed to gain meaningful traction. OpenAI released its gpt-oss models in August 2025, which gave the ecosystem a breath of fresh air, but they were never designed to be frontier competitors.

And yesterday, a 30-person U.S. startup called Arcee AI released Trinity, a 400 billion parameter open model that made a compelling case that the American scene wasn’t completely dead. Gemma 4 follows that momentum, this time with the full weight of Google DeepMind behind it, making it arguably the strongest American model in the open-source AI scene.

The model is “built from the same world-class research and technology as Gemini 3,” Google said in its announcement. Gemma 4 ships in four sizes: effective 2B and 4B (E2B and E4B) for phones and edge devices, a 26B Mixture-of-Experts model focused on speed, and a 31B dense model optimized for raw quality.

The 31B Dense currently ranks third among all open models on Arena AI’s text leaderboard. The 26B MoE sits sixth. Google claims both outcompete models 20 times their size—a claim that holds up, at least against the Arena AI numbers, where Chinese models still hold the top two spots.

We tested Gemma 4. It’s capable, with some caveats. The model applies reasoning even to tasks that don’t require it, which can make responses feel over-engineered for simple prompts. Creative writing is decent—serviceable, not inspired—and likely improves with more specific guidance and prompt engineering.

Where it delivered most clearly was code. Asked to generate a game, the output wasn’t particularly flashy or elaborate, but it ran without errors on the first try. Not bad for a 31 billion parameter model. That zero-shot reliability is arguably more valuable than a prettier result that needs debugging.

You can try the (basic, yet functional) game here.

The four variants cover the full hardware spectrum. The E2B and E4B models are built for Android phones, Raspberry Pi, and edge devices, running completely offline with near-zero latency, native audio input, and a 128K context window. The 26B and 31B models target workstations and cloud deployments, extending context to 256K and adding native function-calling and structured JSON output for building autonomous agents. All four models process images and video natively. The larger models’ full-precision weights fit on a single 80GB NVIDIA H100 GPU; quantized versions run on consumer hardware.
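The structured JSON output and native function-calling that the larger models advertise are the glue agent builders depend on: the model emits a machine-readable tool call, and the application validates it before executing anything. A minimal sketch of that consumer side, assuming a hypothetical tool-call format (the `raw` string and the `get_weather` tool are illustrative stand-ins, not a real Gemma 4 response):

```python
import json

# Hypothetical raw output from a structured-output model call; in practice this
# string would come back from Gemma 4 via an inference endpoint or local runtime.
raw = '{"tool": "get_weather", "arguments": {"city": "Berlin", "unit": "celsius"}}'

def parse_tool_call(raw_text: str) -> dict:
    """Parse and minimally validate a JSON tool call emitted by a model."""
    call = json.loads(raw_text)
    if not isinstance(call.get("tool"), str):
        raise ValueError("tool call missing a string 'tool' name")
    if not isinstance(call.get("arguments"), dict):
        raise ValueError("tool call missing an 'arguments' object")
    return call

call = parse_tool_call(raw)
print(call["tool"])  # → get_weather
```

Validating the payload before dispatching it is the point of structured output: the agent loop can reject a malformed call instead of crashing mid-task.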

The Apache 2.0 license is the other headline. Google’s previous Gemma releases used a custom license that created legal ambiguity for commercial products. Apache 2.0 removes that friction entirely—developers can modify, redistribute, and commercialize without worrying about Google changing the terms later. Hugging Face co-founder Clement Delangue praised the move, saying “Local AI is having its moment” and calling it the future of the AI industry. Google DeepMind CEO Demis Hassabis went further, calling Gemma 4 “the best open models in the world for their respective sizes.”

That’s a strong claim. Proprietary systems from Anthropic, OpenAI, and Google’s own Gemini still lead on the hardest benchmarks. But for open-weight models you can run locally, modify freely, and deploy on your own infrastructure? The competition just got significantly thinner. You can try Gemma 4 now in Google AI Studio (31B and 26B) or Google AI Edge Gallery (E2B and E4B). Model weights are also available on Hugging Face, Kaggle, and Ollama.


Source: https://decrypt.co/363178/google-gemma-4-open-source-ai

