AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing messages.
From | To | Subject | Date/Time
VRSS | All | In 'Milestone' for Open Source, Meta Releases New Benchmark-Beating Llama 4 Models | April 6, 2025 1:20 PM
Feed: Slashdot
Feed Link: https://slashdot.org/

Title: In 'Milestone' for Open Source, Meta Releases New Benchmark-Beating Llama 4 Models
Link: https://news.slashdot.org/story/25/04/06/1822...

It's "a milestone for Meta AI and for open source," Mark Zuckerberg said this weekend. "For the first time, the best small, mid-size, and potentially soon frontier [large-language] models will be open source." Zuckerberg announced four new Llama LLMs in a video posted on Instagram and Facebook: two dropping this weekend, with another two on the way. "Our goal is to build the world's leading AI, open source it, and make it universally accessible so that everyone in the world benefits."

Zuckerberg's announcement: I've said for a while that I think open source AI is going to become the leading models. And with Llama 4 this is starting to happen.

- The first model is Llama 4 Scout. It is extremely fast and natively multi-modal. It has an industry-leading "nearly infinite" 10M-token context length and is designed to run on a single GPU. It is 17 billion parameters by 16 experts, and it is by far the highest-performing small model in its class.

- The second model is Llama 4 Maverick, the workhorse. It beats GPT-4o and Gemini Flash 2 on all benchmarks. It is smaller and more efficient than DeepSeek v3, but it is still comparable on text, plus it is natively multi-modal. This one is 17B parameters x 128 experts, and it is designed to run on a single host for easy inference. This thing is a beast.

Zuck promised more news next month on "Llama 4 Reasoning" - but the fourth model will be called Llama 4 Behemoth. "This thing is massive. More than 2 trillion parameters." (A blog post from Meta AI says it also has a 288 billion active parameter model, outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM benchmarks, and will "serve as a teacher for our new models.")
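A note on the "parameters x experts" figures above: in a sparse mixture-of-experts (MoE) model, only a fraction of the total weights is active for any given token, so the quoted 17B is best read as the active parameter count, not the model's full size. The sketch below is a hypothetical back-of-envelope estimate only; the split between shared weights (attention, embeddings) and per-expert weights, and the number of experts routed per token, are assumptions the announcement does not specify.

```python
# Rough mixture-of-experts (MoE) parameter arithmetic - illustrative only.
# Assumed reading: "17B parameters x 16 experts" = 17B *active* per token.

def moe_total_params(active_b: float, num_experts: int,
                     experts_per_token: int, shared_fraction: float) -> float:
    """Estimate total parameters (billions) of a sparse MoE model.

    active_b         -- parameters active per token, in billions
    experts_per_token -- how many experts the router selects per token
    shared_fraction  -- fraction of active params that are shared weights
                        (attention, embeddings) rather than expert FFNs
    """
    shared = active_b * shared_fraction            # counted once
    routed = active_b - shared                     # active expert params
    per_expert = routed / experts_per_token        # size of one expert
    return shared + per_expert * num_experts       # every expert counted

# Hypothetical: 17B active, 16 experts, 1 routed expert per token,
# half the active parameters shared across all experts.
total = moe_total_params(17.0, 16, 1, 0.5)  # 8.5 + 8.5 * 16 = 144.5
```

The point of the exercise: total size grows roughly linearly with the expert count while the per-token compute (driven by active parameters) stays fixed, which is why a "17B x 128 experts" model can still run inference on a single host.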
VADV-PHP Copyright © 2002-2025 Steve Winn, Aspect Technologies. All Rights Reserved. Virtual Advanced Copyright © 1995-1997 Roland De Graaf.