How Did Llama-3 Beat Models x200 Its Size?
As a writer whose job is to convey the appeal of AI technology, providing the latest AI news and information is important. This time, I put together a video focused on Llama-3. Llama-3 is seriously impressive; its performance genuinely surprised me. If you're interested in AI, I also recommend checking out Shipd and Datacurve.ai.
The video also includes related links such as xAI News and Llama-3 News, so viewers can dig into the details further.
This video is supported by Patreon and YouTube members as well. A big thank-you to everyone who supports the channel; it's what keeps this community going.
Finally, credits for the music and artwork used in the video are listed below.
Hopefully covering all of this helps spark your interest in AI and keeps you coming back for more.
Sign up for Shipd now to start earning while coding! https://tally.so/r/3jBo1Q
And check out Datacurve.ai if you're interested: https://datacurve.ai/
On a side note, I'm also looking for some like-minded people who are down to work together, whether on video scripting or maybe reviving the AI newsletter with me. Feel free to hit me up on Discord if you're interested!
@bycloud yo where is that graph at 3:15 from? What specific NVIDIA presentation? thanks!!!
Every open-source release serves as a tax write-off for them. By making these models open source, they not only sidestep federal scrutiny regarding AI safety but also enhance their brand recognition and repair their reputation. Remember the memes poking fun at Facebook when it rebranded as Meta?
Aren't they releasing a 400B now?
It's better if you share the deeper knowledge you've learned than be a llame news anchor 🫡
Glad I found this channel! This is high quality stuff!
When God comes back, he's gonna close the gates.
Naah, it's a clever way of getting the majority of normal people to do the job for them through so-called fine-tuning, using their data on external machines that NVIDIA, VMware and others would provide… They will use the data of all the people kindly delivering it to them (un)knowingly, doing the work for them with 90% of the data delivered for AI training… Those people would have a so-called free local AI with a fraction of the processing power and a fraction of all the data. Not to mention you don't know whether you got clean data, or whether these algorithms they're providing for free have some trojan-horse mechanism inside… If I were this giant putting out a free version, I would make sure it could never become a threat to anything…
Which one is best?
Everyone beats GPT LUL. GPT only has the name, nothing else.
Very misleading, clickbait video. Do not subscribe.
Great info .. Thank You for sharing .. Cheers 🙂
Your vids are both very informative and entertaining.
Llama 3 might beat Mixtral models on synthetic benchmarks but I still get more useful answers from Mixtral 8x7b and 8x22b. Latest Ollama, Open-webui, no custom system prompt, all 4-bit quants. Mixtral 8x22b and Llama 3 70b are slow as hell.
Zuck bhaiya
Based
The scary thing about this is that it's Meta. They stole our data multiple times and they are likely to do it again.
yeeeah 4 content
me w/ wayyy too much RAM 👀👀
How can I test a model against GPT-4 (and other LLMs) myself? What is the name of the benchmark to use?
So it seems now we live in a world where Facebook is making things that add value to this world. Wow, what a 180° change!
This aged like milk.
Now this is some armchair quarterback level stuff, but I really don't feel that very large parameter models are the solution to AI accuracy. I think you will soon see a race to the bottom, for who can make the smallest, well-performing LLM that can fit into a smartphone or tablet. I think the largest use of LLMs in the future will be on-device. I'm really surprised that you can move from 8 billion parameters, to over 400 billion, and really not see anywhere near the return in performance or reasoning. It will be interesting to see what the future holds, but it is just as interesting to understand some of the limitations of where we conduct research going forward. Apple has a very different take on this, I think they will be showing off shortly.
I think you're right to focus more on these in-depth research breakdowns. It makes you stand out from all the AI influencers. You got a new subscriber!
How is GPT-4 200x bigger than Llama 3 8B? That would make it 1,600 billion parameters, or is there something I'm missing??
Just drop all the unused languages. But that means it wouldn't hold the same information.
rare zucc W???
7B to 8B because the vocabulary size is much bigger for 3. I've heard there are also some GPU-related advantages.
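For anyone who wants to sanity-check that, here's a rough back-of-the-envelope sketch; the vocab and hidden sizes below are the commonly cited ones (not taken from the video), and it assumes separate, untied input embedding and output projection matrices:

```python
# Back-of-the-envelope: extra parameters from growing the tokenizer vocabulary.
# Assumed sizes: 4096 hidden dim for the 7B/8B models,
# ~32k-token vocab for Llama 2, ~128k for Llama 3.
hidden = 4096
vocab_llama2 = 32_000
vocab_llama3 = 128_256

def embedding_params(vocab: int, hidden: int) -> int:
    # input embedding matrix + output (lm_head) projection, untied
    return 2 * vocab * hidden

extra = embedding_params(vocab_llama3, hidden) - embedding_params(vocab_llama2, hidden)
print(f"extra embedding parameters: {extra / 1e9:.2f}B")  # ≈ 0.79B
```

So under those assumptions, the bigger tokenizer alone accounts for most of the extra billion parameters in the 7B → 8B jump.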
I love when you explain research papers rather than just AI news. Even this video went a bit deeper into the science of the machine learning than other videos out there. So keep up the good work.
Now we just need to wait for the uncensored finetunes.
What people fail to realize is that Meta does open source a lot of their stuff, but not because of the open-source philosophy every other company would claim. They openly mention that open sourcing helps them improve both financially and technically. The major things Meta targets by opening up their stuff are: 1. Creating an industry standard. Everyone flocks to an open-source tool if it performs better than, or even close to, the best thing out there; heck, people would still use open source even if it isn't the best. That eventually leads to people learning more about (or through) that product, which means it gets adopted as an industry standard because the majority of people know how to use it. 2. Improving their own systems. By open sourcing their stuff they let the crowd help them improve it, which has helped Meta a lot in the past. And with more people literate in that product, it becomes much easier and cheaper to hire someone to work on it, so in the end it saves them time and money. Meta isn't after the direct monetization of AI; they're pretty good at monetizing their products, and until they're sure people would use a service or pay for it, they won't monetize it.
Even though I don't trust any company with my data, especially Meta, they're pretty good at improving and monetizing their stuff.
Thank you for your content
Aren't Mistral and some other AI with a name starting with P (I forgot it) even more impressive than Llama? (I think the name was Phi-2, though I might be wrong.)
loved the video… guilty confession here: saw the thumbnail and thought it was a fireship video
Bearded Zuck looked more human.
Is there real substance in your video?
You talk way too much with no meat…
You have compared Llama to the 10 billion existing LLMs but not the latest GPT-4… Why? That is actually the only one that matters…
How would you feel if I introduced you to a new cell phone OS and compared it to Nokia Symbian… BlackBerry… Motorola OS and every cell OS on earth, but not iOS or Android!!!!
That is what you are doing…
Hmm… watching this breakdown as a common user of the free version of ChatGPT 3.5… I didn't understand anything but still enjoyed the content. Thanks anyway.
Get me into the Llama club
Lizardmen redemption arc
Tried Llama 3 Instruct on LM Studio, but when I ask it something it doesn't stop generating, it just keeps going. Is there any way to fix that?
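One likely culprit (a guess, since it depends on which GGUF build you grabbed): early Llama 3 conversions often didn't mark <|eot_id|> as an end-of-turn token, so the model never knows when to stop. Adding it as an explicit stop string usually fixes the runaway output; LM Studio has a stop-strings field in its preset settings, and the same idea looks roughly like this with llama-cpp-python (the model path is just a placeholder, and the chat_format string may vary by library version):

```python
from llama_cpp import Llama

# Hypothetical local path; any Llama 3 Instruct GGUF works the same way.
llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
    chat_format="llama-3",  # use the Llama 3 prompt template
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    max_tokens=256,
    stop=["<|eot_id|>"],  # stop on Llama 3's end-of-turn token explicitly
)
print(out["choices"][0]["message"]["content"])
```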
These statistics are fine, but in actual tests I didn't find the Llama 3 8B model to be a major leap over previous models. The model is quite bad at roleplay, tends to repeat itself, and still can't do proper multitasking.