At least this should finally put the 'Chinese can't innovate, they can only copy' meme into retirement.
It’s interesting how the media focuses on the panic at Meta. While they’ve been pursuing open-source models like LLaMA, OpenAI appears far more impacted, as their business relies on selling access to a proprietary model-as-a-service.
Probably a coincidence, or they got some scoop from Meta specifically.
Or it's important to the media companies to not alienate Microsoft because of reasons.
I mean, it's very strange. OpenAI is the obvious loser here, not Facebook. Obviously Microsoft doesn't want the press reminding people of alternatives to the big tech models.
I mean there's been a lot of news about DeepSeek in the past few days, but very little has been said regarding how this impacts the company that's most affected by this development.
Gonna cry?
This whole DeepSeek freakout seems like an op by the AI grifters to get more money. "We have to defeat China in the new AI space race!"
The freakout is over SV grift being exposed for what it is. Turns out you don't need to pour billions of dollars into this industry to get results.
I am scrambling a war room to try and get this scumbag out of my life.
I think you got the wrong link. It goes to an article about chip tariffs.
oops fixed
Think about the tariffs as well! ;P
DeepSeek is not that great. I run it locally, but the answers are often still wrong, and I get Chinese characters in my English output.
What makes DeepSeek important is that it shows that you can train and run a large scale model at a fraction of the cost of what existing models require. Meanwhile, in terms of quality it outperforms the top Llama model in benchmarks https://docsbot.ai/models/compare/deepseek-r1/llama-3-1-405b-instruct
Yes, that is true. Now the question I have back is: how is this price calculated? The price can be low because they simply charge less, or it can be low because inference costs less time/energy. You might answer that the latter is true, but where is the source for that?
Again, since I can run it locally my price is $0 per million tokens, I only pay electricity for my home.
EDIT: The link you gave me also says "API costs" at the top of the article. So that means they just charge less money. The model itself might use the same amount of energy (or even more) than other existing models.
The reason they charge less money is that it's a more efficient architecture, which means it uses less power. They leveraged a mixture-of-experts architecture to get far better efficiency than traditional dense models. While it has 671 billion parameters overall, it only activates 37 billion at a time per token. For comparison, Meta's Llama 3.1 uses all 405 billion of its parameters at once. You can read all about it here https://arxiv.org/abs/2405.04434
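To make the "only 37B of 671B parameters active" idea concrete, here is a minimal toy sketch of top-k mixture-of-experts routing. The expert counts, sizes, and the single-matrix "experts" are illustrative assumptions for the demo, not DeepSeek's actual configuration.

```python
import numpy as np

# Toy mixture-of-experts (MoE) routing sketch.
# All sizes below are made up for illustration.
rng = np.random.default_rng(0)

N_EXPERTS = 8   # total experts in the layer
TOP_K = 2       # experts activated per token
D_MODEL = 16    # hidden size of the toy model

# Each "expert" here is just one weight matrix (real experts are small FFNs).
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
# The router scores every expert for a given token.
router = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_forward(x):
    """Route token x to its top-k experts and mix their outputs."""
    scores = x @ router                    # one score per expert
    top = np.argsort(scores)[-TOP_K:]      # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only TOP_K of N_EXPERTS weight matrices are touched per token,
    # so active parameters are a small fraction of total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

With these toy numbers only 2 of 8 experts run per token; scaled up, that same sparsity is why a huge total parameter count can still mean comparatively cheap inference.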
I see, ok. I only want to add that DeepSeek is not the first or the only model using mixture-of-experts (MoE).
Ok, but it is clearly the first one to use this approach to such effect.
The claim going around is that it uses 50x less energy