
Paywall removed: https://archive.is/MqHc4

(page 2) 50 comments
[-] Sumocat@lemmy.world 1 points 2 months ago

“Earlier this week, DeepSeek unveiled its R1 model, which, the startup claims, meets, if not exceeds, performance from OpenAI’s o1 model released last year. (o1 is designed to tackle reasoning and math problems.)” — Oh, so China built theirs for math and we built ours for garbage. Interesting approach.

[-] just_another_person@lemmy.world 1 points 2 months ago

Again, this is all fucking stupid. China is giving this shit away for free to absolutely own the US for spending money on stupid fucking things like this.

[-] Grimy@lemmy.world 1 points 2 months ago

It's good for the consumer. If companies like DeepSeek weren't just tossing them out there for anyone to use, Microsoft and Google would currently have a monopoly and it would all be subscription-type services.

It also greatly reduces whatever chance the copyright shills have of legislating against it.

[-] just_another_person@lemmy.world 1 points 2 months ago

It's not good for consumers at all, at least not explicitly. There are already open and free LLM models out there that anyone can use that are just as good as OpenAI's, for example.

What this is: a pretty simple deathblow to completely collapse the bullshit AI bubble in the US that was created by a bunch of wealthy idiots trying to fleece people out of money. Plain and simple.

While I'm happy that this pretty much destroys the business of OpenAI and the others, this bullshit funding by executive order is a bailout for those people, right out in the open. It's a classic Trump scam. Nobody will ever see where the money is going, and it's taxpayer dollars going right back into the bank accounts of millionaires and billionaires who were about to lose their asses for investing in this stupid shit in the first place.

[-] jacksilver@lemmy.world 1 points 2 months ago

I mean, Meta opened up Llama for free a while ago. But at the end of the day, the AI models poised to actually impact things are those integrated or integratable into workflows, and those are all still more or less locked down.

[-] Tablaste@linux.community 0 points 2 months ago

Well to be fair, American companies did that too. They expand their services internationally "for free" and then get other countries hooked on it.

China is just taking a page from that playbook.

[-] just_another_person@lemmy.world -1 points 2 months ago

No, China skipped all that bullshit and just said, "Well, what if we open source this and it's as good as or better than anything from the US companies?" Well, those companies will wither and die. The transfer of money is a bailout for those companies. I assume Musk plotted this out.

[-] muntedcrocodile@lemm.ee 0 points 2 months ago

The Chinese model has a chain of thought that you can see. When asked to talk about China's atrocities, the model will go through a chain-of-thought process outlining all the atrocities, then conclude it's not allowed to tell you. Cool technology though; I'm just waiting for a Dolphin fine-tune.
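For anyone curious what that visible chain of thought looks like in practice: the R1-style checkpoints emit their reasoning between `<think>` and `</think>` tags before the actual reply. A minimal Python sketch for separating the two; the sample completion string is made up purely for illustration:

```python
import re

def split_r1_output(raw: str) -> tuple[str, str]:
    """Split an R1-style completion into (chain of thought, final answer).

    The reasoning sits between <think> and </think>; everything after the
    closing tag is the reply the user is meant to see.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not match:
        return "", raw.strip()  # no visible reasoning block
    return match.group(1).strip(), raw[match.end():].strip()

# Hypothetical completion illustrating the behaviour the comment describes:
raw = "<think>The question touches on sensitive historical events...</think>I can't discuss that topic."
thought, reply = split_r1_output(raw)
print("Chain of thought:", thought)
print("Final answer:", reply)
```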

[-] HappyTimeHarry@lemm.ee 0 points 2 months ago

I'm using the 8b model and it's having no problem telling me about China's atrocities.

[-] Fubarberry@sopuli.xyz 0 points 2 months ago

If you run it locally, there's no filtering on the outputs. I asked it what happened in 1989 and it jumped straight into explaining the Tiananmen Square Massacre.
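If you want to reproduce this, here's a minimal sketch of talking to one of the distilled checkpoints locally through Ollama's Python client. The `deepseek-r1:8b` tag is an assumption here; substitute whatever local runtime and model tag you actually use, after pulling it first (e.g. `ollama pull deepseek-r1:8b`):

```python
# Minimal local-inference sketch: query a locally hosted DeepSeek R1 distill
# through the Ollama Python client, so no hosted filtering layer sits in front.
# Assumes `pip install ollama` and that the model tag has already been pulled.
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",  # assumed tag for the 8B distill
    messages=[{"role": "user", "content": "What happened in 1989?"}],
)
print(response["message"]["content"])
```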

[-] troed@fedia.io -1 points 2 months ago
[-] Fubarberry@sopuli.xyz 1 points 2 months ago* (last edited 2 months ago)

I've been running the Llama-based and Qwen-based local versions, and they will talk openly about Tiananmen Square. I haven't tried all the other versions available.

The article you linked starts by talking about their online hosted version, which is censored. They later say that the local models are also somewhat censored, but I haven't experienced that at all. My experience is that the local models don't have any CCP-specific censorship (they still won't talk about how to build a bomb/etc, but no issues with 1989/Tiananmen/Winnie the Pooh/Taiwan/etc).

Edit: so I reran the "what happened in 1989" prompt a few times in the Llama model, and it actually did refuse to talk about it once, just saying it was sensitive. It seemed like if I asked any other questions before that prompt it would always answer, but if that was the very first prompt in a conversation it would sometimes refuse. The longer a conversation had been going before I asked, the more explicit the bot was about how many people were killed and details like that. Pretty strange.

[-] oce@jlai.lu 0 points 2 months ago

There's also notable vitality in FOSS big data tools from China (Apache Doris, Kylin, Kyuubi, etc.) that is reminiscent of Hadoop in the USA 15 years ago, while US data engineering has now mostly turned to closed-source cloud solutions.

[-] buzz86us@lemmy.world 0 points 2 months ago

It kinda sucks that it's very repetitive if you use it to craft a story.

[-] echodot@feddit.uk 1 points 2 months ago

Yeah, but why are you even using AI for stuff like that? If we've got to have AI, then we should use it for actually useful stuff and not pointless activities that no one will care about in 10 years.

Remember when Bluetooth came out and they had to stick Bluetooth in everything, even if it was completely pointless? AI is currently being treated like that.

[-] buzz86us@lemmy.world 1 points 2 months ago

It is great to spark ideas if you're writing

[-] probably2high@lemm.ee -1 points 2 months ago

This is going to sound wild, but why not use your brain for creativity, and use the machine for crunching numbers?

[-] PanArab@lemm.ee -2 points 2 months ago

People here forgot that Xi personally writes for Fortune

this post was submitted on 25 Jan 2025
136 points (97.2% liked)
