[-] pkjqpg1h@lemmy.zip 1 points 6 minutes ago

Tor is used in many countries, by both ordinary users and governments. Part of what makes onion services hard to find is that they're not searchable: you need the exact, password-like .onion address to reach, for example, a login page. That makes it much harder for another country to discover or spy on those communications.

[-] pkjqpg1h@lemmy.zip 1 points 30 minutes ago

If the FBI owned every endpoint, why is there still CSAM? Why don't they remove all of it?

[-] pkjqpg1h@lemmy.zip 1 points 1 hour ago

What a week! GLM-5, MiniMax-2.5, and now Qwen-3.5. Let's see.

[-] pkjqpg1h@lemmy.zip 29 points 3 hours ago

"Anna's Archive is a non-profit project with two goals:

  1. Preservation: Backing up all knowledge and culture of humanity.

  2. Access: Making this knowledge and culture available to anyone in the world."

Thanks to everyone who contributed to this great project.

[-] pkjqpg1h@lemmy.zip 1 points 3 hours ago

I don't think so. There is a mutual relationship with AI companies, and copyright's future is not bright.


MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds on the coding expertise of M2.1 and extends into general office work, reaching fluency in generating and operating Word, Excel, and PowerPoint files, context-switching between diverse software environments, and working across different agent and human teams. Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token-efficient than previous generations, having been trained to optimize its actions and output through planning.

[-] pkjqpg1h@lemmy.zip 1 points 10 hours ago

The real reason is that LLMs still use the same architecture, and there has been no breakthrough. At the end of the day their intelligence will converge, and when that happens, providers will have to cut prices to compete with open-weight models. Even at current prices they don't generate revenue, so instead of just scaling, they will have to focus on optimization and innovation.

[-] pkjqpg1h@lemmy.zip 11 points 13 hours ago

Free Software is essential

[-] pkjqpg1h@lemmy.zip 3 points 13 hours ago

Filterlist maintainers are the GOAT.

[-] pkjqpg1h@lemmy.zip 1 points 13 hours ago

Besides, clothing without logos tends to be more expensive than logoed clothing.

[-] pkjqpg1h@lemmy.zip 1 points 13 hours ago

GLM-5 and Kimi-K2.5 are really good.

ArtificialAnalysis Intelligence vs. Cost

[-] pkjqpg1h@lemmy.zip 3 points 22 hours ago

OpenRouter is basically the place for third-party AI stuff: tools, research, benchmarks...

Sure, it won't tell you what regular users are doing, but it shows where professionals actually spend money. And since it's pricey (seriously, once you're used to free ChatGPT or Gemini, paying hurts :D), it reveals which models are actually worth it.

[-] pkjqpg1h@lemmy.zip 3 points 22 hours ago* (last edited 22 hours ago)

General ranking (weekly) (highlighted models are open-weight)

  1. Kimi K2.5 - 1.45T tokens
  2. Gemini 3 Flash Preview - 737B tokens
  3. DeepSeek V3.2 - 711B tokens
  4. Claude Sonnet 4.5 - 678B tokens
  5. MiniMax M2.1 - 454B tokens
  6. Gemini 2.5 Flash - 449B tokens
  7. Grok 4.1 Fast - 421B tokens
  8. Trinity Large Preview - 388B tokens
  9. Gemini 2.5 Flash Lite - 358B tokens
  10. Claude Opus 4.5 - 345B tokens
  11. Grok Code Fast 1 - 314B tokens
  12. Claude Opus 4.6 - 275B tokens
  13. gpt-oss-120b - 266B tokens
  14. GPT-5 Nano - 265B tokens
  15. Gemini 2.0 Flash - 175B tokens
  16. GLM 4.7 - 171B tokens
  17. Gemini 3 Pro Preview - 169B tokens
  18. Pony Alpha (GLM-5) - 147B tokens
  19. GPT-5.2 - 145B tokens
  20. Claude Haiku 4.5 - 132B tokens

Wow, 46% of tokens now go through open-weight models. That's amazing.

submitted 2 days ago by pkjqpg1h@lemmy.zip to c/firefox@lemmy.ml

Firefox will redesign the toolbar for Android in version 148


pkjqpg1h

joined 2 days ago