LLMs Can Get Brain Rot (llm-brain-rot.github.io)
submitted 1 day ago by yogthos@lemmy.ml to c/technology@lemmy.ml
submitted 1 day ago by JRepin@lemmy.ml to c/technology@lemmy.ml

cross-posted from: https://lemmy.ml/post/37847733

How do you make a great desktop into a fantastic desktop? Easy — chip away at the rough bits, polish the good stuff, and add awesomeness. After 29 years of development, KDE’s got the foundation nailed down. Plasma 6.5 is all about fine-tuning, fresh features, and making everything smooth and sleek for everyone.

Ready to see what’s new? Let’s dive into Plasma 6.5!

Highlights:

  • Automatic Theme Transitions: Configure when your theme will transition from light to dark and back.
  • Caret Text Navigation: Zoom now swoops in to where you type.
  • KRunner Fuzzy Search: Even if you type it wrong, KRunner will find it!
submitted 2 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml

LLMs totally choke on long context because of O(n²) scaling. It's the core bottleneck for almost all modern LLMs, and it comes straight from their self-attention mechanism.

In simple terms, for every single token in the input, the attention mechanism has to look at and calculate a score against every other single token in that same input.

So, if you have a sequence with n tokens, the first token compares itself to all n tokens. The second token also compares itself to all n tokens... and so on. This means you end up doing n*n, or n^2, calculations.

This is a nightmare because the cost doesn't grow nicely. If you double your context length, you're not doing 2x the work; you're doing 2^2=4x the work. If you 10x the context, you're doing 10^2=100x the work. This explodes the amount of computation and, more importantly, the GPU memory needed to store all those scores. This is the fundamental bottleneck that stops you from just feeding a whole book into a model.
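The quadratic blow-up is easy to see in a toy sketch. This isn't a real attention kernel (the scores are placeholders, not dot products), just a counter showing that the pairwise comparison loop does n² work:

```python
def attention_comparisons(tokens):
    # Naive self-attention scoring: every token attends to every other
    # token, so the score matrix has len(tokens) x len(tokens) entries.
    n = len(tokens)
    scores = [[0.0] * n for _ in range(n)]
    comparisons = 0
    for i in range(n):
        for j in range(n):
            scores[i][j] = 1.0  # placeholder for dot(q_i, k_j)
            comparisons += 1
    return comparisons

print(attention_comparisons(["tok"] * 100))   # 10,000 comparisons
print(attention_comparisons(["tok"] * 200))   # 40,000 — 2x tokens, 4x work
```

Doubling the token count quadruples the comparisons, which is exactly the 2x-context-means-4x-work problem described above.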

Well, DeepSeek came up with a novel solution to just stop feeding the model text tokens. Instead, you render the text as an image and feed the model the picture. It sounds wild, but the whole point is that a huge wall of text can be "optically compressed" into way, way fewer vision tokens.

To do this, they built a new thing called DeepEncoder. It’s a clever stack: a SAM-base model for local perception, then a 16x convolutional compressor to crush the token count, and then a CLIP model to capture the global meaning. This pipeline lets it handle high-res images without the activation memory melting the GPU.
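The token arithmetic behind that pipeline can be sketched with back-of-the-envelope numbers. The 1024px image size and 16px patch size here are illustrative assumptions (typical ViT/SAM defaults), not figures quoted from the post; only the 16x compression factor comes from the text above:

```python
def vision_token_count(img_size=1024, patch=16, compress=16):
    # A square image split into patch x patch tiles yields
    # (img_size / patch)^2 local tokens; the 16x convolutional
    # compressor shrinks that count before the global-attention
    # (CLIP-style) stage ever sees them.
    local_tokens = (img_size // patch) ** 2
    return local_tokens // compress

print(vision_token_count())  # 4096 local tokens compressed down to 256
```

The point is that the expensive global-attention stage only ever runs over the compressed token count, which is why the memory doesn't explode at high resolution.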

And the results are pretty insane. At a 10x compression ratio, the model can look at the image and "decompress" the original text with about 97% precision. It still gets 60% accuracy even at a crazy 20x compression. As a bonus, this thing is now a SOTA OCR model. It beats other models like MinerU2.0 while using fewer than 800 tokens when the other guy needs almost 7,000. It can also parse charts into HTML, read chemical formulas, and understands like 100 languages.

The real kicker is what this means for the future. The authors are basically proposing this as an LLM forgetting mechanism. You could have a super long chat where the recent messages are crystal clear, but older messages get rendered into blurrier, lower-token images. It's a path to unlimited context by letting the model's memory fade, just like a human's.
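The fading-memory idea above can be sketched as a toy token budget. Everything here is made up for illustration — the authors propose the mechanism but don't spec it out, so the age thresholds, the 500-token message size, and the function name are all my assumptions; only the 10x/20x compression ratios echo the figures quoted earlier:

```python
def context_token_cost(messages, base_tokens_per_msg=500):
    # Toy model of "optical" forgetting: recent messages stay as
    # full-fidelity text tokens, older ones get rendered as images
    # at increasingly aggressive compression (10x, then 20x).
    total = 0
    for age, _msg in enumerate(reversed(messages)):
        if age < 4:        # newest messages: plain text tokens
            ratio = 1
        elif age < 20:     # older: ~10x optical compression (~97% recall)
            ratio = 10
        else:              # oldest: ~20x compression (~60% recall)
            ratio = 20
        total += base_tokens_per_msg // ratio
    return total

chat = ["msg"] * 30
print(context_token_cost(chat))        # 3050 tokens
print(len(chat) * 500)                 # vs 15000 uncompressed
```

Under this toy scheme a 30-message chat costs roughly a fifth of its uncompressed token budget, with the loss concentrated in the oldest, blurriest messages — the "fading memory" trade-off the authors describe.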

submitted 2 days ago by Zerush@lemmy.ml to c/technology@lemmy.ml

Kohler unveiled Dekoda, a $599 toilet sensor that uses a tiny camera and spectroscopy to analyze bodily waste and provide health insights[^1][^13]. The device clamps onto the toilet bowl rim and monitors hydration levels, bowel movements, and checks for blood in the toilet.

Users sign in with a fingerprint sensor before use, allowing multiple household members to track their individual data through the companion app. The system requires a subscription costing between $70 and $156 per year[^1].

"Kohler Health isn't just another app or product. It's a promise that your home can play a more active role in your well-being," said CEO David Kohler at the launch event[^13].

The company emphasizes privacy protection through end-to-end encryption. The camera uses "discreet optics" aimed only at bowl contents, not body parts[^1]. The technology works best with light-colored toilets, as dark bowls can interfere with the sensors[^1].

Dekoda represents Kohler's entry into the digital health space, joining other smart toilet sensors from companies like Withings and Vivoo that appeared at CES 2023[^13].

[^1]: CNET - Kohler Wants to Put a Tiny Camera in Your Toilet and Analyze the Contents

[^13]: ZDNet - This new Kohler sensor is like a health detective in your toilet


AI does not develop in a neutral manner; it reflects the class structure of the system that produced it. Artificial intelligence, as developed today, is not an independent or neutral entity: it is directly subject to the dominance of capitalist powers, which steer it in ways that serve their economic, political, social, and ideological interests.

submitted 1 week ago* (last edited 1 week ago) by cypherpunks@lemmy.ml to c/technology@lemmy.ml

screenshot of post by JA Westenberg @Daojoan@mastodon.social: "If I genuinely believed I was 18 months away from superintelligence that could solve cancer, I would probably not be pivoting to horny chatbots, but that's just me (a person with priorities)" (src)


Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads; such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 6 years ago