this post was submitted on 06 Dec 2025
49 points (93.0% liked)
Technology
I don't hate this article, but I'd rather have read a blog post grounded in the author's personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how these assistants should work, but instead of writing about that, she tries to make it sound like there's a lot of objective certainty behind her position, and it falls flat because the piece never draws a strong connection between its evidence and its claims.
Like this part:
There's some hard evidence that stepping out of your comfort zone is good for you, but not really any that the "infinite memory" features of personal AI assistants actually keep people from stepping out of their comfort zones in practice; that part is just rhetorical speculation.
Which is a shame, because how that affects people is pretty interesting to me. The idea of using an LLM with these features always freaked me out a bit, and I quit using ChatGPT before they were implemented, but I want to know how it's going for the people who didn't, and who use it for things like the article's example of picking a restaurant to eat at.