
Meta conducted an experiment where thousands of users were shown chronological feeds on Facebook and Instagram for three months. Users of the chronological feeds engaged less with the platforms and were more likely to use competitors like YouTube and TikTok. This suggests that users prefer algorithmically ranked feeds that show them more relevant content, even though some argue chronological feeds provide more transparency. While the experiment found that chronological feeds exposed users to more political and untrustworthy content, it did not significantly impact their political views or behaviors. The researchers note that a permanent switch to chronological feeds could produce different results, but this study provides only a glimpse into the issue.


I think this is bullshit. I exclusively scroll Lemmy in new mode. I scroll until I see a post I've already seen, then I leave. That doesn't mean I hate it, I'm just done!

[-] OneRedFox@beehaw.org 18 points 1 year ago

Is it possible to design a content recommendation algorithm that isn't game-able? As it stands right now I don't think that algorithms are fundamentally bad, just that capitalism ruins everything.

[-] Malgas@beehaw.org 10 points 1 year ago

Goodhart's Law: Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.

Or, to paraphrase, any metric that becomes a target ceases to be a good metric. Ranking algorithms, by their nature, use some sort of quantifiable metric as a heuristic for quality.
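To make that concrete, here's a toy simulation (hypothetical numbers, Python sketch, nothing like a real platform's model): a feed ranks items by clicks, which track quality reasonably well until producers start optimizing clicks directly, at which point the metric collapses exactly as Goodhart predicts.

```python
import random

random.seed(42)

# Toy Goodhart's-law demo: clicks reward both genuine quality and
# attention-grabbing tricks, so items tuned for the metric swamp the ranking.
def clicks(quality, clickbait):
    return 10 * quality + 25 * clickbait + random.random()

honest = [{"quality": random.random(), "clickbait": 0.0} for _ in range(50)]
gamed = [{"quality": 0.1, "clickbait": 1.0} for _ in range(50)]

ranked = sorted(honest + gamed,
                key=lambda it: clicks(it["quality"], it["clickbait"]),
                reverse=True)

# Every top-10 slot goes to a low-quality optimized item: a gamed item
# scores at least 26, while an honest one scores at most 11.
gamed_in_top = sum(1 for it in ranked[:10] if it["clickbait"] == 1.0)
```

The numbers are arbitrary, but the shape of the failure isn't: as soon as the proxy (clicks) is cheaper to produce than the thing it proxies for (quality), ranking by the proxy stops measuring quality.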

[-] SkepticElliptic@beehaw.org 6 points 1 year ago

If you weighted things by clicks vs time viewing maybe? The true issue is lack of moderation.

Non-genuine accounts boost the post for whatever reason. This creates engagement, which is good for both the marketer and the platform, since the platform makes its money through advertising. They don't care if marketing firms are using thousands of zombie accounts to boost posts.
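The clicks-vs-time idea can be sketched like this (made-up weights and field names, not any platform's real formula): down-weight raw clicks in favor of dwell time per view, so drive-by zombie clicks are worth much less. Bots that also fake dwell time would still get through, which is where the moderation point comes back in.

```python
# Illustrative scoring: dwell time per view dominates, raw clicks barely count.
def score(post):
    if post["views"] == 0:
        return 0.0
    avg_dwell = post["dwell_seconds"] / post["views"]  # seconds per view
    return 0.001 * post["clicks"] + avg_dwell

clickbait = {"views": 10_000, "clicks": 9_000, "dwell_seconds": 20_000}  # ~2 s/view
article = {"views": 1_000, "clicks": 300, "dwell_seconds": 90_000}       # ~90 s/view

# Under this score the long-read article outranks the clickbait
# despite having 30x fewer clicks.
```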

[-] conciselyverbose@kbin.social 6 points 1 year ago

The question is what do you use to measure quality?

Engagement is useful but obviously leads to this. But unless people are constantly rating content they like and don't like (Reddit was the closest to a robust way to do that), it's hard to learn what content they actually want.

[-] BarryZuckerkorn@beehaw.org 8 points 1 year ago

In the '80s, Pepsi was gaining quickly on Coca-Cola with the Pepsi Challenge: having tasters blindly taste Pepsi versus Coke and choose which one they liked better. Pepsi won a majority of these. But over the decades, it turned out that consumer preference over a sip of each didn't necessarily translate to an entire can, or an entire case of cans. When asked to drink 12-20 ounces (350 to 600 ml) of the soft drink regularly, people behaved differently than they did for a 2 ounce (60 ml) taste.

Asking consumers to rate things in the moment suffers from the same problem: momentary ratings are an unreliable guide to how people feel about things they experience all day, day after day. Especially things that tend to be associated with unhealthy addictions.

[-] conciselyverbose@kbin.social 2 points 1 year ago

Yeah, you're right that even having users rate content is still limited.

I'd argue it almost definitely has to be better than engagement, though. It also has the potential to be less punitive to people who actually are thoughtful about what they like, by treating likes as more of a classification problem instead of shoving the same trash in everyone's face.

It's a hard problem, but sites aren't even attempting to do anything but tie you to a shitty dopamine loop.
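The "likes as a classification problem" idea can be sketched like this (hypothetical data, Python): build a per-user profile from what that user liked, then score candidates against it, instead of ranking everyone by the same global engagement numbers.

```python
from collections import Counter

# Hypothetical per-user "classifier": a tag profile built from liked items.
def build_profile(liked_items):
    profile = Counter()
    for item in liked_items:
        profile.update(item["tags"])
    return profile

def personal_score(profile, item):
    # Overlap between the item's tags and the user's like history.
    return sum(profile[tag] for tag in item["tags"])

liked = [{"tags": ["linux", "selfhosting"]}, {"tags": ["linux", "privacy"]}]
profile = build_profile(liked)

candidates = [
    {"title": "Trending outrage bait", "tags": ["drama"]},  # globally "engaging"
    {"title": "Homelab guide", "tags": ["selfhosting", "linux"]},
]
# Ranked per user, the globally trending item loses to the on-profile one.
best = max(candidates, key=lambda it: personal_score(profile, it))
```

A real system would use learned embeddings rather than literal tag counts, but the point stands: the signal is "what this user liked", not "what everyone clicked".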

[-] BarryZuckerkorn@beehaw.org 1 points 1 year ago

I’d argue it almost definitely has to be better than engagement, though.

Totally agree. I think those who design the algorithms and measure engagement need to remember that there's a difference between an immediate dopamine rush and long-term user satisfaction. User votes can sometimes be poor predictors of long-term satisfaction, but I imagine engagement metrics are even less reliable.

[-] conciselyverbose@kbin.social 1 points 1 year ago

They don't want satisfaction.

They want addiction.

[-] BarryZuckerkorn@beehaw.org 1 points 1 year ago

That's not a sustainable model, either. Zynga had a decent run but ended up flaming out, eventually purchased by a large gaming company.

That's to say nothing of the business models around gambling, alcohol, tobacco, and addictive pharmaceuticals. Low level background addiction is the most profitable, while intense and debilitating addictions tend to lead to unstable revenue (and heavy regulation).

[-] Zeth0s@reddthat.com 4 points 1 year ago* (last edited 1 year ago)

It isn't at the moment. Models are built on an assumption of stationarity, i.e. that whatever they're modelling doesn't change or evolve over time. That's clearly untrue, and cheating is one of the ways the environment evolves. The only way to account for that is an online, continuously learning algorithm; this exists and is called reinforcement learning. The main issue is that methods for handling an evolving environment are still under active research, in the sense that off-the-shelf solutions aren't yet available.

It is an extremely difficult task tbf
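One textbook way to sketch that online, continuously learning setup is a bandit with a constant step size, so old observations fade and the learner can track an environment that shifts underneath it (toy example, not production code):

```python
import random

random.seed(0)

ALPHA, EPSILON = 0.1, 0.1  # constant step size (forgets the past), exploration rate
values = [0.0, 0.0]        # running payoff estimate per arm (per content strategy)

def pull(arm, t):
    # Non-stationary environment: at t = 500 the "good" arm swaps,
    # e.g. because spammers adapted to the old ranking.
    good = 0 if t < 500 else 1
    return 1.0 if arm == good and random.random() < 0.9 else 0.0

for t in range(1000):
    if random.random() < EPSILON:
        arm = random.randrange(2)                   # explore
    else:
        arm = max((0, 1), key=lambda a: values[a])  # exploit
    reward = pull(arm, t)
    values[arm] += ALPHA * (reward - values[arm])   # exponential recency weighting

# After the shift, the estimate for arm 1 overtakes arm 0 because the
# constant step size lets stale evidence decay.
```

With a sample-average update instead of a constant `ALPHA`, the learner would keep trusting pre-shift data and recover far more slowly, which is the stationarity assumption failing in miniature.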

[-] OneRedFox@beehaw.org 3 points 1 year ago

Ok, then what about algorithms that are reasonably difficult to game?

[-] Zeth0s@reddthat.com 4 points 1 year ago* (last edited 1 year ago)

It requires continuous, expensive improvement. It's like the real world: building a system robust to fraud works in the short term, but in the medium and long term it's impossible. That's why laws change and evolve, why we have governments and so on: the system reacts to your rules and algorithms, making them less effective.

And these continuous, expensive improvements are made daily, but it's a difficult job.

[-] CapedStanker@beehaw.org 4 points 1 year ago

I don't think the goal should be to make the algorithms ungameable, because I feel like that's literally impossible with humans. The first rule of web dev or game dev is that users are going to find ways to use your site, app, software, or API in ways you never intended, regardless of how long you, or even a team of people, think about it.

I'd rather see something where the algorithm is open and pieces of it are voted on by the users and other interested parties. Perhaps let people create and curate their own algorithms, something like playlist curation on Spotify or YouTube, but make it as transparent as possible and let people share them. Kind of like how playlists are shared.

[-] SafetyGoggles@feddit.de 2 points 1 year ago

I'd rather see something where the algorithm is open and pieces of it are voted on by the users and other interested parties. Perhaps let people create and curate their own algorithms, something like playlist curation on Spotify or YouTube, but make it as transparent as possible and let people share them. Kind of like how playlists are shared.

Isn't that already how it works, sans the transparency part?

You press "like" on something you like, and the algorithm shows you more things related to what you just liked. Indirectly, you're curating your feed/algorithm. Or maybe you can look at it from another angle: maybe the "like" button isn't just for the things you like, but also for the things you don't particularly like but would like to see more of.

Then there are the other people around you, your Facebook friends: their likes also affect your feed, as you can see the algorithm suggest things that "people who are interested in things you're interested in are also interested in".
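That last step ("people interested in things you're interested in...") is essentially item co-occurrence, which can be sketched in a few lines (made-up likes, Python):

```python
from collections import Counter

# Hypothetical like data: user -> set of liked items.
likes = {
    "alice": {"cats", "gardening"},
    "bob":   {"cats", "gardening", "beekeeping"},
    "carol": {"cats", "beekeeping"},
    "dave":  {"sports"},
}

def recommend(user):
    mine = likes[user]
    counts = Counter()
    for other, theirs in likes.items():
        if other != user and mine & theirs:  # a neighbor: shares at least one like
            counts.update(theirs - mine)     # tally what they liked that I haven't
    return [item for item, _ in counts.most_common()]

# alice overlaps with bob and carol, who both liked "beekeeping",
# so that's what gets surfaced; dave's "sports" never appears.
```

Real systems replace the set intersection with learned similarity scores, but the "friends of your tastes" mechanic is the same.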

this post was submitted on 29 Jul 2023
398 points (100.0% liked)

Technology
