[-] FlashPossum@social.fossware.space 48 points 1 year ago

The internet is already full of bullshit SEO content, sometimes written by humans and quite often by bots, and it has been for years. What's coming is an arms race between content-generating AIs and search/answer AIs.

[-] intensely_human@lemm.ee 2 points 1 year ago

One of two things will be true. Either:

  1. AIs can successfully train on AI-generated content OR
  2. AIs will need human-generated content to improve

If it’s 2, then we’ll have to develop AI that detects AI-generated content. But if you have the machine that can detect whether content is in the category that helps it improve, then you have an algorithm for generating content that helps it improve.

So either 1 is true, or AI will plateau, or it will be trained only on networks where confirmed humans are the only ones participating.
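A minimal sketch of that last point, treating generate() and looks_human() as purely hypothetical stand-ins for a generator model and the hoped-for detector: any detector reliable enough to filter training data doubles, via rejection sampling, as a generator of training-worthy content.

```python
import random

def generate() -> str:
    """Hypothetical content generator (stand-in for an LLM)."""
    return random.choice(["useful prose", "AI slop", "useful prose"])

def looks_human(text: str) -> bool:
    """Hypothetical detector for 'content that helps training'."""
    return text == "useful prose"

def training_worthy(n: int) -> list[str]:
    """Rejection sampling: keep only what the detector accepts.
    If the detector works, this loop is itself an algorithm for
    producing 'improving' content, which is the circularity above."""
    kept = []
    while len(kept) < n:
        candidate = generate()
        if looks_human(candidate):
            kept.append(candidate)
    return kept

print(training_worthy(3))
```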

[-] unknowing8343@discuss.tchncs.de 33 points 1 year ago* (last edited 1 year ago)

I have to say I feel that the most-consumed content on the internet is still mostly human-written; my evidence is that only now is the tendency clearly changing. I have stumbled upon a few AI-generated articles in the past few months without looking for them specifically. You could tell because the writing sometimes focuses on weird details, and I have even seen some kind of

as an AI, I do not have an opinion on the subject [...]

which is so funny when you see it.

So, yeah, it is definitely starting to happen, and in the next few years I wouldn't be surprised if 30 to 50 % of articles are just AI blorbs built for clicks.

How to avoid this? We can't. The only way would be to shut down the internet, forbid computers, and go back to a simpler life. And that, for many reasons, will not happen unless some world-class destruction event happens.

[-] lemmyvore@feddit.nl 5 points 1 year ago* (last edited 1 year ago)

We actually can prevent it. We will go back to human-curated websites, and the links to those websites will also be maintained by humans.

This is how the early web used to work in the 90s and early 00s. We will see a resurgence of things like portals, directories (like DMOZ, the Open Directory Project), webrings, and, last but not least, actual journalism.

Unless Google manages to find a way to tell AI content from human content, they will become irrelevant overnight, because Search is 90% of their revenue. This will kill other search engines too, but it will also remove Google's stranglehold on browsers.

This also means we'll finally get to use interesting technologies that Google currently suppresses by refusing to implement them, like micro-payments (MP), an alternative to ads that was proposed a long time ago but never got browser support.

MP are a way to pay very small sums (a cent or a fraction of a cent) when you visit a webpage, and to make it as painless as possible for both the visitor and the website. It adds up to the same earnings for websites but introduces human oversight (you decide if the page you want to visit is worth that fraction of a cent) and most importantly gets rid of the ad plague.
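As a rough illustration of those mechanics (nothing here is a real browser or payment API; the site names and prices are invented):

```python
# Hypothetical per-page prices, in cents.
PAGE_PRICES = {
    "example.org/article": 0.2,
    "example.org/archive": 0.05,
}

class Wallet:
    """Toy visitor-side wallet for fraction-of-a-cent page payments."""

    def __init__(self, balance_cents: float):
        self.balance_cents = balance_cents

    def visit(self, url: str, approve) -> bool:
        """Debit the page's price only if the visitor approves it."""
        price = PAGE_PRICES.get(url, 0.0)
        if price > self.balance_cents or not approve(url, price):
            return False               # no payment made
        self.balance_cents -= price    # the site earns the fraction of a cent
        return True

wallet = Wallet(balance_cents=100.0)   # one dollar preloaded
# "Is this page worth a fifth of a cent to me?" is the human oversight step.
paid = wallet.visit("example.org/article", approve=lambda url, p: p <= 0.5)
print(paid, wallet.balance_cents)      # True 99.8
```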

[-] unknowing8343@discuss.tchncs.de 8 points 1 year ago

I find this very much like a dream that will... stay a dream. Who defines what counts as a human-curated website or true journalism, if I can't even really know whether you are an AI bot?

Also, who says people will not like AI content? Because the world will still be full of the same people who buy Apple products and complain about the green bubbles.

[-] kava@lemmy.world 2 points 1 year ago

The problem is that this assumes there is a way to tell the difference between AI generated and human generated.

Very soon this may be practically impossible. Then what? You make a humans-only board, and someone makes a chatbot that you can't differentiate from a human.

We are screwed. There may be some ways with verification, but is that practical at scale? Would websites require users to install spyware that watches your webcam in order to confirm you're not a bot?

And what if a bot could just generate a video feed to trick the website?

Only private and strictly vetted groups will remain AI-free, and if they get too big they won't work anymore.

[-] agitatedpotato@lemmy.world 2 points 1 year ago
[-] GenderNeutralBro@lemmy.sdf.org 1 points 1 year ago

That's going by a different metric though. It's not claiming 47% of news articles or social media posts are by bots. It's talking about cyberattacks, not social media posts.

I don't think there are any solid numbers on human-presenting bot activity on social media. Honestly wouldn't be surprised though, especially in political forums.

1. Support Human-Created Content: Supporting human-generated content involves more than just consuming the content. You can proactively participate in crowdfunding campaigns, subscribe to creator’s newsletters, or even become a patron on platforms like Patreon. This not only provides direct financial support but also signals to other consumers and platforms that human-generated content is valuable. If you’re an influencer or have a substantial online following, your endorsement of human-generated content can help create a broader cultural shift.

2. Digital Literacy Education: Start by learning about digital literacy yourself and then share this knowledge with others. This could mean setting up workshops in your local community, offering online webinars, or mentoring a younger person. Use these opportunities to highlight the difference between human and bot-generated content, teach the basics of how algorithms shape online experiences, and foster a critical approach to online information consumption.

3. Regulate AI and Algorithms: You can get involved in the legislative process at various levels. This could mean everything from writing letters to your local and national representatives, to participating in public protests or movements. You could also consider volunteering for organizations that work on these issues or even pursuing a career in tech policy.

4. Transparency: Advocate for laws that would require tech companies to disclose their use of bots and AI. Write op-eds, start social media campaigns, or coordinate with organizations that are working towards this. Additionally, as a consumer, you can also ask direct questions to companies about their use of AI and their transparency practices.

5. Promote Ethical AI Practices: Do research into which companies adhere to ethical AI practices, and consider giving them your business. You can also use your online platform, if you have one, to highlight these companies and their practices. Your recommendations can influence others to do the same.

6. Use and Develop Tools: If you have coding skills, you can contribute to open-source projects that aim to develop tools for identifying bot-generated content. You can also participate in hackathons or online coding competitions focused on this problem. If you’re not a developer, consider supporting these initiatives financially or advocating for their wider use in your own network.

While these actions can help mitigate the “Dead Internet” scenario, it’s important to keep in mind that the internet is a vast and complex ecosystem. It’s influenced by many factors, from the technology that underpins it, to the actions of users and tech companies, to legal and cultural norms. It will require a collective effort to shape its future.

[-] wutBEE@lemmy.wutbee.com 27 points 1 year ago* (last edited 1 year ago)

Of course this would be written by ChatGPT

[-] Gsus4@lemmy.one 12 points 1 year ago

Fuck, at this rate profanity and spelling mistakes are going to start becoming a badge of honor.


Un-ironically. Haha

[-] Fenzik@lemmy.ml 19 points 1 year ago

it’s important to keep in mind

ChatGPT detected

So true, aye. You can tell anytime you try to get it to go off script.

[-] SpicaNucifera@lemm.ee 25 points 1 year ago

Get a new Google. Search engines are the gateway to the internet. It would be nice if the door wasn't a wall of ads.

[-] kava@lemmy.world 9 points 1 year ago

I've been experimenting with Kagi recently. At first I thought it was good, but then I was searching for a model number for some niche Honda motor and Kagi wasn't any help. Neither was DuckDuckGo.

Google found it for me, though.

Having said that, I recently switched to DuckDuckGo as my main search engine, although I'm open to suggestions.

I agree that we need to get off Google because, like you said, they are the gatekeepers. I don't trust a large company with that, much less Google.

You should try SearX; it's a meta search engine that queries several other search engines (like Google or DDG) and gives you the best results.

Check out the instances near you, and save that page in case the one you use goes down.

[-] SpicaNucifera@lemm.ee 2 points 1 year ago

I still use google too. The options out there aren't great right now.

[-] notacat@lemmy.fmhy.ml 1 points 1 year ago

I've been playing around with Kagi and am actually super impressed with the search results for finding info about a product to purchase. Google results were full of your-search-term-best-reviews.com SEO crap, but Kagi found info that didn't pop up even several pages into Google search. So it might depend on how popular/commercial the search term is, and Google is still better at finding obscure niche things.

[-] rokejulianlockhart@lemmy.ml 2 points 1 year ago
[-] intensely_human@lemm.ee 2 points 1 year ago

Come in and see
The software side
Of Sears 🎵

[-] WraithGear@lemmy.world 22 points 1 year ago

I find it odd that it's repeatedly emphasized that this is a conspiracy theory, even unprompted. It is happening; the only question is to what degree we'd consider the internet "dead".

[-] Scew@lemmy.world 14 points 1 year ago

That's been the meta for a while. Anyone with a stake in something vehemently tries to discredit anyone's skepticism by calling them a conspiracy theorist. Manipulating high-traffic social media with bots likely pays well.

[-] agitatedpotato@lemmy.world 10 points 1 year ago

They measured bot activity in 2022 and found it was over 47% of internet traffic and rising. How anyone could call this a conspiracy theory is wild to me.

https://securitytoday.com/articles/2023/05/17/report-47-percent-of-internet-traffic-is-from-bots.aspx?m=1

[-] imPastaSyndrome@lemm.ee 2 points 1 year ago

Okay, but that's not the percentage of interactions, it's the share of traffic, and a single day's data scraping can account for hundreds of people's worth of traffic, no?

[-] assassinatedbyCIA@lemmy.world 19 points 1 year ago

Honestly, I don't think we can. Techbro ghouls with misguided VC funding will make it happen if it's profitable enough, or even if it just seems like it will be profitable.

[-] ragica@lemmy.ml 16 points 1 year ago

Probably the easiest way to avoid it is to simply rename it to something less scary sounding. Maybe something like Alive Enhanced Rich Content Internet Theory for Human People! See, not a problem now.

Also, maybe we should reread Breakfast of Champions by Kurt Vonnegut. It has a storyline about a guy who finds out he is the only actual real person on earth; everyone else is a robot. And he wants to know why.

[-] theshatterstone54@feddit.uk 2 points 1 year ago

I recently started getting into audiobooks, and I guess I have a new one to add to my to-listen list.

[-] people_are_cute@lemmy.sdf.org 15 points 1 year ago

Ironically, platforms like Twitter & Reddit are doing this already to some extent by paywalling APIs

[-] bloodfart@lemmy.ml 12 points 1 year ago

As long as there’s a profit motive for generating content there isn’t anything that can be done.

[-] golli@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

The profit motive seems like the key: In the end online activity still has roots in physical hardware that requires resources which need to be provided by someone. And that someone will have an incentive to prune wasteful activity.

[-] nickajeglin@lemmy.one 1 points 1 year ago

I'm worried that the costs of the physical hardware are trivial compared to the amount of money that content farming etc pulls in, so it's just an expense that scales with the amount of junk content they produce.

[-] Rentlar@lemmy.ca 9 points 1 year ago

A permanent solution will, to an extent, require us to give up a level of anonymity. Whether it's linking a discussion to a real-life meetup... like this (NSFW warning)

or some sort of government tracking system.

When nobody knows whether you are a dog posting on the internet, or a robot, or a human, mitigations like CAPTCHAs and challenge questions will only slow AI down; they can't hold it off forever.

It doesn't help that on Reddit (where there are/were a lot of interacting users), the top-voted discussion is often the same tired memes and puns. That makes it that much easier for AI to imitate human users.

[-] Gsus4@lemmy.one 6 points 1 year ago

Yes, this is the solution. Each user needs to know, in person, a certain critical number of other users whom they can trust (and trust that they won't lie about bots, like u/spez did), so that there's a mesh of trust where you can verify whether any user is human in at most 6 hops.

tl;dr: if you have no real-life friends...it's all bots :P
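A minimal sketch of that 6-hop check, assuming each user publishes a list of the people they have verified in person (the trust graph below is invented):

```python
from collections import deque

# Who has personally verified whom (hypothetical data).
TRUSTS = {
    "you":   ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["dave"],
    "dave":  ["erin"],
    "bob":   [],
    "erin":  [],
}

def vouched(start: str, target: str, max_hops: int = 6) -> bool:
    """Breadth-first search: is target within max_hops of start?"""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        user, hops = queue.popleft()
        if user == target:
            return True
        if hops == max_hops:
            continue
        for friend in TRUSTS.get(user, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return False

print(vouched("you", "erin"))     # True: four hops away
print(vouched("you", "mallory"))  # False: nobody vouches for them
```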

[-] Tabb5@vlemmy.net 2 points 1 year ago

That sounds like the PGP Web of Trust, which has been in use for a long time and provides cryptographic signatures and encryption, particularly (but not only) for email.

[-] nickajeglin@lemmy.one 1 points 1 year ago

BOT-Albert is my oldest friend.

[-] 001100010010@lemmy.dbzer0.com 7 points 1 year ago

I've always wondered if anything on the internet is even real. I mean I don't even know if everyone else is real. Maybe it's just me and everything else is a simulation. Maybe this is a prison of some sort and every negative event is just part of the punishment.

[-] vita_man@lemmy.world 6 points 1 year ago

This simulation was created to be a utopia for you, 001100 010010. I guess we need to reset again, for the 1,239,726th time.

[-] intensely_human@lemm.ee 7 points 1 year ago

Participate in the internet.

That’s how you keep it from being the dead internet (which is defined as an internet in which nobody is participating).

[-] crazyminner@lemmygrad.ml 6 points 1 year ago* (last edited 1 year ago)

Forums like this may die, but chat platforms like Matrix, Discord, and Slack will come out on top, I believe.

Anything with voice chat. I think we're still a little way off from AIs being able to simulate a spoken conversation in real time; the API delay with these AIs is what gives them away.

Once you have talked with someone, you know they are real. And if you really wanted to confirm that people in your community are real, you could do voice-chat vetting.

[-] GenderNeutralBro@lemmy.sdf.org 5 points 1 year ago

There are already successfully convincing phone scams with AI. https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-scams-ftc

This will likely get significantly easier, cheaper, and faster in the very near future. Voice generation is relatively easy. We're going to need a whole new class of captchas and shibboleths to use online, but honestly, it's such a fast-moving target that I think cutting-edge AI will forever be a step ahead. I think the best we can hope for is to have viable countermeasures for commoditized AI techniques. For now that might include logic problems (which ChatGPT and its current competitors are quite bad at) but I'm sure the big players already have more advanced language bots in development.

I reallllly hate the idea of online IDs but it might be the only way.
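For what it's worth, a toy sketch of the kind of logic-problem challenge mentioned above; purely illustrative, and no guarantee such puzzles stay hard for future models:

```python
import random

def make_challenge():
    """Generate 'X is taller than Y' facts and ask who is shortest."""
    order = random.sample(["Ana", "Ben", "Cho", "Dee", "Eli"], 3)  # tallest first
    facts = [f"{order[i]} is taller than {order[i + 1]}."
             for i in range(len(order) - 1)]
    random.shuffle(facts)                # hide the obvious ordering
    return " ".join(facts) + " Who is the shortest?", order[-1]

question, answer = make_challenge()
print(question)
reply = answer                           # stand-in for the visitor's reply
print("passed" if reply.strip().lower() == answer.lower() else "failed")
```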

[-] crazyminner@lemmygrad.ml 1 points 1 year ago

Convincing someone as part of a scam is one thing; convincing someone they're having an actual thought-out conversation, with inflections and emotions and logic all making sense, is another.

If we get to that point the system as we know it will be over anyways.

[-] GenderNeutralBro@lemmy.sdf.org 1 points 1 year ago

I remember some years back there was a news story about some chatbot passing the Turing test. The researchers decided to make their chatbot impersonate a young Russian boy, which made its limitations harder to identify as non-human by the native-English-speaking test subjects. So it wasn't actually that impressive.

That will likely be the first kind of thing we'll see for an artificial voice-chatbot as well. It's a big world and many of the people I talk with on Discord (and even IRL) are not native English speakers and not from my country.

I'm not intimately familiar with the accents and speech patterns from everywhere in the world, so I'm conditioned to shrug off a lot of "strange" language. Because of this wide range of human speech patterns, I'm not confident that I could validate voices with a low enough false-positive and false-negative rate in practice.

I haven't really dug into the latest voice generation AI yet so I'm not sure how capable off-the-shelf programs are. I am familiar with the general techniques, though, and I think adding realistic inflection is within reach. I don't think it's possible to automate the entire pipeline yet, at least not with publicly available programs, but the field is advancing quickly so I can't take much solace in that.

[-] HobbitFoot@thelemmy.club 3 points 1 year ago

Don't be important.

[-] Kolanaki@yiffit.net 2 points 1 year ago

I feel like it already is the reality, and has been for almost 20 years now.

[-] Nemo@slrpnk.net 4 points 1 year ago

"Web 2.0", what we now call "social media", was the death knell.
