[-] llama@lemmy.dbzer0.com 1 points 1 day ago

Pretty much. It's pretty straightforward.

[-] llama@lemmy.dbzer0.com 1 points 3 days ago* (last edited 3 days ago)

That really depends on your threat model. The app isn't monitoring your activity, and it has no embedded trackers. It pulls content directly from YouTube's CDN, so all Google learns is your IP address, nothing else. For 99.9% of people that's totally fine.

[-] llama@lemmy.dbzer0.com 16 points 4 days ago* (last edited 4 days ago)

My favorite anime website is down; good thing FMHY has a bunch of great ones to choose from. Migrating sucks, though.

[-] llama@lemmy.dbzer0.com 1 points 1 week ago

There's a flatpak too, but it's not good.

[-] llama@lemmy.dbzer0.com 3 points 1 week ago

Really? It's been working just fine for me.

[-] llama@lemmy.dbzer0.com 15 points 1 week ago* (last edited 1 week ago)

There are several ways, honestly. For Android, there's NewPipe; the app itself fetches the data from YouTube. For PC, there are similar applications that do the same, such as FreeTube. Those are the solutions I recommend.

If you're into self-hosting, you can also run your own Invidious and/or Piped instance. But I like NewPipe and FreeTube better.
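If you're wondering what "the app fetches the data itself" looks like in practice, here's a rough sketch of pulling video metadata from a self-hosted Invidious instance's JSON API. The localhost address, the video ID, and the response field names are placeholders based on how I remember the Invidious API, so treat it as an illustration rather than a reference:

```python
# Rough sketch: ask a self-hosted Invidious instance for video metadata.
# "localhost:3000" and the video ID are placeholders; the /api/v1/videos
# endpoint and the field names follow the Invidious API docs as I recall them.
import json
import urllib.request

INSTANCE = "http://localhost:3000"   # your own Invidious deployment
VIDEO_ID = "dQw4w9WgXcQ"             # any public YouTube video ID

url = f"{INSTANCE}/api/v1/videos/{VIDEO_ID}"
with urllib.request.urlopen(url) as resp:
    video = json.load(resp)

# Your client only ever talks to your instance; the instance talks to YouTube.
print(video.get("title"))
print(video.get("author"))
print(video.get("viewCount"))
```

NewPipe and FreeTube do essentially the same kind of fetching on-device, which is why there's no account or tracker involved on your end.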

[-] llama@lemmy.dbzer0.com 2 points 1 week ago

And that's when it will get really scary, really soon!

[-] llama@lemmy.dbzer0.com 2 points 1 week ago* (last edited 1 week ago)

Yeah. I totally get what you're saying.

However, as you pointed out, AI can deal with more information than a human possibly could. I don't think it's unrealistic to assume that in the near future it will be possible to track someone across accounts based on things such as their interests, the way they type, and so on. At that point it becomes a major privacy concern. I can totally see three-letter agencies using this technique to identify potential people of interest.
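Just to make the idea concrete, here's a toy sketch of stylometric linking: comparing accounts by character n-gram TF-IDF and cosine similarity. It needs scikit-learn, the accounts and comments are made up, and it's nowhere near what a real system would do, but it shows how "the way someone types" becomes a comparable fingerprint:

```python
# Toy illustration (not any agency's actual method): compare writing style
# across accounts using character n-gram TF-IDF and cosine similarity.
# Requires scikit-learn. All account texts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

account_a = [
    "That really depends on your threat model, honestly.",
    "There's a flatpak too, but it's not good.",
]
account_b = [
    "Honestly it depends on your threat model more than anything.",
    "The flatpak version isn't good either.",
]
account_c = [
    "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
]

# Character n-grams capture habits like punctuation, contractions, and phrasing.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
docs = [" ".join(account_a), " ".join(account_b), " ".join(account_c)]
X = vec.fit_transform(docs)

sims = cosine_similarity(X)
print(f"A vs B similarity: {sims[0, 1]:.2f}")  # stylistically close
print(f"A vs C similarity: {sims[0, 2]:.2f}")  # unrelated text, lower score
```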

1
submitted 1 week ago* (last edited 6 days ago) by llama@lemmy.dbzer0.com to c/asklemmy@lemmy.world

I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI-powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off the web and potentially being used by AI models and/or AI-powered tools? Curious to hear your experiences and thoughts on this.


# Prompt Update

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even mentioned this very post in item 3 and in the second bullet point of the "Notable Posts" section.

For more information, check this comment.


Edit¹: This is Perplexity. Perplexity AI is a conversational search engine that provides concise, sourced answers to user queries by leveraging AI language models such as GPT-4. It employs data scraping to gather information from online sources, which it then feeds to its large language models (LLMs) to generate responses. The scraping process relies on automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. (12/28/2024)
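For context, a crawler at its core isn't much more than the sketch below: fetch a public page, pull out the text and the outgoing links, and hand the text to whatever does the indexing. The profile URL is just an example, the parsing is deliberately naive, and this is a generic illustration rather than Perplexity's actual pipeline:

```python
# Bare-bones sketch of "automated crawlers that index and extract content".
# Generic illustration only; the start URL is an example public profile page.
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser

START_URL = "https://lemmy.dbzer0.com/u/llama"

class TextAndLinkExtractor(HTMLParser):
    """Collects visible text chunks and outgoing links from an HTML page."""
    def __init__(self):
        super().__init__()
        self.text_chunks = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text_chunks.append(data.strip())

# Well-behaved crawlers check robots.txt before fetching.
robots = urllib.robotparser.RobotFileParser("https://lemmy.dbzer0.com/robots.txt")
robots.read()

if robots.can_fetch("*", START_URL):
    with urllib.request.urlopen(START_URL) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TextAndLinkExtractor()
    parser.feed(html)
    # The extracted text gets indexed or fed to a model downstream;
    # the links get queued so the crawl can continue.
    print(len(parser.text_chunks), "text chunks,", len(parser.links), "links found")
```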

Edit²: One could argue that data scraping by services like Perplexity may raise privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals may have posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)

Edit³: I added the second image and its description to the post. (12/29/2024)

15
submitted 1 week ago* (last edited 6 days ago) by llama@lemmy.dbzer0.com to c/div0@lemmy.dbzer0.com

I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI-powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off the web and potentially being used by AI models and/or AI-powered tools? Curious to hear your experiences and thoughts on this.


# Prompt Update

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even mentioned this very post in item 3 and in the second bullet point of the "Notable Posts" section.

For more information, check this comment.


Edit¹: This is Perplexity. Perplexity AI is a conversational search engine that provides concise, sourced answers to user queries by leveraging AI language models such as GPT-4. It employs data scraping to gather information from online sources, which it then feeds to its large language models (LLMs) to generate responses. The scraping process relies on automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. (12/28/2024)

Edit²: One could argue that data scraping by services like Perplexity may raise privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals may have posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)

Edit³: I added the second image and its description to the post. (12/29/2024)

1

I use both Threads and Mastodon. However, I've noticed that sometimes (public) profiles on Threads don't show up on Mastodon and vice versa. I've also noticed that most comments made on Threads posts don't show up on Mastodon – that is, if the posts appear on Mastodon at all. The same is true the other way around. Why does this happen?

[-] llama@lemmy.dbzer0.com 3 points 1 week ago

White chocolate!

1
submitted 1 week ago* (last edited 1 week ago) by llama@lemmy.dbzer0.com to c/asklemmy@lemmy.world

I've been using Lemmy since the Reddit exodus and haven't looked back, but I miss a lot of mental health communities that I haven't been able to find replacements for here. Does anyone know any cool mental health communities that are somewhat active?

llama

joined 1 week ago