submitted 1 year ago by hedge@beehaw.org to c/technology@beehaw.org

What a non-story. The username, profile picture, posts from the profile, and post interactions are all required for displaying the content the Threads user has subscribed to. The IP address is required for connecting to the service to retrieve that content. Facebook gets no more access to your data than is necessary, and no more than anybody else gets. This is just fear-mongering.

[-] spaduf@slrpnk.net 15 points 1 year ago

But remember, they intend to monetize this information by building it into your ad profile.

[-] FaceDeer@kbin.social 3 points 1 year ago

Oh noes, someone is making money out there off of something I did that I can't actually make money off of myself.

I have no love for Facebook or any other big giant corporation, but IMO people have really become overly sensitive about this stuff. They think they can send me ads that are more relevant to me now that they've seen a few of my posts. That doesn't harm me at all, I don't see their ads regardless because I've got ad blockers up the wazoo.

[-] BitOneZero@beehaw.org 10 points 1 year ago

What a non-story.

The Lemmy project set wildly unrealistic expectations on its GitHub project page: 1) "high performance" - maybe the Rust code is, but the PostgreSQL logic is ORM madness; 2) "full erase" - while sending all your public comments and posts out over ActivityPub with no agreed-upon concept of delete.

[-] Penguincoder@beehaw.org 17 points 1 year ago

unrealistic expectations on its GitHub project page: 1) "high performance"

For sure. That seems to be the go-to phrase for anything developed in Rust. By itself, Rust isn't any safer or faster than a comparable language; it takes a good developer to make it work well.

Just because it's written in Rust doesn't make your app safe or performant, just as an app written in C isn't automatically buggy and insecure.

[-] BitOneZero@beehaw.org 9 points 1 year ago* (last edited 1 year ago)

Just because it's written in Rust doesn't make your app safe or performant.

The query Lemmy 0.18.4 frequently runs to list posts, generated via the Diesel ORM:

            SELECT "post"."id", "post"."name", "post"."url", "post"."body", "post"."creator_id", "post"."community_id", "post"."removed",
              "post"."locked", "post"."published", "post"."updated", "post"."deleted", "post"."nsfw", "post"."embed_title", "post"."embed_description",
              "post"."thumbnail_url", "post"."ap_id", "post"."local", "post"."embed_video_url", "post"."language_id", "post"."featured_community",
              "post"."featured_local",
              "person"."id", "person"."name", "person"."display_name", "person"."avatar", "person"."banned", "person"."published", "person"."updated",
              "person"."actor_id", "person"."bio", "person"."local", "person"."private_key", "person"."public_key", "person"."last_refreshed_at",
              "person"."banner", "person"."deleted", "person"."inbox_url", "person"."shared_inbox_url", "person"."matrix_user_id",
              "person"."admin",
              "person"."bot_account", "person"."ban_expires", "person"."instance_id",
              "community"."id", "community"."name", "community"."title", "community"."description", "community"."removed", "community"."published",
              "community"."updated", "community"."deleted", "community"."nsfw", "community"."actor_id", "community"."local", "community"."private_key",
              "community"."public_key", "community"."last_refreshed_at", "community"."icon", "community"."banner", "community"."followers_url",
              "community"."inbox_url", "community"."shared_inbox_url", "community"."hidden", "community"."posting_restricted_to_mods",
              "community"."instance_id", "community"."moderators_url", "community"."featured_url",
              ("community_person_ban"."id" IS NOT NULL),
              "post_aggregates"."id", "post_aggregates"."post_id", "post_aggregates"."comments", "post_aggregates"."score", "post_aggregates"."upvotes",
              "post_aggregates"."downvotes", "post_aggregates"."published", "post_aggregates"."newest_comment_time_necro",
              "post_aggregates"."newest_comment_time", "post_aggregates"."featured_community", "post_aggregates"."featured_local",
              "post_aggregates"."hot_rank", "post_aggregates"."hot_rank_active", "post_aggregates"."community_id", "post_aggregates"."creator_id",
              "post_aggregates"."controversy_rank", "community_follower"."pending",
              ("post_saved"."id" IS NOT NULL),
              ("post_read"."id" IS NOT NULL),
              ("person_block"."id" IS NOT NULL),
              "post_like"."score",
              coalesce(("post_aggregates"."comments" - "person_post_aggregates"."read_comments"),
              "post_aggregates"."comments")
             
              FROM ((((((((((((
                ("post_aggregates"
                   INNER JOIN "person" ON ("post_aggregates"."creator_id" = "person"."id"))
                INNER JOIN "community" ON ("post_aggregates"."community_id" = "community"."id"))
                LEFT OUTER JOIN "community_person_ban" ON (("post_aggregates"."community_id" = "community_person_ban"."community_id") AND ("community_person_ban"."person_id" = "post_aggregates"."creator_id"))
                )
                INNER JOIN "post" ON ("post_aggregates"."post_id" = "post"."id")
                )
                LEFT OUTER JOIN "community_follower" ON (("post_aggregates"."community_id" = "community_follower"."community_id") AND ("community_follower"."person_id" = $1))
                )
                LEFT OUTER JOIN "community_moderator" ON (("post"."community_id" = "community_moderator"."community_id") AND ("community_moderator"."person_id" = $1))
                )
                LEFT OUTER JOIN "post_saved" ON (("post_aggregates"."post_id" = "post_saved"."post_id") AND ("post_saved"."person_id" = $1))
                )
                LEFT OUTER JOIN "post_read" ON (("post_aggregates"."post_id" = "post_read"."post_id") AND ("post_read"."person_id" = $1))
                )
                LEFT OUTER JOIN "person_block" ON (("post_aggregates"."creator_id" = "person_block"."target_id") AND ("person_block"."person_id" = $1))
                )
                LEFT OUTER JOIN "post_like" ON (("post_aggregates"."post_id" = "post_like"."post_id") AND ("post_like"."person_id" = $1))
                )
                LEFT OUTER JOIN "person_post_aggregates" ON (("post_aggregates"."post_id" = "person_post_aggregates"."post_id") AND ("person_post_aggregates"."person_id" = $1))
                )
                LEFT OUTER JOIN "community_block" ON (("post_aggregates"."community_id" = "community_block"."community_id") AND ("community_block"."person_id" = $1))
                )
                LEFT OUTER JOIN "local_user_language" ON (("post"."language_id" = "local_user_language"."language_id") AND ("local_user_language"."local_user_id" = $2))
                )
                
                WHERE
                (((
                    (((
                    (
                    ("community"."removed" = $3) AND ("post"."removed" = $4))
                    AND ("community_follower"."pending" IS NOT NULL)
                    )
                    AND ("post"."nsfw" = $5)
                    )
                    AND ("community"."nsfw" = $6)
                    )
                    AND ("local_user_language"."language_id" IS NOT NULL)
                    )
                    AND ("community_block"."person_id" IS NULL)
                    )
                    AND ("person_block"."person_id" IS NULL)
                    )
                    
                ORDER BY "post_aggregates"."featured_local" DESC , "post_aggregates"."hot_rank_active" DESC , "post_aggregates"."published" DESC
                    
                LIMIT $7
                OFFSET $8
                ;

And that is with person_id hand-optimized into a single bound parameter ($1), which the actual Rust code does not even do.
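For readers wondering how SQL like that comes about: below is a rough, hypothetical sketch (not Lemmy's actual code or schema) of how Diesel query-builder chains accumulate into one giant statement. It assumes Diesel 2.x with the postgres feature and two drastically simplified tables; Lemmy's real view builder chains roughly a dozen joins and per-user filters onto the same starting point.

    // Assumed Cargo.toml: diesel = { version = "2", features = ["postgres"] }
    use diesel::prelude::*;

    // Drastically simplified stand-ins for Lemmy's real tables.
    diesel::table! {
        post_aggregates (id) {
            id -> Int4,
            post_id -> Int4,
            creator_id -> Int4,
        }
    }

    diesel::table! {
        person (id) {
            id -> Int4,
            name -> Text,
        }
    }

    diesel::joinable!(post_aggregates -> person (creator_id));
    diesel::allow_tables_to_appear_in_same_query!(post_aggregates, person);

    fn main() {
        // Each chained call splices more SQL into the generated statement;
        // chain a dozen joins and filters and you get the wall of JOINs
        // shown above.
        let query = post_aggregates::table
            .inner_join(person::table)
            .select((post_aggregates::id, person::name))
            .limit(20)
            .offset(0);

        // debug_query renders the SQL Diesel would send, without needing a
        // database connection.
        println!("{}", diesel::debug_query::<diesel::pg::Pg, _>(&query));
    }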

[-] Penguincoder@beehaw.org 8 points 1 year ago

I cry just reading that...

[-] CanadaPlus@lemmy.sdf.org 7 points 1 year ago

See, I noticed this stuff reading through the Lemmy source code, but I assumed the authors were just on another level of database use compared to me. Is this actually just a mess? How exactly is it bad, beyond being opaque?

[-] BitOneZero@beehaw.org 8 points 1 year ago

Serious problems with scalability. It works fine only while there is little data in the system.

[-] CanadaPlus@lemmy.sdf.org 1 points 1 year ago

Huh. I guess I'll have to learn a bit more.

[-] plantstho@beehaw.org 3 points 1 year ago

This is effectively a binary blob to me lol

[-] anlumo@feddit.de 1 points 1 year ago

This doesn’t look like anything out of the ordinary in a real-world application to me. We have way more complex queries in our service, even though ours are hand crafted.

One thing we did notice though is that sometimes, it’s faster to just query the whole dataset and do the complex filtering in Rust. As soon as you hit the seq scan heuristic in PostgreSQL, there’s nothing to be gained from doing it in SQL.
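A minimal, self-contained sketch of the pattern described above, with a hypothetical Post row type standing in for the result of a broad, cheap SELECT: pull the dataset once, then do the complex filtering with Rust iterators instead of in the WHERE clause.

    /// Hypothetical row type standing in for what a cheap, broad
    /// `SELECT ... FROM post` would return.
    struct Post {
        nsfw: bool,
        removed: bool,
        language_id: i32,
    }

    /// The complex filtering, moved out of SQL and into Rust.
    fn filter_posts(rows: Vec<Post>, allowed_langs: &[i32]) -> Vec<Post> {
        rows.into_iter()
            .filter(|p| !p.removed && !p.nsfw)
            .filter(|p| allowed_langs.contains(&p.language_id))
            .collect()
    }

    fn main() {
        let rows = vec![
            Post { nsfw: false, removed: false, language_id: 37 },
            Post { nsfw: true, removed: false, language_id: 37 },
        ];
        println!("{} posts visible", filter_posts(rows, &[37]).len());
    }

Whether this actually wins depends on table size and filter selectivity; it only pays off once PostgreSQL would fall back to a sequential scan anyway.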

[-] lvxferre@lemmy.ml 42 points 1 year ago* (last edited 1 year ago)

On an individual ("you") level, the data mining is only a tiny bit concerning. Sure, Meta will hoard any sort of data that you share with the Fediverse, then share it with its "business partners", so everyone can profile you and fly in circles around you, like vultures, with targeted advertisement. However:

  • The amount of data that Meta can harvest from you this way is fairly limited, because unlike on Facebook, Instagram or WhatsApp, they have no way to force you to yield more info than you're comfortable sharing.
  • This info is already publicly available, and Meta can already profile you, with or without Threads. Regardless of whether you're in the Fediverse or elsewhere, you should be conscious of what you're sharing.

Even then, I like Macgirvin's take on the matter on a collective ("the Fediverse") level. It basically tells Meta "people here are quite hostile to data vulturing, you won't get much out of it". That helps quite a bit against the actual threat: that Meta might try to Embrace, Extend and Extinguish the Fediverse.

[-] gabe@literature.cafe 36 points 1 year ago

Remember to block Meta's IP ranges, not just defederate. Make data scraping as hard as possible for these weirdos.
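For anyone wanting to do that at the application level, here is a tiny, dependency-free Rust sketch of a CIDR check. The two ranges are illustrative examples only; a real block list would come from Meta's published AS32934 prefixes, and most admins would do this in the firewall or reverse proxy instead.

    use std::net::Ipv4Addr;

    /// True if `addr` falls within `network`/`prefix_len` (a CIDR match).
    fn in_cidr(addr: Ipv4Addr, network: Ipv4Addr, prefix_len: u32) -> bool {
        let mask = if prefix_len == 0 { 0 } else { u32::MAX << (32 - prefix_len) };
        (u32::from(addr) & mask) == (u32::from(network) & mask)
    }

    fn main() {
        // Illustrative ranges only; check current AS32934 announcements.
        let blocked = [
            (Ipv4Addr::new(157, 240, 0, 0), 16),
            (Ipv4Addr::new(31, 13, 24, 0), 21),
        ];
        let client = Ipv4Addr::new(157, 240, 1, 35);
        let denied = blocked.iter().any(|&(net, len)| in_cidr(client, net, len));
        println!("deny request: {denied}");
    }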

[-] FaceDeer@kbin.social 34 points 1 year ago

You think Meta can't pick up some random new IP address just for this?

A better solution would be to either stop fretting about trivialities like this, or, if you can't do that, stop putting your data up on an open protocol that is specifically designed to spread it around and show it to anyone who wants to see it.

[-] MoogleMaestro@kbin.social 12 points 1 year ago

Companies need to stop ignoring copyright on data they don't own and never have owned.

[-] FaceDeer@kbin.social 21 points 1 year ago

There is nothing in copyright law against reading data that a person has put online in a public, unrestricted manner for the purpose of having it be read.

[-] pup_atlas@pawb.social 7 points 1 year ago

That’s not what’s happening though, they are using that data to train their AI models, which pretty irreparably embeds identifiable aspects of it into their model. The only way to remove that data from the model would be an incredibly costly retrain. It’s not literally embedded verbatim anywhere, but it’s almost as if you took an image of a book. The data is definitely different, but if you read it (i.e. make the right prompts, or enough of them), there’s the potential to get parts of the original data back.

[-] FaceDeer@kbin.social 15 points 1 year ago

which pretty irreparably embeds identifiable aspects of it into their model.

No, it doesn't. The model doesn't contain any copyright-significant amount of the original training data in it, it physically can't contain it, the model isn't large enough. The model only contains concepts that it learned from the training data - ideas, patterns, but not literal snippets of the data.

The only time you can dredge a significant snippet of training data out is when a particular bit of it was present hundreds or thousands of times in the training set - a condition called "overfitting" that is considered a flaw and that AI trainers work hard to prevent by de-duplicating the data before training. Nobody wants overfitting; it defeats the whole point of generative AI by replicating the "copy and paste" function in a hugely inefficient way. It's very hard to find any actual examples of overfitting in modern models.

It’s not literally embedded verbatim anywhere

And that's all that you need to make this copyright-kosher.

Think of it this way. Draw a picture of an apple. When you're done drawing it, think to yourself - which apple did I just draw? You've probably seen thousands of apples in your life, but you didn't draw any specific one, or piece together the picture from various specific bits of apple images you memorized. Instead you learned what the concept of an apple is like from all those examples, and drew a new thing that represents that concept of "appleness." It's the same way with these AIs, they don't have a repository of training data that they copy from whenever they're generating new text.
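The de-duplication step mentioned above can be made concrete. A toy sketch, assuming documents are compared by exact match (real pipelines use fuzzier techniques such as near-duplicate hashing):

    use std::collections::HashSet;

    /// Toy de-duplication pass: drop exact-duplicate training documents so
    /// no sample is seen hundreds of times (one simple guard against
    /// overfitting).
    fn dedup(docs: Vec<String>) -> Vec<String> {
        let mut seen = HashSet::new();
        // `insert` returns false for items already in the set.
        docs.into_iter().filter(|d| seen.insert(d.clone())).collect()
    }

    fn main() {
        let docs = vec![
            "an apple a day".to_string(),
            "an apple a day".to_string(),
            "the quick brown fox".to_string(),
        ];
        println!("{} unique docs", dedup(docs).len());
    }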

[-] pup_atlas@pawb.social 3 points 1 year ago

I’m aware the model doesn’t literally contain the training data, but for many models and applications, the training data is by nature small enough, and the application is restrictive enough that it is trivial to get even snippets of almost verbatim training data back out.

One of the primary models I work on involves code generation, and in those applications we’ve actually observed verbatim code being output by the model from the training data, even if there’s a fair amount of training data it’s been trained on. This has spurred concerns about license violation on open source code that was trained on.

There’s also the concept of less verbatim, but more “copied” style. Sure making a movie in the style of Wes Anderson is legitimate artistic expression, but what about a graphic designer making a logo in the “style of McDonalds”? The law is intentionally pretty murky in this department, with even some colors being trademarked for certain categories in the states. There’s not a clear line here, and LLMs are well positioned to challenge what we have on the books already. IMO this is not an AI problem, it’s a legal one that AI just happens to exacerbate.

[-] FaceDeer@kbin.social 5 points 1 year ago

You're conflating a bunch of different areas here. Trademark is an entirely different category of IP. As you say, "style" cannot be copyrighted. And the sorts of models that chatter from social media is being used for is quite different from code generation.

Sure, there is going to be a bunch of lawsuits and new legislation coming down the pipe to clarify this stuff. But it's important to bear in mind that none of that has happened yet. Things are not illegal by default, you need to have a law or precedent that makes them illegal. And there's none of that now, and no guarantee that things are going to pan out that way in the end.

People are acting incensed at AI trainers using public data to train AI as if they're doing something illegal. Maybe they want it to be illegal, but it isn't yet and may never be. Until that happens people should keep in mind that they have to debate, not dictate.

[-] pup_atlas@pawb.social 2 points 1 year ago

The law is (in an ideal world), the reflection of our collective morality. It is supposed to dictate what is “right” and “wrong”. That said— I see too many folks believing that it works the other way too, that what is illegal must be wrong, and what is legal must be ok. This is (decisively) not the case.

In AI terms, I do believe some of the things that LLMs and the companies behind them are doing now may turn out to be illegal under certain interpretations of the law. But further, I think a lot of the things companies are doing to train these models are seen as “immoral” (me included), and that the law should be changed to reflect that.

Sure that may mean that “stuff these companies are doing now is legal”, but that doesn’t mean we don’t have the right to be upset about it. Tons of stuff large corporations have done was fully legal until public outcry forced the government to legislate against it. The first step in many laws being passed is the public demonstrating a vested interest in it. I believe the same is happening here.

[-] FaceDeer@kbin.social 2 points 1 year ago

The problem I have with this is that the argument seems to boil down to "I don't like this so it should be illegal." It puts me in mind of the classic objection on the grounds that something is devastating to your case. Laws should have a rationale beyond simply being what "collective morality" decides, otherwise all sorts of religious prohibitions and moral scares end up embedded in the legal system too.

Generally speaking, laws are based on the much simpler and more generic foundation of rights. Laws exist to protect rights, and get complicated because those rights can end up conflicting with each other. So what rights do the two "sides" of this conflict bring to the table? On the pro-AI side people are arguing that they have the right to learn concepts and styles from publicly available data, to analyze that data and record that analysis, and to make use of the products of that analysis. It all seems quite reasonable and foundational to me. On the anti-AI side - arguments based on complete misunderstandings of how the technology works aside - I generally see "because it's devastating to my future career, your honor."

Anti-AI artists are simply being selfish, IMO, demanding that society must continue to provide them with their current niche of employment and "specialness" by restricting other peoples' rights through new legal restrictions. Sure, if you can convince enough people to go along with that idea those laws will be passed. That doesn't make them right. There have been many laws over the years that were both popular and wrong on many levels.

Fortunately there are many different jurisdictions in the world. There isn't just one "The Law." So even if some places do end up banning AI I don't think that's going to slow it down much on a global scale, it'll just help determine which places get a lead and which places fall behind in developing this new technology. There's too much benefit for everyone to forego it everywhere.

[-] pup_atlas@pawb.social 1 points 1 year ago

I’m out and about today, so apologies if my responses don’t contain the level of detail I’d like; As for the law being collective morality, all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs. Whether or not you agree with that strategy aside, that is how I believe the current system works in an ideal sense (even if it works differently in practice), that’s what it is designed to protect from my perspective.

As for anti-AI artists, let me pose a situation for you to illustrate my perspective. As a prerequisite for this situation, a large part of a lawsuit, and the ability to advocate for a law is based on standing, the idea that you personally, or a group you represent has been directly, tangibly harmed by the thing you are trying to restrict. Here is the situation:

I am a furry, and a LARGE part of the fandom is based on art and artists. A core furry experience is getting art of your character commissioned from other artists. It's commonplace for these artists to have a very specific, identifiable signature style, so much so that it's trivial for me and other furs to identify artists by their work alone at a glance. Many of these artists have shifted to making their living full time off of creating art. With the advent of some new generative models, it is now possible to train a model exclusively on one single artist's style, and generate art indistinguishable from the real thing without ever contacting them. This puts their livelihood directly at risk, and also muddies the waters in terms of subject matter and what they support. Without laws regulating training, this could take away their livelihood, or even give a (very convincing, and hard to disprove) impression that they support things they don't, like making art involving political parties, or illegal activities, which I have seen happen already. This almost approaches defamation in my opinion.

One argument you could make is that this is similar to the invention of photography, which may have directly threatened the work of painters. And while there are some comparisons you could draw from that situation, photography didn’t fundamentally replace their work verbatim, it merely provided an alternative that filled a similar role. This situation is distinct because in many cases, it’s not possible, or at least immediately apparent which pieces are authentic, or not. That is a VERY large problem the law needs to solve as soon as possible.

Further, I believe the same, or similar problems exist in LLMs, like they do in the situation involving generative image models above. Sure with enough training, those issues are lessened in impact, but where is the line of what is ok and what isn’t? Ultimately the models themselves don’t contain any copyrighted content, but they (by design) combine related ideas and patterns found in the training data, in a way that will always approximate it, depending on the depth of training data. While “overfitting” might be considered a negative in the industry, it’s still a possibility, and until there is some sort of regulations establishing the fitness of commercially available LLMs, I can envision situations in which management would cut training short once it’s “good enough”, leaving overfitting issues in place.

Lastly, with respect, I'd like to push back on both the notion that I'd like to ban AI or LLMs, and the notion that I'm not educated enough on the subject to adequately debate regulations on it. Both are untrue. I'm very much in favor of developing the technology and exploring all its applications. It's revolutionary, and worthy of the research attention it's getting. I work on a variety of models across the AI and LLM space professionally, and I've seen how versatile it is. That said, I have also seen how over-publicized it is. We're clearly (from my perspective) in a bubble that will eventually pop. We're claiming products use AI to do this and that across nearly every industry, and while LLMs in particular are amazing and can be used in a ton of applications, it's certainly not all of them— and I'm particularly cautious of putting new models in charge of dangerous or risky processes before we develop adequate metrics, regulation, and guardrails. To summarize my position: I'm very excited to work towards developing them further, but I want to publicly express the notion that it's not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.

[-] FaceDeer@kbin.social 2 points 1 year ago

all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs.

I'm rather confused by this. My point is that having the collective's religious prohibitions and moral scares imposed upon the minority is a bad thing, and that it's a flaw in "majority rule" that a rights-based legal system is supposed to attempt to counter. It doesn't always work but that's the idea. So simply having a large number of people pull out pitchforks and demand that the rights of AI trainers be restricted should not automatically result in that actually happening.

With regard to your scenario about furry art: You're simply describing a specific example of the general scenario I already talked about. You're saying that furry artists should have a right to copyright their "style", which is emphatically not the case. Style cannot be copyrighted (and as a furry-adjacent who's seen plenty of furry art over the years, I would also very much disagree that every furry artist has a unique style. They copy off each other all the time). You're also saying that furry artists should have a right to their livelihood, which is also not the case. Civilization changes over time, new technologies and new social movements come along and result in jobs coming and going. Nobody has the right to make a living at some particular career.

You say "A core furry experience is getting art commissioned of your character from other artists." Well, maybe that was a core furry experience. But the times they are a-changing. My avatar image here on the Fediverse was generated by me in large part by AI art generators and I got a much better experience and a much more accurate reflection of what I was going for than I would have got via a commission, and I got it for free. That sucks for the artists but it's great for everyone else.

And while there are some comparisons you could draw from that situation, photography didn’t fundamentally replace their work verbatim, it merely provided an alternative that filled a similar role.

Does AI art actually replace an artist's work verbatim? When I made my avatar image I still did a lot of intermediate fiddling steps in the Gimp. AI is just part of my workflow. An artist could also make use of it. Or they could continue making art the old fashioned way if they want, the mere existence of AI art generators doesn't affect that ability one whit. All it does is change the market, possibly making it so that they can no longer make a living at their old job.

There are still plenty of painters. But when photography came along there were probably a lot of portrait painters who were put out of work. Over the years I've had several family photographs taken in photography studios, but I've never even considered commissioning a painter to paint a portrait of myself.

Ultimately the models themselves don’t contain any copyrighted content

And that's that for basically all the anti-AI legal arguments.

but they (by design) combine related ideas and patterns found in the training data, in a way that will always approximate it, depending on the depth of training data

And there's absolutely nothing wrong with this. People do it all the time, why is it suddenly a huge moral problem when a machine does? Should it be illegal for someone to go to a furry artist and ask for something "in the style of Dark Natasha", or for an artist to pick up some of his personal style from Jay Naylor's work?

I want to publicly express the notion that it’s not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.

I actually agree, but the people that I think are most in need of protecting are the people who train and use AI models. There are tons of news stories and personal experiences being posted these days about these people being persecuted in various ways, deplatformed, lied about, and so forth. They're the ones whose rights people are proposing should be restricted.

[-] WarmSoda@lemm.ee 5 points 1 year ago

What specific data are you referring to?

[-] BitOneZero@beehaw.org 2 points 1 year ago

"Terms of service" checkmarks are their reality

[-] anlumo@feddit.de 2 points 1 year ago

Short messages usually aren’t creative enough to be protected by copyright. Exceptions might be poems and similar texts.

[-] BlueEther@no.lastname.nz 8 points 1 year ago

Then all they need to do is spin up a Mastodon instance on any random VPS and scrape away, if they really want the data.

But the 1,000,000-odd Mastodon users would pale next to their existing user base.

[-] Powderhorn@beehaw.org 26 points 1 year ago

I'd be shocked if they weren't already harvesting publicly available data "in preparation for" federating. But bluntly, they're going to be scraping publicly available data. As in, they'd be doing this without Threads if there was advertising money to be made, and it's publicly available data.

[-] FaceDeer@kbin.social 24 points 1 year ago

It really annoys me how people react with such shock and alarm at how companies are "stealing" their data, when they put said data up in a public venue explicitly for the purpose of everyone seeing it. And particularly in the case of AI training there isn't even any need for them to save a copy of that data or redistribute it to anyone once the AI has been trained.

[-] FlowVoid@midwest.social 4 points 1 year ago* (last edited 1 year ago)

Making something publicly available does not automatically give everyone unrestricted rights to it.

For example, you do not have permission to make copies of articles in the NYT even when they are available to the public. In fact, a main purpose of IP law is to define certain rights over a work even after it is seen by the public.

In the case of AI, if training requires making a local copy of a protected work then that may be copyright infringement even if the local copy is later deleted. It's no different than torrenting a Disney movie and deleting your copy after you watched it.

[-] FaceDeer@kbin.social 3 points 1 year ago

Making something publicly available does not automatically give everyone unrestricted rights to it.

Of course not. But that's not what's happening here. Only very specific rights are needed, such as the right to learn concepts and styles from what you can see.

In the case of AI, if training requires making a local copy of a protected work then that may be copyright infringement even if the local copy is later deleted.

That's the case for literally everything you view online. Putting it up on your screen requires copying it into your computer's memory and then analyzing it in various ways. Every search engine ever has done this way more flagrantly than any AI trainer has. There have been plenty of lawsuits over this general concept already and it's not a problem.

It’s no different than torrenting a Disney movie and deleting your copy after you watched it.

Except that in this case it's not torrenting a copy that Disney didn't want to have online for you to see. It's looking at stuff that you have deliberately put up online for people to see. That's rather different.

Besides, it's actually not illegal to download a pirated movie. It's illegal to upload a pirated movie. A distinction that people often overlook.

[-] FlowVoid@midwest.social 2 points 1 year ago* (last edited 1 year ago)

Only very specific rights are needed, such as the right to learn concepts and styles from what you can see.

For AI training, you nearly always need a local copy of the data.

That's the case for literally everything you view online. Putting it up on your screen requires copying it into your computer's memory

Yes, and that copy is provided with restrictions. You can view your copy in a browser, but not necessarily use it for other purposes.

Every search engine ever has done this way more flagrantly than any AI trainer has. There have been plenty of lawsuits over this general concept already and it's not a problem.

Those cases have delineated what Google is and is not allowed to do. It can only copy a short snippet of the page as a summary. This was ruled "fair use" largely because a short snippet does not compete against the original work. If anything it advertises the original work, just as movie reviews are allowed to copy short scenes from the movie they are reviewing.

On the other hand, AIs are designed to compete against the authors of the works they downloaded. If so, a fair use defense is unlikely to succeed.

Except that in this case it's not torrenting a copy that Disney didn't want to have online for you to see. It's looking at stuff that you have deliberately put up online for people to see.

Disney does put its work online for people to see. So does the New York Times. That doesn't mean you can make an unrestricted copy of what you see.

Besides, it's actually not illegal to download a pirated movie.

Both are illegal in the US, although copyright holders generally prefer to go after uploaders.

[-] FaceDeer@kbin.social 2 points 1 year ago

Yes, and that copy is provided with restrictions. You can view your copy in a browser, but not use it for other purposes.

No, it's not. I can use it for other purposes. I can't distribute copies, that's all that copyright restricts.

Those cases have delineated what Google is and is not allowed to do. It can only store a short snippet of the page as a summary.

Which is way more than what an AI model retains. Fair use is not even required since nothing copyrighted remains in the first place. You'll first have to show that copyright is being violated before fair use even enters the picture.

Disney does put its work online for people to see. So does the New York Times. That doesn’t mean you can make an unrestricted copy of what you see.

Again, that has nothing to do with all this. AI training doesn't require "making an unrestricted copy." Once the AI has learned from a particular image or piece of text that image or text can be deleted, it's gone. No longer needed. No copy is distributed under any level of restrictiveness.

Both are illegal in the US

I am Canadian. America's laws are not global laws. If they wish to ban AI training this will become starkly apparent.

[-] FlowVoid@midwest.social 2 points 1 year ago* (last edited 1 year ago)

Which is way more than what an AI model retains.

It makes no difference what the AI model retains. The only question is whether you had permission to use your copy in the manner that you did.

So for instance suppose you made a copy of a Disney movie in any fashion (by torrent, by videotaping a screening, by screen-capturing Disney+, etc), then showed it to a classroom in its entirety, and then deleted it immediately thereafter. You infringed copyright, because you did not have permission to use it in that manner even once. It makes no difference how long you retained your copy.

Note that it would also make no difference if there were actually no students in the classroom. Or if the students were actually robots. Or just one robot, or a software AI. Or if you didn't use a screen to show the material, you simply sent the file electronically to the AI. Or if the AI deleted the file shortly after receiving it. You still didn't have permission to use your copy in the manner you did, even once. Which means it was illegal.

America's laws are not global laws.

True. But the GDPR has shown us that a country can take measures to protect its data globally.

If they wish to ban AI training this will become starkly apparent.

In every other field, researchers have long been required to use opt-in databases for their work. They can't just "scrape" your medical records without your consent in order to study a particular disease. That would be wildly unethical.

Yet research, including AI research, has thrived in the US even with such ethical requirements. I am confident future AI researchers in America can be both ethical and successful.

[-] FaceDeer@kbin.social 1 points 1 year ago

The only question is whether you had permission to use your copy in the manner that you did.

The only permission needed is to look at it.

So for instance suppose you made a copy of a Disney movie in any fashion (by torrent, by videotaping a screening, by screen-capturing Disney+, etc), then showed it to a classroom in its entirety, and then deleted it immediately thereafter.

That's a public performance, which is a form of redistribution. That's not relevant to AI training.

Note that it would also make no difference if there were actually no students in the classroom.

[citation needed]

They can’t just “scrape” your medical records without your consent in order to study a particular disease.

The goalposts just swung wildly. Who's posting medical records on the Fediverse?

I am confident future AI researchers in America can be both ethical and successful.

Except for being banned from using public data that non-American AIs are able to use.

Also, the undefined "ethical" term is a new goalpost just brought into this discussion as well. I've found its use to be unhelpful, it always boils down to meaning whatever the person who's using it wants it to mean.

[-] FlowVoid@midwest.social 3 points 1 year ago* (last edited 1 year ago)

That's a public performance, which is a form of redistribution. That's not relevant to AI training.

Copyright law defines whether or not you can make a copy of a work. The person who owns the copyright can deny permission to make any copies, or grant permission to make a copy only under certain conditions. Those conditions are completely up to the copyright holder. They might prohibit public performance, but public performance is by no means the only thing a copyright holder can prohibit. It's simply a very common prohibition.

You are trying to generalize from a specific right (viewing the content in a browser) to a general right to "look" at the content, and from there to a right to train an AI. But legally those are not the same at all. You may be granted some, all, or none of those rights.

Suppose you are in a modern art gallery. You have been given the right to "look" at someone's art. You can nevertheless be prohibited from making a photograph of the art, even if the camera is also "looking" at it. The owner of the art can attach whatever conditions they want to your photo, including how long you can keep it and exactly what you do with it.

For example you could be allowed to photograph the art for home use but not for wider distribution. You could be allowed to photograph the art for classroom use, but not for AI training. If you are not willing to follow all of the conditions, then you can't make a photo of the art at all.

The same is true of text. Websites give permission to make a copy of their text for use on your browser. And they can set whatever rules they like for how else your copy may be used.

Except for being banned from using public data that non-American AIs are able to use.

Sure. Of course, America could also ban those non-American AIs from being used in the US. Just as America bans other products that infringe patents/IP.

[-] AdminWorker@lemmy.ca 21 points 1 year ago

I said this in a different post's comments about Facebook scraping data:

Can ActivityPub change its terms to say that all crawlers using it must be GNU open source, and that all information crawled must be open to the public on GNU open-source software (no crawling into a private enterprise)?

My understanding is that all the big tech companies are scared of what happened with router software (OpenWrt), and they don't want to be forced to let the competition be a FOSS community via GNU licensing.

[-] Radiant_sir_radiant@beehaw.org 12 points 1 year ago

I share your sentiment, but personally I don't like the GPL's Borg-like assimilation of anything it touches.

How about "every crawler using the API must provide the same API free of charge for "?

[-] anlumo@feddit.de 2 points 1 year ago

Meta has no problems providing API access free of charge, since their income comes from other sources.

[-] PrincipleOfCharity@0v0.social 7 points 1 year ago

I have also thought this is a good idea. I think the ActivityPub standard should have a required field that lists a copyright license. Then a copyleft-style license should be created that allows storing and indexing for distribution via open-source standards, and disallows use for AI training and data scraping. If every single post carried a copyleft license, it would be risky for big tech to repurpose it, because if a whistleblower called them out it could become a huge class-action suit.

A good question is whether a single post can be copyrighted. I think it could. Perhaps you would consider each post like a collaborative work of art: people keep adding to it, and at the end of the day the whole chain could function as a "work", especially since there is a lot of useful value and knowledge in some post threads.
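A hedged sketch of what that could look like on the wire, written here with Rust and serde_json. Note the license property is hypothetical: it is not part of the ActivityStreams vocabulary and would have to be introduced as an extension term in the @context.

    // Assumed Cargo.toml: serde_json = "1"
    use serde_json::json;

    fn main() {
        // Hypothetical ActivityPub Note carrying a per-post license.
        let note = json!({
            "@context": [
                "https://www.w3.org/ns/activitystreams",
                // Hypothetical extension term mapping "license" to an IRI.
                { "license": "https://example.org/ns#license" }
            ],
            "type": "Note",
            "content": "Hello, Fediverse!",
            "license": "https://creativecommons.org/licenses/by-nc-sa/4.0/"
        });
        println!("{}", serde_json::to_string_pretty(&note).unwrap());
    }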
