[-] General_Effort@lemmy.world 1 points 5 days ago

I'm changing the order some, because I want to get this off my chest first of all.

> Ultimately, I’m not set on any ideology here. I’m regularly more concerned with making things work. And that’s my goal here, too.

That's not what I'm seeing. Here's what I'm seeing:

> I wasn’t concerned with copyright here. Let’s say I’m politically active and someone leaks my address and now people start showing up, throwing eggs at my front door and threatening to kill me. Or someone spreads lies about me and that gets ingested. Or I’m a regular person and someone posted revenge porn of me. Or I’m a victim of a crime and that’s always the first thing that shows up when someone puts in my name and it’s ruining my life. That needs to be addressed/removed. Free of charge. And that has nothing to do with licensing fees for content or celebrities. When companies use data, they need to have a complaints department that will immediately check whether the complaint is valid and then act accordingly. There needs to be a distinction between harmful content and copyright violations.

First, you start out with a little story. Remember my post about narratives?

You emphasize what "needs" to be achieved. You try to engage the reader's emotions. What's completely missing is any concern with how or if your proposed solution works.

There are reputation management companies that will scrub or suppress information for a fee. People who are professionally famous may also spend much time and effort to manipulate the available information about them. Ordinary people usually do not have the necessary legal or technical knowledge to do this. They may be unwilling to spend the time or money. Well, one could say that this is alright. Ordinary people do not rely on their reputation in the same way as celebrities, business people, and so on.

The fact is that your proposal gives famous and wealthy elites the power to suppress information they do not like. Ordinary people are on their own, limited by their capabilities (think about the illiterate, the elderly, and so on).

AIs generally do not leak their training data. Only fairly well-known people feature often enough in the training data for an LLM to be able to answer questions about them. Having to make the data searchable on the net makes it much more likely that it is leaked, with harmful consequences. On balance, I believe your proposal makes things worse for the average person while benefiting only certain elites.

It would have been straightforward to say that you wish to hold AI companies accountable for damage caused by their service. That's the case anyway; no additional laws needed. Yet you make the deliberate choice to put the responsibility on individuals. Why is your first instinct to go this roundabout route?


> Selling/Buying something is a very common form of contract. In our economy, the parties themselves decide what’s in the contract. I can buy apples, cauliflower or wood screws per piece or per kilogram. That’s down to my individual contract between me and the supermarket (or hardware store) and nothing the government is involved in. It’s similar with licensing; that’s always arbitrary and a matter of negotiation.

But market prices aren't usually arbitrary. People negotiate but they usually come to predictable agreements. Whatever our ultimate goals are, we have rather similar ideas about "a good deal".

> I’d do it like with shipments in the industry. If you receive a truckload of nuts and bolts, you take 50 of them out and check them before accepting the shipment and integrating the lot into your products.

All very reasonable ideas. Ultimately, the question is what the effect on the economy is, at least as far as I'm concerned.

These tests mean that more labor and effort is necessary. Mistakes are costly. These costs fall on the consumer. The big picture view is that, on average, either people have less free time because more work is demanded, or they make do with less because the work does not produce anything immediately beneficial. So the question is if this work does lead to something beneficial after all, in some indirect way. What do you think?
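As an aside on the quoted inspection scheme, here is roughly what checking 50 parts buys you, under assumptions of mine (a simple binomial model and an accept-only-if-all-50-pass rule):

```python
# Chance that a lot is accepted under a "sample 50, accept only if all
# pass" rule, for a few hypothetical defect rates in the shipment.
sample_size = 50
for defect_rate in (0.001, 0.01, 0.05):
    p_accept = (1 - defect_rate) ** sample_size
    print(f"defect rate {defect_rate:5.1%} -> lot accepted {p_accept:5.1%} of the time")
```

A 1% defect rate still slips through about 60% of the time, which is why real acceptance-sampling standards tune the sample size to the risk. Either way, the checking itself is labor that someone has to pay for.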

> So I believe we first need to address the blatant piracy before talking about hypothetical scenarios.

No. That is the immediate hands-on issue. As you know, the web is full of unauthorized content.

> All the while the internet gets more locked down, enshittified… And everyone who isn’t the big content industry or already a monopolist loses.

Well? What's your pitch?

> See my text above. Even if it was a nice idea, it leads to the opposite in the real world. A few big internet companies “win” in this war with technology, disregarding the idea behind the law, and everyone except them loses. Cementing monopolies, not helping against them.

That is not happening, though?

> And Fair Use now says the labour of the small guy is free of charge for the big company.

You compare intellectual property to physical property. Except here, where it becomes "labor". I don't think you would point at a factory and say that it is the owner's labor. If some worker took some screws home for a hobby project, I don't think you would accuse him of stealing labor. Does it bother you how easily you regurgitate these slogans?

> I mean what’s your idea here? I can’t really tell. Let’s say we’re not set on copyright. How does $90,000 a year reach a book author so it’s a viable job and they can create something full time? And I’d like a fair solution for society.

Good question. That's an economics question. It requires a bit of an analytical approach. Perhaps we should start by considering if your idea works. You are saying that AI companies should have to buy a copy before being allowed to train on the content. So: How many extra copies will an author sell? What would that mean for their income?

We should probably also extend the question beyond just authors. Publishers get a cut for each copy sold. How many extra copies will a publisher sell and what does that mean for their income?

Actually, the money will go to the copyright owner; often not the same person as the creator. In that way, it is like physical property. Ordinary workers don't own what they produce. A single daily newspaper contains more words than many books. The rights are typically owned by the newspaper corporation and not the author. What does that mean for their income?
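To put toy numbers on that question (every figure below is my own assumption, purely for illustration, not data):

```python
# Toy model of the "AI companies must buy a copy before training" idea.
royalty_per_copy = 2.00   # hypothetical author royalty per copy, in dollars
ai_companies = 20         # hypothetical number of labs that each buy one copy

extra_income = royalty_per_copy * ai_companies
print(f"extra author income: ${extra_income:.2f}")                   # $40.00, once

target_per_year = 90_000  # the income figure from the question above
print(f"fraction of target: {extra_income / target_per_year:.4%}")  # ~0.04%
```

Under assumptions like these, a buy-a-copy rule moves almost nothing toward that $90,000 a year.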


The most recent South Park episode, featuring a naked Donald Trump, may have violated the law.


With Tom Lehrer's passing, I suppose this is a moment to share the story of the prank he played on the National Security Agency, and how it went undiscovered for nearly 60 years.

https://bsky.app/profile/opalescentopal.bsky.social/post/3luxxx2a2f623


You fucked with squirrels, Morty!


An in-depth look at a very narrow and specific set of norms, the consequences of which are rarely considered. I love stuff like this.

Life Goals (lemmy.world)

Somewhere in a government building in the UK: We did it, Patrick...

[-] General_Effort@lemmy.world 108 points 1 month ago

List order may not reflect order of importance.

[-] General_Effort@lemmy.world 64 points 2 months ago

Useless article, but at least they link the source: https://localmess.github.io/

> We disclose a novel tracking method by Meta and Yandex potentially affecting billions of Android users. We found that native Android apps—including Facebook, Instagram, and several Yandex apps including Maps and Browser—silently listen on fixed local ports for tracking purposes.
>
> These native Android apps receive browsers’ metadata, cookies and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of web sites. These JavaScripts load on users’ mobile browsers and silently connect with native apps running on the same device through localhost sockets. As native apps programmatically access device identifiers like the Android Advertising ID (AAID) or handle user identities as in the case of Meta apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, hence de-anonymizing users visiting sites embedding their scripts.
>
> 📢 UPDATE: As of June 3rd 7:45 CEST, Meta/Facebook Pixel script is no longer sending any packets or requests to localhost. The code responsible for sending the _fbp cookie has been almost completely removed.
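A toy sketch of the mechanism the researchers describe, in Python rather than Android/JavaScript, with an invented port and cookie value; this is illustrative only, not the actual Meta or Yandex code:

```python
import socket
import threading

PORT = 12387  # hypothetical fixed localhost port

# Stands in for the native app: it binds a localhost port and quietly
# waits for browser-side tracking scripts to connect.
srv = socket.socket()
srv.bind(("127.0.0.1", PORT))
srv.listen(1)

def native_app_side():
    conn, _ = srv.accept()
    with conn:
        web_cookie = conn.recv(1024).decode()
    # A real app can read device identifiers (e.g. the AAID) or the
    # logged-in account here, tying the web session to a known person.
    print(f"linked cookie {web_cookie!r} to device id 'AAID-EXAMPLE'")

t = threading.Thread(target=native_app_side)
t.start()

# Stands in for the Pixel-style script running in the mobile browser:
with socket.socket() as cli:
    cli.connect(("127.0.0.1", PORT))
    cli.sendall(b"_fbp=fb.1.1700000000.123456789")  # made-up cookie value

t.join()
srv.close()
```

The point of the technique is that nothing stops a page script from reaching a socket on the same device, so the "anonymous" web side and the identified app side can meet there.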

[-] General_Effort@lemmy.world 68 points 10 months ago

To be honest, I wouldn't have been much impressed by the HTML specifications, either. An open-source alternative to gopher? Oh, how cute. Be sure to tell all your geek friends.

[-] General_Effort@lemmy.world 60 points 10 months ago

Today, LSD would never be discovered. Guy didn't even use gloves and lived to 102.

[-] General_Effort@lemmy.world 101 points 11 months ago

> [French media] said the investigation was focused on a lack of moderators on Telegram, and that police considered that this situation allowed criminal activity to go on undeterred on the messaging app.

Europe defending its citizens against the tech giants, I'm sure.

[-] General_Effort@lemmy.world 88 points 1 year ago

This is a brutally dystopian law. Forget the AI angle and turn on your brain.

Any information will get a label saying who owns it and what can be done with it. Tampering with these labels becomes a crime. This is the infrastructure for the complete control of the flow of all information.

[-] General_Effort@lemmy.world 66 points 1 year ago

The FTC is worried that the big tech firms will further entrench their monopolies. They are doing a lot of good stuff lately; an underappreciated boon of the Biden Presidency. Lina Khan looks to be really set on fixing decades of mistakes.

I guess they just want to know if these deals lock out potential competitors.

[-] General_Effort@lemmy.world 60 points 1 year ago

The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.

[-] General_Effort@lemmy.world 75 points 1 year ago

> Despite the fact that Nvidia is now almost the main beneficiary of the growing interest in AI, the head of the company, Jensen Huang, does not believe that additional trillions of dollars need to be invested in the industry.

*Because of

You heard it, guys. There's no need to create competition to Nvidia's chips. It's perfectly fine if all the profits go to Nvidia, says Nvidia's CEO.

[-] General_Effort@lemmy.world 85 points 2 years ago

Arrows pointing out from Germany indicating a pointless quest for more space. Why do I feel like I have seen that before?

[-] General_Effort@lemmy.world 58 points 2 years ago

Explanation of how this works.

These "AI models" (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual "image maker" (U-Net).

A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE compresses the images. The actual image-making AI only works on the smaller, compressed version (the latent representation), which means it needs a less powerful computer (and uses less energy). That is what makes it possible to run Stable Diffusion at home.
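For the curious, this round trip is easy to see in code. A minimal sketch, assuming the Hugging Face diffusers library, Stable Diffusion's published VAE weights, and some local image file ("cat.jpg" here is a placeholder):

```python
import torch
from diffusers import AutoencoderKL
from diffusers.utils import load_image
from torchvision.transforms.functional import to_tensor

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

img = load_image("cat.jpg").resize((512, 512))   # any RGB image
x = to_tensor(img).unsqueeze(0) * 2 - 1          # shape (1, 3, 512, 512), range [-1, 1]

with torch.no_grad():
    z = vae.encode(x).latent_dist.mean           # (1, 4, 64, 64): 48x fewer values
    recon = vae.decode(z).sample                 # back to (1, 3, 512, 512)
```

The image maker (U-Net) only ever sees `z`, which is why the heavy diffusion math runs on a 64x64 grid instead of the full image.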

This attack targets the VAE. The image is altered so that the latent representation is that of a very different image, but still roughly the same to humans. Say, you take images of a cat and of a dog. You put both of them through the VAE to get the latent representation. Now you alter the image of the cat until its latent representation is similar to that of the dog. You alter it only in small ways and use methods to check that it still looks similar for humans. So, what the actual image maker AI "sees" is very different from the image the human sees.
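A rough sketch of how such an optimization can be set up, continuing the snippet above; `x_cat` and `x_dog` stand for image tensors prepared the same way as `x`, and this shows the general idea rather than the exact published attack:

```python
# Target: the latent the VAE produces for the dog image.
with torch.no_grad():
    z_dog = vae.encode(x_dog).latent_dist.mean

# Learn a small perturbation of the cat image whose latent matches it.
delta = torch.zeros_like(x_cat, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 0.05  # max per-pixel change; a crude "still looks the same" budget

for _ in range(200):
    x_adv = (x_cat + delta).clamp(-1, 1)
    z_adv = vae.encode(x_adv).latent_dist.mean
    loss = torch.nn.functional.mse_loss(z_adv, z_dog)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the edit too small for human eyes

x_poisoned = (x_cat + delta).clamp(-1, 1).detach()  # looks like a cat, encodes like a dog
```

Real tools replace the crude `eps` bound with a proper perceptual-similarity check, but the loop is the same: gradient descent through the VAE's encoder.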

Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.


I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (e.g. the method used to check that images still look similar to humans).

It doesn’t seem to be a very effective attack but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists that threaten digital vandalism, you may be deterred. Well, my two cents.

