List order may not reflect order of importance.
Useless article, but at least they link the source: https://localmess.github.io/
We disclose a novel tracking method by Meta and Yandex potentially affecting billions of Android users. We found that native Android apps—including Facebook, Instagram, and several Yandex apps including Maps and Browser—silently listen on fixed local ports for tracking purposes.
These native Android apps receive browsers' metadata, cookies, and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of websites. These scripts load in users' mobile browsers and silently connect to native apps running on the same device through localhost sockets. Since native apps programmatically access device identifiers like the Android Advertising ID (AAID), or handle user identities as in the case of Meta apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, de-anonymizing users who visit sites embedding their scripts.
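To make the mechanism concrete, here is a minimal sketch in Python of the receiving side: a process standing in for the native app, bound to a fixed localhost port, waiting for whatever the in-page script pushes to it. The port number and payload format are made up for illustration; per the disclosure, Yandex's apps reportedly accepted plain HTTP(S) requests on fixed ports, while Meta's variant used WebRTC rather than a simple socket like this.

```python
# Sketch of the "native app" side: bind a fixed localhost port and wait for
# identifiers pushed by in-page JavaScript. Port and payload are hypothetical.
import socket

HOST, PORT = "127.0.0.1", 12387  # hypothetical fixed port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))       # only reachable from the same device
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(4096)
            # e.g. a _fbp cookie sent by the tracking script; the app can now
            # join it with the advertising ID it reads natively.
            print("received from browser:", data.decode(errors="replace"))
```

The browser side needs nothing exotic: a page script can reach such a listener with an ordinary `fetch("http://127.0.0.1:12387/", {method: "POST", body: ..., mode: "no-cors"})`, which is why any page embedding the tracking script can participate.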
📢 UPDATE: As of June 3rd 7:45 CEST, Meta/Facebook Pixel script is no longer sending any packets or requests to localhost. The code responsible for sending the _fbp cookie has been almost completely removed.
To be honest, I wouldn't have been much impressed by the HTML specifications, either. An open-source alternative to Gopher? Oh, how cute. Be sure to tell all your geek friends.
Today, LSD would never be discovered. Guy didn't even use gloves and lived to 102.
[French media] said the investigation was focused on a lack of moderators on Telegram, and that police considered that this situation allowed criminal activity to go on undeterred on the messaging app.
Europe defending its citizens against the tech giants, I'm sure.
This is a brutally dystopian law. Forget the AI angle and turn on your brain.
Any information will get a label saying who owns it and what can be done with it. Tampering with these labels becomes a crime. This is the infrastructure for the complete control of the flow of all information.
The FTC is worried that the big tech firms will further entrench their monopolies. They are doing a lot of good stuff lately; an underappreciated boon of the Biden Presidency. Lina Khan looks to be really set on fixing decades of mistakes.
I guess they just want to know if these deals lock out potential competitors.
The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.
Despite the fact that Nvidia is now the main beneficiary of the growing interest in AI, the head of the company, Jensen Huang, does not believe that additional trillions of dollars need to be invested in the industry.
*Because of
You heard it, guys. There's no need to create competition to Nvidia's chips. It's perfectly fine if all the profits go to Nvidia, says Nvidia's CEO.
Arrows pointing out from Germany indicating a pointless quest for more space. Why do I feel like I have seen that before?
Explanation of how this works:
These "AI models" (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual "image maker" (U-Net).
A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI only works on the smaller, compressed image (the latent representation), which means it needs a less powerful computer (and uses less energy). This is what makes it possible to run Stable Diffusion at home.
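A quick way to see how much the VAE shrinks the data, sketched with the openly published Stable Diffusion VAE from the `diffusers` library (the checkpoint name is one public example I'm assuming, and the random tensor just stands in for a real photo):

```python
# Sketch: how much smaller the latent representation is than the pixels.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
image = torch.randn(1, 3, 512, 512)      # stand-in for a 512x512 RGB image

with torch.no_grad():
    latent = vae.encode(image).latent_dist.mean

print(image.shape, "->", latent.shape)   # (1,3,512,512) -> (1,4,64,64)
```

That is roughly 48 times fewer values for the U-Net to process, which is why consumer hardware can cope.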
This attack targets the VAE. The image is altered so that its latent representation becomes that of a very different image, while still looking roughly the same to humans. Say you take images of a cat and of a dog. You put both of them through the VAE to get their latent representations. Now you alter the image of the cat until its latent representation is similar to that of the dog. You alter it only in small ways and use methods to check that it still looks similar to humans. So, what the actual image maker AI "sees" is very different from the image the human sees.
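A rough sketch of that optimization loop, under stated assumptions: we have the generator's VAE (again the open SD one), and we use LPIPS as the "still looks the same to humans" check. The loss weight, learning rate, and step count are arbitrary placeholders, and the random tensors stand in for real cat and dog photos.

```python
# Sketch of the latent-mismatch attack: perturb the cat image so its latent
# matches the dog's, while LPIPS keeps it looking like a cat to humans.
import torch
import lpips
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
vae.requires_grad_(False)                 # only the perturbation is trained
perceptual = lpips.LPIPS(net="alex")      # perceptual similarity metric

def encode(x):
    return vae.encode(x).latent_dist.mean

cat = torch.rand(1, 3, 512, 512) * 2 - 1  # stand-ins for real photos,
dog = torch.rand(1, 3, 512, 512) * 2 - 1  # scaled to the VAE's [-1, 1] range

with torch.no_grad():
    target = encode(dog)                  # latent we want the cat to mimic

delta = torch.zeros_like(cat, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(200):
    adv = (cat + delta).clamp(-1, 1)
    latent_loss = (encode(adv) - target).pow(2).mean()  # push latent to dog
    visual_loss = perceptual(adv, cat).mean()           # keep it a cat to us
    loss = latent_loss + 0.1 * visual_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# `cat + delta` now encodes roughly like the dog but still looks like the cat.
```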
Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.
I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see a cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (e.g. the method to check that images look similar to humans).
It doesn't seem to be a very effective attack, but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give the result away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists who threaten digital vandalism, you may be deterred. Well, my two cents.
I'm changing the order some, because I want to get this off my chest first of all.
That's not what I'm seeing. Here's what I'm seeing:
First, you start out with a little story. Remember my post about narratives?
You emphasize what "needs" to be achieved. You try to engage the reader's emotions. What's completely missing is any concern with how, or whether, your proposed solution works.
There are reputation management companies that will scrub or suppress information for a fee. People who are professionally famous may also spend much time and effort to manipulate the available information about them. Ordinary people usually do not have the necessary legal or technical knowledge to do this. They may be unwilling to spend the time or money. Well, one could say that this is alright. Ordinary people do not rely on their reputation in the same way as celebrities, business people, and so on.
The fact is that your proposal gives famous and wealthy elites the power to suppress information they do not like. Ordinary people are on their own, limited by their capabilities (think about the illiterate, the elderly, and so on).
AIs generally do not leak their training data. Only fairly well-known people feature prominently enough in the training data for an LLM to be able to answer questions about them. Having to make the data searchable on the net makes it much more likely that it is leaked, with harmful consequences. On balance, I believe your proposal makes things worse for the average person while benefiting only certain elites.
It would have been straightforward to say that you wish to hold AI companies accountable for damage caused by their services. That is the case anyway; no additional laws needed. Yet you make the deliberate choice to put the responsibility on individuals. Why is your first instinct to go this roundabout route?
But market prices aren't usually arbitrary. People negotiate but they usually come to predictable agreements. Whatever our ultimate goals are, we have rather similar ideas about "a good deal".
All very reasonable ideas. Ultimately, the question is what the effect on the economy is, at least as far as I'm concerned.
These tests mean that more labor and effort are necessary. Mistakes are costly. These costs fall on the consumer. The big-picture view is that, on average, either people have less free time because more work is demanded, or they make do with less because the work does not produce anything immediately beneficial. So the question is whether this work leads to something beneficial after all, in some indirect way. What do you think?
No. That is the immediate hands-on issue. As you know, the web is full of unauthorized content.
Well? What's your pitch?
That is not happening, though?
You compare intellectual property to physical property. Except here, where it becomes "labor". I don't think you would point at a factory and say that it is the owner's labor. If some worker took some screws home for a hobby project, I don't think you would accuse him of stealing labor. Does it bother you how easily you regurgitate these slogans?
Good question. That's an economics question. It requires a bit of an analytical approach. Perhaps we should start by considering if your idea works. You are saying that AI companies should have to buy a copy before being allowed to train on the content. So: How many extra copies will an author sell? What would that mean for their income?
We should probably also extend the question beyond just authors. Publishers get a cut for each copy sold. How many extra copies will a publisher sell and what does that mean for their income?
Actually, the money will go to the copyright owner; often not the same person as the creator. In that way, it is like physical property. Ordinary workers don't own what they produce. A single daily newspaper contains more words than many books. The rights are typically owned by the newspaper corporation and not the author. What does that mean for their income?
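To make the back-of-envelope explicit, here is a tiny sketch with numbers that are pure placeholders, not data:

```python
# Purely illustrative numbers: what one-copy-per-training-run would pay out.
labs_training_models = 20   # generous guess at how many labs buy an extra copy
author_royalty = 2.00       # rough per-copy author royalty, in dollars
publisher_cut = 10.00       # rough per-copy publisher margin, in dollars

print(f"extra author income:    ${labs_training_models * author_royalty:.2f}")
print(f"extra publisher income: ${labs_training_models * publisher_cut:.2f}")
```

Whatever exact numbers you plug in, it is a one-time payment per work, not an income stream.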