[-] lily33@lemm.ee 11 points 8 months ago

Because judges are people, not robots mindlessly applying legislation. To succeed in such a case, you'd need the trial judge and the judges on every appeal to all decide to maliciously comply with the law.

[-] lily33@lemm.ee 11 points 9 months ago

That said, you can use a third party service only for sending, but receive mail on your self-hosted server.
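For illustration, here's a rough sketch of the sending half in Python with smtplib; the relay host and credentials are placeholders for whatever third-party service you pick, and incoming mail still lands on the self-hosted box because the domain's MX record points there:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.org"      # domain whose MX points at the self-hosted server
msg["To"] = "friend@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent through a third-party relay; replies come back to my own server.")

# Outgoing mail goes through the relay; receiving stays self-hosted.
with smtplib.SMTP("smtp.relay.example", 587) as s:   # placeholder relay host
    s.starttls()
    s.login("me@example.org", "app-password")        # placeholder credentials
    s.send_message(msg)
```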

[-] lily33@lemm.ee 11 points 10 months ago* (last edited 10 months ago)

I disagree with the "limitations" they ascribe to the Turing test - if anything, they're implementation issues. For example:

For instance, any of the games played during the test are imitation games designed to test whether or not a machine can imitate a human. The evaluators make decisions solely based on the language or tone of messages they receive.

There's absolutely no reason why the evaluators shouldn't take the content of the messages into account, and use it to judge the reasoning ability of whoever they're chatting with.

[-] lily33@lemm.ee 10 points 10 months ago* (last edited 10 months ago)

I generally back up the whole ~/.mozilla, and if I restore it after a reinstall, everything is as it was. I haven't tried isolating only the profile; it seems pointlessly complicated.
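If it helps, this is roughly the idea as a minimal Python sketch (close Firefox first; the archive name and location are just what I'd pick):

```python
import tarfile
from pathlib import Path

home = Path.home()
backup = home / "mozilla-backup.tar.gz"   # destination is arbitrary

# Archive the whole ~/.mozilla directory; unpack it into the fresh home
# directory after reinstalling and Firefox picks up everything as before.
with tarfile.open(backup, "w:gz") as tar:
    tar.add(home / ".mozilla", arcname=".mozilla")
```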

[-] lily33@lemm.ee 12 points 11 months ago

There's a desperate need for a library that's simpler to use than wlroots or smithay - but unless it supports more protocols (layer shell, gamma control, session lock), I don't think this is a real alternative yet.

[-] lily33@lemm.ee 11 points 1 year ago* (last edited 1 year ago)

have no thoughts

True

know no information

False. There's plenty of information stored in the models, and plenty of papers that delve into how it's stored, or how to extract or modify it.

I guess you can nitpick over the word "know" and what it means, but as someone else pointed out, we don't actually know what that means in humans anyway. But LLMs do use the stored information in context; they don't simply regurgitate it verbatim. For example (from this article):

If you ask an LLM what's near the Eiffel Tower, it'll list locations in Paris. If you edit its stored information so it thinks the Eiffel Tower is in Rome, it'll actually start suggesting sights in Rome instead.
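To make that before/after idea concrete, here's a minimal sketch using a Hugging Face causal LM; the `rank_one_edit` call is a hypothetical placeholder for a model-editing method like ROME, not a real transformers API:

```python
# Sketch of probing a model's stored association before and after an edit.
# `rank_one_edit` is a hypothetical placeholder for a model-editing method
# (e.g. ROME); it is NOT a real transformers function.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # small stand-in; the papers use much larger models
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

def complete(prompt: str) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=15, do_sample=False)
    return tok.decode(out[0], skip_special_tokens=True)

print(complete("The Eiffel Tower is located in the city of"))   # expect Paris-related text

# Hypothetical: overwrite the stored (Eiffel Tower, located-in, Paris) fact.
# model = rank_one_edit(model, subject="Eiffel Tower", target="Rome")

print(complete("Some sights to see near the Eiffel Tower are"))  # after editing: Rome landmarks
```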

[-] lily33@lemm.ee 11 points 1 year ago

Actually, reporting issues is not considered a bad practice in open source. If the corporation expects the dev to work for free, that's a problem. But I found the original bug report, and it's just a normal report. It doesn't read as entitled, doesn't demand "Fix it NOW!!!"; it simply explains an issue.

[-] lily33@lemm.ee 10 points 1 year ago* (last edited 1 year ago)

The irony of the situation still seems distant to the CEO. According to the leaked meeting on August 3, Yuan told employees that Zoom the product does not allow Zoom the company to "build as much trust or be as innovative as in the office."

Of course it doesn't. It allows people to communicate remotely. But it's not a 100% substitute for meeting people in person, and pretending otherwise would be stupid. Of course meeting in person builds more trust than video-chats. And discussions on a real whiteboard can be much more productive than on a video call, depending on the topic.

So why does it even exist

Why does the telephone exist? Zoom exists for the same reason: to let people talk remotely. It has some extra features a telephone doesn't, but that's it. It's not supposed to replace meeting people in person.


Now,

  • I totally think that in Zoom's case, there's no real reason to bring employees to the office, and this is just a corporate power play.
  • I also think there's no point for Zoom to exist when there are great open source alternatives.

But the particular argument this article lays out just makes no sense.

[-] lily33@lemm.ee 11 points 1 year ago

Nobody is limiting how people can use their pc. This would be regulations targeted at commercial use and monetization.

... Google's proposed Web Integrity API seems like a move in that direction to me.

But that's beside the point. I was trying to establish the principle that people who make things shouldn't be able to impose limitations on how those things are used later on.

A pen is not a creative work. A creative work is much different than something that’s mass produced.

Why should that difference matter, in particular when it comes to the principle I mentioned?

[-] lily33@lemm.ee 11 points 1 year ago

Then you'd get things like "Black is a pejorative word used to refer to black people"

[-] lily33@lemm.ee 10 points 1 year ago* (last edited 1 year ago)

I don't know, I'm much more concerned about the possibility that we develop huge automation capabilities that end up being controlled by very few people.

As for the specific issues in the article - yes, they're real problems. But every advance in communication and information technology makes it easier to surveil or defame, and can be used for bad policing.

Right now there's a push to regulate the internet to "prevent CSAM" by blocking encryption, and I'm afraid a push to regulate AI will not get better results.

Sure, we can ban predictive policing and demand some amount of transparency (and the EU already wants to do that). But if we try to go further and impose restrictions on the AI models themselves, this will most likely solidify that AI is controlled by a few powerful corporations. After all, highly regulated models by definition can't be free and open.

[-] lily33@lemm.ee 12 points 1 year ago

"Iran's oil shipments to China triple in 3 years because of sanctions." - Fixed

