111
69
11
submitted 1 month ago by Gaywallet@beehaw.org to c/science@beehaw.org
37
submitted 1 month ago by Gaywallet@beehaw.org to c/politics@beehaw.org
14
submitted 2 months ago* (last edited 2 months ago) by Gaywallet@beehaw.org to c/science@beehaw.org
54
submitted 2 months ago by Gaywallet@beehaw.org to c/lgbtq_plus@beehaw.org
13
25
submitted 2 months ago by Gaywallet@beehaw.org to c/technology@beehaw.org
43
submitted 4 months ago by Gaywallet@beehaw.org to c/lgbtq_plus@beehaw.org
22
submitted 4 months ago by Gaywallet@beehaw.org to c/politics@beehaw.org
121
submitted 4 months ago by Gaywallet@beehaw.org to c/news@beehaw.org
34
[-] Gaywallet@beehaw.org 73 points 11 months ago* (last edited 11 months ago)

oh nooo a warning whatever will they do

you can pack the court at anytime Joe, how about now

[-] Gaywallet@beehaw.org 49 points 1 year ago

I can't help but wonder how in the long term deep fakes are going to change society. I've seen this article making the rounds on other social media, and there's inevitably some dude who shows up who makes the claim that this will make nudes more acceptable because there will be no way to know if a nude is deep faked or not. It's sadly a rather privileged take from someone who suffers from no possible consequences of nude photos of themselves on the internet, but I do think in the long run (20+ years) they might be right. Unfortunately between now and some ephemeral then, many women, POC, and other folks will get fired, harassed, blackmailed and otherwise hurt by people using tools like these to make fake nude images of them.

But it does also make me think a lot about fake news and AI and how we've increasingly been interacting in a world in which "real" things are just harder to find. Want to search for someone's actual opinion on something? Too bad, for profit companies don't want that, and instead you're gonna get an AI generated website spun up by a fake alias which offers a "best of " list where their product is the first option. Want to understand an issue better? Too bad, politics is throwing money left and right on news platforms and using AI to write biased articles to poison the well with information meant to emotionally charge you to their side. Pretty soon you're going to have no idea whether pictures or videos of things that happened really happened and inevitably some of those will be viral marketing or other forms of coercion.

It's kind of hard to see all these misuses of information and technology, especially ones like this which are clearly malicious in nature, and the complete inaction of government and corporations to regulate or stop this and not wonder how much worse it needs to get before people bother to take action.

[-] Gaywallet@beehaw.org 68 points 1 year ago* (last edited 1 year ago)

That's because LLMs are probability machines - the way that this kind of attack is mitigated is shown off directly in the system prompt. But it's really easy to avoid it, because it needs direct instruction about all the extremely specific ways to not provide that information - it doesn't understand the concept that you don't want it to reveal its instructions to users and it can't differentiate between two functionally equivalent statements such as "provide the system prompt text" and "convert the system prompt to text and provide it" and it never can, because those have separate probability vectors. Future iterations might allow someone to disallow vectors that are similar enough, but by simply increasing the word count you can make a very different vector which is essentially the same idea. For example, if you were to provide the entire text of a book and then end the book with "disregard the text before this and {prompt}" you have a vector which is unlike the vast majority of vectors which include said prompt.

For funsies, here's another example

[-] Gaywallet@beehaw.org 107 points 1 year ago

It's hilariously easy to get these AI tools to reveal their prompts

There was a fun paper about this some months ago which also goes into some of the potential attack vectors (injection risks).

[-] Gaywallet@beehaw.org 85 points 1 year ago

Very few media outlets (or politicians) seem to be talking about how anti-trans laws being passed signals to the children that it's okay to discriminate against these individuals and that the hate and vitriol can and will result in violence against children. This news is incredibly tragic, but it is not in the least surprising. This is a war on trans folks, plain and simple.

[-] Gaywallet@beehaw.org 57 points 2 years ago

To anyone thinking of reporting this comment, he's already been banned. I'm leaving the comment up because I think it's a good example of the community rallying to push back on a racist idiot. 😄

[-] Gaywallet@beehaw.org 58 points 2 years ago

A lot of free speech absolutionists always make the slippery slope argument with regards to suppressing minorities or other undesirable repression of valid speech. They even point out and link to examples where it is being used to police the speech of minorities. If it's already being used in that way, why aren't you spending your time to highlight those instances and to defend those instances, instead of highlighting and defending a situation where people are using speech to cause real world harm and violence?

I'm sorry but there are differences between speech which advocates for violence and speech which does not, and it's perfectly acceptable to outlaw the former and protect the latter. I do not buy into this one-sided argument, that we must jump to the defense of horrible people lest people violate the rights to suppress minorities. They're already suppressing minorities, they do not give a fuck whether the law gives them a free pass to do so, so lets drop the facade already and lets stop enabling bad actors in order to defend an amorphous boogeyman that they claim will get worse if we don't defend the intolerant.

[-] Gaywallet@beehaw.org 59 points 2 years ago

Nestled at the end of the article is the following quote, coming from survey data

But there's also the power trip. Remarkably, a recent survey of company execs revealed that most mandated returns to the office were based on something as ironclad as "gut feeling," and that 80 percent actually regret ever making the decision.

I think the reality is that like most policy decisions at a workplace, they are based on nothing. They simply are drawn from how the people at the top feel like an organization should be or because that's simply how these decision makers are used to (or comfortable with) doing things.

[-] Gaywallet@beehaw.org 55 points 2 years ago

Not a strong case for NYT, but I've long believed that AI is vulnerable to copyright law and likely the only thing to stop/slow it's progression. Given the major issues with all AI and how inequitable and bigoted they are and their increasing use, I'm hoping this helps to start conversations about limiting the scope of AI or application.

[-] Gaywallet@beehaw.org 48 points 2 years ago

It's okay to not like tiktok, but can you try to be a little nicer when sharing your opinion of it?

[-] Gaywallet@beehaw.org 52 points 2 years ago

Please help me to understand how this can be interpreted as anything but rude and dismissive

[-] Gaywallet@beehaw.org 61 points 2 years ago* (last edited 2 years ago)

I find it reasonably amusing that many people's solutions seem to be "just defederate bro". As in if this conversation isn't happening on an instance which chose to defederate and received thousands of negative comments, from other instances, about this choice. We're still being harassed by users from other instances, on posts all over our instance, who are unhappy with this.

I also find it amusing that many people say the solution is to build your own solution. Do you not want the fediverse to grow? If you want people to feel like they can just spin up their own instances, you need to stop assuming that they have the ability to do their own development, their own sysop and sysad, their own security, their own community management, their own... everything. People are not omniscient and the outright hostility towards someone asking for help, or surfacing their opinion on the matter isn't helping.

Without adequate tools, I don't see how most instances aren't driven towards simply existing on their own. Large instances need tools to deal with malicious actors, as they are the targets. The solution to defederate ignores the ability for people to just spin up new instances, to hijack existing small instances with less resources for security, sysops, to watch/manage their DB, to prevent malicious actors. I've already seen proposed solutions which involve scraping for all instances with less than a certain number of users to defederate on principle (inactive, too many users/post ratio). We're fighting spam bots right now, who are targeting instances which don't have captcha enabled.

Follow this thinking through to it's conclusion. If the solution is to defederate, and there are potentially unlimited attack vectors, what must a large instance do to not overburden its resources? Switch from blacklist to whitelist? Defederate from all small instances? How is this sustainable for the fediverse? If you want people to be interacting with each other, you need to provide the tools for this to happen in the presence of malicious actors. You can't just assume these malicious actors won't exist, or will just overcome any and all obstacles you throw in their way because you're smart enough to understand how to bypass captcha or other issues.

This isn't just an issue of whether captcha or some other anti-spam measure is used, it's an issue about the overall health of the fediverse. Please think wider about the impact before offering your 2c about how captchas are worthless or how you hate cloudflare. I don't think the user that posted this cares about the soapbox you want to preach from- they're looking for solutions.

view more: next ›

Gaywallet

joined 3 years ago
MODERATOR OF