Society is built to distribute wealth, so that everyone can live a decent life.
As a goal, I admire it, but if you intend this as a description of how things are, it'd be boundlessly naive.
Human brains clearly work differently than AI, how is this even a question?
It's not all that clear that those differences are qualitatively meaningful, but that is irrelevant to the question they asked, so this is entirely a strawman.
Why does the difference in how AI and the brain learn make training an AI on art different from a person studying art styles? Both learn to generalise features in a way that allows them to reproduce them. Both can do so without copying specific source material.
The term “learning” in machine learning is mainly a metaphor.
How does the way they learn differ from how humans learn? They generalise. They form "world models" of how information relates. They extrapolate.
Also, laws are written with a practical purpose in mind - they are not some universal, purely philosophical construct and never have been.
This is the only uncontroversial part of your answer. The main reason courts will treat human and AI actions differently is simply that AIs are not human. For the foreseeable future it will have little to do with whether the processes are similar enough to how humans do it.
I interviewed with them once, and they swore up and down that they were cleaning up and divesting of all the harmful stuff, and wanted me to trust they were all about health and a smoke-free future.
Thankfully they were so staggeringly full of bullshit during the interviews that I quickly realized it'd be an absolutely horrifically toxic (groan, yes, sorry) place to work irrespective of my other doubts, and I ended up telling them I didn't want to continue the process and that I was so unhappy with the assorted bullshit during the process that I didn't want to ever be approached by them again.
That's the very long way of saying I'm not the slightest bit surprised it turns out they are in fact still massive asshats, and I'm very happy I caught on early enough.
I remember growing up in the 1980s and '90s, when there were still a horrifying number of people who refused to believe the CIA did things like that at all, even in a relatively left-wing country like Norway.
Mandela insisted to the end that turning violent was instrumental to actually getting attention. He went on to say this about how ineffectual their non-violent struggle was:
“The hard facts were that 50 years of non-violence had brought the African people nothing but more and more repressive legislation, and fewer and fewer rights.” --Mandela
They were largely ignored internationally while they were peaceful.
I trust his assessment of it over yours any day.
Put another way: How long do you think most people believe the anti-Apartheid struggle went on?
I'd be willing to bet most people have no idea about the decades of resistance to increasingly repressive laws that preceded the escalation. Even those vaguely aware of the struggle tend only to know that Mandela's arrest happened in 1963, after the start of the sabotage operations.
They didn't get much international support until the 1970s, and that support was still fringe until the 1980s, by which point violence had been ramping up for two decades.
When they say "can't be blocked" I presume they mean "can't be blocked with the block function in X/Twitter". They also say it can't be liked or retweeted.
So far ads have been treated as sort-of regular posts that are just shown according to the ad rules rather than because they belong in the timeline under normal criteria, and you could like, retweet and block them just like any other post.
So this is basically them treating ads as a fully separate thing rather than just a different post type.
Though the article suggests they'll still try to make them look mostly like posts, just without showing a handle etc., which is extra scummy.
Reggie is great. Very chill. Just wish it wasn't so shy. I've only been allowed to briefly pet Reggie on a couple of occasions over several years.
We would not.
The extra amount you need as life expectancy increases diminishes with each extra year. E.g. let's assume (for ease of calculation only; you can just scale it up linearly) that you need $10k/year on top of social security to live off in retirement. If your savings are $100k, and you get only a 5% return every year, you'll run out after about 15 years. Hence a typical lifetime annuity bought at age 65 in the US will cost around that much, because it matches up with current US life expectancy (it won't deviate much elsewhere).
So that's for living to roughly 80. Here's how it'll play out as you approach 120:
85: ~20% more
90: ~38% more
95: ~52% more
100: ~62% more
105: ~70% more
110: ~77% more
115: ~82% more
120: ~86% more
As you can see, the curve flattens out. It flattens out because you're getting closer and closer to having enough money that the returns alone can sustain you perpetually (at a 5% return, which is pretty conservative, at $200k you can take out $10k every year forever, and no further increase in life expectancy will change that).
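If you want to sanity-check the arithmetic, here's a rough Ruby sketch of it (my own back-of-the-envelope calculation, assuming a flat 5% return and $10k/year withdrawals taken at the end of each year from age 65; the exact percentages shift by a few points depending on timing assumptions, but the shape of the curve is the same):

```ruby
RATE = 0.05
WITHDRAWAL = 10_000

# Pot needed today to fund `years` of end-of-year withdrawals
# (standard present value of an annuity).
def pot_needed(years)
  WITHDRAWAL * (1 - (1 + RATE)**-years) / RATE
end

baseline = pot_needed(80 - 65)  # ~$104k lasts from 65 to roughly 80

(85..120).step(5) do |age|
  pot = pot_needed(age - 65)
  extra = ((pot / baseline - 1) * 100).round
  puts "to #{age}: ~$#{(pot / 1000).round}k (~#{extra}% more)"
end

# The perpetuity limit the curve flattens towards: $10k / 0.05 = $200k.
puts "forever: $#{(WITHDRAWAL / RATE).round}"
```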
Now, that is of course not an insignificant increase, but if we assume 40 working years, $100k works out to about $850/year of additional investment plus compounding returns at 5%; $186k is around $1,550/year.
But here's the thing: if you work 10 years longer, you grow it disproportionately, because you delay starting to take money out, you need less in total, and you get the compounding returns of ten more years. That drives the yearly savings you need back down to around $850/year.
So an increase of 40 years of life expectancy "just" requires 10 more years of work to fully fund, assuming the same payments in during the later years. And since most people have far higher salaries towards the end of their careers, even inflation-adjusted, most would be able to fund 40 more years with far less than 10 extra years of work.
(Note that if you were already on track for your pension to last you to 90, and you're pre-retirement now, you'd "only" need about 35% extra savings to have enough until 120, because you'd get returns from a higher base, so the extra savings or extra years of work needed would be even lower.)
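Here's the same kind of back-of-the-envelope check for the contribution figures above (again just a sketch under the same flat 5% assumption):

```ruby
# Value a fixed yearly contribution compounds to after `years` years.
def fv_of_contributions(per_year, years, rate = 0.05)
  per_year * (((1 + rate)**years - 1) / rate)
end

# Pot needed to fund `years` of end-of-year withdrawals.
def pot_needed(yearly_draw, years, rate = 0.05)
  yearly_draw * (1 - (1 + rate)**-years) / rate
end

puts fv_of_contributions(850, 40).round    # ~$103k: lasts from 65 to ~80
puts fv_of_contributions(1_550, 40).round  # ~$187k: lasts from 65 to ~120
puts fv_of_contributions(850, 50).round    # ~$178k if you instead work 10 years longer...
puts pot_needed(10_000, 45).round          # ...and ~$178k covers the 45 years from 75 to 120
```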
These all work on averages btw. - due to differences in health, this is where we really want insurance/state pensions rather than relying on individual contributions.
This doesn't mean there aren't problems to deal with, especially if life expectancy grows fast enough that it "outpaces" people's ability to adjust. But it's thankfully not quite as bad as having to add another 30 years of work.
Ignoring the intentionally esoteric languages, of languages in actual use: J, K. Any descendant of APL, basically, and APL itself, though arguably APL is less obtuse than many of its descendants.
E.g., quicksort in J (EDIT: Note Lemmy seems to still garble it despite the code block; see the Wikipedia page on J for the original and other horrifying examples):
quicksort=: (($:@(<#[), (=#[), $:@(>#[)) ({~ ?@#)) ^: (1<#)
(No, I cannot explain it to you.)
Do it. It's fun.
My advice is to start small, and look at some simple examples. E.g. I knew I wanted mine to run in a terminal, and I love Ruby, so I started with Femto which is a really tiny Ruby editor. By itself, it's pretty useless (but beautifully written), but it was remarkably quick to get to something that was "tolerable" for light editing, and then I iterated from there.
There are many small ones, for all kinds of different values of "small", that can serve as inspiration. E.g. Linus Torvalds has his own branch of MicroEmacs (as do many others; it's a popular starting point, and the basis for e.g. Pico, mg, and Vile). Antirez (of Redis fame) has Kilo, so named because it was written to be under 1k lines excluding comments, and there's an "instruction booklet" on how to write your own editor that uses Kilo to demonstrate the approach.
The first starting point, I think, is deciding how general you want it to be. E.g. I decided early on that I don't care at all about being able to use it as my only editor ever, and that meant I could pick and choose use-cases that were out of scope. For example, I just want to edit "human-scale" files, not multi-GB datasets or log files - I'm happy to open those in Emacs if I ever need to - and that gave me far more flexibility in terms of data structures, because I don't need it to scale beyond a few thousand lines. That saved me a lot of effort.
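For what it's worth, the skeleton most of these tiny editors share is surprisingly small: an array of lines, a cursor, and a read-key/redraw loop. Here's a minimal Ruby sketch of that shape (not Femto's actual code, just an illustration of the structure; it only handles typing, Enter, Backspace, Ctrl-S to save and Ctrl-Q to quit):

```ruby
require "io/console"

# A deliberately tiny line editor: buffer of lines + cursor + key loop.
class TinyEditor
  def initialize(path)
    @path = path
    @lines = File.exist?(path) ? File.readlines(path, chomp: true) : []
    @lines << "" if @lines.empty?
    @row = 0
    @col = 0
  end

  def run
    IO.console.raw do
      loop do
        draw
        break unless handle_key(IO.console.getch)
      end
    end
  end

  private

  def draw
    print "\e[2J\e[H"                    # clear screen, cursor to top-left
    @lines.each { |line| print line, "\r\n" }
    print "\e[#{@row + 1};#{@col + 1}H"  # move cursor to editing position
  end

  def handle_key(ch)
    case ch
    when "\C-q" then return false                                 # quit
    when "\C-s" then File.write(@path, @lines.join("\n") + "\n")  # save
    when "\r"                                                     # Enter: split line
      rest = @lines[@row][@col..] || ""
      @lines[@row] = @lines[@row][0...@col]
      @lines.insert(@row + 1, rest)
      @row += 1
      @col = 0
    when "\u007F"                                                 # Backspace
      if @col > 0
        @lines[@row].slice!(@col - 1)
        @col -= 1
      end
    else                                                          # printable chars
      if ch =~ /[[:print:]]/
        @lines[@row].insert(@col, ch)
        @col += 1
      end
    end
    true
  end
end

TinyEditor.new(ARGV[0] || "scratch.txt").run
```

From something like this you iterate: arrow keys, scrolling when the file is taller than the terminal, a status line, and so on, each as a small incremental step.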
I'm mostly shocked they've not lost more users, and faster. I get more engagement on my ~500-follower Mastodon account than my ~50k-follower Twitter account these days (sure, I could pay the manchild for more exposure, and if he weren't so insufferable and had actually made the service better, I might've considered it).
If you were to train human children on an endless series of pictures with signatures in the corner, do you seriously think they'd not emulate signatures in the corner?
If you think that, you haven't seen many children's drawings, because children also often pick up that it's normal to put something in the corner, despite the fact that, for children, pictures with signatures are a tiny proportion of their visual input.
People also mimic. We often explicitly learn to mimic - e.g. I have my son's art folder right here, full of examples of him being explicitly taught to make direct copies as a means of learning technique.
We just don't have very good memory. This is an argument for a difference in ability to retain and reproduce inputs, not an argument for a difference in methods.
And again, this is a strawman. It doesn't even begin to try to answer the questions I asked, or the one raised by the person you first responded to.
Neither of those really suggests that at all (that diffusion is different from how humans learn to generalise images is likely true, but what you've described does not provide even the start of any evidence for that), but again, that is a strawman.
There was no claim they work the same. The question raised was how the way they're trained is different from how a human learns styles.