128
submitted 6 months ago by kromem@lemmy.world to c/technology@lemmy.world

I often see a lot of people with outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It's worth a read if you want a peek behind the curtain on modern models.

[-] kromem@lemmy.world 110 points 6 months ago

That's a fun variation. The one I usually test models with is a vegetarian wolf and a carnivorous goat, but the variation with no other objects at all is an interesting one too.

By the way, here's Claude 3 Opus's answer:

The solution is quite simple:

  1. The man gets into the boat and rows himself and the goat across the river to the other side.
  2. Once they reach the other side, both the man and the goat get out of the boat.

And that's it! Since there are no additional constraints or complications mentioned in the problem, the man and the goat can directly cross the river together using the boat.
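
For anyone who wants to poke at these variations themselves, here's a minimal sketch of a breadth-first search over river-crossing states (my own illustration, not from any of the linked research; the function and parameter names are made up). It handles both the classic puzzle and the stripped-down version, confirming the shortest solution with no other objects really is a single trip:

```python
from collections import deque

def solve_crossing(items, forbidden_pairs):
    """Breadth-first search over river-crossing states.

    A state assigns each item (plus the man) to a bank, 0 or 1.
    forbidden_pairs lists item pairs that can't be left together
    on a bank without the man present.
    """
    start = tuple(sorted({'man': 0, **{i: 0 for i in items}}.items()))
    goal = {k: 1 for k in ['man', *items]}

    def unsafe(state):
        s = dict(state)
        return any(s[a] == s[b] != s['man'] for a, b in forbidden_pairs)

    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        s = dict(state)
        if s == goal:
            return path
        # The man crosses alone, or with one item currently on his bank.
        for cargo in [None, *[i for i in items if s[i] == s['man']]]:
            nxt = dict(s)
            nxt['man'] ^= 1
            if cargo:
                nxt[cargo] ^= 1
            key = tuple(sorted(nxt.items()))
            if key not in seen and not unsafe(key):
                seen.add(key)
                queue.append((key, path + [cargo or 'nothing']))
    return None

# Classic puzzle: wolf eats goat, goat eats cabbage if left unattended.
print(solve_crossing(['wolf', 'goat', 'cabbage'],
                     [('wolf', 'goat'), ('goat', 'cabbage')]))
# The variation above: just the man and the goat, no constraints.
print(solve_crossing(['goat'], []))  # -> ['goat'] (one trip, as Claude says)
```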

[-] kromem@lemmy.world 116 points 7 months ago* (last edited 7 months ago)

For reference as to why they need to try to be so heavy-handed with their prompts about BS, here was Grok, Elon's 'uncensored' AI on Twitter at launch, which upset his Twitter Blue subscribers:

[-] kromem@lemmy.world 126 points 8 months ago

Your competitors take out contract hits against your whistleblower and you need to have bodyguards to protect them.

And then your head of security and the whistleblower fall in love, until at the end of the movie the competitor's assassin gets into the court waiting room and the head of security throws themselves into the ninja star's path, dying in the whistleblower's arms as the ultimate sacrifice is made for love and corporate profits.

I tear up just thinking about it.

[-] kromem@lemmy.world 104 points 10 months ago* (last edited 10 months ago)

That's a weird take given the actual numbers and relative results per company, but ok.

Microsoft's price barely changed at all and it's still trading at a 35 P/E ratio (17% higher than Apple's), despite being neck and neck with Apple in the race for largest company in the market and allegedly not having its AI efforts actually change product usage. Clearly the market is still pricing it as if it's going to grow more somehow.

AMD is down, but since when is AMD an "AI company"? That's Nvidia through and through, and Nvidia is still up double-digit percentage points from a month ago, trading at an 81 P/E ratio. The market losing faith in Nvidia's competition seems more like the opposite of this headline, given AI is the key area where Nvidia has a market advantage over AMD.
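
As a quick sanity check on those multiples, here's a back-of-the-envelope sketch using only the figures quoted above (the implied Apple number is just derived from the 17% claim, not a quoted figure):

```python
# P/E = share price / earnings per share; here we just work with the
# quoted ratios themselves.
msft_pe = 35                      # figure quoted above
nvda_pe = 81                      # figure quoted above
implied_aapl_pe = msft_pe / 1.17  # "17% higher than Apple's" -> Apple at roughly 30

print(f"Implied Apple P/E: {implied_aapl_pe:.1f}")                 # ~29.9
print(f"Nvidia premium over Microsoft: {nvda_pe / msft_pe:.1f}x")  # ~2.3x
```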

Google, whose revenue is 90% ads, is down in response to falling short on ad sales, which if anything may be a result of increased chatbot usage reducing search volume and of Google's chat offering being the Bing of AI chatbots.

This is clickbait analysis.

[-] kromem@lemmy.world 157 points 10 months ago

More like we know a lot more people who would throw zombie-bite parties because they "trust their immune system" while simultaneously insisting the zombies are a hoax.

8
submitted 10 months ago* (last edited 10 months ago) by kromem@lemmy.world to c/technology@lemmy.world

I've been saying this for about a year, since seeing the Othello-GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
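
To make the "combinations unlikely to exist in the training data" point concrete, here's a toy back-of-the-envelope sketch (my own illustration, not code or numbers from the Arora/Goyal paper; the skill count is made up): even with a modest number of atomic skills, the number of multi-skill combinations explodes far past what any corpus could cover.

```python
from math import comb

# Hypothetical count of atomic language skills a model has picked up.
n_skills = 1000

# The number of distinct k-skill combinations grows combinatorially,
# so competence on randomly chosen combinations is hard to explain
# as pure memorization of the training data.
for k in (2, 3, 4):
    print(f"{k}-skill combinations: {comb(n_skills, k):,}")
# 2 -> 499,500   3 -> ~166 million   4 -> ~41 billion
```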

[-] kromem@lemmy.world 95 points 11 months ago* (last edited 11 months ago)

The number of adults in the US that think Satan is a literal being is way too fucking high.

It started with an editor using 'adversary' in place of what was probably the goddess Anat appealing to the head of the pantheon to kill the protagonist's son, as in the earlier Canaanite Tale of Aqhat, serving as an intro to what was an adaptation of the also earlier Babylonian Theodicy in Job.

But we couldn't have a polytheistic holdover, so suddenly there was a supernatural 'adversary' ('Satan') in a story.

Which in turn spawned fanfiction during the age of the prophets, who referred back to the supernatural adversary from Job.

Then Hellenistic ideas around Hades (both the place and the figure) get added into the mix, and we get the Enochian literature about fallen angels, whose guided katabasis influenced Virgil, which in turn informs Dante's Inferno.

Then the King James translation messes up Isaiah, and the Latin for the morning star (Lucifer) gets mistaken for a proper name, further tying the supernatural adversary to being one of the Enochian fallen angels. And we get Milton's Paradise Lost.

It's all just mistranslations and fanfiction.

And yet millions of people believe it's actually a thing, so much so that they freak out at any reference to it as being literally dangerous.

In 2022.

An age filled with things beyond the wildest imagination of those in antiquity dreaming up miracles and wonders.

We're so beyond fucked as a species.

[-] kromem@lemmy.world 154 points 11 months ago

Just wait until they find out public schools are giving their children dihydrogen monoxide without asking for parental approval.

[-] kromem@lemmy.world 96 points 1 year ago

It's the board of the non-profit that owns and controls the LLC, and none of the board members have an equity stake.

This wasn't a board of investors/owners like for-profit boards are.

[-] kromem@lemmy.world 194 points 1 year ago* (last edited 1 year ago)

I've seen a number of misinformed comments here complaining about a profit oriented board.

It's worth keeping in mind that this board was the original non-profit board, that none of its members have equity, and that part of the announcement is literally the board saying they want the company to be more aligned with the original charter of helping bring about AI for everyone.

There may be an argument that Altman's ouster was related to his being too closed-source and profit-oriented, but the idea that the reasoning ran the other way around is pretty ludicrous.

Again - this isn't an investor board of people who put money into the company and have equity they are trying to protect.

[-] kromem@lemmy.world 123 points 1 year ago

I learned so much over the years abusing Cunningham's Law.

I could have a presentation for the C-suite of a major company coming up, post some tenuous claim related to what I intended to present on, and have people with PhDs in the subject citing papers to correct me, with nuances that would make it into the final presentation.

It's one of the key things I miss about Reddit. At Lemmy's scale you just don't get the same rate and quality of expertise jumping in to correct random things as on a site with 100x the users.

[-] kromem@lemmy.world 109 points 1 year ago

Yeah, because it's not like theater has a longstanding history of having people play characters that are a different sex from the one they were born as or anything...

[-] kromem@lemmy.world 269 points 1 year ago

The bio of the victim from her store's website:

Lauri Carleton's career in fashion began early in her teens, working in the family business at Fred Segal Feet in Los Angeles while attending Art Center School of Design. From there she ran “the” top fashion shoe floor in the US at Joseph Magnin Century City. Eventually she joined Kenneth Cole almost from its inception and remained there for over fifteen years as an executive, building highly successful businesses, working with factories and design teams in Italy and Spain, and traveling 200 plus days a year.

With a penchant for longevity, she has been married to the same man for 28 years and is the mother of a blended family of nine children, the youngest being identical twin girls. She and her husband have traveled the greater part of the US, Europe and South America. From these travels they have nourished a passion for architecture, design, fine art, food, fashion, and have consequently learned to drink in and appreciate the beauty, style and brilliance of life. Their home of thirty years in Studio City is a reflection of this passion, as well as their getaway- a restored 1920's Fisherman's Cabin in Lake Arrowhead. Coveting the simpler lifestyle with family, friends and animals at the lake is enhanced greatly by their 1946 all mahogany Chris-Craft; the ultimate in cultivating a well appreciated and honed lifestyle.

Mag.Pi for Lauri is all about tackling everyday life with grace and ease and continuing to dream…

What a waste. A tragedy for that whole family over literally nothing. No reason at all other than small-minded assholes.

