135 points | submitted 10 months ago by yesman@lemmy.world to c/technology@lemmy.world

The researchers started by sketching out the problem they wanted to solve in Python, a popular programming language. But they left out the lines in the program that would specify how to solve it. That is where FunSearch comes in. It gets Codey to fill in the blanks—in effect, to suggest code that will solve the problem.

A second algorithm then checks and scores what Codey comes up with. The best suggestions—even if not yet correct—are saved and given back to Codey, which tries to complete the program again. “Many will be nonsensical, some will be sensible, and a few will be truly inspired,” says Kohli. “You take those truly inspired ones and you say, ‘Okay, take these ones and repeat.’”
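
To make that loop concrete, here is a minimal sketch of the generate-score-repeat structure described above. The callables `sample_completion` and `score_program` are hypothetical stand-ins for the Codey call and the automatic evaluator; nothing here is DeepMind's actual code.

```python
# Minimal sketch of the generate-score-repeat loop described above.
# `sample_completion` and `score_program` are hypothetical placeholders
# for the code model and the automatic evaluator, supplied by the caller.
def funsearch_style_loop(skeleton, sample_completion, score_program,
                         rounds=30, samples_per_round=1000, pool_size=20):
    pool = []  # (score, candidate) pairs for the best completions so far
    for _ in range(rounds):
        best_examples = [candidate for _, candidate in pool]
        for _ in range(samples_per_round):
            # Ask the model to fill in the missing function, showing it a few
            # of the best earlier attempts so it can build on them.
            candidate = sample_completion(skeleton, best_examples)
            score = score_program(candidate)  # None means it crashed or was nonsense
            if score is not None:
                pool.append((score, candidate))
        # Keep only the highest-scoring programs and feed them back next round.
        pool = sorted(pool, key=lambda pair: pair[0], reverse=True)[:pool_size]
    return pool[0] if pool else None
```

The essential design choice, per the article, is that only scored, working programs survive into the pool, so each round's prompts are seeded with the best material found so far.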

After a couple of million suggestions and a few dozen repetitions of the overall process—which took a few days—FunSearch was able to come up with code that produced a correct and previously unknown solution to the cap set problem, which involves finding the largest size of a certain type of set. Imagine plotting dots on graph paper. The cap set problem is like trying to figure out how many dots you can put down without three of them ever forming a straight line.
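
The "no three dots in a line" condition is easy to check by machine, which is what makes the automatic scoring step possible. Below is a small illustrative checker using the standard formulation over the integers mod 3, where three distinct points lie on a line exactly when their coordinate-wise sum is zero mod 3; it is a sketch for intuition, not code from the paper.

```python
# A cap set is a set of points in (Z mod 3)^n with no three on a line.
# Over Z mod 3, three distinct points are collinear exactly when their
# coordinate-wise sum is zero mod 3. Illustrative only, not from the paper.
from itertools import combinations

def is_cap_set(points):
    """Return True if no three distinct points are collinear in (Z mod 3)^n."""
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False  # found three points on a line
    return True

print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))           # True: a cap set in dimension 2
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]))   # False: (0,0), (1,1), (2,2) are collinear
```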

top 33 comments
[-] Mahlzeit@feddit.de 49 points 10 months ago

FunSearch (so called because it searches for mathematical functions, not because it’s fun)

I'm probably not the only one who wondered.

[-] jaybone@lemmy.world 15 points 10 months ago

Some people might consider that fun :(

[-] DemBoSain@midwest.social 7 points 10 months ago

I would have called it FunkSearch, to eliminate this misunderstanding.

[-] TheGreenGolem@lemm.ee 7 points 10 months ago

The Funk, the Whole Funk, and Nothing but the Funk

[-] davidgro@lemmy.world 3 points 10 months ago

Gotta have that funk

[-] missingredient@sh.itjust.works 48 points 10 months ago

So, they basically "intelligently" brute forced it? Still cool.

[-] TimeSquirrel@kbin.social 11 points 10 months ago

Isn't that how we learn too? We stop doing the things that don't work in favor of things that do while repeatedly "brute forcing" ourselves (training/practice).

[-] PhantomPhanatic@lemmy.world 11 points 10 months ago* (last edited 10 months ago)

It's more like educated guessing, which is a lot faster than brute forcing. They can use code to check the answers so there is ground truth to verify against. A few days of compute time for an answer to a previously unsolved math problem sounds a lot better than brute forcing.

Generate enough data for good guesses and bad guesses and you can train the thing to make better guesses.
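
As a toy illustration of that verify-then-learn idea (purely hypothetical, not the paper's evaluator): every guess is run through a ground-truth check, and the good and bad guesses are kept separate so the next round of guessing can be steered toward what worked.

```python
# Toy illustration of splitting guesses by a ground-truth check so later
# guessing can be biased toward what worked. Hypothetical, not the
# evaluator used in the paper.
def label_guesses(candidates, is_correct):
    good, bad = [], []
    for guess in candidates:
        (good if is_correct(guess) else bad).append(guess)
    return good, bad

# e.g. keep only guesses whose square is 16
good, bad = label_guesses([3, 4, -4, 5], is_correct=lambda x: x * x == 16)
print(good, bad)  # [4, -4] [3, 5]
```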

[-] 14th_cylon@lemm.ee 1 points 10 months ago

it is what we (the people) do as well. we look at the data and try to find a pattern. but the computer can process a larger amount of data than people can. that's it.

[-] mumblerfish@lemmy.world 7 points 10 months ago

It's not brute forcing though. It seems similar to genetic programming, but with more parts that are less well understood.

[-] jacksilver@lemmy.world 5 points 10 months ago

I mean, I would also call genetic algorithms a form of brute forcing. And just like with genetic algorithms, this approach is going to be severely limited by the range of values that can be updated and the ability to test the outcome.
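
For readers who want that comparison spelled out, here is a bare-bones genetic-algorithm loop: a fixed set of updatable values, blind random mutation, and selection by a fitness test. It is illustrative only and not from the paper; FunSearch keeps the evaluate-and-select structure but roughly swaps the blind mutation step for LLM-generated code.

```python
# Bare-bones genetic algorithm: mutate values in a fixed range, keep the
# fittest candidates, repeat. Illustrative only, not from the paper.
import random

def genetic_search(fitness, length=8, pop_size=50, generations=200):
    # each individual is a fixed-length list of values in a limited range
    pop = [[random.randint(0, 9) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fittest half
        children = []
        for parent in parents:
            child = parent[:]
            child[random.randrange(length)] = random.randint(0, 9)  # mutate one updatable value
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# toy fitness: maximize the sum of the values
best = genetic_search(fitness=sum)
print(best, sum(best))
```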

[-] mumblerfish@lemmy.world 1 points 10 months ago

Why would you not be able to test the outcome fully? And what do you mean by "limited by the range of values that can be updated"?

[-] jacksilver@lemmy.world 1 points 10 months ago

So they configured the experiment so that only certain lines of code could be iterated/updated. Maybe you could ask it to start from scratch, but I imagine that would increase the time for it to converge (if it ever does).

Regarding testing, not all mathematical proofs can be verified by example. Here they were trying to establish a better bound for the problem by exhibiting an example, but not all proofs will work with that structure.

[-] thesmokingman@programming.dev 31 points 10 months ago* (last edited 10 months ago)

Buried the fucking lede with misleading garbage. They came up with new, larger cap sets than were previously known. That's cool, but it doesn't actually prove anything related to open cap set conjectures. I'd contend this is similar to the early solutions of the four-color map theorem, albeit with a computer coming up with the cases to brute force. Pretty fucking neat; not solving an unsolvable problem by any stretch of the imagination. I would expect that kind of hyperbole from the lay press, not the fucking MIT Technology Review.

Edit because this shit is really cool: I intentionally linked this to the four-color map theorem because that was the first brute-force proof (at least via computer). Lots of people got pissed at the authors and said it was invalid because they reduced their special cases to a finite set and had a computer chug through them. IMO proof by computer is valid and one of the ways stuff like this can aid math. There are so many problems in combinatorics alone that could benefit from this treatment: getting new, unknown special cases on the way to a general case, or handling finite sets of special cases that were previously too large.

[-] metaStatic@kbin.social 15 points 10 months ago

We have solved the unsolvable problem.

Should probably rename it then.

[-] AbidanYre@lemmy.world 8 points 10 months ago

Lots of problems are unsolvable until they're solved.

[-] victorz@lemmy.world 6 points 10 months ago

Kind of abusing the word there a bit though eh. Maybe call them something like "really feggin hard problems" instead.

[-] thesmokingman@programming.dev 3 points 10 months ago

This is mostly incorrect. There are provably unsolvable problems and unsolved problems. Many times someone will mislabel the latter as the former; that doesn’t make it actually provably unsolvable. Often we suspect unsolved problems might be unsolvable but do not go to the extreme of claiming it until it’s proved impossible to solve.

[-] Senex@reddthat.com 11 points 10 months ago

"Gold among the garbage" sums up AI very nicely.

[-] BoastfulDaedra@lemmynsfw.com 1 points 10 months ago

This gold... smells... funny...

[-] LainOfTheWired@lemy.lol 7 points 10 months ago

Does this mean computers can finally do floating point math!

[-] autotldr@lemmings.world 5 points 10 months ago

This is the best summary I could come up with:


Google DeepMind has used a large language model to crack a famous unsolved problem in pure mathematics.

In a paper published in Nature today, the researchers say it is the first time a large language model has been used to discover a solution to a long-standing scientific puzzle—producing verifiable and valuable new information that did not previously exist.

FunSearch (so called because it searches for mathematical functions, not because it’s fun) continues a streak of discoveries in fundamental math and computer science that DeepMind has made using AI.

Earlier systems, built on top of DeepMind's game-playing AI AlphaZero, solved math problems by treating them as if they were puzzles in Go or chess.

It combines a large language model called Codey, a version of Google’s PaLM 2 that is fine-tuned on computer code, with other systems that reject incorrect or nonsensical answers and plug good ones back in.

Terence Tao at the University of California, Los Angeles, who has won many of the top awards in mathematics, including the Fields Medal, called the cap set problem “perhaps my favorite open question” in a 2007 blog post.


The original article contains 821 words, the summary contains 185 words. Saved 77%. I'm a bot and I'm open source!

[-] just_another_person@lemmy.world 5 points 10 months ago

Guarantee this comes out as untrue. Mark me.

[-] BetaDoggo_@lemmy.world 3 points 10 months ago

They basically found a more effective way to brute force the problem. I don't doubt that it's possible. The title calling it unsolvable is nonsense though.

[-] mumblerfish@lemmy.world 4 points 10 months ago

Does it perform significantly better than genetic programming?

[-] topinambour_rex@lemmy.world 3 points 10 months ago

Can it divide by zero ?

[-] yesman@lemmy.world 1 points 10 months ago

I thought this was interesting bc it's an instance where an LLM has done something undeniably novel and unique while expanding human understanding. It's a chink in the armor of the idea that an LLM is a "stochastic parrot" that can only regurgitate and never create.

I've been toying with the idea that LLMs are showing us that what we thought of as creativity, learning, and problem solving aren't as rarefied as we believed. We know that AI isn't conscious; maybe consciousness isn't as much of a prerequisite for those behaviors and that cognition as we thought.

[-] jacksilver@lemmy.world 4 points 10 months ago

I'm not so sure; it feels a lot more like the https://en.wikipedia.org/wiki/Infinite_monkey_theorem, but with a model helping limit the outputs so they are mostly usable. As stated in the article, it took millions of runs and a couple of days to get the results. So it's more like brute forcing with a slightly modified genetic algorithm than anything else.

I didn't see a link to the full article, so maybe something more creative is happening behind the scenes, but it seems unlikely.

[-] thesmokingman@programming.dev 2 points 10 months ago

Your interpretation is correct. There's no new logic here, just new special cases of a problem whose general solution is still unknown. I think it's pretty cool and has a lot of value in places like design theory, where getting examples to try and play around with general solution ideas is really tough. But all it did was creatively crunch numbers.

[-] lemmyvore@feddit.nl -1 points 10 months ago

This approach sounds more like selective breeding to me.

If you do this with cats and select in each generation until you obtain a particularly fluffy cat, the cat doesn't get the credit. Nobody says "wow, how smart are cats for achieving this", they praise the breeder instead.

Which is as it should be. The people who seed and select these algorithms and can recognize a breakthrough deserve the credit, not the churning machine that goes through millions of permutations blindly.
