submitted 4 months ago by context@hexbear.net to c/technology@hexbear.net

By using unorthodox "cyclic" strategies—ones that even a beginning human player could detect and defeat—a crafty human can often exploit gaps in a top-level AI's strategy and fool the algorithm into a loss.

preprint of the actual science article summarized in the ars technica piece:

https://arxiv.org/pdf/2406.12843

Prior work found that superhuman Go AIs like KataGo can be defeated by simple adversarial strategies. In this paper, we study if simple defenses can improve KataGo's worst-case performance. We test three natural defenses: adversarial training on hand-constructed positions, iterated adversarial training, and changing the network architecture. We find that some of these defenses are able to protect against previously discovered attacks. Unfortunately, we also find that none of these defenses are able to withstand adaptive attacks. In particular, we are able to train new adversaries that reliably defeat our defended agents by causing them to blunder in ways humans would not. Our results suggest that building robust AI systems is challenging even in narrow domains such as Go.
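the "iterated adversarial training" defense from the abstract boils down to a loop: train an adversary that exploits the current victim, fine-tune the victim on those losses, repeat. here's a toy sketch of that loop — every function is a hypothetical stand-in, not the paper's or KataGo's actual training code:

```python
# Toy sketch of iterated adversarial training. `find_exploit` and `patch`
# are illustrative stand-ins for adversary training and victim fine-tuning.

def find_exploit(victim):
    # Stand-in: train an adversary and return a strategy that beats `victim`.
    return f"exploit-of-{victim}"

def patch(victim, exploit):
    # Stand-in: fine-tune the victim on games it lost to `exploit`.
    return f"{victim}+vs({exploit})"

def iterated_adversarial_training(victim, rounds):
    exploits = []
    for _ in range(rounds):
        exploit = find_exploit(victim)   # adversary finds a new weakness
        victim = patch(victim, exploit)  # victim is hardened against it
        exploits.append(exploit)
    return victim, exploits

hardened, history = iterated_adversarial_training("katago-base", 2)
```

the paper's finding is that this loop never closes: an adaptive adversary trained against the hardened victim still finds fresh exploits outside the set it was defended against.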

[-] context@hexbear.net 6 points 4 months ago

yeah it's a time commitment, and especially starting out, many people get overwhelmed by the number of options they have to weigh before making a decision. i never got very good, myself.

[-] Acute_Engles@hexbear.net 4 points 4 months ago

I'm also not good, for the record

this post was submitted on 15 Jul 2024
29 points (100.0% liked)