[-] BigMuffin69@awful.systems 9 points 3 months ago

Wild. Just the mention of "the moon" and it starts playing in my head. This place is an info hazard.

[-] BigMuffin69@awful.systems 10 points 3 months ago

A nice exponential curve depicting the infinite future lives saved by whacking a ceo

[-] BigMuffin69@awful.systems 8 points 7 months ago

Delete this now. This is an info hazard.

[-] BigMuffin69@awful.systems 9 points 8 months ago

Article summary:

A close look into the people and entities who helped draft and advocate for California’s controversial AI safety bill (SB 1047) reveals that one of the bill’s co-sponsors could have a significant conflict of interest. Dan Hendrycks, an executive at the Center for AI Safety — the firm whose lobbying arm helped draft the bill — co-founded a company called Gray Swan that offers AI safety compliance tools, which seem positioned to provide the type of auditing data the bill would require. Gray Swan publicly launched on Tuesday. The Center's lobbying arm was set up so it could help draft SB 1047 after state senator Scott Wiener approached the Center.

[-] BigMuffin69@awful.systems 9 points 10 months ago* (last edited 10 months ago)

I got you homie


[-] BigMuffin69@awful.systems 9 points 11 months ago

ngl, I'd sooner believe slime mold had mental states than a sequence of matrix multiplications & ReLUs.

[-] BigMuffin69@awful.systems 10 points 1 year ago

he dropped out of school after eighth grade, taught himself calculus

Lmaou, gonna need a citation on this one chief. This the same guy who said we need people monitoring for 'sudden drops' in the loss function? I'm supposed to believe this poser understands what a derivative is now?

[-] BigMuffin69@awful.systems 8 points 1 year ago

brb, modding Skyrim so when you open an Elder Scroll it's a fucking LW post.

At least then it will make sense why you go blind.

[-] BigMuffin69@awful.systems 10 points 1 year ago

Well, obviously an ASI will be able to solve the halting problem, right after it starts solving NP-hard problems in n log(n) time. After all, if it couldn't, would it really be an ASI? Our puny human brains just can't comprehend the bigness of its IQ.

[-] BigMuffin69@awful.systems 9 points 1 year ago* (last edited 1 year ago)

This is a quality sneer, bravo 👍 . I had randomly encountered this super-recursive article in the wild after unfortunately stumbling upon some 🐀 blog post about how either the CTT lives in your heart or it doesn't (NOT A CULT).

Speaking of hypercomputation, reminds me of how ol' Ilya was saying the only way NNs' success could 'obviously' be explained is that they are minimizing Kolmogorov complexity, a literally uncomputable quantity. Try teaching a NN 4-digit addition and tell me if the result looks Kolmogorov optimal.

[-] BigMuffin69@awful.systems 8 points 1 year ago

As in, if your last hope for immortality is brain uploads, you are kinda cornered into believing your sense of self gets shared between the physical and the digital instance, otherwise what’s the point? EY appears to be in this boat; he’s claimed something like there’s no real difference between instances of You existing at different moments in time sharing a self and you sharing a self with a perfect digital copy, so yeah, it’s obviously possible, unavoidable even.

God help me for asking, but do the Yuddites have a response to “every open system implements every finite state automaton” ?

