[-] Wololo@lemmy.world 15 points 1 year ago

My first playthrough I sent Lae'zel straight to camp and never spoke to her again. At some random point before act 1 even ended, her romance dialogue came up.

My theory is the tadpoles ramp up sex drive to absurd levels, unless you're Shadowheart, apparently. Illithid orgies must be incredible.

[-] Wololo@lemmy.world 9 points 1 year ago

I've had similar experiences lately. Either that, or it decides to review and analyze my code unprompted when I'm trying to troubleshoot a particularly tricky line. I've had a few instances where it tried to borderline gaslight me into thinking it was right and I was wrong about certain solutions. It feels like it happened rather suddenly, too; it never used to do that, save for the odd exception.

[-] Wololo@lemmy.world 2 points 1 year ago

Actually, I've had a slightly opposite experience. I found that when I asked it general programming questions, it initially tried to explain how I could achieve what I was looking to do, but lately it has been jumping straight to writing example code (sometimes even asking for my existing code so that it can modify it).

[-] Wololo@lemmy.world 2 points 1 year ago

When you end up resorting to saying things like "wow, this is wonderful, but... it breaks my code into a million tiny pieces" or "for the love of God, do you have any idea what you're actually doing?", it's a sign that perhaps Stack Overflow is still your best (and only) ally.

[-] Wololo@lemmy.world 2 points 1 year ago

It's entirely possible! I remember listening to a podcast on AI where they mentioned someone once asked the question "which mammal lays the largest eggs?", to which the AI responded with elephants, and then proceeded to argue with the user that it was right and he was wrong.

It has become a lot easier as I've learned how to coach it in the direction I want, pointing out obvious errors and showing it what I'm really looking to do.

AI is a great tool, when it works. As the technology improves, I'm sure it will rapidly get better.

[-] Wololo@lemmy.world 2 points 1 year ago

If I remember correctly it should have been GPT-4, though of course there's always a chance it was 3.5.

Since then I've learned much better ways to coax it into answering my questions more precisely, and that seems to do the trick.

[-] Wololo@lemmy.world 44 points 1 year ago

I literally broke down into tears doing this one night. I was running something that would take hours to complete and noticed an issue at maybe 11pm. I tried to troubleshoot and could not for the life of me figure it out. Thought to myself, surely ChatGPT can help me figure this out quickly. Fast forward to 3am, on a work night: "No, as stated several times prior, this will not resolve the issue. It causes it to X, Y, Z, when it should be A, B, C. Do you not understand the issue?"

"I apologize for any misunderstanding. You are correct, this will not cause the program to A, B, C. You should..." *inserts the same response it had been giving me for several hours*

It was at that moment that I realized these large language models might not currently be as advanced as people make them out to be.

[-] Wololo@lemmy.world 3 points 1 year ago

It says someone was charged for returning fire? I'm curious about that. Were they arrested for illegal firearm possession, or was it something else?
