[-] aiccount 1 points 1 week ago

Yeah, just what you need: an even more closed-off echo chamber always telling you that you are a genius for being willfully ignorant. You are just compounding your problem.

[-] aiccount 0 points 1 week ago

There is a reason why you point to examples from years ago: that's where you are still stuck.

[-] aiccount 1 points 1 week ago

Yesterday's AI is today's normal technology; this is just what keeps happening. Some people just keep forgetting how rapidly things are changing.

You'll join this "cult" once the masses do, just like you have been doing all along. Some of us are just out here a little bit in the future. You will be one of us once you think it has become cool, and then you will self-righteously act like you were one of us all along. That's just what weak-minded followers do. They try to seem like they knew all along where the world was headed, without ever trying to look ahead themselves, while ridiculing anyone who does.

[-] aiccount 1 points 1 week ago

I responded to a dickish tone with a dickish tone. If that is the only tone some people can understand, then that's what I'll try giving them. It should be made abundantly clear that people who use technology in every aspect of their lives while being openly anti-technology are fools. It shouldn't somehow be accepted that living in blatant hypocrisy is cool.

[-] aiccount 1 points 5 months ago

I think there may be some confusion about how much energy it takes to respond to a single query or generate boilerplate code. I can run Llama 3 on my computer, and it can do those things no problem. My computer would use about 6 kWh if I ran it for 24 hours; a person, in comparison, takes about half of that. If my computer spends 4 hours answering queries and writing code, that would take 1 kWh, and that would be a whole lot of code and answers. The "powering a small town" figure is a one-time cost paid when the model is trained, so to determine whether it is worth it, that cost needs to be amortized over everyone who ends up using the resulting model. The math for that is a bit trickier.
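To make the arithmetic above concrete, here is a back-of-envelope sketch. The 250 W draw is an assumption that matches the 6 kWh/day figure in the comment; the query count is purely hypothetical.

```python
# Back-of-envelope check of the local-LLM energy numbers above.
# Assumption: a PC drawing a steady 250 W, which works out to
# 6 kWh over a full 24-hour day, as stated in the comment.

power_watts = 250
daily_kwh = power_watts * 24 / 1000        # 6.0 kWh per day

hours_inferencing = 4
inference_kwh = power_watts * hours_inferencing / 1000   # 1.0 kWh

# Hypothetical: spread that over 500 queries handled in those 4 hours.
queries = 500
wh_per_query = inference_kwh * 1000 / queries            # 2.0 Wh each

print(daily_kwh, inference_kwh, wh_per_query)
```

Even at a few watt-hours per query, local inference is a tiny fraction of the one-time training cost, which is the part that gets amortized across all users.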

Compared to the amount of energy it would take to raise and educate a group of people who can answer questions and write code, I'm fairly certain the AI model comes out considerably cheaper. Hopefully, we don't start deciding which one to produce based on energy efficiency. We might, though: if the people who choose the fate of the masses see us as livestock, we may end up having our numbers reduced in the name of efficiency. When cars were invented, horses didn't all end up living in paradise. There were just a whole lot fewer of them around.

[-] aiccount 1 points 5 months ago

This is an issue with many humans I've hired, though. Maybe they try to cut corners and do a shitty job, but I check occasionally; if they are bad at their job, I warn them, correct them, and maybe eventually fire them. For lots of tasks, AI can be managed in a very similar way.

This is so similar to many people's complaints about self-driving cars. Sure, accidents will still happen; they are not perfect, but neither are human drivers. If we hold AI to some standard way beyond people, then no, it's not there. But if we say it just needs to be better than people, then it already is for many applications, and more importantly, it is rapidly improving. Even if it were only as good as people at something, it would still be way cheaper and faster. For some things, it's worth it even if it isn't as good as people yet.

I have very few issues with hallucinations anymore. When I use an LLM for anything involving facts, I always tell it to give sources for everything, and I can have another agent independently verify the sources before I see them. Oftentimes I provide the books or papers that I want it to source from specifically. Even if I am going to check all the sources myself after that, it is still way more efficient than doing the whole thing myself. The thing is, with the setups I use, it literally never makes up sources anymore. I remember that kind of thing happening back in the days when AI didn't have internet access and there really weren't agents yet. I realize some people are still back there, but in the future (that many of us are in), it's basically solved.

There are still logic mistakes and such; that stuff can't be 100% depended on. But if you have a team of agents going back and forth to find an answer, then pass it to another team of agents to independently verify the answer, and have it cycle back if a flaw is found, many issues just go away. Maybe some mistakes make it through this whole process, but the same thing happens sometimes with people.
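The answer/verify/cycle-back loop described above can be sketched in a few lines. This is a minimal illustration, not a real setup: the two "agents" are stand-in functions, where in practice each would be a separate model call with its own prompt, and the source string is a placeholder.

```python
# Sketch of the answer -> verify -> cycle-back loop. Both agents are
# hypothetical stand-ins, not calls to a real LLM API.

def answer_agent(question, feedback=None):
    # Hypothetical drafting agent: its first draft omits sources,
    # then it corrects itself after receiving verifier feedback.
    if feedback:
        return {"answer": "42", "sources": ["placeholder-source"]}
    return {"answer": "42", "sources": []}

def verifier_agent(draft):
    # Hypothetical independent checker: flags drafts with no sources.
    if not draft["sources"]:
        return "every claim needs a source"
    return None  # no flaw found

def answer_with_verification(question, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        draft = answer_agent(question, feedback)
        feedback = verifier_agent(draft)
        if feedback is None:   # verifier approved; we're done
            return draft
    return draft               # give up after max_rounds

result = answer_with_verification("What is the answer?")
print(result["sources"])
```

The point of the design is that the verifier is independent of the drafter, so a fabricated source has to slip past a second check rather than just the first one.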

I don't have the link on hand, but there have been studies showing GPT-3.5 working in agentic cycles performing as well as or better than GPT-4 out of the box. The article I saw that in was saying that, essentially, people are already using what GPT-5 will most likely be, just by running teams of agents with the latest models.

[-] aiccount 1 points 5 months ago

This is a really great way to phrase it. I am very curious to see whether this difference in phrasing would really be received differently than the more blunt approach, which certainly doesn't seem to work for most people. Hopefully, we will all have AIs soon that can spoon-feed anyone who can't connect the dots on their own.

It blows my mind that people can be reminded of the mass slaughter that is happening daily and think that it must somehow be excusing the one-off brutal slaughter of an individual. I always just assume that people hate to be reminded of the implication of their "sustainable" wild caught tuna or whatever.

[-] aiccount 1 points 10 months ago* (last edited 10 months ago)

Alright, no big deal. But yeah, your gut instinct was correct when you assumed there was a missing /s. I don't really like the /s that much, especially in situations where it is so obvious.

If you had read down through this thread first, you would have seen the obviousness of the /s. I don't think my comment history outside of this thread would have done much, since I don't generally talk about this stuff. I just meant if you had looked more than a couple of comments into this particular back-and-forth.

[-] aiccount 1 points 10 months ago

Well then you didn't read very many of my comments. I made that first comment because the post I responded to was so absurd; I was just exaggerating the ridiculousness of what they said. Of course AI is capable of creativity and intelligence. If you look at the long back-and-forth this sparked, you'll see that this is my stance. After I made that over-the-top, very sarcastic comment, OP corrected themselves to clarify that by "AI" they actually only meant the current state of LLMs. They have since admitted that AI absolutely can be capable of creativity and intelligence.

[-] aiccount 1 points 10 months ago

No, sorry, you are absolutely right, and I genuinely could not be more in agreement with you. I was just annoyed to see this top comment acting like there is something magical about humans that gives them a monopoly on creativity, so I was just reiterating what they said in the hopes that people would think about it for a sec. Obviously machines can be just as creative/intelligent as humans, and most likely will be more so in the not terribly distant future.

[-] aiccount 1 points 10 months ago

Even those future "real" AIs are going to be taking in human input and regurgitating it back to us. The only difference is that the algorithms processing the data will continue to get better and better. There is not some cutoff where we go from 100% unintelligent chatbot to 100% intelligent AI. It is a gradual spectrum.

[-] aiccount 1 points 1 year ago

Hey, your name is an emoji, right? I tried to change my display name to 🤹‍♂️ but it didn't seem to work. Is there something special I have to do?


aiccount

joined 1 year ago