Shelena@feddit.nl 1 points 1 week ago

Thank you! :-)

Shelena@feddit.nl 1 points 3 months ago

I think for most people it is not the best advice. In most cases, there are many more factors at play than just willpower and "calories in vs calories out". Obesity should be viewed and treated more like a disease, because it is one. If you are interested, I can link you to some interesting papers on this.

Shelena@feddit.nl 1 points 3 months ago

It is not as simple as just calories in vs calories out. Your body has a set point for the weight it thinks it should be. Once you are overweight, your set point rises and your body wants to get back to that higher weight. It will start actively working against you: your appetite may increase and your metabolism may slow down. I think that is what you are describing here.

Trying to push yourself to lose more weight while your body works against you can cause rebound weight gain if you are not able to stick to the diet (which may become increasingly difficult as your appetite grows). The most important thing, I think, is to keep a healthy diet that does not reduce your quality of life too much and is sustainable in the long term. If you are struggling every day, it might be better to eat a little more and stay at a higher weight a bit longer, to make sure you can maintain the weight loss.

Maybe this is already what you meant. But the phrase "calories in vs calories out", and the claim that nothing else matters, made me want to respond. I think it is a popular oversimplification that causes a lot of unnecessary suffering for people trying to lose weight.

Shelena@feddit.nl 1 points 3 months ago

That is just really sad. Evil is defined by a lack of empathy, and this way of thinking clearly shows a lack of empathy.

Shelena@feddit.nl 1 points 8 months ago

That is true. I try to stay optimistic. They will probably never really change their minds, no matter what happens. But they might lose interest at some point and stop voting. Maybe when they get confronted with real issues in their lives instead of the imaginary ones they read about on Facebook.

Shelena@feddit.nl 1 points 10 months ago

He looks eel!

Shelena@feddit.nl 1 points 11 months ago

I did not know that. That is actually a really good explanation; it shows how old the tradition is.

Shelena@feddit.nl 1 points 1 year ago

I agree that we need a definition. But there has always been disagreement about which definition should be used (as is the case with almost anything in most fields of science). Traditionally, there have been four types of definitions of (artificial) intelligence; if I remember correctly, they are: thinking like a human, thinking rationally, behaving like a human, and behaving rationally. I remember having to write an essay about this for my studies and ending it by saying that we should not aim to create AI that thinks like a human, because there are more fun ways to create new humans. ;-)

I think the new LLMs will pass most forms of the Turing test and are thus able to behave like a human. According to Turing, we should therefore assume that they are conscious, as we make the same assumption for humans based on their behaviour. And I think he has a point from a rational point of view, although it seems very counterintuitive to give ChatGPT rights.

I think the definitions in the category of behaving rationally have always had the largest following, as they allow for rationality that is different from a human's. And then, of course, rationality itself is often ill-defined. I am not sure the goalposts have been moved, as this was the dominant idea for a long time.

There used to be a lot of discussion about whether we should focus on developing weak AI (narrow; performance on one or a few tasks) or strong AI (broad; performance on a wide range of tasks). I think the focus right now is mainly on strong AI, which has been renamed Artificial General Intelligence.

Scientists, and everyone else, have always been bad at predicting the future. In addition, disagreement about what will be possible, and when, has always been at the center of discussions in the field. However, if you look at the dominant ideas of what AI can do and in what time frame, it is not always the case that researchers underestimate developments. I started studying AI in 2006 (I feel really old now), and based on my experience, I agree with you that technological developments are often underestimated. However, the impact of AI on society seems to be continuously overestimated.

I remember that at the beginning of my studies there was a lot of talk about automated reasoning systems being able to do diagnosis better than doctors, and therefore that they would replace them. Doctors would have only a very minor role, as a human would need to take responsibility, but that was all. When I go to my doctor, that still has not happened. This is just one example. The benefits and dangers of AI have been discussed since the beginning of the field, and what you see is that the role of AI has grown, but it is still much, much smaller in practice than was predicted.

I think the liquid neural networks are very neat and useful. However, they are still neural networks: an adaptation of the same technology, with the same issues. I mean, you can throw an image recognition system off the rails just by changing a few specific pixels in an image. The issue is that it is purely pattern-based. These systems lack the basic understanding of concepts that humans have. That type of understanding is closer to what is developed in the field of symbolic AI, which has really fallen out of fashion. However, if we could combine the two, I believe we could make some real advances: not just adaptations of what we already have, but a new type of system that can go beyond what LLMs do right now. Attempts to do so have been made, but they have not been very successful. If this happens and the results are as big as I expect, maybe I will start to worry.
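To make the "few specific pixels changed" point concrete, here is a minimal sketch of one well-known way this is done, the Fast Gradient Sign Method (FGSM): nudge each pixel slightly in whatever direction increases the classifier's loss. This assumes a PyTorch setup; `fgsm_attack`, `model`, and `epsilon` are illustrative names, not anything specific from the comment above.

```python
# Sketch of FGSM, assuming `model` is any differentiable PyTorch image
# classifier returning logits. A tiny, often imperceptible perturbation
# like this is frequently enough to flip the predicted class.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The perturbation exploits the purely pattern-based nature of the network: no pixel changes in a way a human would find meaningful, yet the prediction can change completely.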

As for the rights of AI, I believe that researchers and other developers of AI should be very vocal about this, to make sure the public understands it. This might put pressure on the people in power. It might also help if people experience AI behaviour that suggests consciousness, or if we let AI speak for itself.

We should not just try to control the AI. I mean, if you have a child, you do not teach it to become a good human by controlling it all the time. It will not learn to control itself, and it will likely follow your example and become controlling itself. We need to be kind to it, to teach it kindness, and I believe we need to be the same towards AI. And just as a child without emotions might behave like a psychopath, AI without emotions might do so as well. So we need to find a way to give it emotions too. There has been some work on that as well, but it is very limited.

I think the focus is still too exclusively on machine learning for AGI to be created.

Shelena@feddit.nl 1 points 1 year ago

Yes, you are right! It is weird, because the interface looks exactly the same as Stable Diffusion's, and it is the first result you get if you search for it with DuckDuckGo. I fell for that one.

Shelena@feddit.nl 1 points 1 year ago

I think what happens with me is that I want to wake up and I can't, and then I panic and everything becomes scary. So it is probably just that I want to wake up and cannot.

Shelena@feddit.nl 1 points 1 year ago

I think all interesting stories should count ;-)

