submitted 8 months ago* (last edited 8 months ago) by awesome_guy@lemmy.ml to c/asklemmy@lemmy.ml

What are the pros and cons of doing this? What impact will it have on the personality/mind of the person down the line, after, say, 10 years?

[-] AnarchistArtificer@slrpnk.net 29 points 8 months ago

A friend of mine is a French teacher, and I was discussing with her an idea for how to incorporate Chat-GPT into the curriculum. Specifically, her idea was to explore its limitations as a tool, by having a lesson in the computer suite where students actively try to answer GCSE (exams for 15/16 year olds) French questions using Chat-GPT, and then peer mark them, with the goal of "catching out" their peers.

The logic was that when she was learning French in school, Google translate was still fairly new, and whilst many of her teachers desperately tried to ignore Google Translate, one teacher took the time to look at how one should (and shouldn't) use this new tool. She said that it was useful to actually be able to evaluate the limitations of online translators, rather than just saying they're always wrong and should never be used.

We tried out a few examples to see whether her idea with Chat-GPT had merit and we found that it was pretty easy to generate errors that'd be hard to spot if you're a student looking for a quick solution. Stuff like "I can't answer that because I'm a large language model" or whatever, but in French.

[-] JungleJim@sh.itjust.works 18 points 8 months ago

That's a great teacher. Refusing to teach a technology only leads to poor use. Even if one thinks it's a poor technology, teach THAT instead of just black boxing the topic. The bottle is open, the genie is out. Better to teach how to make legally airtight wishes than to ban wishmaking.

[-] user224@lemmy.sdf.org 22 points 8 months ago* (last edited 8 months ago)

It's just an AI chatbot; I don't see how it would be dangerous.

And I am also pretty sure a 16 year old knows to expect inaccurate results from it, unless they've been living restricted from the outside world until now.

The only negative thing I see from it so far is kids using it to create essays, but it's not like there weren't countless essays available on the internet before. They were just easier to detect, since you could search for the text and see if it turns up online.

Anyway, for just playing around it gets boring after 15 minutes.
Why don't you try?

[-] LWD@lemm.ee 10 points 8 months ago* (last edited 8 months ago)

Something that appears more human is more likely to get people to hand over their private data. And that data is then sold, obviously without consent, and used however the buyers see fit.

Instead of being scared to share information with it, you will volunteer your data...

– Vladimir Prelovac, CEO of Kagi AI and Search

Remember Replika, the AI chatbot that sexually harassed minors and SA victims, and (allegedly) repeated the contents of other people's messages verbatim?

It might not be as mind-rotting as TikTok but it's not good.

[-] lemontree@lemm.ee 21 points 8 months ago

I would go back a few years and ask: should I let a 16 year old use search engines?

Probably not too different

[-] PrinceWith999Enemies@lemmy.world 9 points 8 months ago

That’s exactly my perspective.

I came of age with the birth of the web. I was using systems like Usenet, gopher, wais, and that sort of thing. I was very much into the whole cypherpunk, “information wants to be free” philosophy that thought that the more information people had, the more they could talk to each other, the better the world would be.

Boy, was I wrong.

But you can’t put the genie back into the bottle. So now, in addition to having NPR online, we have kids eating tide pods and getting recruited into fascist ideologies. And of course it’s not just kids. It’s tough to see how the anti-vax movement or QAnon could have grown without the internet (which obviously has search engines as a major driver of traffic).

I think you’re better off teaching critical thinking, and even demonstrating the failings of ChatGPT by showing them how bad it is at answering questions. There’s plenty of resources you can find that should give you a starting point. Ironically, you can find them using a search engine.

[-] rufus@discuss.tchncs.de 3 points 8 months ago* (last edited 8 months ago)

I think that's a good take on things.

Ultimately it still holds true: information does want to be free. You just can't mix it with misinformation, put everything on the same level, and hand it to a general audience that's completely oblivious to the difference and uneducated about the tools.

Things have changed. Back in those times it was a small elite on the internet. People who could afford computers and an internet connection and make some use out of it. You needed some amount of intelligence because you had to put some effort in to get online, learn about the tools because that wasn't easy or provided to you. So you'd generally be at least somewhat intelligent if you ended up on the internet. And that's beneficial when it comes to receiving unfiltered information. Combined with the fact that there were comparatively more academics and students, because that was the origin of the internet.

And it wasn't that common to push your agenda there or advertise for your skewed political views in the way people do it nowadays. Due to the nature of the internet and the amount of people there, it wasn't worth the effort. You'd be better off focusing somewhere else where you could influence more people. So the dynamics were just different due to history and circumstances.

Things have changed. Nowadays everyone is online all the time. It's the place to influence people and make money. And that's the other part of the problem. The actual people, connecting them and providing information to them (or to each other), isn't what most of the internet is about anymore. The motivations are gathering data about people and selling it, and making people addicted to your platform so they spend more time there and you can make more money. Everyone is competing for attention. And bad, emotional stories are what works best. Giving people the "simple truths" they seek instead of an intellectual and nuanced view. Factuality just gets in the way of all of that.

I sometimes like to compare that to the Age of Reason / Enlightenment. Back then it was monarchs, bad dynamics and missing education. Now it's big tech companies, bad dynamics and insufficient education. People need to get emancipated, educated and leave the current "immature state of ignorance" (to quote Kant.)

Information and education are key. And the internet, algorithms and AI are just tools. They can be used for progress, or to enslave us. At least the internet has the potential (and was built) to connect people and provide a level playing field to everyone. But it can be used for a variety of different things. And choosing the right things isn't something that can be solved by technology alone.

[-] AlwaysNowNeverNotMe@kbin.social 12 points 8 months ago

The context of the word "let" is interesting here.

I would recommend a collaborative approach, it's not as if they can't use it because you tell them no. They don't need a credit card or a driver's license or even a computer.

[-] amio@kbin.social 12 points 8 months ago

It's not even a good idea to let quite a lot of adults use ChatGPT. People don't know how it works, don't treat the answers with anything close to appropriate skepticism, and often ask about things they don't have the knowledge/skills to verify. And anything it tells you, you likely will need to verify.

It's quite unlikely to affect their personality, but it might make them believe a bunch of weird shit that some unknowable, undebuggable computer program hallucinated up. If you've done an uncommonly great job with their critical thinking skills, great. If not, better get started. That is not specific to "AI" though.

[-] NoiseColor@startrek.website 4 points 8 months ago

People don't know how TV works and we are hardly gonna tell people not to use it.

As long as people are aware that some responses might be made up it should be fine for anyone to use it.

[-] Saigonauticon@voltage.vn 12 points 8 months ago

I think it would be a bad idea to do otherwise. Children need to learn about useful tools, and the shortcomings of those tools.

16 year old me would have had a great time getting an AI to teach me things that my teachers in school did not have expertise in. Sure, it would be wrong some of the time, but so were my teachers at that age. It would have given me such a head start on university!

[-] TheEntity@kbin.social 10 points 8 months ago

You cannot let or forbid a 16yo to use stuff. You can only decide whether they will do it in the open or in hiding. Personally I'd rather have them talk to me about it than hide it from me.

[-] AFKBRBChocolate@lemmy.world 10 points 8 months ago

I agree with those saying you can't/shouldn't forbid it. As someone in computer science, the important thing to me is to make sure they understand what those LLMs are and aren't. Specifically, the 'M' in "LLM" is for "model" - they're a detailed model of what a conversation should look like, especially what a response to a question should look like. But looking right is different from being correct. You can ask one for a mathematical proof and it will give you one that looks right, but it probably won't be.

The other thing I'd try to get them to understand is that the learning part of school is much more important than the grade part, especially if they're going to go on to college. They could use an LLM to help them create a term paper, but if they didn't learn anything, it's going to catch up with them and cause problems down the road.

[-] scrubbles@poptalk.scrubbles.tech 6 points 8 months ago

Yeah, having an open and honest conversation seems like the best thing, but that requires the parent here to understand it too. Then again, that's a good opportunity for both of them to look into it and learn more together.

The biggest thing is going to be something along the lines of "I know you're going to want to use this for homework, but I want to ask you to please not just use it as a way to get the answer. It may well be a tool that helps you understand problems better; we all learn differently, and maybe it can explain confusing questions in a way that makes sense to you. But if you just ask for the answer, you won't learn anything."

Being honest with them about why you don't want them to just plainly use it will go a long way. Teens are (in some ways) smart: they know that if you're just forbidding something, there's probably a reason you don't want to explain, and so they'll rebel and use it more. Being honest and explaining your reasoning will usually stick with them longer. Sure, they'll probably still ask "What is X?", but maybe they'll adjust it to "Can you show me how to solve for X?"

[-] FaceDeer@fedia.io 4 points 8 months ago

Yeah, I would heartily recommend LLMs being used in education as a tutor and "homework buddy". I find their interactive nature to be really useful for learning stuff - I'm always able to ask "wait, what did that bit mean?" or "Walk me through this part." The LLM isn't always right, but with that back-and-forth it's quick to catch errors.

[-] AFKBRBChocolate@lemmy.world 3 points 8 months ago

I agree with all of that, well said.

[-] Dirk@lemmy.ml 9 points 8 months ago

To use as a tool? Yes.
To use as a friend? No.

A person using a tool for a longer time will become better in using said tool.

[-] theywilleatthestars@lemmy.world 8 points 7 months ago

For fun? It's probably fine. As a substitute for human interaction and learning? No, no one should use chatgpt that way

[-] driving_crooner@lemmy.eco.br 2 points 7 months ago

I use ChatGPT for help with my MBA in actuarial sciences every day. I always start with "pretend you are a statistics/probability/finance professor and I'm an advanced student. We are working on {topic} and I need help with {thing I didn't fully understand in class}". It has been fantastic. Even fkg Terence Tao is using ChatGPT to help with some of the most advanced mathematics out there.

[-] yokonzo@lemmy.world 7 points 8 months ago

I'll be honest, if I got it at 16, I would fuck around with it for a few weeks and then get bored

[-] CanadaPlus@lemmy.sdf.org 7 points 7 months ago

At 16 they should be just as capable of understanding the limitations as anyone. Just be sure to explain that it has no interest in truth, but only writing convincingly.

I doubt it will have personality impacts. The one thing that could be an issue is if they use it as a replacement for real human friends.

[-] PeepinGoodArgs@reddthat.com 6 points 8 months ago

Have you asked ChatGPT? Jk lol

Honestly, whatever they use ChatGPT for is probably fine. If you feel like they're going to cheat on their homework or something, you can just ask them to do a small sample in front of you. Plus, it's not like ChatGPT is going away, no matter how much the NYT and Disney complain. Best bet is for them to get familiar with the technology now.

Also, there's literally no way to know the long-term effects of AI yet. I strongly suspect that if people use it as a crutch, it will create intellectually and creatively stunted people. But it's not like we don't have that now...

[-] Omega_Haxors@lemmy.ml 4 points 7 months ago

Treat it like a calculator that has a 75% chance of giving the wrong answer.

[-] InputZero@lemmy.ml 4 points 8 months ago

I don't think there will be any change in personality or cognition just from using ChatGPT. The only concern I can think of is over-reliance, especially if your child intends to go to post-secondary school. Universities are very strict regarding plagiarism and view AI generation as such. If they can use it responsibly, there's no downside; if they're going to use it to do their homework for them, it'll be a problem.

[-] FaceDeer@fedia.io 4 points 8 months ago

Well, let's take a look at how the 16-year-olds who got to use ChatGPT ten years ago have turned out...

In seriousness, as others have been pointing out, the big online AI assistants are all super neutered these days. I think it's probably fine, and indeed given how these tools are going to likely become more widespread in the future I think it's a good idea for kids to get used to using them. At 16 I'd say they're too old to sit them down and give them a lecture about "it's not really aware, it doesn't feel emotions or have memories, and if you go to it with any sort of medical questions definitely double-check those with another source" - lectures at that age are probably going to backfire from what I've seen. Instead, suggest that they research those things themselves. Just put those questions out there and hopefully it'll motivate them to be curious.

[-] NoiseColor@startrek.website 4 points 8 months ago

Not to be controversial, but ChatGPT would likely be the most benign conversation a 16 year old has all day. A 16 year old! That's a crazy age.

The public models are so neutered today that basically all they put out is happy shiny good thoughts information.

[-] rufus@discuss.tchncs.de 4 points 8 months ago* (last edited 7 months ago)

Kids should use their own creativity, practice reading, creating something. Play outside, get dirty. Do sports, maybe learn a musical instrument. And do their homework themselves.

I'd say many things are alright in the proper dose. I mean ChatGPT is part of the world they're growing in to...

And 16 isn't a kid anymore. They can handle some responsibility. I don't see a one-size-fits-all solution for every 16 yo. I think you should allow it and decide individually.

I'd say at 16, give them some responsibility and let them practice handling it. But that means supervised. You can't just hand them anything and hope they'll cope on their own. And AI has some non-obvious consequences / traps you can run into. Not even most adults can handle or understand it properly. So your focus should be on teaching them the how and why, in my opinion. Just like you'd teach your kid how to use the circular saw at some point around that age. As a parent you should look at them and see whether they're ready for it and how much supervision is appropriate.

[-] wathek@discuss.online 3 points 8 months ago* (last edited 8 months ago)

ChatGPT is overly safe in terms of personality and the worldview it presents when asked. It's a great tool for learning, more so than a teacher, because you can freely ask it very specific questions in your own words and it will give an understandable answer. I think it's actually a perfect tool for someone that age. Once the topics get too advanced, though, the results become less reliable.

It doesn't make things up as much as it used to. It still does sometimes with topics that are less commonly discussed in its training data (it's similar with web search). It will, however, sometimes confidently claim that its answer is correct. As long as you understand that it's not always correct and have the sense to verify things that seem off, you'll be fine.

You'll get the best results from the paid GPT-4 subscription (20 dollars a month), which I would recommend.

The only real risk I see is over-reliance on it. I notice this in myself too: it's almost like I forgot googling things is an option, so when I'm stuck, rather than trying another approach, I just keep throwing prompts at GPT-4 until I give up and find the solution elsewhere, often within minutes. The way things are going, classic web search is becoming obsolete (unreliable results because of AI-written content and fake news) while AI at least actively tries to be unbiased.

tl;dr: Yes, it's extremely useful; just make sure they don't forget how to do things without ChatGPT too.

[-] fratermus@lemmy.sdf.org 2 points 7 months ago* (last edited 7 months ago)

Like any other automated tool, I'd want them to master the manual skills first.

With math and calculators, first we show we can do it longhand, then we get the calc. Show you can search and assess sources first, then incorporate AI.

[-] bilboswaggings@sopuli.xyz 2 points 8 months ago

Why would you want that?

AI does not know things; its answers depend on the wording of the question. I guess it could be used in a limited way (teaching how to use it responsibly and showing how it makes mistakes even in very simple situations).

Much like a calculator both are more effective if you know what is happening so you can catch the mistakes and fix them

[-] NoiseColor@startrek.website 3 points 8 months ago

AIs know things. They are a collection of knowledge. Not everything they respond with is made up.

[-] bilboswaggings@sopuli.xyz 2 points 8 months ago

If it doesn't understand what it's saying can you really say it knows it? It has access to a lot of training data so it can get many things correct, but it's effectively just generating the most likely answer from the training data
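To make "generating the most likely answer from the training data" concrete, here's a toy sketch in Python: a bigram model, nothing remotely close to a real LLM in scale or architecture, that picks the statistically most common continuation it saw during training. The corpus and names are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up "training data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the continuation seen most often in training --
    # not the "true" answer, just the most frequent one.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- it followed "the" most often
```

The model outputs "cat" after "the" not because it knows anything about cats, but because that pairing was most frequent in its data; real LLMs do something vastly more sophisticated, but the failure mode being discussed here (plausible-by-frequency rather than true) is the same in spirit.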

[-] NoiseColor@startrek.website 2 points 8 months ago

Well obviously it doesn't "know" know, it's not alive.

We are all generating the most likely answer from our training data. But going back to the original question: what do you fear ChatGPT would say that would be detrimental to a 16 year old?

this post was submitted on 14 Mar 2024
22 points (77.5% liked)
