Data, the cultured artist (startrek.website)
all 29 comments
[-] DharkStare@lemmy.world 36 points 1 year ago

After the first two panels I thought this was going to be a joke about AI generated art not being allowed in the contest.

[-] andresil@lemm.ee 25 points 1 year ago

Very tasteful art. Who is the artist? For academic and art appreciation purposes, of course.

[-] UESPA_Sputnik@feddit.de 24 points 1 year ago

Data is the artist of course. Or are you implying he's just a toaster, Commander Maddox?

[-] MassRedundancy@feddit.nl 2 points 1 year ago

Always been partial to this style of painting from Data: https://youtu.be/ogFxgtsypBc

[-] Roundcat@kbin.social 16 points 1 year ago

Would this technically make anything Data paints AI art?

[-] skullgiver@popplesburger.hilciferous.nl 16 points 1 year ago* (last edited 1 year ago)

[This comment has been deleted by an automated system]

[-] VioletTeacup@feddit.uk 7 points 1 year ago

Technically no, since Data is a full-on artificial life form. Modern AI is just programmed to create the illusion of sentience.

[-] abfarid@startrek.website 7 points 1 year ago* (last edited 1 year ago)

What is "artificial intelligence", and at which point does it become "natural intelligence", if at all? Arguably, anything man-made that has any sort of intelligence, no matter how advanced, even if it surpasses its creator's intelligence, remains "artificial". And as you said, Data is an artificial life form, therefore his intelligence is also artificial.

[-] VioletTeacup@feddit.uk 3 points 1 year ago

While I appreciate the philosophical take, it seems that you've misunderstood what AI is.

Have you ever been typing out a text and seen that your phone is recommending a list of words for you to select next? This is an example of AI. Your phone has been programmed with a list of words and a set probability of one word following another. For instance, if you type "I", it will almost certainly suggest "am", because there's a high probability of that being correct. More advanced AI, like ChatGPT, works the same way, only on a grander scale. It has no idea what its words mean, but through clever programming it can create the illusion that it does.
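The "suggest the most probable next word" idea can be sketched in a few lines of Python. This is a toy bigram counter for illustration only, not how any real phone keyboard or GPT model is actually implemented:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts, word, k=3):
    """Return the k most frequent next words seen after `word`."""
    return [w for w, _ in counts[word.lower()].most_common(k)]

# Tiny made-up corpus: "am" follows "i" most often, so it is suggested first.
corpus = "i am here and i am ready because i am happy"
model = train_bigrams(corpus)
print(suggest(model, "I"))  # → ['am']
```

Real systems use vastly larger models and context windows, but the core loop is the same: predict the next token from statistics over previous text.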

Data, on the other hand, is explicitly stated to have a human-like consciousness. His positronic brain is no different from a human brain, besides being artificial.

Naturally, this brings up the age-old philosophical debate of "what actually is consciousness?". The simple answer is that we still don't have a good explanation. You could argue that humans also follow an algorithm, just a far more advanced one, but I would argue that this doesn't satisfactorily explain how humans are able to extrapolate their own ideas from abstract concepts.

[-] hglman@lemmy.ml 4 points 1 year ago

The intended design of a system is not a basis for judgment of the system's capacity. If something has emergent behavior, it will exceed its stated goals. The whole episode "The Quality of Life" is about that. It's also pretty clear that LLMs have exceeded what the designers thought they could do.

Does the lack of autonomy of current AI make it not conscious in your view? Why?

[-] Dagwood222@lemm.ee 3 points 1 year ago

Just to be more annoying.

Definitions of words change over time. "Car" started as a short form of 'chariot' and meant something being pulled. We had train cars long before we had automobiles.

[-] abfarid@startrek.website 0 points 1 year ago

I actually do think that, to a decent extent, I understand what AI is. And while this is a technicality, it really grinds my gears when a GPT model is compared to an autocomplete/predictive text. Yes, they both technically just predict text using statistical models, but it's like comparing a modern jet to a paper airplane, because they both can fly.

[ChatGPT] has no idea what its words mean

Doesn't it tho? It has an internal model of the world that it constructed by reading and processing tons of text. It knows that an apple is round-ish, comes in certain colors, can be eaten or grow into a tree. That knowledge is very limited due to the model's inability to experience such things as shape or color; like a blind person knows the description of "red", but doesn't actually know what it is.
Of course, it's debatable whether what a GPT model does can be considered "understanding"; then again, we don't really know what understanding IS. I would argue it's extremely close to human understanding, albeit in a limited scope.

That being said, I think the discussion of how advanced (or not) our modern AI systems are, though interesting, is extraneous to the question at hand. The main question is "what is AI?". From your comment, I can conclude that your definition of AI relies on the subject's possession of sentience/consciousness. I think this is a flawed approach because a bee, while undoubtedly possessing rudimentary intelligence, in all likelihood lacks consciousness. So consciousness should not be a qualifying criterion for AI. Furthermore, I looked up "AI" in several dictionaries, and the definitions all boil down to "man-made machines that perform human tasks"; here are some:

  • Cambridge Dictionary – "computer technology that allows something to be done in a way that is similar to the way a human would do it"
  • Merriam-Webster – "the capability of a machine to imitate intelligent human behavior".

In conclusion, intelligence comes in all shapes and sizes; the only thing differentiating natural intelligence from artificial intelligence is the origin, i.e., if it was man-made, it's artificial. By that definition, perhaps outdated and lacking insight, Data most definitely possesses AI. Not to mention that he lacks full-fledged "sentience", as he can't experience feelings.

[-] VioletTeacup@feddit.uk 2 points 1 year ago

This seems to have descended into a debate on "what is consciousness", which, as I originally said, is a question that isn't easy to answer. My point was that modern AI inherently isn't aware of what it's saying, not that it couldn't be defined as an intelligence. As far as I know, there's no solid evidence to prove that it can be aware. To finish, I would like to apologise if my initial comment came across as condescending; that wasn't my intention.

[-] FatCrab@lemmy.one 1 points 1 year ago

What are attention mechanisms, if not a way of being aware of what it has said so it can inform what it is about to say? Ultimately, I think people saying these generative models aren't really "intelligent" boils down to deciding they don't like the impact these things are having, and will have, on our society. Characterizing them as a fancy statistical curve lets people short-circuit that much harder conversation.
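For reference, the core of an attention mechanism fits in a few lines of NumPy. This is a minimal, illustrative sketch of scaled dot-product attention, not the full transformer machinery: each position weighs every position in the sequence ("what has been said") to build its own representation.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over query/key/value matrices."""
    # How relevant is each position to each other position?
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns scores into weights that sum to 1 per row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the value rows.
    return weights @ V
```

In a real model, Q, K, and V are learned projections of the token embeddings, and masking restricts each position to attending only to earlier ones.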

[-] abfarid@startrek.website 1 points 1 year ago

This seems to have descended into a debate on “what is consciousness”

I disagree. While I did go on a tangent there analyzing ChatGPT's capabilities, my ultimate argument was that we shouldn't be discussing the consciousness topic at all. When deciding whether Data has artificial or natural intelligence, we only need to look at the source of his intelligence: it was man-made, therefore any painting Data produces is "AI art", because Data only has AI, despite having capabilities on par with or even exceeding those of a human.

To be honest, I did take it as being a little condescending, but it doesn't really matter. All I wish is to have a discussion, and expand our knowledge in the process.

[-] VioletTeacup@feddit.uk 3 points 1 year ago

Thank you then! It seems like our debate stemmed from different definitions. Based on your definition of what constitutes AI, Data would absolutely count. By my definition, he is too advanced to be in the same category. But I get the impression that we would both agree that he is more advanced than any modern AI system. Once again, I'm sorry for coming across as condescending; I will have to choose my words more carefully in the future!

[-] bionicjoey@lemmy.ca 2 points 1 year ago

Modern "AI art" isn't really made by artificial intelligence. It's just a really sophisticated pattern recognition/prediction algorithm.

[-] abfarid@startrek.website 2 points 1 year ago* (last edited 1 year ago)

It’s just a really sophisticated pattern recognition/prediction algorithm

Isn't that what humans are?

[-] p1mrx@sh.itjust.works 16 points 1 year ago

Typical android efficiency. He painted Tasha identically because the two paintings won't be in the same room.

[-] z500@startrek.website 3 points 1 year ago* (last edited 1 year ago)

Oh good, he's finally done painting strong horses.

this post was submitted on 22 Aug 2023
398 points (98.5% liked)

Risa

6915 readers
141 users here now

Star Trek memes and shitposts

Come on'n get your jamaharon on! There are no real rules—just don't break the weather control network.

founded 1 year ago
MODERATORS