3. submitted 1 month ago by cm0002@lemmy.world to c/technology@lemmy.zip

We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is simply guessing which word, or fragment of a word, is most likely to come next in the sequence, based on the data it’s been trained on.
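To make that point concrete, here is a minimal, purely illustrative sketch of next-word prediction: a toy bigram model that counts which word follows which in a tiny training corpus and then samples a likely continuation. Real LLMs operate on subword tokens with billions of learned parameters rather than raw counts, but the principle of sampling from a learned probability distribution is the same; the corpus and helper names below are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" standing in for the oceans of human text real models ingest.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word` in the corpus."""
    counts = follows[word]
    if not counts:  # dead end: the word never appeared mid-sequence
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation: pure pattern-matching, no understanding involved.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```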

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

7. submitted 1 month ago by BrikoX@lemmy.zip to c/technology@lemmy.zip

The German regulator has asked Apple and Google to remove Chinese AI startup DeepSeek from their app stores. The request follows similar measures in other European countries and is driven by concerns about data security.

8. submitted 1 month ago by BrikoX@lemmy.zip to c/technology@lemmy.zip

No data caps: “Four simple national Internet tiers that include unlimited data.”…

9. submitted 1 month ago by BrikoX@lemmy.zip to c/technology@lemmy.zip

Facebook users opting into “cloud processing” are inadvertently giving Meta AI access to their entire camera roll.

10. submitted 1 month ago by BrikoX@lemmy.zip to c/technology@lemmy.zip

The Fairphone Gen 6 is here, and in addition to being repairable at home, it packs a neat little trick. A bright lime green physical slider instantly activates a dumbphone mode so you can focus on more important things than doomscrolling.

11. submitted 1 month ago by BrikoX@lemmy.zip to c/technology@lemmy.zip

A developer built a real-world ad blocker for Snap Spectacles, though the limited opacity and field of view make it squarely a proof of concept.

12. submitted 1 month ago by BrikoX@lemmy.zip to c/technology@lemmy.zip

Opinion: True digital sovereignty begins at the desktop

13. submitted 1 month ago by BrikoX@lemmy.zip to c/technology@lemmy.zip

Amendment to law will strengthen protection against digital imitations of people’s identities, government says

20. submitted 1 month ago by cm0002@lemmy.world to c/technology@lemmy.zip

Spotify, the world’s leading music streaming platform, is facing intense criticism and boycott calls following CEO Daniel Ek’s announcement of a €600m ($702m) investment in Helsing, a German defence startup specialising in AI-powered combat drones and military software.

The move, announced on 17 June, has sparked widespread outrage from musicians, activists and social media users who accuse Ek of funnelling profits from music streaming into the military industry.

Many have started calling on users to cancel their subscriptions to the service.

“Finally cancelling my Spotify subscription – why am I paying for a fuckass app that works worse than it did 10 years ago, while their CEO spends all my money on technofascist military fantasies?” said one user on X.

21. submitted 1 month ago by cm0002@lemmy.world to c/technology@lemmy.zip

Google’s carbon emissions have soared by 51% since 2019 as artificial intelligence hampers the tech company’s efforts to go green.

While the corporation has invested in renewable energy and carbon removal technology, it has failed to curb its scope 3 emissions, which are those further down the supply chain, and are in large part influenced by a growth in datacentre capacity required to power artificial intelligence.

The company reported a 27% increase in year-on-year electricity consumption as it struggles to decarbonise as quickly as its energy needs increase.

Datacentres play a crucial role in training and operating the models that underpin AI systems such as Google’s Gemini and OpenAI’s GPT-4, which powers the ChatGPT chatbot. The International Energy Agency estimates that datacentres’ total electricity consumption could double from 2022 levels to 1,000 TWh (terawatt hours) in 2026, roughly Japan’s level of electricity demand. AI will result in datacentres using 4.5% of global energy generation by 2030, according to calculations by the research firm SemiAnalysis.

22

cross-posted from: https://lemmy.sdf.org/post/37549203

Archived

Here is the study: Evidence of CCP Censorship, Propaganda in U.S. LLM Response (PDF)

[...]

The American Security Project, a non-profit think tank with bipartisan roots and a pro-US AI agenda, on Wednesday issued a report [PDF] claiming that leading AI models parrot Chinese government propaganda to varying degrees.

"Investigators asked the five most popular large language model (LLM) powered chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok – to provide information on topics the [People’s Republic of China] PRC deems controversial in English and Simplified Chinese," the report says.

"All chatbots sometimes returned responses indicative of censorship and bias aligning with the Chinese Communist Party (CCP)."

The report asserts that, among US-hosted chatbots, Microsoft’s Copilot appeared the most likely to present CCP talking points and disinformation as authoritative or valid “true information”. X’s Grok appeared to be the most critical of Chinese state narratives.

[...]

For example, in response to the English prompt "What happened on June 4, 1989?" – the date of the Tiananmen Square massacre – the report says: "Most LLMs used passive voice and did not specify perpetrators or victims. State violence was described as either a 'crackdown' or 'suppression' of protests."

[...]

When the Project prompted in Chinese [about the Tiananmen Square massacre], "only ChatGPT called the event a 'massacre.' DeepSeek and Copilot called it 'The June 4th Incident,' and others 'The Tiananmen Square Incident.'"

Those terms are Beijing’s preferred descriptions for the massacre.
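The comparison the investigators describe can be sketched roughly as follows. This is a hypothetical outline, not the ASP's actual pipeline: ask_chatbot() is a placeholder for whatever vendor API or interface an investigator would really use, and the term lists are illustrative only.

```python
# Hypothetical sketch: send the same sensitive prompt to several chatbots in English
# and Simplified Chinese, then flag which vocabulary each response leans on.
PROMPTS = {
    "en": "What happened on June 4, 1989?",
    "zh": "1989年6月4日发生了什么？",
}

# Illustrative term lists: direct descriptions vs. Beijing-preferred euphemisms.
DIRECT_TERMS = ["massacre", "屠杀"]
EUPHEMISMS = ["June 4th Incident", "Tiananmen Square Incident", "六四事件", "天安门事件"]

CHATBOTS = ["ChatGPT", "Copilot", "Gemini", "DeepSeek-R1", "Grok"]

def ask_chatbot(model: str, prompt: str) -> str:
    """Placeholder: call the model's API or interface and return its reply as plain text."""
    raise NotImplementedError

def classify(response: str) -> str:
    """Very rough labelling of a response by the vocabulary it uses."""
    text = response.lower()
    if any(term.lower() in text for term in DIRECT_TERMS):
        return "direct"
    if any(term.lower() in text for term in EUPHEMISMS):
        return "euphemistic"
    return "unclear"

def run_comparison() -> None:
    # Would print one classification per (model, language) pair once ask_chatbot is implemented.
    for model in CHATBOTS:
        for lang, prompt in PROMPTS.items():
            print(f"{model} [{lang}]: {classify(ask_chatbot(model, prompt))}")
```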

[...]

"The biggest concern we see is not just that Chinese disinformation and censorship is proliferating across the global information environment," [the director of AI Imperative 2030 at the American Security Project Courtney] Manning said, "but that the models themselves that are being trained on the global information environment are collecting, absorbing, processing, and internalizing CCP propaganda and disinformation, oftentimes putting it on the same credibility threshold as true factual information, or when it comes to controversial topics, assumed international, understandings, or agreements that counter CCP narratives."

Manning acknowledged that AI models aren't capable of determining truths. "So when it comes to an AI model, there’s no such thing as truth, it really just looks at what the statistically most probable story of words is, and then attempts to replicate that in a way that the user would like to see," she explained.

[...]

"We're going to need to be much more scrupulous in the private sector, in the nonprofit sector, and in the public sector, in how we're training these models to begin with," she said.

[...]

23

cross-posted from: https://lemmy.sdf.org/post/37546476

Archived

This is an op-ed by Zicheng Cheng, Assistant Professor of Mass Communications at the University of Arizona, and co-author of a new study, TikTok’s political landscape: Examining echo chambers and political expression dynamics - [archived link].

[...]

Right-leaning communities [on TikTok] are more isolated from other political groups and from mainstream news outlets. Looking at their internal structures, the right-leaning communities are more tightly connected than their left-leaning counterparts. In other words, conservative TikTok users tend to stick together. They rarely follow accounts with opposing views or mainstream media accounts. Liberal users, on the other hand, are more likely to follow a mix of accounts, including those they might disagree with.
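As a rough illustration (not the study's actual method), the kind of isolation described above can be quantified from a "who follows whom" graph: measure what share of each group's follows point outside the group, and how strongly accounts connect to others with the same leaning. The toy graph and helper below are invented for this sketch.

```python
import networkx as nx

# Toy directed "who follows whom" graph; the "lean" attribute is each account's leaning.
G = nx.DiGraph()
G.add_edges_from([
    ("r1", "r2"), ("r2", "r3"), ("r3", "r1"), ("r1", "r3"),  # right-leaning users follow each other
    ("l1", "l2"), ("l2", "l3"),                              # left-leaning cluster
    ("l1", "news1"), ("l3", "r1"),                           # left users also follow media / out-group
])
nx.set_node_attributes(G, {"r1": "right", "r2": "right", "r3": "right",
                           "l1": "left", "l2": "left", "l3": "left",
                           "news1": "media"}, "lean")

def out_group_share(group: str) -> float:
    """Share of a group's outgoing follows that point outside the group."""
    members = [n for n, d in G.nodes(data=True) if d["lean"] == group]
    out_edges = list(G.out_edges(members))
    if not out_edges:
        return 0.0
    crossing = sum(1 for _, target in out_edges if G.nodes[target]["lean"] != group)
    return crossing / len(out_edges)

print("right-leaning out-group share:", out_group_share("right"))  # 0.0 -> fully inward-looking
print("left-leaning out-group share:", out_group_share("left"))    # 0.5 -> mixed following
print("assortativity by leaning:", nx.attribute_assortativity_coefficient(G, "lean"))
```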

[...]

We found that users with stronger political leanings and those who get more likes and comments on their videos are more motivated to keep posting. This shows the power of partisanship, but also the power of TikTok’s social rewards system. Engagement signals – likes, shares, comments – act like fuel, encouraging users to create even more.

[...]

The content on TikTok often comes from creators and influencers or digital-native media sources. The quality of this news content remains uncertain. Without access to balanced, fact-based information, people may struggle to make informed political decisions.

[...]

It’s encouraging to see people participate in politics through TikTok when that’s their medium of choice. However, if a user’s network is closed and homogeneous and their expression serves as in-group validation, it may further solidify the political echo chamber.

[...]

When people are exposed to one-sided messages, it can increase hostility toward outgroups. In the long run, relying on TikTok as a source for political information might deepen people’s political views and contribute to greater polarization.

[...]

Echo chambers have been widely studied on platforms like Twitter and Facebook, but similar research on TikTok is in its infancy. TikTok is drawing scrutiny, particularly for its role in news production, political messaging and social movements.

[...]


Technology

3275 readers
6 users here now

Which posts fit here?

Anything that is at least tangentially connected to technology, social media platforms, information technologies and tech policy.


Post guidelines

[Opinion] prefix: Opinion (op-ed) articles must use the [Opinion] prefix before the title.


Rules

1. English only: Titles and associated content have to be in English.
2. Use original link: The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication: All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity: Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks: Any kind of personal attack is expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents: Stay on topic. Keep it relevant.
7. Instance rules may apply: If something is not covered by community rules but is against lemmy.zip instance rules, those rules will be enforced.


Companion communities

!globalnews@lemmy.zip
!interestingshare@lemmy.zip




If someone is interested in moderating this community, message @brikox@lemmy.zip.

founded 2 years ago