submitted 8 months ago by Bebo@literature.cafe to c/technology@beehaw.org

Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions, and data from wearable devices, as well as text and computer-use patterns, to detect and predict how someone is feeling. It can be used in the workplace, in hiring, and beyond. Loss of privacy is just the beginning. Workers are worried about biased AI and the need to perform the 'right' expressions and body language for the algorithms.

farsinuce@feddit.dk 24 points 8 months ago* (last edited 8 months ago)

Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.

A quick rundown of what it entails and why it might matter in the US:

What is it?

  • The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
  • The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination.

Key Takeaways:

  • Prohibited Practices: Certain uses of AI, like manipulation of human behavior or unfair surveillance, are outright banned.
  • High-Risk Regulation: AI systems with significant implications for people's rights must undergo rigorous assessments.
  • Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.

Why Does This Matter in the US?

  • Brussels Effect: Similar to how GDPR set a new global standard for data protection, the EU AI Act could influence international norms and practices around AI, pushing companies worldwide to adopt higher standards.
  • Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
  • Potential for US Legislation: The EU's move could catalyze similar regulatory efforts in the US, promoting a broader discussion on the ethical use of AI technologies.

Emotion-tracking AI is covered:

Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.


Sources:

DarkThoughts@fedia.io 8 points 8 months ago

Definitely a good start. Surveillance (or "tracking") is one of those areas where "AI" is actually dangerous, unlike some of the more overblown topics in the media.

melmi@lemmy.blahaj.zone 2 points 8 months ago* (last edited 8 months ago)

Did you use AI to write this? Kinda ironic, don't you think?

farsinuce@feddit.dk 10 points 8 months ago

I spent the better part of 45 minutes writing and revising my comment. So thank you sincerely for the praise, since English is not my first language.

melmi@lemmy.blahaj.zone 2 points 8 months ago

If you wrote this yourself, that's even more ironic, because you used the same format that ChatGPT likes to spit out. Humans influence ChatGPT -> ChatGPT influences humans. Everything's come full circle.

I ask, though, because your profile shows you've used ChatGPT to write comments before.

this post was submitted on 13 Mar 2024
123 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.



This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago