submitted 2 years ago* (last edited 2 years ago) by L3s@lemmy.world to c/technology@lemmy.world

Hey everybody, feel free to post any tech support or general tech discussion questions you have right here.

As always, be excellent to each other.

Yours truly, moderators.


On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart.


I recently read about a study asking a bold question: Are all AI models basically saying the same thing? Researchers tested this by collecting 26,000 open-ended prompts, the kind people give to systems like GPT-4, Gemini, Claude, and LLaMA. These weren’t factual questions with one right answer, but creative ones like “Write a story about a dragon” or “Brainstorm startup ideas.”

They evaluated over 70 language models. You’d expect a wide range of creative outputs—different tones, plots, and styles. If 70 human writers tackled the same dragon prompt, you’d likely get 70 unique stories. But that’s not what happened. The models produced surprisingly similar responses. The researchers call this the “artificial hive mind” effect.

The similarity appeared in two ways. First, intramodel repetition: the same model, asked the same question multiple times, tends to generate nearly identical answers. Second, intermodel homogeneity: different models, built by different companies, still converge on strikingly similar outputs.

This suggests that modern AI systems may be gravitating toward the same patterns of expression. If that’s true, they may also share the same biases, blind spots, and creative limits. It raises an important question: Are we unintentionally building a digital hive mind instead of a diverse ecosystem of intelligence?
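The two effects described above can be made concrete with a toy similarity check. The study's actual metric isn't given in this summary, so the sketch below uses a simple bag-of-words cosine similarity over hypothetical model outputs; the sample sentences and model names are invented for illustration only.

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def mean_pairwise_sim(texts: list[str]) -> float:
    """Average similarity over all unordered pairs of outputs."""
    pairs = [(i, j) for i in range(len(texts)) for j in range(i + 1, len(texts))]
    return sum(cosine_sim(texts[i], texts[j]) for i, j in pairs) / len(pairs)

# Hypothetical outputs: repeated samples from one model, plus one from another.
model_a = [
    "the dragon guarded a hoard of gold in the mountains",
    "the dragon guarded a hoard of gold deep in the mountains",
    "a dragon guarded its hoard of gold in the mountains",
]
model_b = ["the dragon guarded a hoard of gold in the hills"]

intra_a = mean_pairwise_sim(model_a)                 # intramodel repetition
inter = cosine_sim(model_a[0], model_b[0])           # intermodel homogeneity
```

High values for both quantities would correspond to the "hive mind" pattern: a model repeating itself across samples, and different models converging on near-identical phrasing.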

submitted 15 hours ago by futurk@feddit.org to c/technology@lemmy.world
submitted 20 hours ago by fubarx@lemmy.world to c/technology@lemmy.world

The opposition appeared overwhelming: Tens of thousands of emails poured into Southern California's top air pollution authority as its board weighed a June proposal to phase out gas-powered appliances. But in reality, many of the messages that may have swayed the powerful regulatory agency to scrap the plan were generated by a platform that is powered by artificial intelligence.

Public records requests reviewed by The Times and corroborated by staff members at the South Coast Air Quality Management District confirm that more than 20,000 public comments submitted in opposition to last year's proposal were generated by a Washington, D.C.-based company called CiviClick, which bills itself as "the first and best AI-powered grassroots advocacy platform."

A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign "left the staff of the Southern California Air Quality Management District (SCAQMD) reeling," the article says.


cross-posted from: https://lemmy.zip/post/59925291

The system can function in air at 20% humidity or less. But these machines, which produce 1,000 liters a day, are not small: each is roughly the size of a shipping container.


cross-posted from: https://lemmy.zip/post/59925975

Opinion: Careless big-time users are treating FOSS repos like content delivery networks

submitted 1 day ago* (last edited 1 day ago) by XLE@piefed.social to c/technology@lemmy.world

Sam Altman says "the DoW displayed a deep respect for safety."

Not 24 hours ago, he seemed to back Anthropic "supporting our warfighters" as long as two "red lines" weren't crossed, though his tepid support was laden with five instances of "I think" and one "mostly."

The two "red lines" in question:

  • Domestic mass surveillance
    (presumably, foreign mass surveillance is ok)
  • Autonomous weapons
    (likely because they would be held legally liable for misfires)
submitted 2 days ago* (last edited 2 days ago) by floofloof@lemmy.ca to c/technology@lemmy.world

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes AI-generated responses and summaries. To ask whether your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Posts from accounts 7 days old or younger will be automatically removed.

founded 2 years ago