1
submitted 2 years ago* (last edited 2 years ago) by L3s@lemmy.world to c/technology@lemmy.world

Hey everybody, feel free to post any tech support or general tech discussion questions you have right here.

As always, be excellent to each other.

Yours truly, moderators.

3
submitted 9 hours ago* (last edited 8 hours ago) by Pro@programming.dev to c/technology@lemmy.world

Hey everyone, this is Olga, the product manager for the summary feature again. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.

Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk through things properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time for us to discuss here.

A few important things to start with:

  1. Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such.
  2. We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.
  3. With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

We’ve also started putting together some context around the main points brought up through the conversation so far, and will follow up with that in separate messages so we can discuss further.

5

Link to the article without the paywall

https://archive.ph/ieq1H

7
submitted 10 hours ago by bimbimboy@lemm.ee to c/technology@lemmy.world

Text to avoid paywall

The Wikimedia Foundation, the nonprofit organization which hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the Wikipedia editors community.

“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to the Wikimedia Foundation’s announcement that it would launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”

Two other editors simply commented, “Yuck.”

For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom, as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. As opposed to Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

In one experiment, in which summaries were enabled for users who had the Wikipedia browser extension installed, the generated summary showed up at the top of the article, and users had to click it to expand and read it. The summary was also flagged with a yellow “unverified” label.

An example of what the AI-generated summary looked like.

Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.

“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”

A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.

“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”
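
The Foundation hasn’t published the exact checkpoint, prompt, or serving setup, so the sketch below is purely illustrative: it assumes the publicly released open-weight CohereForAI/aya-101 checkpoint and the Hugging Face transformers library, and the prompt and parameters are guesses at what a plain-language summarizer might look like.

```python
# Hypothetical sketch only; the experiment's real pipeline is unpublished.
# Assumes the open-weight CohereForAI/aya-101 checkpoint (an mT5-based,
# instruction-tuned model). The experiment may have used a different Aya model.
# Requires: pip install transformers torch sentencepiece
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="CohereForAI/aya-101")

# Dense lead section of a Wikipedia article (truncated example text).
article_lead = (
    "Dopamine is a neuromodulatory molecule that plays several important "
    "roles in cells. It is an organic chemical of the catecholamine and "
    "phenethylamine families."
)

result = summarizer(
    "Summarize the following encyclopedia text in two or three "
    "plain-language sentences for a general reader: " + article_lead,
    max_new_tokens=120,
)

# In the experiment, output like this appeared collapsed at the top of the
# article, flagged with a yellow "unverified" label.
print(result[0]["generated_text"])
```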

“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”

“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump technical,” is where The Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”

The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”

9
submitted 12 hours ago* (last edited 12 hours ago) by bimbimboy@lemm.ee to c/technology@lemmy.world

Text to avoid paywall

The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.

Another initiative involves a review of chemicals and other “concerning ingredients” that appear in U.S. food but not in the food of other developed nations. And officials want to speed up the final stages of making a drug or medical device approval decision to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic, when workers raced to curb a spiraling death count.

“The F.D.A. will be focused on delivering faster cures and meaningful treatments for patients, especially those with neglected and rare diseases, healthier food for children and common-sense approaches to rebuild the public trust,” Dr. Marty Makary, the agency commissioner, and Dr. Vinay Prasad, who leads the division that oversees vaccines and gene therapy, wrote in the JAMA article.

The agency plays a central role in pursuing the agenda of the U.S. health secretary, Robert F. Kennedy Jr., and it has already begun to press food makers to eliminate artificial food dyes. The new road map also underscores the Trump administration’s efforts to smooth the way for major industries with an array of efforts aimed at getting products to pharmacies and store shelves quickly.

Some aspects of the proposals outlined in JAMA were met with skepticism, particularly the idea that artificial intelligence is up to the task of shearing months or years from the painstaking work of examining applications that companies submit when seeking approval for a drug or high-risk medical device.

“I don’t want to be dismissive of speeding reviews at the F.D.A.,” said Stephen Holland, a lawyer who formerly advised the House Committee on Energy and Commerce on health care. “I think that there is great potential here, but I’m not seeing the beef yet.”

10
submitted 12 hours ago by Pro@programming.dev to c/technology@lemmy.world

A study by Profound of OpenAI's ChatGPT, Google AI Overviews, and Perplexity shows that while ChatGPT mostly sources its information from Wikipedia, Google AI Overviews and Perplexity mostly source their information from Reddit.

11
submitted 13 hours ago by Pro@programming.dev to c/technology@lemmy.world

Around 10 French clients with leases on Teslas are suing the US carmaker, run by Elon Musk, because they consider the vehicles to be "extreme-right" symbols, the law firm representing them said on Wednesday.

12
submitted 15 hours ago* (last edited 15 hours ago) by Pro@programming.dev to c/technology@lemmy.world

Cambridge researchers urge public health bodies like the NHS to provide trustworthy, research-driven alternatives to platforms driven by profit.

Women deserve better than to have their menstrual tracking data treated as consumer data - Prof Gina Neff

Smartphone apps that track menstrual cycles are a “gold mine” for consumer profiling, collecting information on everything from exercise, diet and medication to sexual preferences, hormone levels and contraception use.

This is according to a new report from the University of Cambridge’s Minderoo Centre for Technology and Democracy, which argues that the financial worth of this data is “vastly underestimated” by users who supply profit-driven companies with highly intimate details in a market lacking in regulation.

The report’s authors caution that cycle tracking app (CTA) data in the wrong hands could put users’ job prospects at risk, expose them to workplace monitoring, health insurance discrimination and cyberstalking – and limit access to abortion.

They call for better governance of the booming ‘femtech’ industry to protect users when their data is sold at scale, arguing that apps must provide clear consent options rather than all-or-nothing data collection, and urge public health bodies to launch alternatives to commercial CTAs.

17

cross-posted from: https://lemmy.world/post/31160697

Skip Timestamps and Generated Summary below:

Skip Timestamps:

  1. 0:00.000 - 0:07.000 Intermission
  2. 25:24.000 - 27:27.022 Sponsor

Generated Summary:

  • Main Topic: The video discusses Palantir Technologies and its increasing role in centralizing and managing US government data, raising concerns about privacy and potential abuse of power.

  • Key Points:

    • An executive order in 2025 aimed to eliminate information silos within the government, centralizing data access.
    • Palantir is heavily involved in building databases for government agencies, including immigration enforcement (the DOGE project), the IRS, the CDC and Homeland Security.
    • The speaker draws parallels to the Patriot Act and post-9/11 surveillance expansions, arguing that justifications for data collection often lead to broader applications beyond the initial stated purpose.
    • Concerns are raised about the lack of oversight and potential for misuse of centralized data by Palantir, a private company.
    • The video highlights the historical context of Palantir's founding, linking it to the "Total Information Awareness" initiative and figures like John Poindexter.
    • The speaker emphasizes the irony of a private company now holding the kind of data that caused public outcry when the NSA was revealed to be collecting it.
  • Highlights:

    • The Trump administration's motto: "Everyone is converting to Palantir."
    • Palantir's involvement in managing sensitive data across multiple federal agencies, including health and financial information.
    • The connection between Palantir's founders and the "Total Information Awareness" program.
    • The comparison of Palantir's current role to the NSA's controversial data collection practices revealed by Edward Snowden.
    • The video ends with a call for more attention to Palantir's activities and its potential impact on civil liberties.

About Channel:

Independent, Unencumbered Analysis and Investigative Reporting, Captive to No Dogma or Faction.

19
submitted 1 day ago* (last edited 1 day ago) by Pro@programming.dev to c/technology@lemmy.world

Using public information and making small tweaks, an alpha-seeking AI fund manager outperformed 93% of mutual fund managers by an average of 600%.

24
Android 16 is here (blog.google)
submitted 1 day ago* (last edited 1 day ago) by Pro@programming.dev to c/technology@lemmy.world

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts; OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago