1
8
submitted 6 hours ago by yogthos@lemmy.ml to c/technology@lemmy.ml
2
10
Home - EU Open Source Week (opensourceweek.eu)
submitted 9 hours ago by Zerush@lemmy.ml to c/technology@lemmy.ml

Every year, at the end of January and the beginning of February, thousands of people from Europe and around the world gather in Brussels to discuss open source and open technologies. The main attraction is FOSDEM, Europe’s largest open source conference, which has inspired a range of side events, social activities, and workshops. For those interested in open technology, digital policy, and EU developments, OpenForum Europe’s EU Open Source Policy Summit brings together open source leaders and policymakers. Together, these events make up the EU Open Source Week.

3
12
submitted 22 hours ago by yogthos@lemmy.ml to c/technology@lemmy.ml
4
15
submitted 1 day ago by yogthos@lemmy.ml to c/technology@lemmy.ml
5
4
submitted 1 day ago by yogthos@lemmy.ml to c/technology@lemmy.ml
6
85

Suppliers of parts for Nvidia’s H200 have paused production after Chinese customs officials blocked shipments of the newly approved artificial intelligence processors from entering China, according to a report.

Nvidia had expected more than one million orders from Chinese clients, the report said, adding that its suppliers had been operating around the clock to prepare for shipping as early as March.

Chinese customs authorities this week told customs agents that Nvidia’s H200 chips were not permitted to enter the country, Reuters reported.

Sources have also said government officials summoned domestic tech firms to warn them against buying the chips unless it was necessary.

7
14
The A in AGI stands for Ads (ossa-ma.github.io)
submitted 1 day ago by yogthos@lemmy.ml to c/technology@lemmy.ml
8
6
submitted 2 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
9
-2
10
16
submitted 3 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml

The paper argues that we have been wasting expensive GPU cycles by forcing transformers to relearn static things like names and common phrases through deep computation. Standard models have no way to simply look something up, so they end up simulating memory by passing tokens through layer after layer of feed-forward networks. DeepSeek introduces a module called Engram that adds a dedicated lookup step for local N-gram patterns, providing a new axis for scaling a model that is separate from the usual compute-heavy Mixture of Experts approach.

The architecture uses multi-head hashing to fetch static embeddings for specific token sequences, which are then filtered through a context-aware gate to make sure they actually fit the current context. The authors find a U-shaped scaling law: the best performance comes from splitting the parameter budget between neural computation and this static memory. By letting the memory handle simple local associations, the model effectively behaves as if it were deeper, because its early layers are no longer bogged down with basic reconstruction.
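To make the mechanism concrete, here is a minimal PyTorch sketch of the idea as described above: deterministic bigram hashing into per-head embedding tables, summed and then scaled by a sigmoid gate driven by the current hidden state. The class name, sizes, hash multipliers, and exact gating form are all invented for illustration and are not the paper's configuration.

```python
import torch
import torch.nn as nn

class EngramSketch(nn.Module):
    """Toy sketch of a hashed N-gram memory with a context-aware gate.

    Everything here (bucket counts, bigram hashing, gating form) is an
    illustrative guess at the idea, not the paper's actual design.
    """

    def __init__(self, num_buckets=4096, dim=256, num_heads=4):
        super().__init__()
        self.num_buckets = num_buckets
        self.num_heads = num_heads
        # One static embedding table per hash head.
        self.tables = nn.ModuleList(
            [nn.Embedding(num_buckets, dim) for _ in range(num_heads)]
        )
        # Distinct multipliers send the same bigram to different buckets in
        # each head, so a collision in one head rarely repeats in another.
        self.register_buffer(
            "mults", torch.tensor([40503, 99991, 31337, 12289][:num_heads])
        )
        # The current hidden state decides, per channel, how much of the
        # retrieved static memory actually fits the context.
        self.gate = nn.Linear(dim, dim)

    def forward(self, tokens, hidden):
        # Deterministic bigram code per position: (previous token, token).
        # Wraps at position 0, which is fine for a sketch.
        prev = torch.roll(tokens, shifts=1, dims=-1)
        code = tokens * 1_000_003 + prev                             # (B, T)
        idx = (code.unsqueeze(-1) * self.mults) % self.num_buckets   # (B, T, H)
        mem = sum(self.tables[h](idx[..., h]) for h in range(self.num_heads))
        g = torch.sigmoid(self.gate(hidden))    # context-aware gate
        return hidden + g * mem

tokens = torch.randint(0, 32_000, (1, 16))
hidden = torch.randn(1, 16, 256)
print(EngramSketch()(tokens, hidden).shape)     # torch.Size([1, 16, 256])
```

The design point to notice is that the lookup path involves no sequence-wide matrix multiplies: the indices are pure functions of the token IDs, which is exactly what makes the offloading trick below possible.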

One of the best bits is how they handle hardware constraints: the massive lookup tables are offloaded to host RAM. Since the lookups are deterministic functions of the input tokens, the system can prefetch the data from CPU memory before the GPU ever needs it. This means you can scale to tens of billions of extra parameters with almost zero impact on speed, since retrieval happens while the earlier layers are still computing.
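A rough sketch of that overlap, again with made-up sizes: because the bucket indices depend only on the raw input tokens, the needed rows can be gathered into pinned host memory and copied to the device asynchronously while the GPU is still busy with earlier layers.

```python
import torch

dim, buckets, seq = 256, 100_000, 16
table = torch.randn(buckets, dim)              # huge table lives in host RAM
staging = torch.empty(seq, dim).pin_memory()   # pinned staging buffer

def prefetch_engram_rows(bucket_ids, device):
    # Gather into pinned memory, then issue an async host-to-device copy;
    # non_blocking only truly overlaps compute when the source is pinned.
    torch.index_select(table, 0, bucket_ids, out=staging)
    return staging.to(device, non_blocking=True)

if torch.cuda.is_available():
    ids = torch.randint(0, buckets, (seq,))
    rows = prefetch_engram_rows(ids, "cuda")   # returns before the copy finishes
    # ... run the earlier transformer layers here, then consume `rows`
    # at the Engram layer once the copy has landed on the current stream.
```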

The benchmarks show that this pays off across the board, especially on long-context tasks, where the model can focus its attention on global information rather than local phrases. Even on math and coding the model gets a boost, because it is no longer wasting its internal reasoning depth on things that should just be in a lookup table. Going forward, this kind of conditional memory could become a standard part of sparse models, because it sidesteps the physical memory limits of current hardware.

11
12
The Resonant Computing Manifesto (resonantcomputing.org)
submitted 3 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
12
8
submitted 3 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
13
20
submitted 4 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
14
19
submitted 4 days ago by chobeat@lemmy.ml to c/technology@lemmy.ml
15
12
submitted 4 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
16
37
submitted 5 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
17
57
submitted 5 days ago by Salamence@lemmy.zip to c/technology@lemmy.ml

cross-posted from: https://hexbear.net/post/7329892

cross-posted from: https://news.abolish.capital/post/19564

Elon Musk, the world's richest man and the owner of the social media app X, has faced a mountain of outrage in recent weeks as his platform's artificial intelligence chatbot "Grok" has been used to generate sexualized deepfake images of nonconsenting women and children, and Musk himself has embraced open white nationalism.

But none of this seems to be of particular concern to Defense Secretary Pete Hegseth. Despite the swirl of scandal, he announced on Monday that Musk's chatbot would be given intimate access to reams of military data as part of what the department described as its new "AI acceleration strategy."

During a speech at the headquarters of SpaceX, another company owned by Musk, Hegseth stood alongside the billionaire and announced that later this month, the department plans to “make all appropriate data” from the military’s IT systems available for “AI exploitation,” including “combat-proven operational data from two decades of military and intelligence operations.”

As the Associated Press noted, it's a departure from the more cautious approach the Biden administration took toward integrating AI with the military, which included bans on certain uses "such as applications that would violate constitutionally protected civil rights or any system that would automate the deployment of nuclear weapons."

While it's unclear if those bans remain in place under President Donald Trump, Hegseth said during the speech he will seek to eschew the use of any AI models "that won't allow you to fight wars" and will seek to act "without ideological constraints that limit lawful military applications," before adding that the Pentagon's AI will not be "woke” or “equitable.”

He added that the department “will unleash experimentation, eliminate bureaucratic barriers, focus our investments, and demonstrate the execution approach needed to ensure we lead in military AI,” and that “we will become an ‘AI-first’ warfighting force across all domains.”


Hegseth's embrace of Musk hardly comes as a surprise, given Musk's role in the Trump administration's dismantling of the administrative state as head of its so-called "Department of Government Efficiency" (DOGE) last year, and his record $290 million in support of the president's 2024 election campaign.

But it is quite noteworthy given the notoriety Grok has received of late, after xAI introduced what it called “spicy mode” late last year, which “allows users to digitally remove clothing from images and has been deployed to produce what amounts to child pornography—along with other disturbing behavior, such as sexualizing the deputy prime minister of Sweden,” according to a report last month from MS NOW (formerly MSNBC).

It's perhaps the most international attention the bot has gotten, with the United Kingdom's media regulator launching a formal investigation on Monday to determine whether Grok violated the nation's Online Safety Act by failing to protect users from illegal content, including child sexual abuse material.

The investigation could result in fines and, if those are ignored, could lead to the chatbot being banned, as it was over the weekend in Malaysia and Indonesia. Authorities in the European Union, France, Brazil, and elsewhere are also reviewing the app for its spread of nonconsensual sexual images, according to the New York Times.

One example of how Grok is being used to target women. Swedish Deputy Prime Minister Ebba Busch being sexualised, degraded, and humiliated step-by-step by Grok. All the images accurately reflect the prompts provided.

— Eliot Higgins (@eliothiggins.bsky.social) January 5, 2026 at 12:37 PM

It's only the latest scandal involving Grok, which Musk pitched as an "anti-woke" and "truth-seeking" alternative to applications like ChatGPT and Google's Gemini.

At several points last year, the chatbot drew attention for its sudden tendency to launch into racist and antisemitic tirades—praising Adolf Hitler, accusing Jewish people of controlling Hollywood and the government, and promoting Holocaust denial.

Before that, users were baffled when the bot began directing unrelated queries about everything from cats to baseball back to discussions about Musk's factually dubious pet theory of "white genocide" in South Africa, which the chatbot later revealed it was "instructed" to talk about.

Hegseth’s announcement on Monday also comes as Musk has completed his descent into undisguised support for a white nationalist ideology over the past week.

The billionaire's steady lurch to the far right has been a years-long process, capped off last year by his enthusiastic support for the neofascist Alternative for Germany Party and an apparent Nazi salute at Trump's second inauguration.

But his racist outlook became impossible to deny last week when he expressed support for a pair of posts on X stating that white people must "reclaim our nations" or "be conquered, enslaved, [removed], and genocided" and that "if white men become a minority, we will be slaughtered," necessitating "white solidarity."


While details about the expansiveness of Grok’s use by the military remain scarce, Musk's AI platform, xAI, announced in July that it had inked a deal with the Pentagon worth nearly $200 million (notably just a week after the bot infamously referred to itself as “MechaHitler”).

In September, reportedly following direct pressure from the White House to roll it out "ASAP," the General Services Administration announced a "OneGov" agreement, making Grok available to every federal agency for just $0.42 apiece.

That same month, Sen. Elizabeth Warren (D-Mass.) sent a letter to Hegseth warning that Musk, who'd also used Grok extensively under DOGE to purge disloyal government employees, was "gaining improper advantages from unique access to DOD data and information." She added that Grok's propensity toward "inaccurate outputs and misinformation" could "harm DOD's strategic decisionmaking."

Following this week's announcement, JB Branch, the Big Tech accountability advocate at Public Citizen, said on Tuesday that "allowing an AI system with Grok’s track record of repeatedly generating nonconsensual sexualized images of women and children to access classified military or sensitive government data raises profound national security, civil rights, and public safety concerns."

"Deploying Grok across other areas of the federal government is worrying enough, but choosing to use it at the Pentagon is a national security disgrace," he added. "If an AI system cannot meet basic safety and integrity standards, expanding its reach to include classified data puts the American public and our nation’s safety at risk.”


From Common Dreams via This RSS Feed.

18
78
submitted 6 days ago* (last edited 6 days ago) by geneva_convenience@lemmy.ml to c/technology@lemmy.ml

Jan 14 (Reuters) - Chinese authorities have told domestic companies to stop using cybersecurity software made by roughly a dozen firms from the U.S. and Israel due to national security concerns, two people briefed on the matter said.

Broadcom-owned VMware, Palo Alto Networks, and Fortinet are among the U.S. firms whose cybersecurity software has been banned, while Check Point Software Technologies is among the Israeli companies, they said.

19
13
20
7
submitted 5 days ago* (last edited 5 days ago) by mooneska@lemmy.ml to c/technology@lemmy.ml

I recently bought a Dell Latitude 7430 with an i7-1265U (10 cores, 1.8 GHz), 16 GB of (I think) DDR4 RAM, and a 256 GB SSD for $250. I still have time to return the machine, and I was wondering whether I got a good deal here or not.

My purpose is mostly for general school stuff. Spreadsheets, docs, Zoom meetings, and the like. I might be getting into the world of CS, but I'm not at a point yet where I would need much power.

Still, the 256 GB of storage worries me, and unfortunately it can't be upgraded. But if I'm not doing much besides the basic tasks expected of a work laptop, do I really need more?

Should I consider returning it and try to get another deal? Keep it? Or something else altogether?

21
25
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
22
9
submitted 5 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
23
13
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
24
8
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml

Most people in the field know that models usually fall apart after a few hundred steps because small errors keep compounding until the whole process is ruined. The paper proposes a system called MAKER, which uses a strategy they call massively decomposed agentic processes: instead of asking one big model to do everything, the entire task is broken down into the smallest possible pieces, so each microagent only has to worry about one single move.

For their main test they used a twenty-disk version of the Towers of Hanoi puzzle, which requires over a million individual moves to finish. They found that even small models can be extremely reliable if you set them up correctly. One of the main tricks is a voting scheme where multiple agents solve the same tiny subtask and the system only moves forward once one answer leads the others by a set number of votes. This acts as a safety net that catches random mistakes before they can corrupt the rest of the chain; a sketch of the rule follows.
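Here is a minimal Python sketch of that margin rule as I read it; `sample_fn` stands in for "ask a fresh microagent for this subtask," and the values of k and max_samples are illustrative, not the paper's settings.

```python
import random
from collections import Counter

def first_to_ahead_by_k(sample_fn, k=3, max_samples=50):
    """Keep sampling candidate answers until one leads every rival by k votes."""
    votes = Counter()
    for _ in range(max_samples):
        answer = sample_fn()
        if answer is None:        # red-flagged sample (see below), discarded
            continue
        votes[answer] += 1
        ranked = votes.most_common(2)
        lead = ranked[0][1] - (ranked[1][1] if len(ranked) > 1 else 0)
        if lead >= k:             # a clear winner has emerged
            return ranked[0][0]
    raise RuntimeError("no answer reached the required vote margin")

# Demo: a noisy "agent" that gets this subtask right 80% of the time.
agent = lambda: "move disk 1 to C" if random.random() < 0.8 else "move disk 1 to B"
print(first_to_ahead_by_k(agent))  # almost always the correct move
```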

Another interesting part of their approach is red flagging, which is basically just throwing away any response that looks suspicious. If a model rambles for too long or messes up the formatting, that attempt is discarded and retried, because those behaviors usually mean the model is confused and likely to make a logic error. By combining this extreme level of task decomposition with constant voting and quick discarding of bad samples, they completed the entire million-step process with zero errors.
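A red-flag check might look something like the following; the move format and the length cap are invented for the example, and the paper's exact checks may differ. Wrapped around a raw agent call, it returns None for suspicious samples, which the voting loop above simply skips.

```python
import re

# Expected answer shape for a single Hanoi move (invented for this sketch).
MOVE_RE = re.compile(r"move disk \d+ to [ABC]")

def red_flag(response: str, max_chars: int = 60):
    """Return the cleaned answer, or None to discard the sample."""
    text = response.strip()
    if len(text) > max_chars:        # rambling usually signals confusion
        return None
    if not MOVE_RE.fullmatch(text):  # broken formatting often precedes logic errors
        return None
    return text
```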

And it turns out that you do not even need the most expensive or smartest models, since relatively small ones performed just as well on these tiny steps. Scaling up AI reliability may be more about how we organize the work than about making the models bigger and bigger. They also ran extra tests on hard math problems like large-digit multiplication and found that the same recursive decomposition and voting logic worked there as well.

25
11
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml

Technology

40893 readers
159 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads; such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post Nazi, pedophilia, or gore content

4: Always post article URLs or their archived versions as sources, NOT screenshots, to help blind users

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies' actions affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 6 years ago
MODERATORS