
Archived version

After five years of pioneering research into the abuse of social platforms, the Stanford Internet Observatory is winding down. Its founding director, Alex Stamos, left his position in November. Renee DiResta, its research director, left last week after her contract was not renewed. One other staff member's contract expired this month, while others have been told to look for jobs elsewhere, sources say.

Some members of the eight-person team might find other jobs at Stanford, and it’s possible that the university will retain the Stanford Internet Observatory branding, according to sources familiar with the matter. But the lab will not conduct research into the 2024 election or other elections in the future.

The shutdown comes amid a sustained and increasingly successful campaign among Republicans to discredit research institutions and discourage academics from investigating political speech and influence campaigns.

SIO and its researchers have been sued three times by conservative groups alleging that they colluded illegally with the federal government to censor speech, forcing Stanford to spend millions of dollars to defend its staff and students.

In parallel, Republican House Judiciary Chairman Jim Jordan and his Orwellian “Subcommittee on the Weaponization of the Federal Government” have subpoenaed documents at Stanford and other universities, selectively leaked fragments of them to friendly conservative outlets, and misrepresented their contents in public statements.

And in an actual weaponization of government, Jordan’s committee has included students — both undergraduates and graduates — in its subpoena requests, publishing their names and putting them at risk of threats or worse.

The remnants of SIO will be reconstituted under Jeff Hancock, the lab’s faculty sponsor. Hancock, a professor of communication, runs a separate program known as the Stanford Social Media Lab. SIO’s work on child safety will continue there, sources said.

Two of SIO’s major initiatives — the peer-reviewed Journal of Online Trust and Safety and its Trust and Safety Research Conference — will also continue. (The journal is funded through a separate grant from the Omidyar Network.)

But in quietly dismantling SIO, the university seems to have calculated that the lab had become more trouble than it was worth.

In a statement emailed after publication, Stanford strongly disputed the suggestion that SIO is being dismantled. "The important work of SIO continues under new leadership, including its critical work on child safety and other online harms, its publication of the Journal of Online Trust and Safety, the Trust and Safety Research Conference, and the Trust and Safety Teaching Consortium," a spokesperson wrote. "Stanford remains deeply concerned about efforts, including lawsuits and congressional investigations, that chill freedom of inquiry and undermine legitimate and much needed academic research – both at Stanford and across academia."

submitted 1 day ago by 0x815@feddit.de to c/technology@beehaw.org

Mozilla has reinstated certain add-ons for Firefox that earlier this week had been banned in Russia by the Kremlin.

The browser extensions, which are hosted on the Mozilla store, were made unavailable in the Land of Putin on or around June 8 after a request by the Russian government and its internet censorship agency, Roskomnadzor.

Among those extensions were three pieces of code explicitly designed to circumvent state censorship – a VPN; Censor Tracker, a multi-purpose add-on that, among other things, let users see which websites shared user data; and a tool for accessing Tor websites.

The day the ban went into effect, Roskomsvoboda – the developer of Censor Tracker – took to the official Mozilla forums to ask why its extension had suddenly been banned in Russia with no warning.

"We recently noticed that our add-on is now unavailable in Russia, despite being developed specifically to circumvent censorship in Russia," dev zombbo complained. "We did not violate Mozilla's rules in any way, so this decision seems strange and unfair, to be honest."

Another developer for a banned add-on chimed in that they weren't informed either.

The internet org's statement at the time described the ban as merely temporary. It turns out that wasn't mere PR fluff: Mozilla tells The Register that the ban has now been lifted.

"In alignment with our commitment to an open and accessible internet, Mozilla will reinstate previously restricted listings in Russia," the group declared. "Our initial decision to temporarily restrict these listings was made while we considered the regulatory environment in Russia and the potential risk to our community and staff.

"We remain committed to supporting our users in Russia and worldwide and will continue to advocate for an open and accessible internet for all."

Lifting the ban wasn't strictly necessary for users to regain access to the add-ons – two of them are fully open source, and one of the VPN extensions could be downloaded from the developer's website.

Router scan (discuss.tchncs.de)

Today I scanned my router with routersploit. The scan ended and showed one vulnerability: eseries_themoon_rce.

I searched the internet and found that this is a vulnerability in Linksys E-Series routers. But I'm not on Linksys at all, and I didn't find anything about getting rid of it.

I'm wondering if someone knows how to eliminate this vulnerability?
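For context: the routersploit module behind that result probes for the "TheMoon" remote-code-execution bug, which public write-ups tie to the tmUnblock.cgi CGI script on Linksys E-Series firmware. If you're not on Linksys hardware, the finding is most likely a false positive. Below is a minimal sketch of how you might confirm that by hand; the router address and port are placeholders to adjust for your own network.

```python
# Minimal probe for the endpoint associated with "TheMoon" (Linksys E-Series).
# ROUTER and PORT are assumptions: substitute your own device's values.
import requests

ROUTER = "192.168.1.1"  # placeholder: your router's LAN address
PORT = 8080             # placeholder: the port flagged by the scan

url = f"http://{ROUTER}:{PORT}/tmUnblock.cgi"
try:
    resp = requests.get(url, timeout=5)
    # Vulnerable Linksys firmware serves this CGI script; a 404 or a
    # connection error suggests the routersploit hit is a false positive.
    print(f"{url} -> HTTP {resp.status_code}")
except requests.RequestException as exc:
    print(f"No response from {url}: {exc}")
```

If the endpoint genuinely answers on a non-Linksys router, a firmware update from the vendor is the usual remedy, since the bug lives in the firmware rather than in any setting you can toggle from the admin UI.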

submitted 1 day ago* (last edited 1 day ago) by recursive_recursion@programming.dev to c/technology@beehaw.org

oh shit!

this is gonna be a good one✨


Some of the world’s poorest countries have been investing heavily in digital ID systems which, it is claimed, will deliver democratic and development dividends. Africa has been at the forefront of this push, supported by the World Bank, UN agencies and the international community. Some of Africa’s most fragile states have been encouraged to spend billions of dollars on biometric systems, from national IDs to voting systems.

While Africa has become a lucrative market for multinational tech vendors, the promised benefits – trustworthy election results and a revolution in the way that states deliver vital services – are far harder to discern.

At the 2024 ID4Africa trade fair in South Africa, the promises kept coming: economic growth, empowering individuals, reducing government spending, enabling trust and being a key tool in solving humanitarian crises.

The conference sponsors include a who’s who of companies that have benefited from contracts meant to confer legitimacy on electoral processes and unlock the potential of Africa’s demographic advantage over other ageing continents.

A legal identity is among the UN’s sustainable development goals, where it is defined as a fundamental human right. The drive to meet this goal has seen near-bankrupt states prioritise the capture and storage of biometric data from iris scans and fingerprints to facial images.

We set out to investigate what has become of the blockbuster deals struck in sub-Saharan Africa. What has actually been delivered? Who has benefited? How have they been financed? And how have people on the ground in those countries been affected?

Methods

As well as exploring the biometrics industry and how it has courted customers in a “frontier market”, our investigation focused on a representative cross-section of African countries where big tech investments have gone in three distinct directions.

In Uganda, where supposedly democratic elections have failed to deliver a change of government in four decades, we explore how a Chinese tech vendor provided biometric systems which have become the foundations of a surveillance state.

In Mozambique, we probe the worsening conduct of elections in a fragile democracy. The gas-rich nation is beset by rising poverty and a brutal counter-insurgency, but its ballooning biometrics costs have failed to breed confidence in democracy.

In the Democratic Republic of Congo, we investigate a succession of phantom biometrics deals which have seen billions of dollars committed on paper but have so far failed to deliver a national population registry or any functioning ID cards across successive governments.

Working with partners at Bloomberg over the course of nine months, the team combined in-depth ground reporting with expert interviews and accounts from confidential sources to reconstruct deals in the three countries, from tender process to societal fallout. In support of these testimonies, we analysed thousands of pages of documents, ranging from bank records and business registries to unpublished contracts and correspondence between governments, vendors and middlemen.

The result is the most detailed account yet of the failed promise of biometric technologies and one that looks at the accompanying harms for affected communities, as well as wrongdoing by several companies and individuals.

Storylines

In Uganda, where a national ID system ought to be a success story, we find it feeding a sweeping surveillance state built in cooperation with China’s Huawei. Nick Opiyo, one of East Africa’s leading human rights lawyers, who has defended victims of government crackdowns, has been a victim of widespread digital surveillance.

A succession of biometric tools have become central to many of the day-to-day functions of the state and also a powerful mechanism for surveilling politicians, journalists, human rights defenders and ordinary citizens.

A $126 million deal with Huawei has given Uganda the capacity to deploy facial and number plate recognition technology, as well as AI capabilities. Sensitive personal data, required to register a SIM card or make a bank transaction, can be accessed at will by state actors with no due process.

“There’s almost no confidentiality in my work any more,” Opiyo told Bloomberg. “There’s pervasive fear and self-censorship.”

The second and third stories in the series will follow.


If Apple aren't paying OpenAI and OpenAI aren't paying Apple, it means that consumers are paying both.


Elon is the gift that keeps on giving. He's decided that because it's Friday, we should all have a pile in.

On a less scornful and more serious note: if he could get a working prototype up, it would be a good thing. Though I suspect that he, along with all the other stupidly rich people, would go out of their way to vote against providing parachute policies for the economy, such as UBI for all the displaced employees.


As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory "fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them."


Microsoft President Brad Smith fielded questions about the tech giant's security practices and ties to China at a House homeland security panel on Thursday, a year after alleged China-linked hackers spied on federal emails by hacking the firm.

The hackers accessed 60,000 U.S. State Department emails by breaking into Microsoft's systems last summer, while Russia-linked cybercriminals separately spied on Microsoft's senior staff emails this year, according to the company's disclosures.

The congressional hearing comes amid increasing federal scrutiny over Microsoft, the world's biggest software-maker, which is also a key vendor to the U.S. government and national security establishment. Microsoft's business accounts for around 3% of the U.S. federal IT budget, Smith said at the hearing.

Lawmakers grilled Microsoft over its failure to prevent both the Russian and Chinese hacks, which they said put federal networks at risk even though the attackers did not use sophisticated means.

The company emails Russian hackers accessed also "included correspondence with government officials," Democrat Bennie Thompson said.

"**Microsoft is one of the federal government's most important technology and security partners, **but we cannot afford to allow the importance of that relationship to enable complacency or interfere with our oversight," he added.

Lawmakers drew on the findings of a scathing report in April by the Cyber Safety Review Board (CSRB) – a group of experts formed by U.S. Secretary of Homeland Security Alejandro Mayorkas – which slammed Microsoft for its lack of transparency over the China hack, calling it preventable.

"We accept responsibility for each and every finding in the CSRB report," Smith said at the hearing, adding that Microsoft had begun acting on a majority of the report's recommendations.

"We're dealing with formidable foes in China, Russia, North Korea, Iran, and they're getting better," said Smith. "They're getting more aggressive ... They're waging attacks at an extraordinary rate."

Thompson criticised Smith's company for failing to detect the hack, which was discovered instead by the U.S. State Department. Smith responded: "That's the way it should work. No one entity in the ecosystem can see everything."

But Congressman Thompson was not convinced.

"It's not our job to find the culprits. That's what we're paying you for," Thompson said.

Panel members also probed Smith for details on Microsoft's business in China, noting that it had invested heavily in setting up research centers there.

"Microsoft's presence in China creates a mix of complex challenges and risks," said Congressman Mark Green from Mississippi, who chaired the panel.

Microsoft earns around 1.5% of its revenue from China and is working to reduce its engineering presence there, said Smith.

The company has faced heightened criticism from its security industry peers over the past year over the breaches and lack of transparency.

Smith's responses at the hearing earned praise from some on the panel, such as Republican Congresswoman Marjorie Taylor Greene. "You said you accept a responsibility, and I just want to commend you for that," Greene told him.

Following the board's criticisms, Microsoft had said it was working on improving its processes and enforcing security benchmarks. In November it launched a new cybersecurity initiative and said it was making security the company's top priority "above all else - over all other features."


"So the cop was tracking random people off social media using this incredibly invasive technology, on a pretty regular basis."

"That's bad."

"But, an audit detected his abuse of the system and he was slated for termination."

"That's good!"

"But the system still exists, and can be used for nefarious purposes as long as those are state-approved uses backed by a case number, which is honestly a bigger deal and concern than one random officer using it for, presumably, stalking."

"That's bad."

"And, from the description of the nature of their auditing, it would be pretty easy for an officer to use the system abusively as long as they were more careful to disguise the nature of their access than this guy was."

"That's... also bad."

"And, it's notable that the auditing in question was done by his department, not ClearView itself. It sounds like it's up to each individual law enforcement agency to make sure its officers are using it ethically, without centralized oversight from ClearView let alone any type of judicial or legal oversight, which sounds like a recipe for abuse even leaving aside the issue of state-sanctioned abuse of the system and the general increase in police powers it represents."

"... Can I go now?"

submitted 2 days ago by 0x815@feddit.de to c/technology@beehaw.org

Left unchecked, the technique, which weaponizes emotional data for political gain, could erode the foundations of a fair and informed society.

Aram Sinnreich, Chair of Communication Studies at American University • Jesse Gilbert, former founding Chair of the Media Technology department at Woodbury University.

One of the foundational concepts in modern democracies is what’s usually referred to as the marketplace of ideas, a concept commonly traced to political philosopher John Stuart Mill’s On Liberty (1859), though its roots stretch back at least another two centuries. The basic idea is simple: In a democratic society, everyone should share their ideas in the public sphere, and then, through reasoned debate, the people of a country may decide which ideas are best and how to put them into action, such as by passing new laws. This premise is a large part of the reason that constitutional democracies are built around freedom of speech and a free press — principles enshrined, for instance, in the First Amendment to the U.S. Constitution.

Like so many other political ideals, the marketplace of ideas has been more challenging in practice than in theory. For one thing, there has never been a public sphere that was actually representative of its general populace. Enfranchisement for women and racial minorities in the United States took centuries to codify, and these citizens are still disproportionately excluded from participating in elections by a variety of political mechanisms. Media ownership and employment also skew disproportionately male and white, meaning that the voices of women and people of color are less likely to be heard. And, even for people who overcome the many obstacles to entering the public sphere, that doesn’t guarantee equal participation; as a quick scroll through your social media feed may remind you, not all voices are valued equally.

Above and beyond the challenges of entrenched racism and sexism, the marketplace of ideas has another major problem: Most political speech isn’t exactly what you’d call reasoned debate. There’s nothing new about this observation; 2,400 years ago, the Greek philosopher Aristotle argued that logos (reasoned argumentation) is only one element of political rhetoric, matched in importance by ethos (trustworthiness) and pathos (emotional resonance). But in the 21st century, thanks to the secret life of data, pathos has become datafied, and therefore weaponized, at a hitherto unimaginable scale. And this doesn’t leave us much room for logos, spelling even more trouble for democracy.

An excellent — and alarming — example of the weaponization of emotional data is a relatively new technique called neurotargeting. You may have heard this term in connection with the firm Cambridge Analytica (CA), which briefly dominated headlines in 2018 after its role in the 2016 U.S. presidential election and the UK’s Brexit vote came to light. To better understand neurotargeting and its ongoing threats to democracy, we spoke with one of the foremost experts on the subject: Emma Briant, a journalism professor at Monash University and a leading scholar of propaganda studies.


Neurotargeting, in its simplest form, is the strategic use of large datasets to craft and deliver a message intended to sideline the recipient’s focus on logos and ethos and appeal directly to the pathos at their emotional core. Neurotargeting is prized by political campaigns, marketers, and others in the business of persuasion because they understand, from centuries of experience, that provoking strong emotional responses is one of the most reliable ways to get people to change their behavior. As Briant explained, modern neurotargeting techniques can be traced back to experiments undertaken by U.S. intelligence agencies in the early years of the 21st century that used functional magnetic resonance imaging (fMRI) machines to examine the brains of subjects as they watched both terrorist propaganda and American counterpropaganda. One of the commercial contractors working on these government experiments was Strategic Communication Laboratories, or the SCL Group, the parent company of CA.

A decade later, building on these insights, CA was the leader in a burgeoning field of political campaign consultancies that used neurotargeting to identify emotionally vulnerable voters in democracies around the globe and influence their political participation through specially crafted messaging. While the company was specifically aligned with right-wing political movements in the United States and the United Kingdom, it had a more mercenary approach elsewhere, selling its services to the highest bidder seeking to win an election. Its efforts to help Trump win the 2016 U.S. presidential election offer an illuminating glimpse into how this process worked.

As Briant has documented, one of the major sources of data used to help the Trump campaign came from a “personality test” fielded via Facebook by a Cambridge University professor working on behalf of CA, who ostensibly collected the responses for scholarly research purposes only. CA took advantage of Facebook’s lax protections of consumer data and ended up harvesting information from not only the hundreds of thousands of people who opted into the survey, but also an additional 87 million of their connections on the platform, without the knowledge or consent of those affected. At the same time, CA partnered with a company called Gloo to build and market an app that purported to help churches maintain ongoing relationships with their congregants, including by offering online counseling services. According to Briant’s research, this app was also exploited by CA to collect data about congregants’ emotional states for “political campaigns for political purposes.” In other words, the company relied heavily on unethical and deceptive tactics to collect much of its core data.

Once CA had compiled data related to the emotional states of countless millions of Americans, it subjected those data to analysis using a psychological model called OCEAN — an acronym in which the N stands for neuroticism. As Briant explained, “If you want to target people with conspiracy theories, and you want to suppress the vote, to build apathy or potentially drive people to violence, then knowing whether they are neurotic or not may well be useful to you.”

CA then used its data-sharing relationship with right-wing disinformation site Breitbart and developed partnerships with other media outlets in order to experiment with various fear-inducing political messages targeted at people with established neurotic personalities — all, as Briant detailed, to advance support for Trump. Toward this end, CA made use of a well-known marketing tool called A/B testing, a technique that compares the success rate of different pilot versions of a message to see which is more measurably persuasive.
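A/B testing itself is standard, openly documented practice rather than anything proprietary to CA. As a rough illustration (with invented click counts, not data from any real campaign), here is how one might compare two message variants with a simple two-proportion z-test:

```python
# A/B test sketch: compare click-through rates of two message variants.
# All counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: same click rate."""
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (clicks_b / n_b - clicks_a / n_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal CDF via erf
    return z, p_value

# Variant B drew more clicks: is the difference statistically meaningful?
z, p = two_proportion_z_test(clicks_a=120, n_a=2000, clicks_b=165, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z of about 2.77, p of about 0.006
```

Run continuously across many variants and audience slices, machinery of this kind is what makes the persuasiveness of a given framing measurable and optimizable.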

Armed with these carefully tailored ads and a master list of neurotic voters in the United States, CA then set out to change voters’ behaviors depending on their political beliefs — getting them to the polls, inviting them to live political events and protests, convincing them not to vote, or encouraging them to share similar messages with their networks. As Briant explained, not only did CA disseminate these inflammatory and misleading messages to the original survey participants on Facebook (and millions of “lookalike” Facebook users, based on data from the company’s custom advertising platform), it also targeted these voters by “coordinating a campaign across media” including digital television and radio ads, and even by enlisting social media influencers to amplify the messaging calculated to instill fear in neurotic listeners. From the point of view of millions of targeted voters, their entire media spheres would have been inundated with overlapping and seemingly well-corroborated disinformation confirming their worst paranoid suspicions about evil plots that only a Trump victory could eradicate.

Although CA officially shut its doors in 2018 following the public scandals about its unethical use of Facebook data, parent company SCL and neurotargeting are still thriving. As Briant told us, “Cambridge Analytica isn’t gone; it’s just fractured, and [broken into] new companies. And, you know, people continue. What happens is, just because these people have been exposed, it then becomes harder to see what they’re doing.” If anything, she told us, former CA employees and other, similar companies have expanded their operations in the years since 2018, to the point where “our entire information world” has become “the battlefield.”

Unfortunately, Briant told us, regulators and democracy watchdogs don’t seem to have learned their lesson from the CA scandal. “All the focus is about the Russians who are going to ‘get us,’” she said, referring to one of the principal state sponsors of pro-Trump disinformation, but “nobody’s really looking at these firms and the experiments that they’re doing, and how that then interacts with the platforms” with which we share our personal data daily.

Unless someone does start keeping track and cracking down, Briant warned, the CA scandal will come to seem like merely the precursor to a wave of data abuse that threatens to destroy the foundations of democratic society. In particular, she sees a dangerous trend of both information warfare and military action being delegated to unaccountable, black-box algorithms, to the point where “you no longer have human control in the process of war.” Just as there is currently no equivalent to the Geneva Conventions for the use of AI in international conflict, it will be challenging to hold algorithms accountable for their actions via international tribunals like the International Court of Justice or the International Criminal Court in The Hague.


Even researching and reporting on algorithm-driven campaigns and conflicts — a vital function of scholarship and journalism — will become nearly impossible, according to Briant. “How do you report on a campaign that you cannot see, that nobody has controlled, and nobody’s making the decisions about, and you don’t have access to any of the platforms?” she asked. “What’s going to accompany that is a closing down of transparency … I think we’re at real risk of losing democracy itself as a result of this shift.”

Briant’s warning about the future of algorithmically automated warfare (both conventional and informational) is chilling and well-founded. Yet this is only one of many ways in which the secret life of data may further erode democratic norms and institutions. We can never be sure what the future holds, especially given the high degree of uncertainty associated with planetary crises like climate change. But there is compelling reason to believe that, in the near future, the acceleration of digital surveillance; the geometrically growing influence of AI, Machine Learning, and predictive algorithms; the lack of strong national and international regulation of data industries; and the significant political, military, and commercial competitive advantages associated with maximal exploitation of data will add up to a perfect storm that shakes democratic society to its foundations.

The most likely scenario, this year, is the melding of neurotargeting and generative AI. Imagine a relaunch of the Cambridge Analytica campaign from 2016, but featuring custom-generated, fear-inducing disinformation targeted to individual users or user groups in place of A/B-tested messaging. It’s not merely a possibility; it’s almost certainly here, and its effects on the outcome of the U.S. presidential election won’t be fully understood until we’re well into the next presidential term.

Yet we can work together to prevent its most dire consequences, by taking care what kinds of social media posts we like and reshare, doing the extra work to check the provenance of the videos and images we’re fed, and holding wrongdoers publicly accountable when they’re caught seeding AI-generated disinformation. It’s not just a dirty trick; it’s an assault on the very foundations of democracy. If we’re going to successfully defend ourselves from this coordinated attack, we’ll need to reach across political and social divides to work in our common interest, and each of us will need to do our part.


Archived version

  • Former employee Andrew Harris says the software giant dismissed his warnings about a critical flaw because it feared losing government business. Russian hackers later used the weakness to breach the National Nuclear Security Administration, among others.

  • Harris said he pleaded with the company for several years to address the flaw in the product. But at every turn, Microsoft dismissed his warnings, telling him they would work on a long-term alternative — leaving cloud services around the globe vulnerable to attack in the meantime.

  • He scrambled to alert some of the company’s most sensitive customers about the threat and personally oversaw the fix for the New York Police Department. Frustrated by Microsoft’s inaction, he left the company in August 2020.

  • Within months, his fears became reality. U.S. officials confirmed reports that a state-sponsored team of Russian hackers had carried out the SolarWinds attack, one of the largest cyberattacks in U.S. history. They used the flaw Harris had identified to vacuum up sensitive data from a number of federal agencies, including the National Nuclear Security Administration, which maintains the United States’ nuclear weapons stockpile, and the National Institutes of Health, which at the time was engaged in COVID-19 research and vaccine distribution.

  • The Russians also used the weakness to compromise dozens of email accounts in the Treasury Department, including those of its highest-ranking officials. One federal official described the breach as “an espionage campaign designed for long-term data collection.”

  • From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds.

  • The Microsoft manager also said customers could have done more to protect themselves.

  • Harris said they were never given the chance. "The decisions are not based on what’s best for Microsoft’s customers but on what’s best for Microsoft,” he said.


Archived version

Microsoft president Brad Smith will tell lawmakers on Capitol Hill Thursday that the company is responsible for "each and every one of the issues" that a government advisory board uncovered while investigating a recent China hack, according to prepared remarks.

Why it matters: Lawmakers, administration officials and regulators have started to lose trust in the tech giant's ability to secure its products after a series of nation-state cyberattacks.

Driving the news: Microsoft has faced two notable nation-state cyberattacks in the last year that have put federal agencies' communications in jeopardy.

  • Microsoft disclosed last July that a China-backed hacking group had broken into the email accounts of several organizations, including federal offices. Commerce Secretary Gina Raimondo and several State Department officials were affected.

  • Russian intelligence hackers also stole several federal agencies' emails after breaching Microsoft, the Cybersecurity and Infrastructure Security Agency said earlier this year.

The big picture: Ever since these incidents, Microsoft has faced a mountain of scrutiny in Washington from lawmakers and competitors.

  • The Cyber Safety Review Board (CSRB) said in an April report that the Chinese espionage campaign, in particular, was "preventable and should never have occurred."

  • Senators are pushing back against the Pentagon's reported plans to upgrade its suite of Microsoft products as part of its zero-trust transition.

  • And eager competitors have gone on a campaign to woo Microsoft's government customers.

The other side: Microsoft has been briefing federal security leaders and their teams on a new set of security principles it's been implementing internally, known as the Secure Future Initiative.

  • The plan ties executives' pay to improving cybersecurity and calls on teams to prioritize security investments over fast product development.

Zoom in: In his remarks to the House Homeland Security Committee, Smith will tell lawmakers that he sees the advisory board's recommendations as good advice for all corporations to follow as they face "more prolific, well-resourced, and sophisticated cyberattacks."

  • Smith plans to lay out how the new Secure Future Initiative will help address each issue in the advisory board's report, per his remarks published Wednesday.

  • "We acknowledge that we can and must do better, and we apologize and express our deepest regrets to those who have been impacted," Smith will say.

  • Microsoft has invited the Cybersecurity and Infrastructure Security Agency (CISA) to its headquarters for a "detailed technical briefing" on the initiative, according to the published remarks.

Between the lines: Compared to past hearings about cyberattacks, Thursday's congressional hearing will hit close to home for lawmakers given the federal government's heavy reliance on Microsoft's products.

  • Many agencies rely on Microsoft as their sole operating system, email provider, cybersecurity product vendor and office software provider.

  • The Software & Information Industry Association — a trade group that represents software vendors — sent a letter Wednesday to agency leaders urging them to find ways to diversify beyond Microsoft.

What we're watching: Smith will need to provide bulletproof reassurances and transparency about Microsoft's security plans to lawmakers and regulators to regain their trust in Washington.


Company he works at: eternos.life

submitted 3 days ago by 0x815@feddit.de to c/technology@beehaw.org

Mozilla, the maker of the popular web browser Firefox, said it received government demands to block add-ons that circumvent censorship.

The Mozilla Foundation, the entity behind the web browser Firefox, is blocking various censorship circumvention add-ons for its browser, including ones specifically to help those in Russia bypass state censorship. The add-ons were blocked at the request of Russia’s federal censorship agency, Roskomnadzor — the Federal Service for Supervision of Communications, Information Technology, and Mass Media — according to a statement by Mozilla to The Intercept.

“Following recent regulatory changes in Russia, we received persistent requests from Roskomnadzor demanding that five add-ons be removed from the Mozilla add-on store,” a Mozilla spokesperson told The Intercept in response to a request for comment. “After careful consideration, we’ve temporarily restricted their availability within Russia. Recognizing the implications of these actions, we are closely evaluating our next steps while keeping in mind our local community.”


Stanislav Shakirov, the chief technical officer of Roskomsvoboda, a Russian open internet group, said he hoped it was a rash decision by Mozilla that will be more carefully examined.

“It’s a kind of unpleasant surprise because we thought the values of this corporation were very clear in terms of access to information, and its policy was somewhat different,” Shakirov said. “And due to these values, it should not be so simple to comply with state censors and fulfill the requirements of laws that have little to do with common sense.”

Developers of digital tools designed to get around censorship began noticing recently that their Firefox add-ons were no longer available in Russia.

On June 8, the developer of Censor Tracker, an add-on for bypassing internet censorship restrictions in Russia and other former Soviet countries, made a post on the Mozilla Foundation’s discussion forums saying that their extension was unavailable to users in Russia.

The developer of another add-on, Runet Censorship Bypass, which is specifically designed to bypass Roskomnadzor censorship, posted in the thread that their extension was also blocked. The developer said they did not receive any notification from Mozilla regarding the block.

Two VPN add-ons, Planet VPN and FastProxy — the latter explicitly designed for Russian users to bypass Russian censorship — are also blocked. VPNs, or virtual private networks, are designed to obscure internet users’ locations by routing users’ traffic through servers in other countries.

The Intercept verified that all four add-ons are blocked in Russia. If the webpage for the add-on is accessed from a Russian IP address, the Mozilla add-on page displays a message: “The page you tried to access is not available in your region.” If the add-on is accessed with an IP address outside of Russia, the add-on page loads successfully.
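That kind of verification is easy to reproduce. Below is a minimal sketch of such a check; the add-on URL is illustrative and the proxy address is a placeholder for any proxy with a Russian IP:

```python
# Sketch: fetch an add-on's listing page directly and via a Russia-based
# proxy, then look for Mozilla's region-block notice in the response.
# The proxy URL is a placeholder; the add-on slug is illustrative.
import requests

ADDON_PAGE = "https://addons.mozilla.org/en-US/firefox/addon/censor-tracker/"
BLOCK_NOTICE = "not available in your region"

def is_blocked(proxies=None):
    resp = requests.get(ADDON_PAGE, proxies=proxies, timeout=10)
    return BLOCK_NOTICE in resp.text

print("direct:", is_blocked())
print("via RU proxy:", is_blocked({"https": "http://user:pass@ru-proxy.example:8080"}))
```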

Supervision of Communications

Roskomnadzor is responsible for “control and supervision in telecommunications, information technology, and mass communications,” according to the agency’s English-language page.

In March, the New York Times reported that Roskomnadzor was increasing its operations to restrict access to censorship circumvention technologies such as VPNs. In 2018, there were multiple user reports that Roskomnadzor had blocked access to the entire Firefox Add-on Store.

According to Mozilla’s Pledge for a Healthy Internet, the Mozilla Foundation is “committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.” Mozilla’s second principle in their manifesto says, “The internet is a global public resource that must remain open and accessible.”

The Mozilla Foundation, which in tandem with its for-profit arm Mozilla Corporation releases Firefox, also operates its own VPN service, Mozilla VPN. However, it is only available in 33 countries, a list that doesn’t include Russia.

The same four censorship circumvention add-ons also appear to be available for other web browsers without being blocked by the browsers’ web stores. Censor Tracker, for instance, remains available for the Google Chrome web browser, and the Chrome Web Store page for the add-on works from Russian IP addresses. The same holds for Runet Censorship Bypass, Planet VPN, and FastProxy.

“In general, it’s hard to recall anyone else who has done something similar lately,” said Shakirov, the Russian open internet advocate. “For the last few months, Roskomnadzor (after the adoption of the law in Russia that prohibits the promotion of tools for bypassing blockings) has been sending such complaints about content to everyone.”


cross-posted from: https://lazysoci.al/post/14579120

YouTube is currently experimenting with server-side ad injection. This means that the ad is being added directly into the video stream.

This breaks SponsorBlock, since all timestamps are now offset by the ad durations.

For now, I've set up the server to detect when someone is submitting from a browser where this is happening and to reject the submission, to prevent the database from filling up with incorrect submissions.
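To make the offset problem concrete, here is a toy model (explicitly not SponsorBlock's actual code) of mapping a playback time in an ad-stitched stream back to the original video's timeline when the ad breaks are known:

```python
# Toy model of server-side ad injection: ads are stitched into the stream,
# shifting every timestamp after each break. Invented values throughout.

# Each ad break: (insertion point in original-video seconds, ad duration).
AD_BREAKS = [(30.0, 15.0), (300.0, 30.0)]

def observed_to_original(t_observed, ad_breaks):
    """Map a timestamp in the ad-stitched stream to original video time."""
    offset = 0.0
    for insert_at, duration in sorted(ad_breaks):
        break_start = insert_at + offset   # where this break sits in the stream
        if t_observed < break_start:
            break
        if t_observed < break_start + duration:
            return insert_at               # inside an ad: clamp to the break point
        offset += duration
    return t_observed - offset

# A segment seen at 6:00 in the stitched stream really starts at 5:15,
# because 45 seconds of ads played before it.
print(observed_to_original(360.0, AD_BREAKS))  # -> 315.0
```

The practical problem, of course, is that clients generally don't know where the ads were stitched in or how long they are, which is why affected submissions are simply rejected for now.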


The Steves converge....


Technology


Rumors, happenings, and innovations in the technology sphere. If it's technological news or discussion of technology, it probably belongs here.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
