As far as people I’d trust to not just make shit up, I’d say Librarian, aka, professional fucking researcher is high on the list.
For your last two questions, the counterpoint is, if even Microsoft can’t stop a dedicated nation state, how can any other major service provider say they haven’t been compromised?
The standard now is: assume breach. While unfortunate, the industry average MTTD (mean time to detect) is measured in months. Microsoft was at least good enough to detect it within six.
Can Broadcom or Palo Alto say the same? Amazon, Google, Apple, Cisco?
Isn’t there a filter set for this in uBlock already? Annoyances filter?
Link to the original source article. The article linked here steals its text and images verbatim.
Original Doom was not GPU accelerated.
BlackRock, for one, which shouldn’t make you feel any better.
NeXT was a mediocre BSD front end and a few interesting Objective-C libraries. Apple’s board of directors pretty much crawled back to Jobs hat in hand after the disasters of Sculley and Spindler.
Or, the real sign of gentrification is that the Google Maps car drives by your neighborhood more than once every five years. Guarantee that’s not happening in the projects.
For what country?
In the US, at least, the long-term average since 1913 is 3.10%, a period that includes the Great Depression and the Oil Crisis/Great Inflation of the 1970s. From 1990 to 2020, the average was 2.2%, just slightly above the stated goal of current US economic policy, which is to maintain long-term inflation at 2%.
Meaning, 3% beats inflation significantly more than half of the time, especially since 1990.
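The arithmetic behind that claim can be sketched with the Fisher relation, using the inflation figures from the comment above (the 3% nominal rate and both averages are taken as given):

```python
# Real return from a nominal rate under a given inflation rate,
# via the Fisher relation: (1 + nominal) / (1 + inflation) - 1.
def real_return(nominal: float, inflation: float) -> float:
    return (1 + nominal) / (1 + inflation) - 1

# Figures from the comment: 3% nominal vs. the 1990-2020 average (2.2%)
# and the long-term average since 1913 (3.10%).
since_1990 = real_return(0.03, 0.022)   # positive: ~+0.78% real
long_term  = real_return(0.03, 0.031)   # slightly negative: ~-0.10% real

print(f"{since_1990:+.4%}")
print(f"{long_term:+.4%}")
```

So against the post-1990 average, 3% leaves a small positive real return; against the full long-term average it roughly breaks even, which is the "more than half of the time" point.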
And, specifically, Trump thinks he can get the same deal passed while he is in office. In other words, what is important to Trump now is denying Biden a bipartisan “victory” that he thinks he will be able to achieve, instead.
Bingo. If, at the limit, the purpose of a generative AI is to be indistinguishable from human content, then watermarking and AI detection algorithms are absolutely useless.
The ONLY means to do this is to have creators verify their human-generated (or vetted) content at the time of publication (providing positive proof), as opposed to retroactively attempting to determine whether content was generated by a human (proving a negative).
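A toy sketch of the publication-time approach, using HMAC as a stand-in for a real public-key signature scheme (a real system would use something like C2PA content credentials or an Ed25519 key pair so anyone can verify; the key and names here are invented for illustration):

```python
import hmac
import hashlib

# Hypothetical creator secret; an asymmetric key pair would let
# third parties verify without holding this secret.
CREATOR_KEY = b"example-secret-key"

def attest(content: bytes) -> str:
    """Produce a publication-time attestation tag for the content."""
    return hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check that the content exactly matches what was attested."""
    return hmac.compare_digest(attest(content), tag)

article = b"Human-written (or human-vetted) content goes here."
tag = attest(article)

print(verify(article, tag))                      # True: content unchanged
print(verify(article + b" edited later", tag))   # False: any change breaks it
```

Note what this does and doesn't prove: it's positive proof that the content is unchanged since the creator attested to it; whether a human actually produced it rests on the creator's identity and reputation, not on anything detectable in the content itself.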
Found the problem!