Microsoft is making every Windows 11 PC an AI PC
(blogs.windows.com)
Does anybody actually believe that 68% of consumers use or even want Copilot? But they included a source for this very generous assertion at the bottom of the page:
Oh yeah, that's compelling: US consumers, 13 years old and older. An entire thousand of them!
So the only question I have left is which junior high principal Microsoft "compensated" for this survey, and what happened to the 320 summer school attendees who said "fuck you, no" anyway.
When Google shoves their AI to the top of search results, it's hard not to read it. I've been spoiled by uBlock and I'm no longer used to ignoring the first few things that come up.
I've been using DuckDuckGo with uBlock for years, so I had no real problems with anything like the hell of Google "sponsored content" until DuckDuckGo started putting up their own AI search assistant. Since then I've gone from start.duckduckgo.com to noai.duckduckgo.com, because I got tired of turning their search assist off and couldn't reliably block it with uBlock since they kept changing it. (I delete all cookies after every browser session and don't maintain individual app accounts, so their AI settings options were never gonna work for me.)
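For what it's worth, the kind of uBlock Origin cosmetic filter that would hide the assist box looks like the lines below. The selector here is a made-up example for illustration, which is exactly the problem: the real element names keep changing, so any filter like this rots.

```adblock
! Hypothetical cosmetic filter to hide DuckDuckGo's AI assist box.
! The selector is invented for illustration; DDG's actual markup
! changes often, which is why filters like this keep breaking.
duckduckgo.com##[data-testid="search-assist"]
```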
Because of the way my brain works, I literally don't even want to see what AI says until I've done my own looking. Yet I never failed to turn it off, because I just can't rely on it.
Usually when I'm looking for something I'm in a hurry, so it's less trouble for me to just pick my own sources, preferably older than 2023 if possible, and read a bit myself than to spend time getting blithely lied to, or even just suspect hallucination/omission to the point that I think I need to verify it before I can rely on it.
It's not an exaggeration to say that for me, it is literally faster to skim three or four completely different primary sources than it is to try to verify the assertions in a single search assist paragraph: one is just light reading, the other is point by point comparison of the AI offering against multiple independent sources. So I read.
I've never regretted summarizing a topic myself, but I've definitely gotten some rotten eggs from AI, both in blatant non-truths AND in holes of omission you could drive a truck through. I won't make that mistake again. So for me, AI summaries are well worth staying wary of for now.
My favorite is when AI summary answers a question, then the links from the search below contradict that answer. It's shit for biomedical research.
they are equating "AI support" with "I want AI copilot integrated into my OS"
and that's a big leap
They likely got that 68% usage number by counting everyone who accidentally used it after a search swap or a similar trick.
Yeah, I’d believe it. Outside of anti-AI circlejerks people like AI, especially ones like ChatGPT, and especially if it is available right at their fingertips. It’s quickly becoming a part of everyday life and processes.
The anti-AI people need to start accepting that today, and every day after it, is going to be the day that AI plays the smallest part in humanity’s future. The genie is out of the bottle and it’s never going back in. The sooner they can accept that, let go of the hate, and see it for what it is - a useful tool to help you - the better and less angry their lives will be.
How useful is it really? I constantly hear about it being wrong and I’m not so stupid that I can’t handle a search through Wikipedia on my own.
Why should I accept this thing that is of such little benefit to my life? Why should I accept this thing that is constantly wrong? Why should I accept this thing that just allows uncreative and insecure people to fill the internet full of garbage?
If you need AI as it is to help you do things then I pity you greatly.
You’re constantly hearing negative stuff, exaggerations, and lies most likely - especially if you are hearing it on places like this.
Ok but we know that it’s very often wrong and tries too hard to make you feel good instead of actually giving correct answers. It makes up reasons for made-up sayings, often struggles with math, and has a whole host of other issues while acting fully confident in its infallibility. We have several studies that seem to show that its use is having a negative effect on our critical thinking skills as well. After all that, it doesn’t even provide a service that’s worth anything, even if it didn’t come with all those downsides. Using a search engine just isn’t that difficult, and AI “art” is a goddamn cancer.
It’s terrible for us and we don’t even need it! No, fuck “AI”. We have a big enough problem with people trying to find the easy way out, to such a degree that they refuse to learn how anything works, and slapping a big “do it for me” button on everything is just insane. I’m not saying that everything needs to be difficult, but we are so averse to even the slightest challenge that it leaves us with nothing but a complete lack of basic skills and an assload of insecurity.
AI is a tool. It’s not a person, it’s not a be-all-end-all of anything. Just like a person can use excel and come up with the wrong numbers, people can use AI and come up with the wrong answer.
Just like with every tool, there are people who can’t use them properly, there are people who are good enough to get modest results, and there are people who are experts at their craft who can do amazing things with them. AI is no different.
If you want a calculator, use a calculator - not AI. Use the right tool for the job and you’ll get the best result.
Studies can be made to say anything, and I know the ones you are talking about - they’re bogus.
Except that anyone who can use it properly can also just do the job without it, and the amount of damage it is doing because it’s freely available to everyone is insane.
You’re completely ignoring all my arguments. This sorta makes sense since your original reply was very “just ignore the bad stuff and it’s good!”, but you’re going to have to address those things. I mean, you did say “they’re bogus” and then not elaborate at all, but I’m assuming that if you have the energy to continue writing comments then you would also have the energy to do the far more efficient thing and show me why those studies are bogus, right?
No I'm not, I addressed them. LLMs not being able to do maths/spelling is a known shortcoming. Anyone using it to do that is literally using it wrong. The studies you talk about were ridiculous, I know the ones you're talking about. Of course people that don't learn something won't know how to do it, for example - but the fact that they can do it with AI is a positive. Obviously getting AI to write an essay means that the person will feel less "proud" of their work, as one of the studies said - but that's not a "bad" thing. Just like how people don't need to learn how to hunt and gather anymore doesn't mean that it's a bad thing - the world as it is, and as it always will be from here on out, means we don't need to know that unless we want to do it.
Again - AI is a tool, and idiots being able to use it to great effect doesn't mean that the tool is bad. If anything that's a showing of how good the tool is.
Those studies aren’t about them feeling less proud, they’re about the degradation of critical thinking skills.
I have repeatedly said that it isn’t worth anything, largely because it doesn’t do anything I can’t do with relative ease. Why do you think it’s so great? What do you honestly use it for?
As one example, I built an MCP server that lets LLMs access a reporting database, then made a Copilot Agent and integrated it into Teams, so now the entire business can ask a chatbot questions about business data in Teams, using natural language. It can run reports for them on demand, pulling in new columns/tables, and it can identify when there might be something wrong, as it also reads from our logs.
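A rough sketch of the core idea - the read-only query tool such a server exposes to the model. This uses an in-memory sqlite3 table as a stand-in for the reporting database, and every table, column, and function name here is invented for illustration, not the actual setup:

```python
import sqlite3

# Stand-in reporting database (the real one would be a proper
# reporting replica, not an in-memory toy).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE refunds (customer TEXT, cancelled_on TEXT, refunded INTEGER);
    INSERT INTO refunds VALUES
        ('alice', '2024-06-03', 0),
        ('bob',   '2024-06-04', 1),
        ('carol', '2024-06-05', 0);
""")

def run_readonly_query(sql: str) -> list:
    """The 'tool' the LLM is allowed to call: run a single SELECT.

    Guarding the entry point like this is the point of wrapping the
    database in a server instead of handing the model raw credentials.
    """
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("only read-only SELECT statements are allowed")
    return conn.execute(sql).fetchall()

# The model would generate SQL like this from a natural-language
# question such as "how many customers are still awaiting refunds?"
rows = run_readonly_query("SELECT COUNT(*) FROM refunds WHERE refunded = 0")
print(rows[0][0])  # 2
```

The design choice worth noting is that the model never holds credentials; it only calls a narrow, validated tool.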
These people don’t know databases. They don’t know how to read debug/error logs.
I also use GitHub copilot.
But sure, it can’t be of any help to anyone ever lol
I’ll take your word for it, so as not to just be saying “no”, but I still have to wonder why it needs “AI”, and whether people are going to build up a reliance on it to the point where they start to not be able to find that info on their own. I mean, hell, like you say, they already can’t handle the databases, so why are they even fucking around in there anyway / why aren’t they learning how to use them if they’re so important for their jobs?
Because in Teams you could type (or say) "how many customers are still awaiting their refunds for their services that were cancelled last week?" and it will go and do its little AI magic and respond with the answer.
But they can never find it on their own - it's in a database, they have to use some tool to get it. Why can't that tool be AI?
They're not! That's the point. This way it gives them access to information that they would usually have to put in a support ticket for, or run multiple reports and then try to compile together, to get. Now they can just ask a bot in Teams a question and they get the answer.
Because their job isn't to access the production database.
So you can’t have a foolproof spreadsheet that just has an option for “refund given” with a date range? Why go through all this AI nonsense? All it’s doing is adding points of failure and giving people the ability to fuck up their prompts.
A spreadsheet? No, sales go through the database. That was also just an example. You could ask it to see which state has the most sales of product X between dates Y and Z for customers between age 18 and 25, as another example. You can ask it anything you can think of to do with the data.
It’s basically a reporting engine that can create ad-hoc reports at will.
It’s a lot easier to write a prompt for a report than it is to query the database, especially when you don’t know SQL etc - or even have access to the database.
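To make that concrete, the "which state has the most sales of product X" prompt above would come out as SQL roughly like this. The schema, table, and column names are invented for illustration, run here against a toy sqlite3 database:

```python
import sqlite3

# Toy sales table; real names and structure would differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (product TEXT, state TEXT, sold_on TEXT, buyer_age INTEGER);
    INSERT INTO sales VALUES
        ('X', 'TX', '2024-05-01', 22),
        ('X', 'TX', '2024-05-02', 19),
        ('X', 'CA', '2024-05-03', 24),
        ('X', 'CA', '2024-08-01', 21),  -- outside the date range
        ('Y', 'NY', '2024-05-04', 20);  -- wrong product
""")

# "Which state has the most sales of product X between dates Y and Z
# for customers between age 18 and 25?" as a query:
top_state = conn.execute("""
    SELECT state, COUNT(*) AS n
    FROM sales
    WHERE product = 'X'
      AND sold_on BETWEEN '2024-05-01' AND '2024-05-31'
      AND buyer_age BETWEEN 18 AND 25
    GROUP BY state
    ORDER BY n DESC
    LIMIT 1
""").fetchone()
print(top_state)  # ('TX', 2)
```

The point of the natural-language layer is that the person asking never writes or even sees this query.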
Product X > filter by state > date range. Why is this difficult? Gimme another, it’s mildly entertaining even if it’s not exactly difficult.
What product are you using to get that data from a live Azure database?
You literally told me you built something which would allow an LLM to access the data. In order to be reliable enough, the data would have to be appropriately sorted already, and there would need to be an interface the LLM could use. So you built all this stuff to make the LLM thing work, and now you’re looking at me stupid like building an extreme simple filter is some sorta crazy thing and we need a product to do it.
What the hell were people doing before you built your little chatbot? Just neatly sorting information into a black box and throwing it into the ocean?
Ah ok, so you have no idea what you're talking about then lol. In a nutshell you go "here is your database connection details, now be a good little AI and answer my questions about the database".
"an extreme simple filter" lol. It could be pulling data from 30 different tables, views, stored procedure results, etc from the database and making insanely complex queries and reports, then cross referencing those with external logs from a third party logging service to provide even more data. You seem to think that you pretty much have to build all the queries and reports and services and then the LLM just calls them with some parameters lol.
You very clearly have zero experience in this area, and have not done even the most basic of research.
Hey dude, I was responding to your incredibly shitty examples. You give me no information and blame me for not having information; well, that’s a you problem. But I suppose if you understood that concept you’d also understand the problems I’m talking about.
Now, again, if the AI can have access to all that information and identify it correctly then why is it impossible to do what I’m asking? It has to be able to tell the difference somehow, right? And with LLMs being known to have hallucinations and serious misunderstandings it seems rather ridiculous to rely on it for something that you say is so complex that a person cannot do it. You also haven’t answered me, I don’t think, on the topic of what people were doing before the LLM.
There are a lot of key elements you’re dodging here and before you start talking shit maybe start addressing them.
We put the leaded gasoline genie back into its bottle, time to put the AI slop genie back into its bottle too!
I think the more important thing is for people to push to make AI a public good rather than a corporate hegemony. If corporations are the sole creators and holders of AI, they will do all sorts of terrible things with their mastery. Publicly developed, open-sourced AI that is free for anyone to use is important.
The public’s refusal to truly make AI their own would be akin to letting corporations control every single printing press.
You make a good point, and the end of this movie remains to be seen (though I agree that right now it looks like AI is here to stay).
I use AI pretty regularly to check for holes in some extremely long compliance documents for work, and the results, in terms of not missing parts and cutting down the time the task takes, are amazing, to say the least.
However, this is very different from having an agent controlled by MicroShit seeing everything you do in what is supposed to be YOUR computer, and giving it all to MicroShit to do God knows what with your data.
Yes, AI is currently the new smartphone boom, but there are many ways to use it without showing up completely naked in front of these assholes, especially since you're not even given an option to cover yourself.