[-] SnotFlickerman@lemmy.blahaj.zone 1 points 1 month ago

This map shows readings from about 770,000 home sensors, with red zones indicating areas with the most distorted power.

Bloomberg News analyzed data from about 770,000 Ting sensors from Whisker Labs, which are plugged into homes across the country, to better understand the distribution and severity of an important power-quality measure known as total harmonic distortion (THD). A lower THD is better.
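For reference, THD is just the RMS of the harmonic components (multiples of the 50/60 Hz fundamental) expressed relative to the fundamental itself. A quick illustrative sketch with made-up numbers, not Whisker Labs' actual methodology:

```python
import numpy as np

def thd_percent(fundamental, harmonic_amplitudes):
    """Total harmonic distortion: RMS of the harmonic amplitudes
    expressed as a percentage of the fundamental amplitude."""
    h = np.asarray(harmonic_amplitudes, dtype=float)
    return 100.0 * np.sqrt(np.sum(h ** 2)) / fundamental

# Made-up example: a 120 V fundamental with small 3rd, 5th and 7th harmonics.
print(f"{thd_percent(120.0, [3.0, 2.0, 1.0]):.1f}% THD")  # ~3.1%
```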

A large and useful source of data, but possibly misleading about the severity of the problem (and the source itself is somewhat dubious, since it comes from a private company).

I'm from Washington and was actually surprised at how small the problem is in the Seattle area and its surroundings compared to the rest of the country. We have explosive data center growth here that seems poorly represented by this map.

Further, a lot of the massive data centers in Washington are actually on the eastern side of the state, particularly in Wenatchee, which on this map is basically entirely black. The small line of spots on the east side of the state seems to line up with Yakima/Tri-Cities/Spokane while not really including the more rural Wenatchee/Chelan area. I wish you could zoom in more on this map so I could do a proper overlay to see which areas are being missed.

Is that because it's mostly rural, and not a lot of the rural residents have the money to be adding home sensors to test whether their power is "clean"? Like seriously, that seems more like a wealthy-people service; I had never heard of Whisker Labs or Ting before now. So not only is the data going to be limited to bigger cities (so, like so many maps, it's really just a fucking population map), but it's going to miss every area that isn't as wealthy.

So Wenatchee is sparsely populated and shows up as basically black on the map, but it's also home to some of the largest bitcoin mining data centers in the state, if not the largest. Part of the reason they set up there is the cheap electricity due to close proximity to hydroelectric power. Because the population is small, more rural, and generally poorer, there are fewer sensors showing higher THD in the area.

So anyway, a lot of words to say that this problem may be even more serious than this map shows, because there's a lot the map isn't showing, including the explosion of data centers in more rural areas with cheap electricity, where there may not be as many rich folks with Ting sensors.

[-] silence7@slrpnk.net 1 points 1 month ago

Power use by the Washington/Oregon data center cluster was almost entirely covered by a local surplus of hydropower until a couple years ago. That might be why it looks different from elsewhere.

[-] SeaJ@lemm.ee 1 points 1 month ago* (last edited 1 month ago)

I was going to point out that Seattle's electricity usage is small on the map but the datacenters are going to Wenatchee and Quincy. The state has plans to remove dams and switch more to wind but the massive investment in datacenters for AI is going to derail that.

[-] Hackworth@lemmy.world 1 points 1 month ago

All of the data centers in the US combined use 4% of total electric load.

[-] ReCursing@lemmings.world 0 points 1 month ago

Once again, not the fault of the technology. Don't blame your shitty infrastructure on AI.

[-] silence7@slrpnk.net 2 points 1 month ago

If you're doing a massive load increase, build out emissions-free generation to match. Some mix of wind, solar, batteries, nuclear, and geothermal would do fine. Otherwise, don't do the big load increase.

[-] frezik@midwest.social 1 points 1 month ago

I'll go the opposite way. The fact that there are serious plans to spin up nuclear reactors to run nothing but AI datacenters is ridiculous.

[-] queermunist@lemmy.ml 0 points 1 month ago

Nuclear reactors take a decade+ to spin up, so by the time these reactors are online the AI bubble will have long since popped...

[-] Eyekaytee@aussie.zone 0 points 1 month ago

As someone who uses AI daily, I'm not sure what could replace it. Ecosia search results are sometimes OK (I haven't used Google in years), but a lot of the time the questions I ask turn up bot-style "articles" with the exact same page layout anyway, so either there's no use there or I don't get my question answered.

When I want real-world opinions on a product or thing, I used to pop site:reddit.com on the end, but now I use https://thegigabrain.com/ as it does a far better job of searching and summarising the posts into useful information.

Then I usually use a 7B or 14B local LLM via GPT4All; lately I use Reasoner, which has a built-in JavaScript sandbox:

https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-compute

Basically you can watch the AI fix any errors it generates in real time and produce better coding results, which helps me code. I also have a home battery powered by solar, so there's no grid usage there.
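For anyone curious what running a small local model looks like in practice, here's a minimal sketch using GPT4All's Python bindings; the model file name and prompt are illustrative placeholders, not necessarily the setup described above, and the desktop app works without any code at all:

```python
from gpt4all import GPT4All

# Placeholder model name: substitute any GGUF model you've downloaded locally.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# chat_session() keeps conversational context between generate() calls.
with model.chat_session():
    reply = model.generate(
        "Write a Python one-liner that reverses a string.",
        max_tokens=128,
    )
    print(reply)
```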

Finally, I use https://chat.mistral.ai/chat for generating random AI images that I think are funny or interesting, or when I'm not at my PC.

i’m probably 75% ecosia 25% ai but that 25% gets me answers and is invaluable, not to mention the answers are getting better every week as opposed to web searches which appear to be getting worse

[-] queermunist@lemmy.ml -1 points 1 month ago* (last edited 1 month ago)

The fact that web searches are getting worse is biasing your ability to objectively evaluate AI searches. Ironically, the bot articles are being written by the AI that you're defending. AI is making web searches less useful by flooding the internet with AI-generated garbage. Also? Unless I can cite the results of a search it's useless to me. Do you actually trust the shit the AI feeds you?

[-] Eyekaytee@aussie.zone 0 points 1 month ago* (last edited 1 month ago)

The fact that web searches are getting worse is biasing your ability to objectively evaluate AI searches

Web searches were getting worse long before AI came along; SEO spam has been a thing forever. Maybe we're looking through rose-tinted glasses because Google was so much better than Dogpile and AltaVista?

Ironically, the bot articles are being written by the AI that you’re defending

I know, and I think the search engines should do something about them (though I suspect they won't, as it'll somehow make their results even worse). If I want AI results, I will use AI. I wish Wikipedia had a health portal that was more personalised, something to replace all the health websites like WebMD/Healthline/Verywell Health, which, now that I look at them more closely, appear to be slightly dressed-up AI websites anyway, e.g. just summarising research papers... so now that I think about it, they might be next to go, so long as the AI is quoting sources, which:

Unless I can cite the results of a search it’s useless to me

Gigabrain (already linked) and Perplexity do this:

https://www.perplexity.ai/search/what-is-lemmy-ml-Q_mHphL3T.i2dA16PDKtAw

When using the Social mode it summarises Reddit; when using Academic it uses academic sources.

You can also use AI for language learning:

https://morpheem.org/

To quote Mistral 7b:

A bubble in finance is when the price of an asset or security rises far above its true value due to speculation and hype, fueled by investors buying with the expectation of selling at a higher price. Prices rise based on market sentiment rather than fundamental value, creating a self-reinforcing cycle until enough investors realize the bubble's unsustainability and sell, leading to a sharp decline in price and potential losses for those who bought during the bubble phase.

I'm certain there are plenty of companies that have latched onto AI and gotten a temporary stock price boost, and Nvidia is doing extremely well out of this based on its hardware being king for AI, but I'm not sure where the dot-com-style bubble is.

In crypto it's easy to point out; the whole thing is practically a bubble that never seems to pop. But where is the bubble in AI? Is it not a financial bubble you're talking about, but a hype one?

Maybe some AI companies will go broke (maybe OpenAI? or Claude? or Mistral? maybe?), but we still have all the open-source models, so the tech will still be here; it ain't going anywhere.

https://huggingface.co/models?sort=trending

Not only that, but from all the examples I've given you, AI provides a ton of genuine value to me: it is valuable to me as a programmer, it provides search results I find useful, it generates images I think are useful, and people are using it to make music videos that are popular (11 million views in a month):

The Drill https://www.youtube.com/watch?v=TbXZoMocpM8

Songs: AI Took My Job https://suno.com/song/14572e0f-a446-4625-90ff-3676a790a886

It's hard to say it's a bubble when the value is clearly present. Whether you can make a ton of money off that value is something else, but the value is definitely there.

Do you actually trust the shit the AI feeds you?

About as much as I trust anything on the internet or Reddit. If I'm not sure, I just search a bit more; there's no limit to searching, I can search all day ^^

[-] queermunist@lemmy.ml -1 points 1 month ago* (last edited 1 month ago)

I’m not sure where the dot com style bubble is?

They're investing into huge, energy intensive compute resources that aren't going to pay off for at least a decade, and meanwhile investors are going to want returns on those investments ASAP. They need to fill warehouses with compute and power them with nuclear reactors, but there's no profitability model. That means stranded assets, especially if investment dries up and they can't pay or if demand shifts away from their models. This is set up to be a massive crash.

NVIDIA will probably be fine though.

Gigabrain (already linked) and Perplexity does this:

Yeah, and what they'll do is invent sources from thin air or draw made-up conclusions from real sources. They're just LLMs; no matter how much data you feed them and how much the results are tinkered with, they only regurgitate a statistically likely answer. Perplexity is a bullshit machine. It's fine if you don't really care about the answer and are just kind of curious, but no serious researcher should ever rely on a chatbot.

[-] Eyekaytee@aussie.zone 0 points 1 month ago* (last edited 1 month ago)

This is set up to be a massive crash.

For who? Who is going to crash massively? Google? Microsoft? Amazon? Are you expecting these massively diversified trillion-dollar companies to fail because of AI?

Yeah, and what they’ll do is invent sources from thin air

The sources are right there next to it? You click on them and it takes you to the source. Could you maybe try it for 5 seconds and then get back to me before you just make stuff up? What are you, an AI?

or draw made-up conclusions from real sources

This feels like I'm having a conversation with a boomer talking about wikipedia.

Yeah, it's always best to check the original sources and not just believe everything you read on the internet. That's no different from clicking on results in Google and getting a page full of misinformation, which people are doing every minute of every hour of every day, and don't even get me started on social media.

[-] queermunist@lemmy.ml 0 points 1 month ago

For who? Who is going to crash massively? Google? Microsoft? Amazon? Are you expecting these massively diversified trillion-dollar companies to fail because of AI?

OpenAI is going to implode after it goes for-profit. As for the others, they'll weather the storm; they have enough diversity in their assets to handle the AI bubble popping, but there will be big tech layoffs and lots of assets will get sold off to private equity.

You click on them and it takes you to the source, could you maybe try it for 5 seconds and then get back to me before you just make stuff up?

So what's the point?

This feels like I’m having a conversation with a boomer talking about wikipedia.

Rude. Wikipedia is, at least, peer reviewed by Wikipedia editors. Chatbots don't have that. They will just make shit up, and you have to manually double-check their sources yourself. At that point, why are you even using AI? It saved you no time or effort.

This feels like having a conversation with someone inside a hype bubble. If Wikipedia already exists, what purpose does AI fulfill? It's just a more expensive, more energy-intensive way to do the exact same thing. There's no profitability case. It's useful, but it isn't more useful than the much cheaper and much less energy- and resource-intensive alternatives. So, what's the point?

Yeah, it’s always best to check the original sources and not just believe everything you read on the internet, no different than clicking on results in google and getting a page full of misinformation which people are doing every minute of every hour of every day, and don’t even get me started on social media.

Okay, but then, why is AI useful? If you're going to look at sources anyway, what's the point? You're just using a massive amount of energy and compute for something that can be done much more efficiently.

The only useful product I've seen come out of this hype bubble is text-to-image models. Being able to tell a bot to generate an image is really interesting and useful for people without skills in creating or editing their own images. That's an actual use case that could maybe justify the amount of resources being poured into it; it could maybe even be profitable.

The rest? It's wasteful and it won't last.

[-] Eyekaytee@aussie.zone 1 points 1 month ago

Okay, but then, why is AI useful? If you’re going to look at sources anyway, what’s the point?

Because it summarises the results; it's like a search engine, but better.

The rest? It’s wasteful and it won’t last.

I'm using it for coding in a way that isn't going anywhere. I'm using LM Studio with Qwen2.5 Coder and Mistral 7B; these are offline models, so even if Alibaba or Mistral go broke, they'll continue to work.

Example of what it looks like:
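Roughly speaking, LM Studio exposes an OpenAI-compatible local server, so driving whatever model you have loaded takes only a few lines of code. The following is a minimal sketch assuming the default local port, a placeholder model id, and an example prompt, not the exact setup described above:

```python
# Minimal sketch: querying a local LM Studio server via its OpenAI-compatible API.
# Assumes LM Studio's local server is running on its default port (1234) with a
# coding model loaded; the model id and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is not checked locally

response = client.chat.completions.create(
    model="qwen2.5-coder-7b-instruct",  # use whatever id LM Studio lists for your loaded model
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```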

It seems like lots of people are using it in a similar way: no longer searching the web and clicking through sometimes 100 results trying to figure out a problem, but instead using AI to answer questions.

While it was originally constantly making mistakes, there's now chain-of-thought reasoning and code sandboxing; it has gotten so much better so quickly.

So now I've got web search summarisation, a far better Reddit/forum search and summarisation, text-to-image generation, and a personal coding assistant. Each of these on its own would be an amazing program used by millions, and that's ignoring using it for assistance with language learning:

https://blog.duolingo.com/duolingo-max/

and song making:

https://suno.com/explore

etc., etc.

If it wasn't for the web being an absolute social media shithole with no moderation, resulting in AI slop being pasted all over the place, AI would genuinely be the greatest tech revolution I've seen since the iPhone.

[-] queermunist@lemmy.ml 1 points 1 month ago

I have heard that these LLMs are really good as coding assistants, so good point. I shouldn't dismiss that. I don't think they're good at music, and really the art isn't that good either, but I'm sure people without artistic training like being able to make images and songs. Not sure it's worth the cost, since it's all built on plagiarism and so massively wasteful.

As for web searches, really? I don't think they're trustworthy. They can, and do, make shit up. No, that's not the same as the boomerism of saying "anyone can edit Wikipedia so you can't trust it", because Wikipedia has quality control. LLMs don't. There's literally nothing stopping them from spitting out lies, so it's up to the user to double-check whatever the LLM spits out, which means I might as well just search through results myself. And if you don't always double-check, it will bite you in the ass eventually. Good luck with that.

If it wasn’t for the web being an absolute social media shithole with no moderation resulting in AI slop being pasted all over the place, AI would genuinely be the greatest tech revolution I’ve seen since the iphone.

If it wasn't for the web being a monetized SEO algo shithole, we could still just search the web! AI summarization is only "useful" in the sense that the search engines have destroyed themselves in their search for profitability. Google is garbage now, and we don't need to build acres of compute powered by nuclear reactors to fix that problem.

So really, the problems that are causing AI slop to pollute search results are the same problems that made search engines so bad over the past ten years.

If we demonetized and de-enshittified the search engines by nationalizing Google, I don't think AI result summaries would be useful at all.

[-] Eyekaytee@aussie.zone 1 points 1 month ago* (last edited 1 month ago)

And if you don’t always double check, it will bite you in the ass eventually. Good luck with that.

When did the web ever present itself as completely factual and never wrong? There's plenty of evidence of Wikipedia being wrong on Wikipedia:

https://en.wikipedia.org/wiki/Wikipedia:List_of_hoaxes_on_Wikipedia

Do I get things wrong? Sure, I never said I was perfect either. If someone tells me I got a stat or a figure or something wrong, great!

The question for me is: is it wrong often enough to make the results completely unreliable? The answer to that is no; more often than not it provides accurate information.

If it wasn’t for the web being a monetized SEO algo shithole we could still just search the web!

That's not accurate in my experience; AI/SEO search results are still a minority of the results I get. Most of the time I get close to what I'm looking for, but AI search summarisation is essentially the next level of search for me:

Dogpile/AltaVista/Ask Jeeves > Google > AI-powered search summarisation

I get essentially what I'm looking for directly. Why click on a page with 47 ads, a video pop-up, or something else when all I'm looking for is:

https://www.perplexity.ai/search/do-you-have-a-basic-egg-on-toa-pkpsq9WwSMm5G8ICsmDnbw#0

Is it a complete replacement? Not yet. Ecosia is still my daily driver (I've used it 25,000+ times in the last year), but AI is making a serious dent in how often I use it.

we don’t need to build acres of compute powered by nuclear reactors to fix the problem.

I would keep an eye on that. The gains in AI have been massive in the last few years, and we're potentially starting to see a turning point with DeepSeek V3 being created on a fraction of the cost and power of other models:

DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M).

For reference, this level of capability is supposed to require clusters of closer to 16K GPUs...

https://techcrunch.com/2024/12/26/deepseeks-new-ai-model-appears-to-be-one-of-the-best-open-challengers-yet/

*This could turn out to be wrong, hence why I'm keeping an eye on it.
**I'm absolutely certain a whole lot of execs are stunned right now that they're spending billions when something that cost millions came up right next to them.

[-] Grimy@lemmy.world 0 points 1 month ago

As much as I think this is a great solution and should be written into law, the anti-AI crowd only asks it of one industry, and that's a clear sign of bias.

Not to mention that the big companies are literally doing it, either building new nuclear plants or restarting old ones. They aren't the ones holding green energy back; the oil cartel and their corrupt politicians are.

[-] Infynis@midwest.social 0 points 1 month ago

One industry? People are so mad at AI because it's just another industry, a new one with massive environmental impact, and basically no real use outside of generating misinformation and stealing from artists. It's the absolute worst face of the tech sector, and totally deserving of all the hate it receives.

[-] Bronzebeard@lemm.ee 2 points 1 month ago

and basically no real use outside of generating misinformation and stealing from artists

This shows you think all AI is LLMs or generative art. Those are only the most visible faces of the tech, and you're showing your own ignorance of the field.

[-] Infynis@midwest.social -2 points 1 month ago* (last edited 1 month ago)

If you want to talk about machine learning in general, that's a different conversation. Like it or not, colloquially, AI means LLMs and chatbots.

[-] Fades@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

How exactly is the rest of AI a different conversation???? We're talking about the power requirements of running AI at scale, and somehow you think it's not only correct but implied that this convo should just be about the colloquial parts of AI, and that anything else is a totally different topic with regard to power consumption?

totally deserving of the hate it gets

Yeah, so breakthroughs in chemistry and other sciences, for example, are deserving of hate, eh?

Nothing good comes from AI… when all you know about AI is colloquial lmao

[-] Grimy@lemmy.world 2 points 1 month ago

basically no real use

"The horse is here to stay, but the automobile is only a novelty — a fad.”

Also, will you get mad at the next new industry? I highly doubt it.

[-] taladar@sh.itjust.works -1 points 1 month ago

People advocating for the 99 shitty technologies that die always seem to quote the skeptics of the one technology from past generations that survived, as if that somehow made criticism of the other 99 a bad call.

[-] ReCursing@lemmings.world 1 points 1 month ago

AI is going nowhere, mate; you're on the wrong side of this one. It's too broadly useful already and has too much potential in the future.

[-] Infynis@midwest.social -2 points 1 month ago

If the next new industry is an energy hungry propaganda machine, yes I will

[-] ReCursing@lemmings.world 0 points 1 month ago

Oh, fuck off with "stealing from artists" - that just proves you know nothing about the subject and should be completely ignored.

[-] Sprocketfree@sh.itjust.works -1 points 1 month ago

This added zero to the discourse other than making you look like a complete idiot. Are we in the shitpost community here?
