179
submitted 11 months ago by L4s@lemmy.world to c/technology@lemmy.world

Firm predicts it will cost $28 billion to build a 2nm fab and $30,000 per wafer, a 50 percent increase in chipmaking costs as complexity rises::As wafer fab tools get more expensive, so do fabs and, ultimately, chips. A new report claims that

all 34 comments
[-] toiletobserver@lemmy.world 107 points 11 months ago

Or, and hear me out, we could just write less shitty software...

[-] the_q@lemmy.world 42 points 11 months ago

You're right. This is the biggest issue facing computing currently.

[-] tailiat@lemmy.ml 17 points 11 months ago

The ratio of people who are capable of writing less-shitty software to the number of things we want to do with software ensures this problem will not get solved anytime soon.

[-] go_go_gadget@lemmy.world 9 points 11 months ago

> The ratio of people who are capable of writing less-shitty software to the number of things we want to do with software ensures this problem will not get solved anytime soon.

Eh, I disagree. Every software engineer I've ever worked with knows how to make some optimizations to their code bases. But it's literally never prioritized by the business. I suspect this will shift as IaaS takes over and it becomes a lot easier to generate the necessary graphs showing that your product's stability is being maintained while the resources it consumes have been reduced.

[-] winterayars@sh.itjust.works 16 points 11 months ago

But what if I want to do all my work inside a JavaScript "application" inside a web browser inside a desktop?

(We really do have so much CPU power these days that we're inventing new ways to waste it...)

[-] TacoButtPlug@sh.itjust.works 13 points 11 months ago* (last edited 11 months ago)

But where's the fun in that?

[-] SynonymousStoat@lemmy.world -3 points 11 months ago* (last edited 11 months ago)

As long as humans have some hand in writing and designing software we'll always have shitty software.

[-] AA5B@lemmy.world 6 points 11 months ago

While I agree with the cynical view of humans and shortcuts, I think it's actually the “automated” part of the process that's to blame. If you develop an app, there's only so much you can code. However, if you start with a framework, you've now automated part of your job for huge efficiency gains, but you're also starting off with a much bigger app and likely lots of functionality you aren't really using.

[-] SynonymousStoat@lemmy.world 2 points 11 months ago

I was more getting at the fact that in software development it's never just the developers making all of the decisions. There are always stakeholders who often divert time and attention to other things and set unrealistic deadlines, while most software developers I know would love to be able to take the time to do everything the right way first.

I also agree with the example you provided. Back when I used to work on more personal projects I loved it when I found a good minimal framework that allowed you to expand it as needed so you rarely ever had unused bloat.

[-] go_go_gadget@lemmy.world 1 points 11 months ago* (last edited 11 months ago)

If you're not using the functionality it's probably not significantly contributing to the required CPU/GPU cycles. Though I would welcome a counter example.

[-] filister@lemmy.world 50 points 11 months ago* (last edited 11 months ago)

And NVIDIA will use this as an excuse to hike up their prices by 100+%.

On a serious note, this will progressively come down in price as time passes, plus not everyone needs to use cutting-edge 2nm technology. The transition to 2nm will also increase density, so comparing wafer prices without acknowledging the increased density doesn't give you the whole picture.

Plus, DRAM scaling is becoming cumbersome and more and more components can't scale down to 2nm, so 2nm is mostly a marketing term, and there are a lot of challenges that make this tech so expensive and difficult to design and produce.
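As a rough back-of-the-envelope on the wafer-price-versus-density point (the 50% wafer cost increase is from the headline; the ~1.28x density gain is an assumption, roughly what the pitch figures quoted further down would imply):

```python
# Back-of-the-envelope: cost per transistor when wafer price and density both rise.
# The 1.5x wafer cost factor is from the headline; the 1.28x density gain is an
# assumed figure, not something reported in the article.
wafer_cost_factor = 1.5
density_gain = 1.28

cost_per_transistor_factor = wafer_cost_factor / density_gain
print(f"cost per transistor: ~{(cost_per_transistor_factor - 1) * 100:+.0f}%")
# With these assumptions, cost per transistor still rises (~+17%), i.e. density
# gains alone no longer fully offset the more expensive wafers.
```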

[-] drmoose@lemmy.world 26 points 11 months ago

Afaik 2nm is the theoretical limit for current transistor tech, so this is sort of the end-game for this type of tech.

[-] Earthwormjim91@lemmy.world 52 points 11 months ago* (last edited 11 months ago)

2nm process doesn’t actually mean 2nm though. Hasn’t in over a decade.

The current 3nm process has a 48nm gate pitch and a 24nm metal pitch. The 2nm process will have a 45nm gate pitch and a 20nm metal pitch.

“Nm” is just “generation” today. After 5nm was 3nm, next is 2nm, then 1nm. They’ll change the name after that even though they’re still nowhere near actual nm size.
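A rough way to see how little the geometry actually changes between those names: a common shorthand estimates relative logic density from the product of gate pitch and metal pitch (an approximation, not a foundry density figure), using the pitches quoted above:

```python
# Relative density estimate from gate pitch x metal pitch (smaller product = denser).
n3_area = 48 * 24   # "3nm"-class: 48nm gate pitch x 24nm metal pitch = 1152 nm^2
n2_area = 45 * 20   # "2nm"-class: 45nm gate pitch x 20nm metal pitch = 900 nm^2

print(f"estimated density gain: ~{n3_area / n2_area:.2f}x")   # ~1.28x
# If the names described real dimensions, "3nm" -> "2nm" would suggest a
# (3/2)^2 = 2.25x density jump, which is why the node name is better read
# as a generation label than as a physical size.
```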

[-] SpaceNoodle@lemmy.world 12 points 11 months ago

Where can I read more about this?

[-] Ludrol@szmer.info 15 points 11 months ago* (last edited 11 months ago)

Depending on how in-depth you want to delve into this:

- Newsletter: semianalysis.com
- YouTube: Asianometry
- Wikipedia
- Some lithography university textbooks. Sadly I don't know which ones.

[-] weew@lemmy.ca 6 points 11 months ago

Intel already has plans to name the further generations xxA, after Angstroms

[-] AA5B@lemmy.world 1 points 11 months ago* (last edited 11 months ago)

Yeah I’m a bit curious what the marketing will be as they have to get more vertical, 3D. Will there be naming to reflect that or will they just follow existing naming, 0.5nm?

[-] terminhell@lemmy.world 4 points 11 months ago

I didn't think the ~5nm limit could be broken due to quantum tunneling.

[-] crazyminner@lemmy.ml 18 points 11 months ago

The nm number is just the smallest part on the wafer. It's not actually the transistor.

[-] foggy@lemmy.world 9 points 11 months ago

This was my understanding as well: that beyond ~7nm, reliability starts to suffer because the diameter of an electron 'orbit' or whatever becomes a factor.

Admittedly I'm not an expert. But my understanding was that to break this limitation and keep Moore's law going, we're kinda leaning into quantum computation to eventually fill the incoming void.

[-] Kyrgizion@lemmy.world 5 points 11 months ago

The reason you mean is quantum tunneling. Essentially, at that small a scale an electron can 'teleport' outside of the system, which is obviously a big no-no for computing.
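For a feel of how sharply tunneling kicks in as barriers shrink, here's a minimal WKB-style estimate, assuming a 1 eV rectangular barrier purely for illustration (real gate stacks are more complicated than this):

```python
import math

# WKB transmission through a rectangular barrier: T ~ exp(-2 * kappa * d),
# where kappa = sqrt(2 * m * phi) / hbar. The 1 eV barrier height is an
# illustrative assumption, not a real device parameter.
m_e  = 9.109e-31          # electron mass, kg
hbar = 1.055e-34          # reduced Planck constant, J*s
phi  = 1.602e-19          # assumed barrier height: 1 eV, in joules

kappa = math.sqrt(2 * m_e * phi) / hbar   # decay constant, 1/m

for d_nm in (5, 3, 2, 1):
    T = math.exp(-2 * kappa * d_nm * 1e-9)
    print(f"{d_nm} nm barrier: tunneling probability ~ {T:.1e}")
# The probability climbs by many orders of magnitude as the barrier thins,
# which is why leakage becomes such a problem at very small dimensions.
```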

[-] BetaDoggo_@lemmy.world 9 points 11 months ago

They solved this problem by making the nanometer bigger.

https://en.m.wikipedia.org/wiki/5_nm_process

[-] OrangeCorvus@lemmy.world 19 points 11 months ago

Your device will be 11% faster and the battery will last 6% longer, but it will dramatically change the way you interact with your device.

[-] billwashere@lemmy.world 5 points 11 months ago

And cost 4000% more.

[-] AA5B@lemmy.world -4 points 11 months ago* (last edited 11 months ago)

If it's enough to run on-device AI, it's a win. Imagine autocorrect being able to mangle your texting without ever connecting to the cloud. Huge privacy win.

With the goggles coming soon, I think they'll focus chip improvements on the GPU and neural engine to better support that.

[-] ExLisper@linux.community 11 points 11 months ago

Autocorrect doesn't send anything to the cloud; it's just a dictionary. If your keyboard is sending your texts to the cloud, you have to change your keyboard, not run AI. AI doesn't do autocorrect; it could maybe do word suggestions, but it would be super inefficient at it and probably not much better than current methods.

I'm writing this on a 22 nm CPU and the letters appear hella fast.
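For what it's worth, a minimal sketch of the kind of offline, dictionary-based suggestion being described here (the tiny word list is made up for illustration; real keyboards use much larger dictionaries plus frequency and context models, still without touching the cloud):

```python
import difflib

# Rank dictionary words by string similarity to the typed word, entirely on-device.
DICTIONARY = ["this", "thing", "think", "the", "letter", "letters", "fast", "hella"]

def suggest(word: str, n: int = 3) -> list[str]:
    return difflib.get_close_matches(word.lower(), DICTIONARY, n=n, cutoff=0.6)

print(suggest("fsat"))   # e.g. ['fast'], ranked by similarity
```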

[-] profdc9@lemmy.world 4 points 11 months ago

Not so far fetched:

"I predict in 100 years computers will be twice as powerful, 10,000 times larger, and only the five richest kings of Europe will own one."

https://www.youtube.com/watch?v=ykxMqtuM6Ko

