That depends on your definition of correct lmao. Rust's len explicitly counts UTF-8 code units, i.e. bytes, because that's the length of the raw data contained in the string. There are many times where that value is more useful than the grapheme count.
And Rust also has "🤦".chars().count(), which returns 1.
I would rather argue that Rust should not have a simple len function for strings at all, but since str is just a byte slice, it works that way.
Also, the documentation for the len function clearly states that the length is in bytes, not chars or graphemes.
That Rust expression returns the number of codepoints, not the number of graphemes, and the codepoint count is rarely what you actually want. You need to use a facepalm emoji with a skin tone modifier to see the difference.
The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation
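A minimal sketch of how that looks, assuming the crate is added to Cargo.toml as unicode-segmentation = "1" (the modified emoji is spelled out with escapes so the individual codepoints are visible):

```rust
// Sketch: counting extended grapheme clusters with the
// unicode-segmentation crate (not part of std).
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    // Facepalm emoji with a skin tone modifier:
    // U+1F926 U+1F3FC U+200D U+2642 U+FE0F
    let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}";

    println!("bytes:      {}", s.len());                   // 17
    println!("codepoints: {}", s.chars().count());         // 5
    println!("graphemes:  {}", s.graphemes(true).count()); // 1
}
```

graphemes(true) splits on extended grapheme cluster boundaries, which is what most people mean by "characters".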
Makes sense. The codepoint split is stable, meaning it's fine to put in the standard library, whereas the grapheme segmentation rules change every year, so that volatility is probably better off in a crate.
Yeah, although having now seen two commenters claiming with relatively high confidence that counting codepoints ought to be enough...
...and me almost having been the third such commenter, had I not decided to read the article first...
...I'm starting to feel more and more like the stdlib should force you through all kinds of hoops to get anything resembling a size of a string, so that you gladly search for a library.
Like, I've worked with decoding strings quite a bit in the past, so I felt like I had an above-average understanding of Unicode as a result. And I was still only vaguely aware of graphemes.
For what it's worth, the documentation is very clear on what these methods return. It explicitly redirects you to crates.io for splitting into grapheme clusters. It would be much better to have it in std, but I understand the argument that std should only contain stable stuff.
Since Rust is a systems programming language, the .len() method should return the byte count IMO.
The problem is when you think you know stuff, but you don't. I knew that counting bytes doesn't work, but thought the number of codepoints was what I wanted. And knowing that Rust uses UTF-8 internally, it's logical that .chars().count() gives the number of codepoints. No need to read the documentation, if you're so smart. 🙃 It does give you the correct length in quite a lot of cases, too. Even the byte length looks correct for ASCII characters.
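To illustrate that last point, a small sketch using only the standard library (the modified emoji is again spelled with escapes):

```rust
fn main() {
    // ASCII: bytes and codepoints agree, and both match the human count.
    assert_eq!("hello".len(), 5);
    assert_eq!("hello".chars().count(), 5);

    // Plain facepalm (U+1F926): chars().count() still "looks" right...
    assert_eq!("🤦".chars().count(), 1);
    assert_eq!("🤦".len(), 4); // ...while the byte count already differs.

    // With a skin tone modifier (U+1F926 U+1F3FC U+200D U+2642 U+FE0F),
    // neither count matches the single grapheme a human would see.
    let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}";
    assert_eq!(s.chars().count(), 5);
    assert_eq!(s.len(), 17);
}
```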
So, yeah, this would require a lot more consideration of whether it's worth it, but I'm mostly thinking there'd be no .len() on the String type itself, and instead, to get the byte count, you'd have to do .as_bytes().len().
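For context, a rough sketch of what that more explicit spelling looks like with today's API (the byte count is for the plain facepalm emoji, U+1F926):

```rust
fn main() {
    let s = String::from("🤦"); // U+1F926, 4 bytes in UTF-8

    // Both of these exist today and return the same byte count;
    // the suggestion above is that only the second, explicit form
    // would remain, so it can't be mistaken for a character count.
    assert_eq!(s.len(), 4);
    assert_eq!(s.as_bytes().len(), 4);
}
```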