It's actually worse.
The video focuses on how you're leaking personal info all the time through the software that you use and the connections that you make, and ways to mitigate it.
However, have you guys heard about forensic linguistics? That's how the Unabomber was caught. The way you use your language(s) is pretty unique to you, and can be used to uncover your identity. This was done manually by two guys, Fitzgerald and Shuy; they kept identifying patterns in how the Unabomber wrote to narrow down the suspects further and further, until they hit the right guy.
Now, let's talk about large "language" models, like Gemini or ChatGPT. Frankly, I believe that people who think that LLMs are "intelligent" or "comprehend language" themselves lack intelligence and language comprehension. But they were made to find and match patterns in written text, and they are rather good at it.
Are you getting the picture? What Fitzgerald and Shuy did manually 30 years ago can be automated now. And it gets worse: note how those LLMs "happen" to be developed by companies you can't trust to die properly (Google, Amazon, Facebook, Apple, Microsoft and its vassal OpenAI).
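To make the point concrete: here's a toy sketch of how stylometric authorship attribution works at its simplest — compare how often a writer uses common "function words" (the, which, however, ...) and pick the closest match. This is my own illustration, not anyone's actual system; the word list and the texts are invented, and real tools use hundreds of features, not fifteen words.

```python
from collections import Counter
import math
import re

# A handful of English function words. Real stylometric systems use far
# richer features (word lengths, punctuation, syntax), but the idea is the same.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "it", "for", "not", "which", "as", "but", "however"]

def fingerprint(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a, b):
    """Cosine of the angle between two frequency vectors (1.0 = identical style)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(unknown_text, candidates):
    """Return the candidate author whose known writing is closest in style."""
    fp = fingerprint(unknown_text)
    return max(candidates,
               key=lambda name: cosine_similarity(fp, fingerprint(candidates[name])))
```

Feed `attribute()` an anonymous comment and a dictionary of writing samples per suspect, and it names the stylistically closest one. An LLM-based attacker does the same thing with vastly better features — which is exactly why "what and how you write" leaks identity even when your connection doesn't.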
So, while the video offers some solid advice regarding privacy, sadly it is not enough. If you're in some deep shit and privacy is a life-or-death matter for you, I strongly advise you to always be mindful of what, and how, you write.
And, for the rest of us: fighting individually for our right to privacy is not enough. We need to assemble and organise ourselves, to fight on legal grounds against those who are trying to kill it. You either fight for your rights or you lose them.
Just my two cents. Apologies, as this is only tangentially related to the video, but I couldn't help it.