PGP itself is a bit of a mess.
For one thing, there's really only one major implementation of it these days: GPG. The codebase is arcane, pretty major security vulnerabilities pop up regularly, and it doesn't have stable funding. A few years ago the whole project nearly collapsed when the world discovered it had been maintained for years by a single developer who had neither the time nor the money to keep it going. The situation is a little better now, but not much.
(For this reason, people are starting to use age instead of gpg: the codebase is much smaller and cleaner, it forces safe defaults, and it doesn't seem to have the same history of security problems.)
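To give a sense of what "safe defaults" means in practice, here's a minimal sketch of the entire age workflow. This just drives the CLI from Python to keep all the examples in one language, and assumes age/age-keygen are installed and a secret.txt file exists; there are no cipher choices or trust settings to get wrong:

    import subprocess

    # Generate a keypair. age-keygen writes the private key to key.txt
    # and prints "Public key: age1..." on stderr.
    keygen = subprocess.run(["age-keygen", "-o", "key.txt"],
                            capture_output=True, text=True, check=True)
    recipient = keygen.stderr.strip().split()[-1]  # the age1... public key

    # Encrypt secret.txt to that recipient.
    subprocess.run(["age", "-r", recipient, "-o", "secret.age", "secret.txt"],
                   check=True)

    # Decrypt with the private key file.
    subprocess.run(["age", "-d", "-i", "key.txt", "-o", "secret.out", "secret.age"],
                   check=True)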
But the bigger problem, which was never properly solved with PGP, is key distribution. How do you get somebody's key in the first place? Some people put their keys on their own personal (https) webpage, which is fine, but that's not a solution for everyone and doesn't scale very well. Okay, so you might use a key server (something like the sketch below), but that has privacy implications (your identity is essentially public to the world) and centralizes everything down to a handful of "trusted" key servers, since there's no way to trust key servers in a decentralized fashion. We should probably just have email servers themselves serve keys somehow, but nobody's put that into the standard email protocols.
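To make the centralization point concrete, here's a minimal sketch of a lookup against keys.openpgp.org's VKS API (the email address is hypothetical). Notice the whole scheme boils down to trusting whoever runs that one server, and that the server learns who you're asking about:

    import urllib.parse
    import urllib.request

    def fetch_key(email: str) -> str:
        """Fetch an ASCII-armored OpenPGP key by email address.

        Raises urllib.error.HTTPError (404) if the address is unknown.
        """
        url = ("https://keys.openpgp.org/vks/v1/by-email/"
               + urllib.parse.quote(email))
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()

    # Hypothetical address, purely for illustration.
    print(fetch_key("alice@example.com"))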
The fact that keys expire amplifies all the problems with key distribution, and encourages people to do really unsafe things with keys, like just blindly trusting them. You can sign other people's keys to vouch for them (the "web of trust"), but that doesn't scale very well either.
The key distribution problem is something Signal has "solved" with phone number verification, but there's really no clear way to solve it on something totally distributed like email.
(I think you're arguing from an ethical standpoint whereas OP was arguing from a legal one, but anyway....)
No, that shouldn't happen. If an AI were ever able to recite back its training data verbatim, that AI would be overfitting. It happens by accident sometimes early in development, when your training data is too small and your model is too big, but it's an error, something to be caught and corrected.
The whole point of training is to get the model to a state where it can't recite back any of its training data. For that to happen, the AI is forced to sort of generalize and abstract (sorry for anthropomorphizing) its training data. That's the only way to get it to generate something new, which is the point of the whole endeavour.
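A toy way to see this (ordinary curve fitting, nothing to do with how LLMs are actually trained): give a model as many free parameters as there are data points and it reproduces its training data exactly while being useless everywhere else; shrink the model and it's forced to generalize:

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

    # 10 coefficients for 10 points: this fit passes through every training
    # point exactly, i.e. it "recites" its training data verbatim.
    memorizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
    # 4 coefficients: it can't store the points, so it has to capture the trend.
    generalizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

    x_new = np.linspace(0, 1, 100)
    y_true = np.sin(2 * np.pi * x_new)
    for name, model in [("deg 9 (memorizes)", memorizer),
                        ("deg 3 (generalizes)", generalizer)]:
        train_err = np.abs(model(x_train) - y_train).max()
        test_err = np.abs(model(x_new) - y_true).max()
        print(f"{name}: train error {train_err:.1e}, test error {test_err:.2f}")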
Long story short, if an AI could recite back an entire book, it would be overfitting by definition, and it wouldn't resemble any of the popular LLMs we have now like ChatGPT. (But you may still see snippets, pastiches, and watermarks show up.)