So is it considered 'secure'? And to what extent?
This is sort of the type of problem that a specifically trained ML model could be pretty good at.
This isn't that, though; it seems to me to literally be asking an LLM to just make stuff up. Given that, the results are interesting, but I wouldn't trust them.
Can you elaborate or throw me a link or two? I'm not familiar with this.
Convince it to hire a TaskRabbit or something to fill it. Bypass the channels it was given.