The job market is queasy, and since you're reading this, you need to upgrade your CV. It's going to require some work to game the poorly trained AIs now doing so much of the heavy lifting. I know you don't want to, but it's best to think of this as dealing with a buggy lump of undocumented code, because frankly that's what's standing between you and your next job.
A big reason for that bias in so many AIs is that they are trained on the way things are, not the way we'd like them to be. Since they are just expensively trained statistics, your new CV needs to give them the words most commonly associated with the job you want, not merely the correct ones.
That's going to take some research and a rewrite to get your CV looking like the ones the AI was trained to match. You need to add synonyms and dependencies, because the AIs lack any model of how we actually do IT; they only see correlations between words. One would hope a network engineer knows how to configure routers, but if you just say "Cisco", the AI won't give it as much weight as when you mention both the vendor and the device. Nor can you assume it will work out that you actually did anything to the router, database or code, so you need to say explicitly what you did.
Fortunately, your CV does not have to be easy to read out loud, so there is mileage in spelling out the longer names of the more relevant tools you've mastered. Awful phrases like "configured Fortinet FortiGate firewall" help if you say them once, as does using each of the three F words separately elsewhere. This works well for the old-fashioned buzzword matching still widely used.
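To see why that helps, here is a minimal sketch of the sort of naive keyword scoring being described. The keyword list, weights and example bullet points are entirely made up for illustration; real screeners will differ, but the exact-match behaviour is the point:

```python
import re

# Made-up weighted keyword list for a hypothetical network-engineering role.
JOB_KEYWORDS = {
    "fortinet": 2.0,
    "fortigate": 2.0,
    "firewall": 1.0,
    "cisco": 1.5,
    "router": 1.0,
    "configured": 0.5,
}

def score_cv(text: str) -> float:
    """Sum the weights of every job keyword that appears in the CV text."""
    tokens = set(re.findall(r"[a-z0-9]+", text.lower()))
    return sum(w for word, w in JOB_KEYWORDS.items() if word in tokens)

cv_a = "Configured Fortinet FortiGate firewall clusters alongside Cisco routers."
cv_b = "Looked after the company firewalls."

print(score_cv(cv_a))  # 7.0 -- hits most of the magic words
print(score_cv(cv_b))  # 0.0 -- same sort of job, but "firewalls" never matches "firewall"
```

Note that even the plural "firewalls" scores nothing here, which is exactly why spelling out every form of the term pays off against this kind of matcher.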
This is all so fucked.
Sorry if this is a stupid question, but is there a good place to figure out how to run LLMs locally? Seems safer than entering personal data into a server somewhere.
As a start, you could take a look at Ollama, which seems to be available in many package managers if you use one. I've done some experimenting with mistral-nemo, but you should pick a model size appropriate to your hardware and use case. I believe there are GUIs and extensions for Ollama, but as someone with a low interest in LLMs, I've only used the bare-bones features through my terminal, and I haven't used it for any projects or tasks.
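For what it's worth, the terminal workflow is basically `ollama pull <model>` followed by `ollama run <model>`. I believe it also listens on a local HTTP API (port 11434 by default), so if you ever want to script it, a rough sketch in Python would be something like this (model name and prompt are just placeholders; check the Ollama docs for your setup):

```python
# Rough sketch: one-shot prompt to a local Ollama server. Assumes you've
# already pulled a model (e.g. `ollama pull mistral-nemo`) and that the
# server is running on its default port, 11434.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "mistral-nemo") -> str:
    """Send one prompt to the local /api/generate endpoint and return the reply text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Rewrite this CV bullet point to be more specific: 'worked on firewalls'."))
```

Nothing leaves your machine with that setup, which is the whole appeal.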
You definitely shouldn't trust it to teach you anything (I've seen some highly concerning errors in my tests), but it might be useful to you if you can verify the outputs.
Also check out the PrivacyGuides page on LLMs.
Thank you for the information! Yeah, I don't really trust them. They feel flimsy and unreliable for most things, though sometimes they have their moments where they're actually helpful.
I hate their usage overall; I just figure that if I need one to help me land a job at some point, I should probably have some extra options ready.