this post was submitted on 06 Sep 2023
1080 points (99.4% liked)
Technology
Sysadmin pro tip: Keep a 1-10GB file of random data named DELETEME on your data drives. Then if this happens you can get some quick breathing room to fix things.
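A minimal sketch of the decoy-file tip, assuming a Linux box with GNU dd; the path and size here are illustrative (in real use you'd put it on the data drive and size it 1-10GB). Random data is used rather than `fallocate` so a deduplicating or compressing storage backend can't silently reclaim the reserved space:

```shell
# Reserve emergency space as a decoy file (example path/size;
# use e.g. /data/DELETEME and ~5 GB in production).
decoy=/tmp/DELETEME
size_mb=10            # e.g. 5120 for ~5 GB
# /dev/urandom keeps the file incompressible and non-dedupable.
dd if=/dev/urandom of="$decoy" bs=1M count="$size_mb" status=none
ls -lh "$decoy"
```

Deleting the file later takes seconds and instantly frees known-good space while you track down the real culprit.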
Also, set up alerts for disk space.
Why not both? Alerting to find issues quickly, a bit of extra storage so you have more options available in case of an outage, and maybe some redundancy for good measure.
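A basic disk-space alert can be a one-page cron script; this is a sketch assuming POSIX `df` and `awk`, with the threshold, filesystem, and delivery mechanism as placeholders (the `mail` line is commented out since mail setup varies):

```shell
# Alert when a filesystem crosses a usage threshold (examples).
threshold=90
fs=/
# df -P column 5 is "Capacity", e.g. "43%"; strip the percent sign.
used=$(df -P "$fs" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
if [ "$used" -ge "$threshold" ]; then
    echo "WARNING: $fs is ${used}% full"
    # | mail -s "disk alert: $fs" ops@example.com   # hypothetical address
fi
```

In practice you'd run this from cron every few minutes, or use a proper monitoring stack that also tracks the rate of change.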
A system this critical is on a SAN, if you're properly alerting adding a bit more storage space is a 5 minute task.
It should also have a DR solution, yes.
A system this critical is on a hypervisor with tight storage “because deduplication” (I’m not making this up).
This is literally what I do for a living. Yes deduplication and thin provisioning.
This is still a failure of monitoring or slow response to it.
You keep your extra capacity handy on the storage array, not with some junk files on the filesystem.
You also need to know how over provisioned you are and when you're likely to run out of capacity... you know this from monitoring.
Then, when management fails to react promptly to your warnings, shit like this happens.
The important part is that you have your warnings in writing, and BCC them to a personal email so you can cover your ass.
Exactly, I was being sarcastic about management’s “solution”
Yes, alert me when disk space is about to run out so I can ask for a massive raise and quit my job when they don't give it to me.
Then when TSHTF they pay me to come back.
That high hourly rate is really satisfying, I guess... not that I've been there.
A lot of companies have minimal alerting or no alerting at all. It's kind of wild. I literally have better alerting in my home setup than many companies do lol
It's certainly cheaper not to have any, but it will limit growth substantially.
I have free monitoring I set up myself though lol
I imagine it's a case where if you're knowledgeable, yeah it's free. But if you have to hire people knowledgeable to implement the free solution, you still have to pay the people. And companies love to balk at that!
I think it's that, plus any IT employees they do have wouldn't be allowed to work on it; they'd be assigned other stuff, because companies won't prioritize monitoring when they don't know how important it is until it's too late.
There are cases where the disk fills up quicker than one can reasonably react, even if alerts are in place. And sometimes the culprit is something you can't just go and kill.
That's what the Yakuza is for.
Had an issue like that a few years back: a standalone device that was filling up quickly. The poorly designed device could only be flushed via USB sticks. I told them they had to do it weekly. Guess what they didn't do. Looking back, I should have made it alarm and flash once a week on a timer.
The real pro tip is to segregate the core system and anything that eats up disk space into separate partitions, along with alerting, log rotation, etc. And also to avoid single points of failure in general. Hard to say exactly what went wrong with Toyota, but they probably could have planned better for it in a general way.
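For the log-rotation part, a hypothetical logrotate drop-in fragment might look like this (the path and retention numbers are examples, not anything Toyota-specific):

```
# /etc/logrotate.d/myapp (hypothetical): rotate and compress app logs
# so they can't quietly fill the data partition.
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
}
```

Combined with logs living on their own partition, a runaway logger can at worst fill that partition, not take the root filesystem down with it.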
Even better: a cron job every 5 minutes, and if total remaining space falls below 5%, auto-delete the file and send a message to the sysadmin.
Sends a message and gets the services ready for potential shutdown. Or implements a rate limit to keep the service available but degraded.
Also, if space starts decreasing much more rapidly than normal.
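The watchdog described above could be sketched roughly like this, assuming the decoy file from earlier in the thread; the mount point, decoy path, and threshold are all placeholders:

```shell
# Delete the decoy and notify when free space gets critical (examples;
# point fs at your data mount and decoy at the reserved file).
fs=${fs:-/}
decoy=${decoy:-/tmp/DELETEME}
limit=${limit:-5}   # act when free space drops to 5%
free_pct=$(df -P "$fs" | awk 'NR==2 {gsub(/%/,"",$5); print 100-$5}')
if [ "$free_pct" -le "$limit" ] && [ -f "$decoy" ]; then
    rm -f "$decoy"
    logger -t diskwatch "deleted $decoy: only ${free_pct}% free on $fs"
fi
```

Scheduled from cron with something like `*/5 * * * * /usr/local/sbin/diskwatch.sh` (path hypothetical); the same script is a natural place to add the rate-of-change check, by comparing `free_pct` against a value stashed on the previous run.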
At that point just set the limit a few gig higher and don't have the decoy file at all
10GB is nothing in an enterprise datastore housing PBs of data. 10GB is nothing for my 80TB homelab!
It's not going to bring the service back online, but it keeps a full disk from blocking everything else. In some cases SSH won't even work with a full disk.
It’s all fun and games until tab autocomplete stops working because of disk space
The real apocalypse
Tab complete in vim go lolllllooolol NO
It's nothing for my homework folder.
That's an incredible collection of homework!
500GB, maybe.
Or make the file a little larger and wait until you're up for a promotion..