26
How do you handle backup? (programming.dev)
submitted 3 months ago by anzo@programming.dev to c/datahoarder@lemmy.ml

cross-posted from: https://lemmy.dbzer0.com/post/26278528

I'm running my media server on a 36 TB RAID 5 array with 3 disks, so I do have some resilience to drives failing. But currently I can only afford to lose a single drive at a time, which got me thinking about backups. Normally I'd just do a backup to my NAS, but that quickly gets ridiculous for me with the size of my library, which is significantly larger than my NAS storage of only a few TB. And buying cloud storage for these amounts is much too expensive for my liking.

Do you backup only the most valuable parts of your library?

27
submitted 3 months ago by Wojwo@lemmy.ml to c/datahoarder@lemmy.ml

I'm celebrating my datahoarding problem.

28
submitted 3 months ago by lars@lemmy.sdf.org to c/datahoarder@lemmy.ml
29

cross-posted from: https://beehaw.org/post/15404535

Data: https://archive.org/details/gamefaqs_txt

Mirror upload for faster download, 1 Mbit (expires in 30 days): https://ufile.io/f/r0tmt

GameFAQs at https://gamefaqs.gamespot.com hosts user-created FAQs and documents. Unfortunately they are baked into the HTML webpage and cannot be downloaded on their own. I have scraped a lot of pages and extracted those documents as regular TXT files. Because of the sheer amount of data, I only focused on a few systems.

In 2020, a Reddit user named "prograc" archived FAQs for all systems at https://archive.org/details/Gamespot_Gamefaqs_TXTs , so most of it is already preserved. I took a different approach to organizing the files and folders. Here are a few notes about my attempt:

  • only 17 selected systems are included, so it's incomplete
  • system folders use the long system name instead of the short one, e.g. "Playstation" instead of "ps"
  • similarly, game titles use their full name with spaces, and a leading "The" is moved to the end of the name for sorting reasons, such as "King of Fighters 98, The"
  • in addition to the document id, the filename also contains the category (such as "Guide and Walkthrough"), the short system name such as "(GB)", and the author's name, e.g. "Guide and Walkthrough (SNES) by BSebby_6792.txt"
  • the FAQ documents contain an additional header taken from the HTML page, including a version number, the last update date, the filename as explained above, and the web address of the original publication
  • HTML documents are also included here with a very poor and simple conversion, but only the first page, so multi-page HTML FAQs are still incomplete
  • no zip archives or images are included; note that the 2020 archive from "prograc" contains misnamed .txt files that are in reality .zip and other files mistakenly included, such as nes/519689-metroid/faqs/519689-metroid-faqs-3058.txt; my archive correctly excludes those
  • I included the same collection in an alternative arrangement, where games are listed without system folders; this has the side effect of removing duplicates (by system: 67,277 files vs. by title: 55,694 files), because the same document is linked on many systems and was therefore downloaded multiple times
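The extraction step described above can be approximated with a small scraper. A minimal sketch of the idea only, assuming the guide text sits inside a `<pre>` block (the site's real markup may differ, and `FaqTextExtractor` is a hypothetical helper name):

```python
from html.parser import HTMLParser

class FaqTextExtractor(HTMLParser):
    """Collects the text content of <pre> blocks, where plain-text
    FAQs are assumed to be embedded in the rendered page."""
    def __init__(self):
        super().__init__()
        self.in_pre = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "pre":
            self.in_pre = True

    def handle_endtag(self, tag):
        if tag == "pre":
            self.in_pre = False

    def handle_data(self, data):
        if self.in_pre:
            self.chunks.append(data)

def extract_faq(html: str) -> str:
    """Return the concatenated text of all <pre> blocks in the page."""
    parser = FaqTextExtractor()
    parser.feed(html)
    return "".join(parser.chunks)

# Toy page standing in for a fetched FAQ page
page = "<html><body><div>nav</div><pre>Guide v1.0\nPress Start.</pre></body></html>"
print(extract_faq(page))
```

The actual scrape would also need rate limiting and the filename/header conventions described above; this only shows the core "pull the text back out of the HTML" step.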
30
submitted 3 months ago by Showroom7561@lemmy.ca to c/datahoarder@lemmy.ml

Hey guys, so it seems that Linkwarden isn't as good as I was hoping, since some websites will throw up a cookie popup or some other screen that basically prevents the capture.

Firefox Screenshot seems to work well, but it saves a PNG, which isn't really text searchable.

FF's "save page as..." feature seems to break things when viewing them back.

Save to PDF is another option, and that seems to be decent.

I'm not looking to copy entire websites, but I like to save web pages for later reference (i.e. instructions/specs).

I use Synology Note Station, but they don't have a web clipper for Firefox...

I'm fine with using a folder structure to store files, even though it's not totally ideal compared to Linkwarden.

Does anyone have any other suggestions that perhaps I've missed? Nothing too complicated... ideally, as simple as a button click would be great.

31
32

Running GParted gives me an error that says

fsyncing/closing /dev/sdb: input/output error

Using Gnome Disk Utility under the assessment section it says

Disk is OK, one bad sector

Clicking to format it to EXT4 I'm getting a message that says

Error formatting volume

Error wiping device: Failed to probe the device '/dev/sdb' (udisks-error-quark, 0)

Running sudo smartctl -a /dev/sdb I get a few messages that all say

... SCSI error badly formed scsi parameters


On the physical side, I've swapped out the SATA data and power cables with the same results.


Any suggestions?

Amazon has a decent return policy so I'm not incredibly concerned, but if I can avoid that hassle it would be nice.

33

A few days ago I saw a post on the Reddit datahoarder community asking how to back up keys and other small files for a long time.
It reminded me of a script I made some time ago to save my OTP secrets in case of loss of device or a reenactment of the Raivo OTP incident,
so I decided to make it public on GitHub. Hope someone here finds it useful.

github.com/Leviticoh/weedcup

The density is not great, about 1 kB per A4 page, but it can recover from losing up to half of the printed surface, and, if stored properly, paper should last very long.
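The recover-from-half property comes from redundancy coding on the printed data. As a toy illustration of the general idea only (single-block XOR parity, far simpler than whatever code the script actually uses):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two data blocks plus one parity block: any single lost block
# can be rebuilt from the other two.
d0 = b"OTPSECRET1"
d1 = b"OTPSECRET2"
parity = xor_bytes(d0, d1)

# Simulate losing d0: recover it from d1 and the parity block.
recovered = xor_bytes(d1, parity)
print(recovered)  # b'OTPSECRET1'
```

Real paper-backup schemes use stronger erasure codes (e.g. Reed-Solomon) so that an arbitrary half of the printed blocks can be missing, but the rebuild-from-redundancy principle is the same.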

34
submitted 4 months ago by Potatisen@lemmy.world to c/datahoarder@lemmy.ml

Basically title!

I want to run it through my NAS to free up some space.

Thanks in advance.

35
submitted 4 months ago* (last edited 4 months ago) by andioop@programming.dev to c/datahoarder@lemmy.ml

I read something about once-reliable sites that would tell you the best [tech thing] no longer giving legit reviews and being paid to say good things about certain companies. I don't remember where I read that or which sites it was, so I figured I'd bypass the issue and ask people here. I'm pretty new to anything near the level of complexity and technical detail that I see on datahoarder communities. I know about the 3-2-1 backup rule and that's it. This is me trying to find something to hold copy 3 of my data.

36
submitted 4 months ago by evasync@lemmy.world to c/datahoarder@lemmy.ml

I want to buy a few hard drives for backups.

What is the most reliable option for longevity? I was looking at the WD AE, which they claim is fit for this purpose, but knowing nothing about hard drives, I wouldn't know if that's just a marketing claim.

37
submitted 4 months ago by lars@lemmy.sdf.org to c/datahoarder@lemmy.ml

cross-posted from: https://lemmy.world/post/17689141

I'll just save them in this folder so that I can totally come back later and read them.

38
39
submitted 4 months ago by Thavron@lemmy.ca to c/datahoarder@lemmy.ml
40

I was considering building a 30+ TB NAS to simplify and streamline my current setup, but because it's a relatively low priority for me, I'm wondering: is it worth it to hold off for a year or two?

I am unsure if prices have more or less plateaued and whether the difference will be all that substantial. Maybe I should just wait for Black Friday.

For context, two 16 TB HDDs currently cost about $320.


Here are some related links:

  • This article by Our World in Data contains a chart showing how the price per GB has decreased over time.

  • This article by Tom's Hardware discusses how SSD prices bottomed out in July 2023 before climbing back up, and predicts further increases in 2024.
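When comparing offers, price per terabyte is the usual yardstick. A quick sanity check on the figure above (two 16 TB drives for roughly $320, per the post):

```python
drives = 2
capacity_tb = 16      # per drive
total_price = 320.0   # USD, the rough figure quoted above

price_per_tb = total_price / (drives * capacity_tb)
print(f"${price_per_tb:.2f}/TB")  # $10.00/TB
```

Tracking this one number over time (or against Black Friday listings) makes the "wait or buy" question concrete.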

41
Renewed drives (slrpnk.net)
submitted 5 months ago by greengnu@slrpnk.net to c/datahoarder@lemmy.ml

Are they worth considering or only worth it at certain price points?

42
submitted 5 months ago by xnx@slrpnk.net to c/datahoarder@lemmy.ml

cross-posted from: https://slrpnk.net/post/10273849

Vimm's Lair is getting removal notices from Nintendo and others. We need someone to help make a ROM pack archive. Can you help?

Vimm's Lair is starting to remove many ROMs that Nintendo and others have requested be taken down, so many original ROMs, hacks, and translations will soon be lost forever. Can any of you help make archive torrents of ROMs from Vimm's Lair and CDRomance? They have hacks and translations that don't exist elsewhere and will probably be removed soon, with iOS emulation and retro handhelds bringing so much attention to ROMs and these sites.

43

I've been working on this subtitle archive project for some time. It is a Postgres database along with a CLI and API application that lets you easily extract the subs you want. It is primarily intended for encoders or people with large libraries, but anyone can use it!

PGSub is composed of three dumps:

  • opensubtitles.org.Actually.Open.Edition.2022.07.25
  • Subscene V2 (prior to shutdown)
  • Gnome's Hut of Subs (as of 2024-04)

As such, it is a good resource for films and series up to around 2022.

Some stats (copied from README):

  • Out of 9,503,730 files originally obtained from dumps, 9,500,355 (99.96%) were inserted into the database.
  • Out of the 9,500,355 inserted, 8,389,369 (88.31%) are matched with a film or series.
  • There are 154,737 unique films or series represented, though note the lines get a bit hazy when considering TV movies, specials, and so forth. 133,780 are films, 20,957 are series.
  • 93 languages are represented, with a special '00' language indicating a .mks file with multiple languages present.
  • 55% of matched items have an FPS value present.
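The percentages in the stats are easy to re-derive from the raw counts, which is a handy check when quoting them:

```python
# Raw counts from the README stats above
total_files = 9_503_730   # files originally obtained from dumps
inserted = 9_500_355      # files inserted into the database
matched = 8_389_369       # inserted files matched with a film or series

insert_rate = inserted / total_files * 100
match_rate = matched / inserted * 100
print(f"inserted: {insert_rate:.2f}%")  # 99.96%
print(f"matched:  {match_rate:.2f}%")   # 88.31%
```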

Once imported, the recommended way to access it is via the CLI application. The CLI and API can be compiled on Windows and Linux (and maybe Mac), and there are also pre-built binaries available.

The database dump is distributed via torrent (if it doesn't work for you, let me know), which you can find in the repo. It is ~243 GiB compressed, and uses a little under 300 GiB of table space once imported.

For a limited time I will devote some resources to bug-fixing the applications, or perhaps adding some small QoL improvements. But, of course, you can always fork them or make your own if they don't suit you.

44
submitted 5 months ago by ylai@lemmy.ml to c/datahoarder@lemmy.ml
45

I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

There are some movies in 720p that are 1.6~1.9 GB each, and then there are some at the same resolution that are 2.5 GB.
I even have some in 1080p which are just 2 GB.
I only have two movies in 4K; one is 3.4 GB and the other is 36.2 GB (I can't really tell the difference in detail since I don't have a 4K display).

And then there's an anime I have twice at the same resolution: one set of files is around 669~671 MB each, the other set 191 MB each (in this case the quality difference is noticeable while playing them, as opposed to the other files, where I only compared extracted frames).

What would you do? What's your target size for movies and series? What bitrate do you go for, in which codec?

Not sure if it's kind of blasphemy in here, talking about compromising quality for size, hehe, but I don't know where else to ask this. I was planning on using these ffmpeg settings, what do you think?
I tried it on an anime at 1080p, going from 670 MB to 570 MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.
ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 './01_out.mp4'
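When choosing a target size, the average video bitrate follows directly from size and duration, which helps translate "a 2 GB movie" into an encoder setting. A rough sketch (container and audio overhead ignored, so real targets should be a bit lower):

```python
def target_video_kbps(size_gib: float, duration_min: float) -> float:
    """Average bitrate (kbit/s) that fills size_gib over duration_min."""
    bits = size_gib * 1024**3 * 8          # total size in bits
    return bits / (duration_min * 60) / 1000

# e.g. a 2 GiB target for a 120-minute movie
print(round(target_video_kbps(2, 120)), "kbps")
```

With `-cq`-style quality-targeted encoding the size is an outcome rather than an input, but this arithmetic is still useful for judging whether a given file is unusually large or small for its runtime.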

46
submitted 6 months ago by ylai@lemmy.ml to c/datahoarder@lemmy.ml
47

I was so confident that WhatsApp was backing itself up to Google ever since I got my new Pixel, but it just wasn't. Then yesterday I factory reset my phone to fix something else and I lost it all. Years' worth of chats from so many times in my past just aren't there: all my texts with my mom and my family, group chats with old friends... I can't even look at the app anymore, I'll never use WhatsApp as much as I used to. I just don't feel right with this change. There's no way to get those chats back, and now it doesn't feel like there's any point backing up WhatsApp at all! I really wanna cry, this is so unfair!! And all I had to do was check WhatsApp before I did a factory reset... the TINIEST THING I could have done to prevent this, and I didn't fucking do it!!!!!!!

How do I get past this?

48
submitted 6 months ago* (last edited 6 months ago) by ylai@lemmy.ml to c/datahoarder@lemmy.ml
49
submitted 1 year ago by archivist@lemmy.ml to c/datahoarder@lemmy.ml

@ray@lemmy.ml Got it done. I'm the first of the mods here and will be learning a little Lemmy over the next few weeks.

While everything is up in the air with the Reddit changes, I'll be very busy working on replacing the historical Pushshift API, without Reddit's bastardizations, should a PS version come back.

In the meantime you should all mirror this data to ensure its survival. Do what you do best and HOARD!!

https://the-eye.eu/redarcs/

datahoarder

6786 readers

Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread

founded 4 years ago