1
1
submitted 4 days ago* (last edited 4 days ago) by xXPoisonFoxXx@sh.itjust.works to c/datahoarder@lemmy.ml

I was able to get a list of the most recent anime from Aniwave using this Reddit thread Goofhey made: https://old.reddit.com/r/animepiracy/comments/1f2xbg7/archived_aniwaves_12000_anime_pages_on_wayback/ and by scraping all 411 pages archived in the Wayback Machine. Back in March I built a web scraper using Python requests and Beautiful Soup and got a list of all of Aniwave's current anime, sorted in alphabetical order. I compared that list to what was most recently saved in the Wayback Machine by Goofhey and discovered that some anime were missing. I guess it's because the pages Goofhey saved in the Wayback Machine were sorted by recently updated, and since "recently updated" is constantly changing, some anime were excluded; I think I got all or most of them by combining both lists. Then, using a Disqus scraper I made, I fed it links from my list and downloaded the comments. I tested the scraper on various sites (myasiantv, gogoanime, aniwave); it can most likely work on most websites that use Disqus with a bit of tweaking.
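The 100-comment cap on the part files (described below) comes from Disqus returning at most 100 posts per request and handing back a cursor for the next page. The pagination loop can be sketched like this; the response shape mirrors Disqus's `threads/listPosts` API, but the `fetch_page` callable stands in for the actual HTTP request, whose exact parameters aren't reproduced here:

```python
def fetch_all_comments(fetch_page):
    """Collect comments across Disqus-style paginated responses.

    fetch_page(cursor) must return a dict with 'response' (a list of up
    to 100 comments) and 'cursor' ({'hasNext': bool, 'next': str}),
    which is the shape Disqus's listPosts endpoint uses.
    """
    comments, cursor = [], None
    while True:
        page = fetch_page(cursor)          # one "part X.json" worth of data
        comments.extend(page["response"])
        if not page["cursor"]["hasNext"]:  # no more pages for this thread
            return comments
        cursor = page["cursor"]["next"]    # resume token for the next request
```

In a real scraper, `fetch_page` would issue a GET with the thread identifier, an API key, and the cursor, saving each raw response as a part file.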

I also managed to get all of Gogoanime's old comments from before 2021, going all the way back to 2014/2015. Something interesting I found is that a few copycat websites (6anime, gogoanimes) still have all of Gogoanime's old comments from before 2021. I have a few questions regarding this and would appreciate it if anyone can answer them.

  1. What happened to the old Gogoanime comments, and why couldn't the Gogoanime admins get them back if a copycat site was able to do it?
  2. New Disqus threads for new anime are still being made with the same Disqus link structure as the old comment threads. How are these new threads being made?

The Aniwave(9anime) comments currently have a few problems that I will fix later:

Currently missing some glitched/merged comment threads

Imgur images didn't download properly

Some images were downloaded twice (while the scraper was downloading, I changed how images were named and ran it again)

Most-commented pages on each site, sorted from the most (Aniwave) to the fewest (Anitaku) comments:

Aniwave(9anime): Attack on Titan The Final Season Part 3 Episode 1

Gogoanime Old comments: Yuri on Ice Category page

Anitaku(Gogoanime): Kimetsu no Yaiba Yuukaku Hen Episode 10

Folders were compressed into tarballs with zstd level 9 compression:

Aniwave(9anime): TOTAL UNCOMPRESSED: 69.2 GiB TOTAL COMPRESSED: 17.4 GiB

Gogoanime: TOTAL UNCOMPRESSED: 84.8 GiB TOTAL COMPRESSED: 48.2 GiB

Anitaku(Gogoanime): TOTAL UNCOMPRESSED: 16.6 GiB TOTAL COMPRESSED: 1 GiB

Inside each of the anime folders, you will find 3 types of files, ending with 'part X.json', 'full.json', and 'simple.json':

Part files - downloaded from Disqus, unmodified; each contains a maximum of 100 comments

Full - a concatenation of all the part files

Simple - the full file with extra info stripped out to make it more readable to human eyes
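Deriving the full and simple files from the part files can be sketched roughly like this (the field names in `simplify` are my assumptions about a Disqus-style comment schema, not necessarily what the archive actually keeps):

```python
import json

def merge_parts(part_texts):
    """Concatenate the per-request 'part X' JSON payloads into one
    'full' list of comments."""
    full = []
    for text in part_texts:
        full.extend(json.loads(text))
    return full

def simplify(full):
    """Strip each comment down to the human-relevant fields.
    Field names (author/createdAt/raw_message) are assumptions."""
    return [{"author": c.get("author", {}).get("name"),
             "createdAt": c.get("createdAt"),
             "message": c.get("raw_message")}
            for c in full]
```

Writing `merge_parts(...)` out as full.json and `simplify(...)` as simple.json matches the three file types described above.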

DOWNLOADS:

Aniwave(9anime) Comments: https://mega.nz/file/RfgliKJR#kV9MXkEYC-5tqS9A4ZenOMoQKKxpj_ujNadzKeu--qs

Anitaku(Gogoanime) March 2024: https://mega.nz/file/FDBngTQB#p3GMrhPpBY893GLBUJfBePwDOYsKFWmpRyarFlGWCZs

Gogoanime Comments Before 2021: Unfortunately, the compressed file for Gogoanime is 48.2 GiB and I don't know how to share it, since I ran out of free storage space. I will make another post when I figure out how to set up a torrent and also add the link here.

2
1

Following up from my previous post.

I used the API at https://archive.org/developers/changes.html to enumerate all the item names in the archive. Currently there are over 256 million item names. I went through a sample of them and noted the following:

Many, many items have been removed from the archive; far more than I expected. If you have critical data, the Internet Archive should of course never be your only backup.

I don't know the distribution of metadata and .torrent file sizes, since I have not tried downloading them yet. It looks like it would require a lot of storage if there are many files or the content is huge (if only 50% of the items remain and the average .torrent + metadata is 20 KB, it would be over 2.5 TB to store). On the other hand, the archive has a lot of random one-off uploads that are not very big, so in those cases the metadata is around 800 bytes and the torrent around 3 KB (only 640 GB to store if the combined size is 5 KB).
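Those figures check out. As a quick sanity check of the arithmetic (the 50% survival rate is, as above, just an assumption):

```python
items = 256_000_000   # item names enumerated via the changes API
remaining = 0.5       # assume only half the items still exist

def total_tb(avg_bytes_per_item):
    """Total storage, in decimal TB, to hold .torrent + metadata
    for every surviving item."""
    return items * remaining * avg_bytes_per_item / 1e12

big = total_tb(20_000)   # ~20 KB per item -> 2.56 TB ("over 2.5 TB")
small = total_tb(5_000)  # ~5 KB per item  -> 0.64 TB, i.e. 640 GB
```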

3
1
submitted 2 weeks ago* (last edited 2 weeks ago) by BermudaHighball@lemmy.dbzer0.com to c/datahoarder@lemmy.ml

I'd love to know if anyone's aware of a bulk metadata export feature or repository. I would like to have a copy of the metadata and .torrent files of all items.

I guess one way is to use the CLI, but that relies on knowing which item you want, and I don't know if there's a way to get a list of all items.

I believe downloading via BitTorrent and seeding back is a win-win: it bolsters the Archive's resilience while easing server strain. I'll be seeding the items I download.

Edit: If you want to enumerate all item names in the entire archive.org repository, take a look at https://archive.org/developers/changes.html. This will do that for you!

4
1

Following this post in the Fediverse: https://neuromatch.social/@jonny/113444325077647843

5
1

Reposted from lemmy.world c/politics since it violated its rule #1 about links.

Now that the fascists have taken over, what books, academic studies, and pieces of knowledge should take priority in personal/private archival? I'm thinking about what happened in Nazi Germany, especially the burning of the Institute for Sexual Science (Institut für Sexualwissenschaft) and what was lost completely in the burnings.

Some of us should consider saving stuff digitally or physically.

6
1
submitted 3 weeks ago by TCB13@lemmy.world to c/datahoarder@lemmy.ml

cross-posted from: https://lemmy.world/post/21563379

Hello,

I'm looking for a high resolution image of the PAL cover from the Dreamcast (I believe).

There was this website, covergalaxy, that used to have it in 2382x2382, but all the content seems to be gone. Here's the cache: https://ibb.co/nRMhjgw. The Internet Archive doesn't have it.

Much appreciated!

7
1
Archiveteam Veoh grab (tracker.archiveteam.org)
submitted 3 weeks ago* (last edited 3 weeks ago) by kabi@lemm.ee to c/datahoarder@lemmy.ml

Just looking at the numbers, it doesn't seem to me like archival will complete before the shutdown date (Nov. 11). There are 2 million+ elements left, likely 100 TB+ of videos.

If you care to help them out, see instructions at the top of the page. Be sure you have a "clean connection", though.

edit: They're saying that the current rate seems to be more than enough to finish by the deadline. Workers are often left idling at the moment.

8
1
submitted 1 month ago by kabi@lemm.ee to c/datahoarder@lemmy.ml

The September 17th archive of the oldest public video on ashens' channel is saved with the comments section of a completely different video.

(only loads on desktop for me)

Not sure how this happened, usually it's no comments section at all.

If I were trying to make a point here: an archive doesn't even have to be malicious to contain misleading information presented as fact.

9
1
submitted 1 month ago* (last edited 1 month ago) by andioop@programming.dev to c/datahoarder@lemmy.ml

I did try to read the sidebar resources on https://www.reddit.com/r/DataHoarder/. They're pretty overwhelming, and seem aimed at people who come in knowing all the terminology already. Is there somewhere you suggest newbies start to learn all this stuff in the first place other than those sidebar resources, or should I just suck it up and truck through the sidebar?

EDIT: At the very least, my goal is to have a 3-2-1 backup of important family photos/videos and documents, as well as my own personal documents that I deem important. I will be adding files to this system at least every 3 months, and I would like them incorporated into the backup. When I do that, I would like to validate that everything copied over, that the files are the same, and that nothing has gotten corrupted. I want to back things up from both a Mac and a Windows machine (the Windows machine will become a Linux machine soon, but I want to back up my files on it before I try to switch in case I bungle it), if that has any impact. I do have a plan for this already, so I suppose what I really want is learning resources that don't expect me to be a computer expert with 100 TB of stuff already hoarded.
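For the "validate that everything copied over" part, comparing checksums is the usual approach. A minimal sketch of the idea (dedicated tools like `rsync -c` or per-directory checksum files do this more robustly):

```python
import hashlib
import pathlib

def sha256sum(path, chunk=1 << 20):
    """Stream a file through SHA-256 so large files don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_copy(src_dir, dst_dir):
    """Compare every file under src_dir with its counterpart in dst_dir.
    Returns the relative paths that are missing or differ."""
    src_dir, dst_dir = pathlib.Path(src_dir), pathlib.Path(dst_dir)
    bad = []
    for src in src_dir.rglob("*"):
        if src.is_file():
            dst = dst_dir / src.relative_to(src_dir)
            if not dst.is_file() or sha256sum(src) != sha256sum(dst):
                bad.append(str(src.relative_to(src_dir)))
    return bad
```

Run after each backup; an empty result means every source file has a byte-identical copy at the destination.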

10
1

I download lots of media files. So far I have been storing these files, after I am done with them, on a 2 TB hard disk. I have been copying the files over with rsync, and this has worked fairly well so far. However, the hard disk I am using is starting to get close to full, so now I need to find a solution that lets me span my files over multiple disks. If I were to continue as I do now, I would end up copying over files that are already on the other disk. Does the datahoarding community have any solutions to this?

For more information, my system runs Linux. The 2 TB drive is formatted with ext4. When I make the backup to the drive I use 'rsync -rutp'. I don't use multiple disks at the same time because I have only one USB SATA enclosure for 3.5-inch disks. I don't keep the drive connected all the time since I don't need it all the time. I keep local copies until I am done with the files (and they are backed up).
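One way to decide which files go to which disk without duplicating anything is to treat it as a bin-packing problem: plan the layout first, then rsync each disk's list separately. A greedy first-fit-decreasing sketch (sizes in any consistent unit; this is the planning step only, not a replacement for the rsync copy itself):

```python
def assign_to_disks(files, disk_capacity):
    """Pack (name, size) files onto disks using first-fit decreasing.
    Returns a list of disks, each a list of file names."""
    disks = []  # file names per disk
    free = []   # remaining capacity per disk, parallel to `disks`
    for name, size in sorted(files, key=lambda f: f[1], reverse=True):
        for i, room in enumerate(free):
            if size <= room:          # fits on an existing disk
                disks[i].append(name)
                free[i] -= size
                break
        else:                         # no existing disk has room
            disks.append([name])
            free.append(disk_capacity - size)
    return disks
```

Keeping the resulting per-disk lists in a text file makes later incremental runs easy: each new file is assigned once and always rsynced to the same disk.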

11
1

Hey everyone,

it’s me again, one of the two developers behind GameVault, a self-hosted gaming platform similar to how Plex/Jellyfin is for your movies and series, but for your game collection. If you've hoarded a bunch of games over the years, this app is going to be your best friend. Think of it as your own personal Steam, hosted on your own server.

If you haven’t heard of GameVault yet, you can check it out here and get started within 5 minutes—seriously, it’s a game changer.

For those who already know GameVault, or its old name He-Who-Must-Not-Be-Named, we are excited to tell you we just launched a major update. I’m talking a massive overhaul—so much so, that we could’ve rebuilt the whole thing from scratch. Here’s the big news: We’re no longer relying on RAWG or Google Images for game metadata. Instead, we’ve officially partnered with IGDB/Twitch for a more reliable and extended metadata experience!

But it doesn’t stop there. We’ve also rolled out a new plugin system and a metadata framework that allows you to connect to multiple metadata providers at once. It’s never been this cool to run your own Steam-like platform right from your good ol' 19" incher below your desk!

What’s new in this update?

  • IGDB/Twitch Integration: Say goodbye to unreliable metadata scrapers. Now you can enjoy game info sourced directly from IGDB.
  • Customizable Metadata: Edit and fine-tune game metadata with ease. Your changes are saved separately, so the original data stays intact.
  • Plugin System: Build your own plugins for metadata or connect to as many sources as you want—unlimited flexibility!
  • Parental Controls: Manage age-appropriate access for the family and children.
  • Built-in Media Player: Watch game trailers and gameplay videos directly in GameVault.
  • UI Overhaul: A fresh, streamlined look for the app, community, game and admin interface.
  • Halloween Theme: For GameVault+ users, we’ve added a spooky Halloween skin just in time for the season!

Things to keep in mind when updating:

  • GameVault Client v1.12 is now required for servers running v13 or above.
  • Older clients won’t work on servers that have been updated to v13.

For a smooth update and a guide on how to use all these new features, check out the detailed migration instructions in the server changelogs. As always, if you hit any snags, feel free to reach out to us on Discord.

If you run into any issues or need help with the migration, feel free to join and open a ticket in our Discord community—we’re always happy to help!

If you want to support our pet-project and keep most upcoming features of GameVault free for everyone, consider subscribing to GameVault+ or making a one-time donation. Every little bit fuels our passion to keep building and improving!

Thanks for everything! We're more than 800 Members on our discord now and I can’t wait to hear what you think of the latest version.

12
1

I know that for photos I could run them through something like Converseen to take them from .jpg to .jxl, preserving the quality (identical visually, even when pixel peeping) but reducing file size by around 30%. What about video? My videos are in H.265, but can I re-encode them more efficiently? I'm assuming that since my phone has to do live encoding, it's not really making the files as efficient as they could be. Could file sizes be reduced without losing quality by throwing some processing time at it? Thank you all.
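Broadly yes: phone encoders must work in real time, so a slow software x265 pass can usually hit a smaller size at comparable visual quality, though unlike the JPEG-to-JXL case, re-encoding lossy video is never mathematically lossless. A sketch of the usual ffmpeg invocation, built in Python so the parameters are explicit (the CRF and preset values are starting points to experiment with, not recommendations):

```python
def hevc_reencode_cmd(src, dst, crf=22, preset="slow"):
    """Build ffmpeg arguments for a slower, more thorough x265 encode
    than a phone's realtime hardware encoder can afford. Lower CRF
    means higher quality/larger files; slower presets trade CPU time
    for compression efficiency. Audio is copied untouched."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libx265", "-preset", preset, "-crf", str(crf),
            "-c:a", "copy", dst]
```

Running this via `subprocess.run(hevc_reencode_cmd("in.mp4", "out.mp4"))` (with ffmpeg installed) and comparing a short clip at a few CRF values is the safest way to find your own quality/size sweet spot before committing the whole library.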

13
1
14
1
submitted 1 month ago* (last edited 1 month ago) by wizardbeard@lemmy.dbzer0.com to c/datahoarder@lemmy.ml

Yoko Taro is the creative director behind the Nier and Drakengard series, and he has released a lot of supplemental material across a variety of mediums over the years in Japan. Accord's Library is a site that is dedicated to finding this material, archiving it, and translating it.

Today, in Accord's Library Discord, they announced that they received a Cease and Desist from Square-Enix, and on Oct 31, the Library and Gallery sections of the site will be closed and taken offline.

Announcement Screenshot

Announcement Text:

Dearest Recorders and Observers of Accord's Library.

These past few years have been a pleasure, but we regret to inform you all that we've been contacted by the Square Enix Legal Team. And after some private communications, based on the outlined requirements we have come to the conclusion that Accord's Library must close its doors by the end of the month. While we are sad to have to go, we also must respect the wishes of the Legal Team.

The Library and Gallery will remain opened for the next 2 weeks and will be officially closed on Oct 31.

We hope to continue spending time with you all, and other fans in the future through our Discord Server, which we plan to keep opened.

On behalf of the entire Council for Accord's Library, we sincerely thank you for your support and friendship over the years. We hope that you will continue to use the discord, though we understand if this is where we part ways.

From the very bottom of our hearts, we will be forever grateful to everyone who's volunteered their time to help build Accord's Library into what it was. Thank you to all of our Transcribers, Translators, and most of all, all of you for sticking with us.

Take care of yourselves out there. Glory to Mankind.

  • The Accord's Library Council

If anyone is skilled with backing up sites, any assistance would be appreciated. Even if it's just pointing me at the right tool for the job (it's been almost a decade since I backed up part of a site).

Shoutout to !helloharu@lemmy.world for making the original post.

15
1
The Stallman Report (stallman-report.org)

Good data to archive.

16
1

Hi everyone!

I've been using Create-Synchronicity for a few years, and it's been great for my needs. However, it hasn't been updated in a while, and I'm curious if there might be a more current alternative out there.

I'm looking for features like mirror, incremental, and two-way incremental backups, as well as the ability to schedule my backups. Open source is a big plus.

There are plenty of options available, so I thought it would be a good idea to ask you all what you're using and what you would recommend.

Thanks a bunch for your help!

17
1
My dank internet archive stash (archive.hyperreal.coffee)

It's mostly old computer and gaming magazines at this time.

18
1

I tried to put emphasis on the personal nature of the question in the title. I'm not asking for myself or the average individual. I also mean ideal in the way where cost is still a factor. The iPhone 16 Pro has a 1TB model but it's around $500 more expensive than the 128GB version.

I imagine answers are going to vary significantly depending on an individual's approach, like relying on cloud storage, SD cards, or a MagSafe NVMe drive, for example.

I found 1TB (512GB on the phone and on the SD card) was ideal for me. I could keep the things I wanted for "just in case scenarios" like the files needed for the source ports of Diablo, Half Life, and Morrowind in case I want to play a game or some ebooks if I have time to read. I never needed to uninstall applications and shuffle things. I even had plenty of breathing room.

Another question I'd be interested in hearing people's answers to is: what is the minimum storage capacity you would consider for a phone? I don't think I would buy a device with less than 256 GB of potential storage. If it has an SD card slot, that opens up the potential significantly.

19
1

I have a bunch of old VHS tapes that I want to digitize. I have never digitized VHS tapes before. I picked up a generic HDMI capture card, and a generic composite to HDMI converter. Using both of those, I was planning on hooking a VCR up to a computer running OBS. Overall, I'm rather ignorant of the process. The main questions that I currently have are as follows:

  • What are the best practices for reducing the risk of damaging the tapes?
  • Are there any good steps to take to maximize video quality?
  • Is a TBC (time base corrector) required, or can it be done in software after digitization?
  • Should I clean the VCR after every tape?
  • Should I clean every tape before digitization?
  • Should I have a separate VCR for the specific purpose of cleaning tapes?

Please let me know if you have any extra advice or recommendations at all beyond what I have mentioned. Any information at all is a big help.

20
1
LTO-8 Drives (poptalk.scrubbles.tech)

Hey folks, I'm lucky enough to be in one of the few places on the internet that uses LTO.

I'm running LTO-8 tape. My current drive is kind of a dud, so I'm interested in a new (to me) drive: either internal in just an old desktop, or an external drive. I'm not afraid to shell out for it, but I'm nervous to drop $2,000 with just a random eBay seller. Anyone have any reputable sources? I'm in the Seattle area, if anyone knows a local one too.

21
1
submitted 2 months ago by peregus@lemmy.world to c/datahoarder@lemmy.ml

Years ago I came across Filecoin/Sia decentralized data storage and started trying them out, but then I stopped due to lack of time. Some days ago I heard on a podcast about a kind of NAS that does roughly the same thing: it spreads chunks of data across other devices owned by other users.

Is there a service that does this with your own hardware or, even better, something open source where you get X GB as long as you share the same amount of space plus something extra?

It would be great for backup.

22
1
submitted 2 months ago by BlueKey@fedia.io to c/datahoarder@lemmy.ml

I'm entertaining the thought of writing my backups onto tape storage. So my question to this community is: does someone know where (if anywhere) to get cheap and simple used tape drives? My requirements are just that it writes & reads the data and is usable with a Linux machine.

Thanks for any hints.

23
1
24
1
Digitizing notebooks (katiesonger.com)

[The guide isn't mine and I'm not affiliated with it, I'm just sharing a mind-blown moment for me.]

Over the years I have gathered many notebooks that, admittedly, don't all contain very important information and take up a lot of space (possibly a cubic meter or more). But being kind of a (data)hoarder, I don't want to just throw them away. It's work that took years.

My solution: scanning them. My phone has a built-in camera scanner that does a surprisingly good job (it helps that the camera is kinda good too), so I have scanned thousands of pages so far. But the process is slow and takes a lot of manual labor (flipping pages, aligning pages, retaking bad photos, creating PDFs, etc.). A typical notebook (~120 pages) may take me 15 minutes or more.

So I thought that maybe I could speed up the process (partially at least) by either buying a scanner or paying someone to scan them (I don't have a proper scanner yet). Removing the pages without damaging them is a challenge, though. That's where the guide in the link comes in: it turns out it's very easy to remove the spiral spring from the notebooks! I was going to pull the pages out until I found that guide. I suppose it's also very easy to remove the staples from staple-bound notebooks. I might just have "won" many hours of my life with this idea.

The video in the guide that helped me:

https://www.youtube.com/watch?v=lfMUVpwLZGM

(For the record, my Xiaomi 10 phone can scan items by creating ~20 MP images, which translates to typical-to-high resolutions if I scan A4 or A5 pages. Fortunately, many scanners can reach that quality. I just need them not to apply any weird effects or compression to the scanned document.)
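For what it's worth, the ~20 MP figure does work out to a healthy scan resolution on A4. A quick back-of-the-envelope check (assuming the page fills the frame):

```python
import math

A4_W_IN, A4_H_IN = 8.27, 11.69  # A4 dimensions in inches
PIXELS = 20e6                   # ~20 MP phone scan

# Split the pixel budget to match A4's aspect ratio
aspect = A4_W_IN / A4_H_IN
height_px = math.sqrt(PIXELS / aspect)  # long edge, in pixels
dpi = height_px / A4_H_IN               # ~455 dpi, well above the ~300 dpi
                                        # usually considered fine for documents
```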

25
1

Hi everyone,

I'm looking for a more efficient way to save and archive Lemmy comments and posts on my Android phone.
Currently, when I come across a comment I want to keep for future reference, I manually copy the text and link, then paste it into a note in my Obsidian vault. If there's an image or other media in the original post, I save and include that as well.

However, this process feels a bit cumbersome. Ideally, I’d like a way to quickly save or share a comment or post URL and automatically archive the top 20 or so comment chains, along with the original post, including any images, videos, or articles.

Has anyone found a streamlined method for doing this? I often find that by the time I return to check the responses or review the content, the post or article has disappeared. Any tips or tools that could help simplify this process would be greatly appreciated!

Thanks in advance for your suggestions!
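Since Lemmy exposes an HTTP API, a small script could do most of this: GET /api/v3/comment/list with a post_id returns the comment tree, which can then be rendered into a note. A sketch of just the formatting step, where the response field names are my reading of the Lemmy API schema and should be treated as assumptions:

```python
def comment_to_note(item):
    """Render one element of a Lemmy comment-list response as a
    Markdown blockquote for an Obsidian note. Assumes the item has a
    'comment' dict (with 'content' and the federation URL 'ap_id')
    and a 'creator' dict (with 'name')."""
    c = item["comment"]
    author = item["creator"]["name"]
    return f"> {c['content']}\n>\n> -- {author} ({c['ap_id']})\n"
```

A companion fetch step would request the post and its top comment chains, pass each item through this, and append the results to a dated note in the vault, so the local copy survives even if the post later disappears.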


datahoarder

6786 readers

Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread

founded 4 years ago