[-] ticoombs@reddthat.com 16 points 2 months ago

This is SSO support as the client, so you could use any backend that supports OAuth (I assume; I haven't looked at it yet).

So you could use a Forgejo instance, immediately making your git hosting instance a social platform, if you wanted.
Or use something self-hostable like Hydra.

Or you can use the social platforms that already exist, such as Google or Microsoft, allowing faster onboarding into the fediverse while passing the issues that come with user creation onto a bigger player who already does verification. All of these features are up to your instance to decide on.
The best part: if you don't agree with what your instance decides on, you can migrate to one whose policy coincides with your values.

Hope that gives you an idea of why this feature is warranted.
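
Since this is standard OAuth/OIDC, the "any backend" point is easy to see in practice: a compliant provider publishes its endpoints at a well-known discovery URL, so an instance can plug in whichever provider it likes. A minimal sketch below (Python, with a hypothetical issuer URL; not Lemmy's actual configuration code):

```python
import requests

# Any OIDC-compliant backend (Forgejo, Hydra, Google, ...) exposes its
# endpoints via the standard discovery document. The issuer URL here is
# a made-up example.
issuer = "https://git.example.com"
config = requests.get(f"{issuer}/.well-known/openid-configuration").json()
print(config["authorization_endpoint"])  # where users get sent to log in
print(config["token_endpoint"])          # where the client swaps codes for tokens
```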


Highly relevant to us (as admins)

[-] ticoombs@reddthat.com 9 points 7 months ago

That awkward moment when you're the person they're talking about, because you run beta in production!

[-] ticoombs@reddthat.com 11 points 8 months ago* (last edited 8 months ago)

Relevant: https://reddthat.com/comment/8316861 (tl;dr: the current centralisation results in a lemmy-verse theoretical maximum of 1 activity per 0.3 seconds, or 200 activities per minute, as the round trip between EU -> AU and back is just under 0.3 seconds.)

Edit: can't math when sleepy
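
A quick sanity check of that number (a sketch of the arithmetic only; nothing here is from the linked comment):

```python
# If every activity must finish a full EU -> AU round trip before the next
# one is sent, throughput is capped by the round-trip time.
rtt_seconds = 0.3                  # EU -> AU and back
max_per_minute = 60 / rtt_seconds  # one activity in flight at a time
print(max_per_minute)              # 200.0 activities per minute
```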

[-] ticoombs@reddthat.com 5 points 8 months ago* (last edited 8 months ago)

We rebuilt the Lemmy container with an extra logging patch. Seems the build docs need some work, as that's the only difference in the past 1-2 days, except for moving to postgres 16...

Thanks for the ping.

I've gone back to mainline Lemmy. @Morpheus@lemmy.today, check now please.

0 points, submitted 8 months ago* (last edited 8 months ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

Edit: this post is now the Lemmy.World federation issue post.

We are now ready to upgrade to postgres 16. When this post is 30 minutes old, the maintenance will start.


Update: pinned a comment in this thread explaining my complete investigation and ideas on how the Lemmy app will & could move forward.

[-] ticoombs@reddthat.com 21 points 1 year ago

It's a sad day when something like this happens. Unfortunately, with how Lemmy's All feed works, it's possible a huge amount of the initial downvotes are regular people not wanting to see the content, as downvotes are federated. This was part of my original reasoning for disabling downvotes when I started my instance. We had the same gripes people are voicing here, and it probably contributed to a lack of growth potential for Reddthat.

There needs to be work done not only on flairs, which I like the idea of, but on a curated All/Frontpage (per-instance). Too many times I see people unable to find communities or new content that piques their interest. Having to "wade through" All-New to find content might contribute to the current detriment: instead of the general niche they might want to enjoy, they are bombarded with things they dislike.

It's a tough problem to solve in a federated space. Hell... we can't even get every instance to update to 0.18.5 so that federated moderation actions happen. If we can't all decide on a common Lemmy instance version, I doubt we can ask our users to give up the tools at their disposal (up/down/report).

Keep on Keeping on!

Tiff - A fellow admin.

[-] ticoombs@reddthat.com 5 points 1 year ago

We (reddthat) would welcome your community. 😉

[-] ticoombs@reddthat.com 4 points 1 year ago* (last edited 1 year ago)

Hello,

Please see the rules of reddthat here: https://reddthat.com/post/9701

  • No racism or other discrimination
  • No endorsement of hate speech

As such, you have broken our rules, specifically by being racist. You have remained civil and I thank you for that. Normally I would have banned the account in question by now, but I have decided not to unless others end up reporting your comments, which I see as unlikely, as this has become a worthwhile discussion.

What has your post in the abandonedporn community got to do with this? Nothing. This is an off-topic conversation which you started with zero context, and text with zero context can be read with a myriad of inflections even when the best intentions are meant.
If you had started off saying you knew the area or lived down the road, that would have had some credibility, and if you had then tied that into your current feelings about the issue in a way that acknowledged it was your own opinion, we wouldn't have had a problem. How you have put it now is amazing; I wish all comments were as thoughtful as this. But you chose to post in a community that has next to nothing to do with Russia to voice your dislike.

Hatred towards any country is not welcome on Reddthat. I understand your feelings and they are worthwhile.

I look forward to seeing you in a different community, maybe a political community to have a discussion.

Thank you

Reddthat Admin Team

[-] ticoombs@reddthat.com 4 points 1 year ago

Please do not bring your hatred of a few against a whole population. That type of hatred is against Reddthat's rules.

Please refrain from future comments like that.

[-] ticoombs@reddthat.com 12 points 1 year ago

If you don't see Create Community at the top next to Create Post, then your home server doesn't allow users to create communities.

No. You have to have an account on that server. (And have to use that account regularly as well, otherwise you won't see reports about your community)

You make posts.

[-] ticoombs@reddthat.com 23 points 1 year ago

Don't forget `&amp;` in community names and sidebars.

Constantly getting trolled by `&amp;`
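
For anyone wondering why `&amp;` is such a reliable troll, here is a minimal sketch of the double-escaping involved (my reconstruction of the failure mode, not code from Lemmy itself):

```python
import html

# An ampersand stored already-escaped as "&amp;" gets escaped again on the
# next render, and the entity itself becomes visible text.
name = "Tech & Gadgets"
once = html.escape(name)   # 'Tech &amp; Gadgets'
twice = html.escape(once)  # 'Tech &amp;amp; Gadgets'  <- the troll
print(once)
print(twice)
```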

180 points, submitted 1 year ago by ticoombs@reddthat.com to c/memes@lemmy.ml

Click here to go see the bonus panel! Hovertext: What's really irritating is the way some of them live lives of quiet desperation which is a thing our people have been doing for generations. Today's News:

Thanks everyone. Some day I will stop being surprised by how willing my readers are to go in a totally new direction with me on an almost yearly basis. I hope you feel your money is well-spent. Zach


It seems the moderator who originally created this community is no longer around.

If you would like to moderate (and have posted / been active in this community), please comment on this post asking to become the moderator!

Cheers

Tiff

1 point, submitted 1 year ago* (last edited 1 year ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

This is terribly hard to write. If you flushed your cache right now, you would see all the newest posts without images. These are now 404s, even though the images exist. In 2 hours everyone will see this. Unfortunately there is no going back and no recovering the key store for all the "new" images.

What happened?

After the picture migration from our local file store to our object storage, I made a configuration change so that our Docker container no longer referenced the internal file store. This resulted in the picture service having an internal database that was completely empty and started from scratch 😔

What makes this worse is that this database was inside the ephemeral container. When the containers are recreated, that data is lost. This happened multiple times over the 2-day period.

What made this harder to debug was our CDN caching was hiding the issues, as we had a long cache time to reduce the load on our server.

The good news is that after you read this post, every picture will be correctly uploaded and added to the internal picture service database! 😊 The "better" news is that all original images from the 28th of June and before will start working again instantly.

Timeframe

The issue existed from the 29th of June to the 1st of July.

Resolution

Right now. 1st of July 8:48 am UTC.
From now on, everything will work as expected.

Going forward

Our picture service migration has been fraught with issues, and I cannot express how annoyed and disheartened I am by the accidents that have occurred. I have yet to provide a service that I would be happy with.

I am very sorry that this happened and I will strive to do better! I hope you all can accept this apology.

Tiff

1 point, submitted 1 year ago* (last edited 1 year ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

Welcome to everyone joining up so far!

Housekeeping & Rules

Reddthat's rules are defined here (& are available from the sidebar as well). When in doubt, check the sidebar on any community!

Our rules can be summed up in 4 points, but please make sure you read them in full.

  • Remember that we are all humans
  • Don’t be overtly aggressive towards anyone
  • Try and share ideas, thoughts and criticisms in a constructive way
  • Tag any NSFW posts as such

Funding & Future

How we pay for all the services is explained in our main funding and donation post. We are 100% community funded, and completely open about our bills and expenses.
The post may need some updating over the course of the next day or two as we deal with an increased user base.
If you enjoy it here and want to help us keep the lights on and improve our services, please help us out. Any amount of dollarydoos is welcome!

I'm lost and completely new to the fediverse!

That's fine, we were all there too! Our fediverse friends created a starting guide here, which can be read to better understand how everything works.

As part of the fediverse you can follow or block any community you want, as well as follow and interact with any other person on any other server. That means even if you did not choose Reddthat as your home server, you could still have interacted with us regardless!
So I'd like to say thanks for picking our server to come and experience the fediverse on.

Reddthat Communities:

Other Communities

Communities can be viewed by clicking the Communities section at the top (or this link). You will be greeted by 3 options: Subscription, Local, All.
They are pretty self-explanatory, but for a quick definition:

  • Subscription: Communities that you personally are subscribed to (these can exist anywhere on the fediverse, on any server)
  • Local: Communities that were created on Reddthat
  • All: Every community that Reddthat knows about

All is not every community on the whole fediverse; it is only the ones we know about, i.e. the ones we have "federated" with.
There is the fediverse browser, which attempts to list every community on the fediverse.
Just because a community exists elsewhere doesn't mean you have to join it. You can create your own! We welcome you to create any community on Reddthat that your heart desires.

Good luck on your adventures and welcome to Reddthat. Cheers,
Tiff

PS. Want to see the signup stats?

Our signups have gone 🚀 through the roof! From 1-2 per hour to touching 90! Tell your friends about Reddthat, and let's break 100!

      Hour (UTC)  Signups
      18:00             2
      19:00            18
      20:00             7
      21:00            20
      22:00            27
      23:00            76
      00:00            90
      01:00            84
      02:00            62
      03:00            47
      04:00            32
      05:00            28
      06:00            44
      07:00            43
      08:00            43
      09:00            38
      10:00             5
      11:00            42
      12:00            53
      13:00            49
      14:00            48
      15:00            50
      16:00            16
      17:00            24
      18:00            62
      19:00            40
      20:00            41
      21:00            20
      22:00            16
      23:00            18

Total: 1145!

1 point, submitted 1 year ago* (last edited 1 year ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

So, for those of you who were refreshing and looking at our wonderful maintenance page: it took way longer than we planned! I'll do a full write-up after I've dealt with a couple of timeout issues.

Here is a bonus meme.

So? How'd it go...

Exactly how we wanted it to go... except with a HUGE timeframe.
As part of the initial testing with object storage I tested using a backup of our files. I validated that the files were synced, and that our image service could retrieve them while on the object store.

What I did not account for was the latency to Backblaze from Australia, how our image service handled migrations, and the response times from Backblaze.

  • au-east -> us-west is about 150 to 160ms.
  • the image service was single threaded
  • response times to adding files are around 700ms to 1500ms (inclusive of latency)

We had 43,000 files totalling ~15GB of data relating to images. If each response takes up to 1.5 seconds per image, and we are only operating on one image at a time, yep: even a best-case scenario at an average of 1 second per image is 43,000 seconds, or just under 12 hours of transfer time.

The total migration took around 19 hours, as seen by our pretty transfer graph.
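
The back-of-envelope maths above is easy to reproduce (a sketch using the post's own numbers; nothing measured here):

```python
# Sequential migration estimate: one image at a time, latency-bound.
files = 43_000
best_case = files * 1.0 / 3600   # ~11.9 hours at 1.0 s per image
worst_case = files * 1.5 / 3600  # ~17.9 hours at 1.5 s per image
print(f"best case:  {best_case:.1f} h")
print(f"worst case: {worst_case:.1f} h")
# The worst case lines up with the ~19 hours the migration actually took.
```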

So, not good, but we are okay now?

That was the final migration we will need to do for the foreseeable future. We have enough storage to last over 1 year of current database growth, with the option to purchase more storage on a yearly basis.
I would really like to purchase a dedicated server before that happens, and if we continue having more and more amazing people join our monthly donations on our Reddthat Open Collective, I believe that can happen.

Closing thoughts

I would like to take this opportunity to apologise for this miscalculation of downtime, as well as for not fully understanding the operational requirements of our object storage usage.
I may have also been quite vocal on the Lemmy Admin Matrix channel regarding the lack of a multi-threaded option for our image service. I hope my sleep-deprived ramblings were coherent enough to not rub anyone the wrong way.
A big final thank you to everyone who is still here, posting, commenting and enjoying our little community. Seeing our community thrive gives me great hope for our future.

As always. Cheers,
Tiff

PS. Our bot defence in our last post was unfortunately not acting as we hoped it would, and it didn't protect us from a bot wave. So I've turned registration applications back on for the moment.

PPS. I see the people on reddit talking about Reddthat. You rockstars!


Edit:

Instability and occasional timeouts

There seems to be a memory leak with Lemmy v0.18 and v0.18.1, which some other admins have reported as well, and it has been plaguing us. Our server would be running completely fine, and then BAM, we'd be using more memory than available and Lemmy would restart. These restarts lasted about 5-15 seconds; if you hit one, it would have meant super long page loads or your mobile client saying "network error".

Temporary Solution: Buy more RAM.
We now have double the amount of memory courtesy of our open collective contributors, and our friendly VPS host.

In the time I have been making this edit I have already seen it survive a memory spike, without crashing. So I'd count that as a win!

Picture Issues

This leaves us with the picture issues. It seems the picture migration had an error: a few of the pictures never made it across, or the internal database was corrupted! Unfortunately there is no going back, and those images... were lost or in limbo.

If you see something like below make sure you let the community/user know:

Also, if you have uploaded a profile picture or background, you can check to make sure it is still there!

<3 Tiff

1 point, submitted 1 year ago* (last edited 1 year ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

Hello Reddthat! It is I, the person at the top of your feed.

First off. Welcome all new users! Thank you for signing up and joining our Reddthat community.

Bot Defence

After going through 90+ registration applications since yesterday, I couldn't do it anymore. I felt compelled to give people a great experience by letting them in instantly, so I kept my phone on me for over 24 hours, checking every notification to see if it was another registration application I needed to quickly accept.

I want to quickly say thank you to the people who obviously read all the information; and for those that didn't: I'm keeping a close eye on you... 😛
I found a better solution to our signup problems.

As we use Cloudflare for our CDN, I have turned on their security system for the signup page. ~~Now when anyone goes to the signup page, they will be given a challenge that needs to be solved. That means any bots that cannot pass Cloudflare's automated challenge cannot sign up. A win until we get our captcha back working.~~ Well, I did not check the signup process correctly. It doesn't act as I thought it would, so I'll disable it after the migration.

Downtime / Migration to object storage

Today in the fediverse, we have successfully confirmed that object storage will be an acceptable path forward, but it will not operate as initially hoped.
I initially hoped to offload everything via our CDN, but the data still needs to go through our app server. The silver lining is that we can still cache it heavily on our CDN to ensure that the pictures will be served as fast as possible for you.

So it may be slightly pricier than we initially planned for when moving to object storage, but in the end we still benefit, functionally and monetarily. The difference is that we were not expecting to be billed for egress (fetching/displaying images), whereas now we will be. The fees are very low and should still be covered by our wonderful monthly donators.

We have about 15-20GB of data that needs to be moved, and unfortunately our image service is incapable of running while the migration happens, which means we need to turn it off for the duration. To top it all off, we have... 43,000+ (and counting) small image files. If you haven't worked with large swarms of small files before, the one thing I can tell you is that transferring them sucks.

So we can do two things:

1. Turn off everything

  • Dedicate all CPU and bandwidth to the migration
  • Ensuring continuity and reducing the risk of something going wrong

2. Turn off the picture service

We can run Reddthat without the picture service & uploads while we perform the migration, but the migration will have an impact on server performance.

  • This will amount to having any picture we host (that isn't cached) return a 404.
  • Any uploads will timeout during that period, and return an error popup.
  • Pages will be slightly slower to respond.
  • Something else might break 🤷

Because of the risks associated with running only half our services, I've decided to continue with our planned downtime and go with option 1, turning off everything while we perform the migration.

Date: 28th June (UTC)

  • Start Time: 0:05
  • End Time: 6:00 (Expected)

It will probably take the whole 6 hours. In our testing it did 150 items in 10 minutes... I will put up a maintenance page and keep you all updated during that time frame, especially if it is going to take longer, but unfortunately it will take however long it takes.

This will be the last announcement until we do the migration.
Cheers,
Tiff

PS. Like what we are doing? Become a contributor on our Open Collective to help finance these changes!

1 point, submitted 1 year ago* (last edited 1 year ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

Hello Everyone!

Let's start with the good: we recently hit over 700 registered accounts! Hello! I hope you are doing well on your fediverse journey!

We have also hit... (15 when I posted this) total individual contributors! Every time I see an email saying we have another contributor, it makes me feel all warm and fuzzy inside! The simple fact that together we are making something special here really touches my soul.

If you feel like joining us and keeping the server online and filled to the brim with coffee, you can see our open collective here: Reddthat Open Collective.

0.18 (& Downtime?)

As I said in my community post (over here!), I wanted to wait for 0.18.1 to come out so I would not have to fight off a wave of bots now that there is no longer a captcha, & I didn't want to enable registration applications because, in my opinion, that just ruins the whole onboarding experience.

So, where does that leave us?

I say screw those bots! We are going to use 0.18.0 and we are going to rocket our way to 0.18.1 ASAP. 🚀
And... it's already deployed! Enjoy the new UI and the lack of annoyances in the Jerboa application! If you are getting any weird UI artifacts, hold Control (or Command) and press the refresh button, as it is a browser cache issue.

~~I'm going to keep the signup process the same but monitor it to the point of helicopter parenting. If we get hit by bots, we'll have to turn on registration applications, which I'm really hoping we won't have to. So anyone out there... let's just be friends, okay?~~

Well... it looks like we cannot be friends! Registration applications are now turned on. Sorry everyone, I guess we can't have nice things!

Weren't we going to wait?

Moving to 0.18 was actually forced by me (Tiff), as I upgraded the Lemmy app running on our production server from 0.17.4 to 0.18.0. This update caused the migrations to be performed against the database. The most recent backup I had at the time of the unplanned upgrade was from about an hour before, so rolling the database back was certainly not a viable option, as I didn't want to lose an hour's worth of your hard-typed comments & posts.

The mistake, or "root cause", was an environment variable that was set in my deploy scripts. I utilise environment variables in our deployments to ensure deployments can be replicated across our "dev" server (my local machine) and our "prod" server (the one you are reading this post on now!).
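
As a hypothetical illustration of how that class of mistake bites (invented variable name and defaults; not Reddthat's actual deploy scripts): if the image tag comes from the environment, a value still exported in a dev shell silently changes what prod rolls out.

```python
import os

# A stale LEMMY_VERSION left exported in a dev shell overrides the intended
# default, and the deploy quietly pulls a newer image on prod.
tag = os.environ.get("LEMMY_VERSION", "0.17.4")
print(f"docker pull dessalines/lemmy:{tag}")  # dev shell had 0.18.0 exported
```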

This has been fixed up, and I'm very sorry for pushing the latest version when I said we were going to wait. I am also going to investigate how we can better roll back database migrations if we need to in the future.

Pictures (& Videos)

The reason I was testing our deployments was to fix our picture service (called pictrs). As I've said before (in a non-announcement way), we are slowly using more and more space on our server to store all your fancy pictures, as well as all the pictures that we federate against. If we want to ensure stability and future expansion, we need to migrate our pictures from the server's local disk to object storage. The latest version of pictrs now has that capability, and it also has the capability of hosting videos!
Now, before you go and start uploading videos, there are limits! We decided to limit videos to 400 frames, which is about 15 seconds worth of video. This is due to video file sizes being huge compared to pictures, as well as the associated bandwidth that comes with video content sharing. There is a reason there are not hundreds of video sharing sites.

Object Storage Migration

I would like to thank the 5 people who have donated on a monthly recurring basis, because without knowing there is a constant income stream, using a CDN and object storage would not be feasible.

Over the next week I will test the migration from our filesystem to a couple of object storage hosting companies and ensure they can scale up with us, Backblaze being our first choice.

Maintenance Window

  • Date: 28th of June
  • Start Time: 00:05 UTC
  • End Time: 02:00 UTC
  • Expected Downtime: the full 2 hours!

If all goes well with our testing, I plan to perform the migration on the 28th of June around 00:05 UTC. We currently have just under 15GB of images, so I expect it to take at most 1 hour, with the actual downtime closer to 30-40 minutes, but knowing my luck it will be the whole hour.

Closing

Make sure you follow !community@reddthat.com for any extra non-official-official posts, or if you just want to talk about what you've been doing on your weekend!

Something I cannot say enough is thank you all for choosing Reddthat for your fediverse journey!

I hope you had a pleasant weekend, and here's to another great week!
Thanks all!
Tiff

1 point, submitted 1 year ago* (last edited 1 year ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

After updating to the new pictrs container, all image uploads are (randomly?) failing.

~~We are looking into it.~~

https://github.com/LemmyNet/lemmy-ansible/pull/90 broke it as they updated the pictrs container from 0.3.1 to 0.4.0-rc.7

The 0.4.0-rc.7 release worked intermittently for small images, and even videos! (Video support is the next change coming as part of the 0.4.x versions.)

I've rolled back to v0.3.3 and it's back up and working. See the comments for a gif that gets converted to an mp4, a jpg, and a png.

Enjoy your weekends everyone!

1 point, submitted 1 year ago* (last edited 1 year ago) by ticoombs@reddthat.com to c/reddthat@reddthat.com

Hello! It seems you have made it to our donation post.

Thank you

We created Reddthat for the purpose of creating a community that can be run by the community. We did not want it to be something that is moderated, administered and funded by only 1 or 2 people.

Current Recurring Donators:

Current Total Amazing People:

Background

In one of our very first posts, titled "Welcome one and all", we talked about our short-term and long-term goals.
In the 7 days since starting, we have already federated with over 700 different instances, have 24 different communities, and over 250 users who have contributed over 550 comments. So I think we've definitely achieved our short-term goals, and I thought this was going to take closer to 3 months to get these types of numbers!

First off, we would like to thank everyone for being here, subscribing to our communities, and calling Reddthat home!

Funding

Open Collective is a service which allows us to take donations, be completely transparent when it comes to expenses and our budget, and gives us some idea of how we are tracking month-to-month.

Servers are not free, databases are growing, images are uploading and the servers are running smoothly.

This post has been edited to only include relevant information about total funds and "future" plans. Because sometimes, we reach the future we were planning for!

Current Plans:

Database

The database service is another story. The database has grown to 1.8GB (on the file system) in 7 days. Some quick maths makes that 94GB in 1 year's time. Our current host allows us to add on 100GB of SSD storage for $100/year, which is very viable and will allow us to keep costs down while planning for our future.
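
The quick maths, spelled out (a sketch using the same inputs as above):

```python
# Linear projection of database growth from the first week.
growth_gb_per_week = 1.8
per_year = growth_gb_per_week / 7 * 365
print(f"{per_year:.0f} GB")  # ~94 GB in a year, hence the 100GB add-on
```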

Annual Costings:

Our current costs are:

  • Domain: €15
  • Server: $118.80 AUD
  • Server RAM upgrade: $54.50 AUD
  • Server: $528.00 AUD
  • Wasabi object storage: $72 USD
  • Total: ~$830 AUD per year (~$70/month)

That's our goal. That is the number we need to achieve in funding to keep us going for another year.

Cheers
Tiff

PS. Thank you to our donators! Your names will forever be remembered by me:

  • Guest x6
  • Dave_r
  • MentallyExhausted
  • Stimmed
  • pleasestopasking
  • RIPSync
  • Incognito
  • Siliyon
  • Ed
  • Alexander
  • hexed
  • muffin
