[-] admin@lm.boing.icu 1 points 1 year ago

Will try this, thanks for the tip

[-] admin@lm.boing.icu 3 points 1 year ago

Thanks, good point. Didn't know about that risk

16
submitted 1 year ago by admin@lm.boing.icu to c/lemmy@lemmy.ml

I'm using my own instance. Is there a way to block posts coming from other instances that are below a certain score threshold, for example -3? I don't need to see them; I trust the moderators of other instances to handle their own posts responsibly. As it is today, half my home screen is full of posts at -40 that might have been deleted on their home instance already, but they get downloaded to me regardless.

[-] admin@lm.boing.icu 1 points 1 year ago

This already had a brim, but it is quite ugly. The thing is, I had the same failure while printing a bigger object, a Stormtrooper helmet: it randomly popped off the bed after about 2.5 hours.

[-] admin@lm.boing.icu 1 points 1 year ago

I have already cleaned the bed with alcohol and a microfiber cloth. Yeah, the brim is supposed to be cylindrical, but it obviously isn't :D. Is the first layer height a modification I should make in Prusa Slicer?

11
submitted 1 year ago by admin@lm.boing.icu to c/3dprinting@lemmy.ml

Hi, I'm new to printing. I got myself a Kobra 2 and it was printing fine when I got it, but recently it started knocking the prints off at random times. I have done several recalibrations and adjusted the Z offset up and down. What I have noticed is that all of these models seem to have a small overflow of material, which I'm guessing the nozzle is bumping into, but I have no idea what causes it. I'll post the latest print that failed, where the extra material is more pronounced than usual. Any suggestions on what I should adjust? (Btw, I'm using Octoprint.)

[-] admin@lm.boing.icu 35 points 1 year ago

It's pretty much a "develop from zero" situation. You can import assets, but will probably have to at least fix them up. If you are lucky, the two engines use the same language, but probably not; for example, Unity uses C# while UE5 uses C++. And then you haven't even gotten to the parts where you actually use the engine. Everything that touches the capabilities of the specific game engine needs to be rewritten. Off the top of my head: interaction, physics engine usage, collision engine usage, AI stuff, etc.

[-] admin@lm.boing.icu 1 points 1 year ago

Here is what I'm using atm. Is there a better way to do this? I'm still learning K8S :)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarr-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 250Mi
---
[....]
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: sonarr-pvc
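In case it helps, a claim like this is normally paired with a volumeMounts entry on the container side; a minimal sketch, assuming a container named sonarr that keeps its config (and the SQLite DB) under /config — the image and paths below are just examples:

[...]
      containers:
      - name: sonarr
        image: lscr.io/linuxserver/sonarr   # example image, use whatever you run
        volumeMounts:
        - name: config                      # must match the volume name above
          mountPath: /config                # where the config and SQLite DB end up
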
[-] admin@lm.boing.icu 1 points 1 year ago

Yeah, I'm using Longhorn. It might be that I have set it up wrong, but it didn't seem to help with the DB corruption issue.

[-] admin@lm.boing.icu 8 points 1 year ago

Basically this. I have my home stuff running in a K3S cluster, and I have had to restore my Sonarr volume several times because the SQLite DB got corrupted. Transitioning to Postgres should solve this issue, and I already have quite a few other things in it, for example Radarr and Prowlarr.
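
For anyone doing the same, here is a rough sketch of what a shared in-cluster Postgres can look like, assuming a Longhorn-backed volume; the names, credentials and size are made up, and in practice the credentials belong in a Secret:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15            # pin whichever version you want to stay on
        env:
        - name: POSTGRES_USER         # example credentials, use a Secret instead
          value: sonarr
        - name: POSTGRES_PASSWORD
          value: changeme
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-pvc

Plus a ClusterIP Service in front of it so the *arr apps can reach it by a stable name.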

36

Thought I would let you all know in case you have missed it. A few days ago, Postgres support was finally merged into the Sonarr dev branch (meaning the 4.x version). I have already transitioned to it, and so far it runs without issues.

You can mostly follow the same instructions as for Radarr from here: https://wiki.servarr.com/radarr/postgres-setup

I used the following temporary docker container to do the conversion (obviously replace the values you need to):

docker run --rm -v /path/to/sonarr.db:/sonarr.db --network=host dimitri/pgloader pgloader --debug --verbose --with "quote identifiers" --with "data only" "sqlite://sonarr.db" "postgresql://user:pwd@DB-IP/sonarr-main"

When it completes the run, it outputs a summary table that shows whether there were any errors. In my case there were 2 tables (can't remember which ones anymore) that couldn't be inserted, so I edited those manually afterwards to match the ones in the original DB.
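
Since everything here runs in K8S anyway, the same conversion can also be run as a one-off Job instead of a local docker run. A rough sketch, reusing the pgloader invocation above; the hostPath, node path and connection string are placeholders:

apiVersion: batch/v1
kind: Job
metadata:
  name: sonarr-pgloader
spec:
  backoffLimit: 0                   # don't retry a half-finished migration
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pgloader
        image: dimitri/pgloader
        command:                    # same flags as the docker run above
        - pgloader
        - --debug
        - --verbose
        - --with
        - quote identifiers
        - --with
        - data only
        - sqlite:///sonarr.db       # absolute path inside the container
        - postgresql://user:pwd@DB-IP/sonarr-main
        volumeMounts:
        - name: sonarr-db
          mountPath: /sonarr.db
      volumes:
      - name: sonarr-db
        hostPath:
          path: /path/on/node/sonarr.db   # placeholder: wherever the sqlite file lives on the node
          type: File

Check the Job logs afterwards for the same summary table.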

[-] admin@lm.boing.icu 4 points 1 year ago

I've got the same Obsidian+Syncthing setup atm, I just haven't really tried to use it for writing yet. Wanted to see what else others use that might trump it :)

42
Software for writing (lm.boing.icu)

For those who do write novels, books, etc.: what software do you use? What format? FOSS or proprietary?

[-] admin@lm.boing.icu 1 points 1 year ago

No error logs; based on the logs, though, it just ignores the config and uses the filesystem. So, predictably, once my small config mount fills up (this was before emptyDir), it starts to error out saying there is no more space on disk. Seemingly this didn't cause any errors for Lemmy, but it still doesn't feel right :)
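
Side note: if you ever want to cap that scratch space instead of letting it grow, emptyDir takes an optional sizeLimit; a minimal sketch (the 1Gi figure is arbitrary):

      volumes:
        - name: root
          emptyDir:
            sizeLimit: 1Gi    # the kubelet evicts the pod if it writes more than this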

4
submitted 1 year ago by admin@lm.boing.icu to c/lemmy@lemmy.ml

Hi all,

I'm having an issue with my self-hosted Lemmy on K8S. No matter what I do, Pictrs doesn't want to use my Minio instance. I even dumped the env variables inside the pod, and those seem to be as described in the documentation. Any ideas?

apiVersion: v1
kind: ConfigMap
metadata:
  name: pictrs-config
  namespace: lemmy
data:
  PICTRS__STORE__TYPE: object_storage
  PICTRS__STORE__ENDPOINT: http://192.168.1.51:9000
  PICTRS__STORE__USE_PATH_STYLE: "true"
  PICTRS__STORE__BUCKET_NAME: pict-rs
  PICTRS__STORE__REGION: minio
  PICTRS__MEDIA__VIDEO_CODEC: vp9
  PICTRS__MEDIA__GIF__MAX_WIDTH: "256"
  PICTRS__MEDIA__GIF__MAX_HEIGHT: "256"
  PICTRS__MEDIA__GIF__MAX_AREA: "65536"
  PICTRS__MEDIA__GIF__MAX_FRAME_COUNT: "400"
---
apiVersion: v1
kind: Secret
metadata:
  name: pictrs-secret
  namespace: lemmy
type: Opaque
stringData: 
  PICTRS__STORE__ACCESS_KEY: SOMEUSERNAME
  PICTRS__STORE__SECRET_KEY: SOMEKEY
  PICTRS__API_KEY: SOMESECRETAPIKEY
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pictrs
  namespace: lemmy
spec:
  selector:
    matchLabels:
      app: pictrs
  template:
    metadata:
      labels:
        app: pictrs
    spec:
      containers:
      - name: pictrs
        image: asonix/pictrs
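        # no tag means the pod pulls :latest, which may not match the version the
        # docs you configured against describe; pinning an explicit tag rules that out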
        envFrom:
        - configMapRef:
            name: pictrs-config
        - secretRef:
            name: pictrs-secret
        volumeMounts:
        - name: root
          mountPath: "/mnt"
      volumes:
        - name: root
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: pictrs-service
  namespace: lemmy
spec:
  selector:
    app: pictrs
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
3
Lemmy on K8S (github.com)

So here are the files I have cobbled together in order to deploy Lemmy on my own cluster at home. I know there are Helm charts in the works, but this might help someone else who cannot wait, just like me :)
