[-] Oskar@piefed.social 0 points 2 weeks ago

The FB export files have everything (your posts, comments, reactions, etc.) as HTML files plus images/videos.

Getting the content out of those files into pieces that can be uploaded somewhere else will take some work: either with programming, if OP has the knowledge, or by copy-paste.

[-] Oskar@piefed.social 1 points 2 weeks ago

Funny, I've mostly seen complaints that Mastodon is "empty".

Some tips:

  • Use high-volume hashtags for discovery, not for following.
  • For discovery, prefer niche hashtags (e.g. #photography -> #birdphoto).
  • Check which accounts use the high-volume hashtags, then follow or mute them.
  • Mute/filter "spammers" and "spammy" hashtags.
  • Ditch FOMO and grow your feed slowly.
  • Look at accounts boosted by the accounts you follow; they may be interesting for you too.
[-] Oskar@piefed.social 3 points 2 weeks ago

Pixelfed has been around for a couple of years; I created an account in the summer of 2022. But it didn't become popular until there was an app. A web app that does the same isn't enough: if it isn't in the App Store/Play Store, it doesn't exist for most people (which is something all Fediverse devs must think about).

Anyway, plenty of information about the development pace was available for anyone who wanted to make an informed decision before sending their money somewhere. Apparently >2000 people thought it was worth it.

[-] Oskar@piefed.social 4 points 3 weeks ago

Interesting, I had missed that there are "non-official" models that can be used with Ollama just like the official ones. e.g. https://ollama.com/huihui_ai/deephermes3-abliterated

And it gave a good explanation of my "litmus test" code snippet.

[-] Oskar@piefed.social 3 points 3 weeks ago

Ollama's Python API works well and there are a lot of examples. However, I haven't gotten the Ollama REST API to work; the response doesn't unpack into JSON for me.
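In case it's the same thing others run into: Ollama's /api/generate endpoint streams newline-delimited JSON by default, so calling a JSON parser on the whole body fails. Either send "stream": false in the request, or parse the body line by line. A minimal sketch of the line-by-line approach:

```python
import json

def parse_ndjson_response(body: str) -> str:
    """Ollama streams one JSON object per line by default, so
    json.loads() on the whole body raises. Parse each line and
    concatenate the 'response' fields instead.

    Alternatively, include "stream": false in the request payload
    to get a single JSON object back."""
    parts = []
    for line in body.splitlines():
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
    return "".join(parts)
```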

Just design the rest of your system so that it doesn't have to know anything about the implementation (only prompts and responses), and you should be able to easily replace the LLM part later.
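A minimal sketch of that kind of seam in Python (the names LLMBackend and complete are made up for illustration, not from any library):

```python
from typing import Protocol

class LLMBackend(Protocol):
    """The rest of the system only ever sees prompt in, text out."""
    def complete(self, prompt: str) -> str: ...

class EchoBackend:
    """Stand-in backend for tests; a real one would wrap Ollama,
    an API client, or whatever you swap in later."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(text: str, llm: LLMBackend) -> str:
    """Application code depends only on the interface, so the
    model/provider can change without touching this function."""
    return llm.complete(f"Summarize: {text}")
```

Swapping the LLM then means writing one new class with a `complete` method, and nothing else changes.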

[-] Oskar@piefed.social 4 points 4 weeks ago* (last edited 4 weeks ago)

If Bluesky becomes federated across multiple instances, it will be just as impossible to get into as ActivityPub-based services apparently are, since instance selection is a blocker.

Right?

;)

[-] Oskar@piefed.social 1 points 1 month ago

Interesting, lots of "bang for the buck". I'll check it out

[-] Oskar@piefed.social 1 points 1 month ago

Of course, I haven't looked at models >9B so far. So I have to decide whether I want to run larger models quickly, or even larger models quickly-but-not-as-quickly-as-on-a-Mac-Studio.

Or I could just spend the money on API credits :D

10
Mac Studio 2025 (piefed.social)

Thinking about a new Mac: my MBP M1 2020 with 16 GB can only handle about 8B models, and it's slow.

Since I looked it up, I might as well share the LLM-related specs:

Memory bandwidth:
  M4 Pro (Mac Mini): 273 GB/s
  M4 Max (Mac Studio): 410 GB/s

Cores (CPU / GPU):
  M4 Pro: 14 / 20
  M4 Max: 16 / 40

Cores and memory bandwidth are of course important, but with the Mini I could have 64 GB of RAM instead of 36 (within my budget, which is fixed for tax reasons).

Feels like the Mini with more memory would be better. What do you think?
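A rough way to weigh the bandwidth numbers, assuming the common rule of thumb that token generation is memory-bound (the 40 GB model size below is just an illustrative figure for a large quantized model, not a measurement):

```python
def rough_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Very rough decode-speed ceiling: each generated token streams
    the whole model's weights through memory once, so tokens/s is
    bounded by bandwidth / model size. Ignores compute, KV cache and
    overhead, so real-world numbers are lower."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical ~40 GB quantized model on the two chips above:
for name, bw in [("M4 Pro", 273), ("M4 Max", 410)]:
    print(f"{name}: ~{rough_tokens_per_sec(bw, 40):.1f} tok/s upper bound")
```

By this estimate the Max is roughly 1.5x faster per token, but only the 64 GB Mini fits models bigger than ~40 GB at all, which is the trade-off in a nutshell.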

[-] Oskar@piefed.social 6 points 2 months ago
