[-] projectmoon@lemm.ee 23 points 3 weeks ago

https://agnos.is/posts/tech-recruitment-is-out-of-control.html

This was my experience at the beginning of 2024. It was bad enough that I had to write a blog post about it.

[-] projectmoon@lemm.ee 23 points 1 month ago

Not necessarily. While in many cases open source is a volunteer effort, there's usually some implicit transaction going on. Whether that's improving the software for yourself and passing that improvement on to others, being a business that improves a library it depends on to generate revenue, or even a straight-up commercial transaction.

But in all these cases, the open source project can be taken by you (or others) and you can do whatever you want with it. In the case of Winamp here, you cannot do any of that. It would be different if they were paying for contributions. But they're not, so.

[-] projectmoon@lemm.ee 48 points 1 month ago* (last edited 1 month ago)

They basically want free labor.

[-] projectmoon@lemm.ee 23 points 1 month ago

That is exactly the plan.

[-] projectmoon@lemm.ee 53 points 4 months ago

Depends on the continuity and who's writing it, but often yes. He was notably portrayed this way in the Justice League cartoon.


Current situation: I've got a desktop with 16 GB of DDR4 RAM, a 1st gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB VRAM. I can run 7 - 13b models extremely quickly using ollama with ROCm (19+ tokens/sec). I can run Beyonder 4x7b Q6 at around 3 tokens/second.

I want to get to a point where I can run Mixtral 8x7b at Q4 quant at an acceptable token speed (5+/sec). I can run Mixtral Q3 quant at about 2 to 3 tokens per second. Q4 takes an hour to load, and assuming I don't run out of memory, it also runs at about 2 tokens per second.

What's the easiest/cheapest way to get my system to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I upgrade the CPU?

As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. Not sure if ollama can split across different brand GPUs yet, but I know this capability is in llama.cpp now.

Thanks for any pointers!

[-] projectmoon@lemm.ee 41 points 9 months ago

The fork was originally created because upstream NewPipe elected not to include sponsor block functionality.

[-] projectmoon@lemm.ee 29 points 10 months ago

Depends on the language. There is no explicit typing in JavaScript, for example. That's why TypeScript was invented.

5 points, submitted 1 year ago by projectmoon@lemm.ee to c/meta@lemm.ee

Not sure if this has been asked before. I tried searching and couldn't find anything. I have an issue where any pictures from startrek.website do not show up on the homepage. It seems to only affect startrek.website. Going to the link directly loads the image just fine. Is this something wrong with lemm.ee?

[-] projectmoon@lemm.ee 28 points 1 year ago

It used to be open source, then it went completely closed. As mentioned, Organic Maps is the fork that is the continuation of the GPL app.

[-] projectmoon@lemm.ee 53 points 1 year ago

I think "complex" refers to the various dark patterns used by Windows and Mac/iOS to scare and/or pressure users who know little about computers into sticking with the default browsers.

[-] projectmoon@lemm.ee 29 points 1 year ago

You should probably add what license the icon will be under, if it's submitted to the project. Creative Commons? GPL?

[-] projectmoon@lemm.ee 22 points 1 year ago

Unfortunately, it doesn't look like that BBC experiment is going well. They've barely posted anything, relative to what they could post. They should set up their systems to auto-post to Mastodon whenever they post to Twitter or wherever else.

[-] projectmoon@lemm.ee 28 points 1 year ago

Am I missing something? Or is the link to this tool not actually present in the post? I only see a screenshot.

