[-] Zeth0s@reddthat.com 4 points 1 year ago* (last edited 1 year ago)

It requires continuous, expensive improvements. It is like the real world: building a system that is robust to fraud works in the short term, but in the mid and long term it is impossible. That is why laws change and evolve, why we have governments, and so on. The system reacts to your rules and algorithms, making them less effective.

And these continuous, expensive improvements are done daily, but it is a difficult job

[-] Zeth0s@reddthat.com 4 points 1 year ago* (last edited 1 year ago)

It is not at the moment. Models are built on the assumption of stability, i.e. that what they are modelling doesn't change over time and doesn't evolve. This is clearly untrue, and cheating is one way the environment evolves. The only way to account for that is an online, continual learning algorithm. This exists today and is called reinforcement learning. The main issue is that methods to handle an evolving environment are still under active research, in the sense that methods to address this issue are not yet available.

It is an extremely difficult task tbf
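
As a rough illustration of the online, continual learning idea above, here is a minimal sketch (purely illustrative, not an actual anti-cheat system) that keeps updating a classifier while the data distribution drifts underneath it:

```python
# Toy online learning on a non-stationary ("drifting") stream.
# A batch-trained model assumes the distribution is stable; here the model
# is updated incrementally as the environment changes.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()

def make_batch(step, n=200):
    # The decision boundary slowly rotates over time: this is the "drift"
    # (think of cheaters adapting to whatever rules are currently enforced).
    angle = 0.01 * step
    X = rng.normal(size=(n, 2))
    w = np.array([np.cos(angle), np.sin(angle)])
    y = (X @ w > 0).astype(int)
    return X, y

for step in range(500):
    X, y = make_batch(step)
    if step > 0 and step % 100 == 0:
        print(f"step {step:3d}  accuracy on fresh data: {clf.score(X, y):.2f}")
    # Incremental update: the model keeps adapting instead of being frozen.
    clf.partial_fit(X, y, classes=[0, 1])
```

A model trained once at step 0 and then frozen would slowly degrade as the boundary rotates; the incremental updates are what keep it usable.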

[-] Zeth0s@reddthat.com 4 points 1 year ago

Do you have examples? It should only happen in the case of overfitting, i.e. too many identical images of the same subject

[-] Zeth0s@reddthat.com 8 points 1 year ago

Can anyone access All and Popular? I was curious to see the reactions, but they are unreachable

[-] Zeth0s@reddthat.com 7 points 1 year ago* (last edited 1 year ago)

I believe he is talking about secure boot

https://wiki.debian.org/SecureBoot

[-] Zeth0s@reddthat.com 3 points 1 year ago

The fact that it is work that I don't want to do... Pretty much

[-] Zeth0s@reddthat.com 6 points 1 year ago* (last edited 1 year ago)

Why would they accept PRs at all if they don't have a robust testing process and approvals are dictated by customer needs?

The message to potential contributors, as it stands now, is that their contribution is not welcome unless it's free labor that financially benefits only IBM.

Which is fair, but the message itself is a PR problem of its own for Red Hat

[-] Zeth0s@reddthat.com 6 points 1 year ago* (last edited 1 year ago)

The problem with current LLM implementations is that they learn from scratch, like taking a baby to a library and telling them, "learn, I'll wait out in the cafeteria".

You need a lot of data to do that, just to learn how to write (grammar, styles, concepts, relationships) without any guidance.

This strategy might change in the future, but the only solution we have now is to refine the model afterward, let's say.
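
As a sketch of what "refine the model afterward" can mean in practice, here is a minimal fine-tuning loop with PyTorch and Hugging Face transformers; the model name and the tiny dataset are placeholders, not anything from the original discussion:

```python
# Minimal fine-tuning sketch: start from a pretrained LM (which already
# "read the library") and refine it afterward on a small, curated set of examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any small causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny made-up dataset standing in for whatever behaviour you want to reinforce.
texts = [
    "Q: Is the claim supported by the source? A: No, the source says the opposite.",
    "Q: Summarise neutrally. A: The study reports mixed results across both groups.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for text in texts:
        batch = tokenizer(text, return_tensors="pt")
        # For causal LMs, passing labels = input_ids gives the next-token loss directly.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"epoch {epoch}: loss {outputs.loss.item():.3f}")
```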

Tbf, biases are an integral part of literature and human artistic production. Eliminating biases means having "boring" texts. Which is fine by me, but a lot of people will complain that AI is dumb and boring

[-] Zeth0s@reddthat.com 4 points 1 year ago

Buy better pasta! I'd suggest Rummo or De Cecco; they are good and easy to find outside Italy

[-] Zeth0s@reddthat.com 6 points 1 year ago

Here it is, all for you https://open-assistant.io/

You also get a useless leader board to replace reddit karma!

It is, in any case, a legitimate initiative to support open-source LLMs

[-] Zeth0s@reddthat.com 6 points 1 year ago

What site are you talking about exactly?

[-] Zeth0s@reddthat.com 7 points 1 year ago* (last edited 1 year ago)

Lemmy currently lacks a feature to sync accounts across servers, meaning that moving to another instance loses all your subscriptions and messages.

The real solution would be a distributed network underpinning federation, rather than a plain federated one, i.e. an automated redistribution of users and load across servers (Lemmy instances).

I don't know how they are planning to manage it in the long run
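
To illustrate what an automated redistribution of users and load could look like in principle (this is not how Lemmy works today; all instance names are hypothetical), here is a toy consistent-hashing sketch:

```python
# Toy consistent-hashing ring: assigns users to instances so that adding or
# removing an instance only moves a small fraction of users.
# Purely illustrative, not Lemmy's actual design.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, instances, replicas=100):
        # Each instance gets several virtual nodes to spread load more evenly.
        self._points = sorted(
            (_hash(f"{inst}#{i}"), inst) for inst in instances for i in range(replicas)
        )
        self._keys = [p for p, _ in self._points]

    def instance_for(self, user: str) -> str:
        idx = bisect.bisect(self._keys, _hash(user)) % len(self._points)
        return self._points[idx][1]

# Hypothetical instance names, just for the demo.
ring = Ring(["lemmy-a.example", "lemmy-b.example", "lemmy-c.example"])
for user in ["alice", "bob", "carol", "dave"]:
    print(user, "->", ring.instance_for(user))
```

The point of consistent hashing is that adding or removing an instance only remaps a small fraction of users, which is the property you would want for automated rebalancing.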

