Speaking as a (currently) syncretic Luciferian, I'd say esoteric and occult books/grimoires as well. Everything deemed "demonic" by Christianity should be safely archived.
Basically, as far as I've read, different kinds of transformers serve different purposes:

- A decoder-only model is used for generative word output (auto-completion): the model receives an input prompt and outputs generated words based on it.
- An encoder-only model is used for things such as classification and sentiment analysis: the model receives an input text and outputs a classification vector.
- An encoder-decoder model is used for translating text from one language to another, because it first encodes (hence, it classifies) and then decodes (hence, it generates words).

How LLMs can translate and kind of "analyze the sentiment" of a given text, I'm not really sure (possibly the training data is so huge that decoders can achieve the encoding function), but the "basic" generation of words is a decoder thing.
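The input/output shapes of those two roles can be sketched in plain Python. This is a toy illustration only, not a real transformer (no attention, no neural network; the word lists and bigram table are made up), but it shows the contract each architecture fulfills: encoder-style tasks map text to a fixed-size vector, decoder-style tasks map a prompt to a continuation.

```python
# Toy sketch of encoder-style vs. decoder-style tasks (NOT a real transformer).

# Encoder-only-style task: text in -> fixed-size vector out (classification).
POSITIVE = {"good", "great", "love"}   # invented toy vocabulary
NEGATIVE = {"bad", "awful", "hate"}

def toy_encode(text: str) -> list[float]:
    """Map a sentence to a 2-dim 'sentiment vector' [positive, negative]."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = max(pos + neg, 1)
    return [pos / total, neg / total]

# Decoder-only-style task: prompt in -> generated continuation out.
BIGRAMS = {"the": "cat", "cat": "sat", "sat": "down"}  # invented "training data"

def toy_decode(prompt: str, steps: int = 3) -> str:
    """Greedily append the most likely next word, one token at a time."""
    words = prompt.lower().split()
    for _ in range(steps):
        nxt = BIGRAMS.get(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(toy_encode("I love this, it is great"))  # -> [1.0, 0.0]
print(toy_decode("the"))                       # -> "the cat sat down"
```

An encoder-decoder (translation) would chain the two: encode the source sentence into a vector, then decode a new sentence conditioned on that vector.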
I've been engaging with Lemmy more than Mastodon. Lemmy allows for more interaction through discussions. To me, Mastodon seems slower to get interactions (for context, I have accounts on three different instances and used to post dark/gothic/satirical/surrealist poetry written by me daily, but I haven't posted in days because no one seems to be really engaging with it). Mastodon has a lot of potential, but I think few are really committed to sticking with the fediverse and all its potential.
As for why people don't come back, maybe they're confused about which instance to use, since there are thousands of different instances for different purposes; I'm not exactly sure.
> they trap damp (Florida is extremely humid, unlike Switzerland), grow mold, don't breathe, and cause sickness
Hi. Brazilian here. I live in a very humid country. Here, almost all houses are made of brick and concrete, even near the seashore. There are entire concrete buildings near Brazilian beaches (in Rio de Janeiro, Santos, Salvador, Recife, Porto Alegre, Florianópolis and so on) as well as near rivers (in Manaus and even in the capital, Brasília). Indeed, mold is a thing, one that needs constant cleaning. Wall paint plays a role in protecting against mold buildup.
We don't exactly have hurricanes (they essentially don't form in the South Atlantic), but we do have tornadoes and strong winds very often, and we have hailstorms. Even so, there are very old houses and buildings still standing from the 1800s, centennial houses.
A dev here. Not a Reddit dev, but a dev. Deleting things online doesn't necessarily mean real deletion of the content. For instance, every post and comment is a row in a "big notebook" (a table in a database), and every row is split into columns for specific data: who the author is, where it was posted (which community), what the content is and, sometimes, a yes/no column called "is it deleted?". When you delete such a post, you are writing a "yes" into that column, without actually erasing the content. That's an oversimplified explanation of how platforms store posts; sometimes there's also a "versions" table (think of multiple notebooks keeping track of different things simultaneously) that keeps every version of an edited post/comment, so they remain intact inside that table.
Tl;dr: once on the internet, always on the internet (unfortunately), especially if we're dealing with a corporation that profits from users' data. Rare is the case where something on the internet finds real oblivion.
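The "big notebook" analogy above can be shown concretely with SQLite. The table and column names here are invented for illustration, not any real platform's schema, but the pattern (a flag column instead of actual row removal) is the common one:

```python
import sqlite3

# Toy version of the "big notebook": one table, one row per post,
# with a yes/no "is it deleted?" column. Schema is made up for illustration.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        author TEXT,
        community TEXT,
        content TEXT,
        is_deleted INTEGER DEFAULT 0   -- the "yes/no" column
    )
""")
db.execute("INSERT INTO posts (author, community, content) VALUES (?, ?, ?)",
           ("alice", "poetry", "my dark sonnet"))

# "Deleting" the post just flips the flag; the content column is untouched.
db.execute("UPDATE posts SET is_deleted = 1 WHERE id = 1")

row = db.execute("SELECT content, is_deleted FROM posts WHERE id = 1").fetchone()
print(row)  # -> ('my dark sonnet', 1): flagged as deleted, content still stored
```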
Isn't a file browser needed for browsing the saved documents and spreadsheets?
Not to mention that office suites (such as WPS, OpenOffice and LibreOffice) will inevitably pop up a file browser when the "Open" or "Save" buttons/menu items are clicked.
A dev with 10+ (cumulative) years of experience here. While I've never used GitHub Copilot specifically, I've been using LLMs (as well as AI image generators) on a daily basis, mostly for non-dev things, such as analyzing my human-written poetry in order to get insights for my own writing. I've also done the same for code I wrote, asking LLMs to "analyze and comment" my code for the sake of insights. There were moments when I asked for code snippets, and almost every snippet generated either worked or needed just a few fixes.
They've been getting good at this, but not good enough to really replace my own coding and analysis. Instead, they're getting really good at poetry (maybe because their training data is mostly books and poetry works) and sentiment analysis. I use many LLMs simultaneously in order to compare them:
- The free version of Google Gemini is becoming lazy (short answers, superficial analysis, problems keeping context, drafts aren't as diverse as they were before, among other problems).
- The free version of ChatGPT is a bit better (it can keep context and issue detailed answers) but not enough (it does hallucinate sometimes: good for surrealist poetry but bad for code and other technical matters where precision and coherence matter).
- Claude is laughably hypersensitive, self-censoring on certain words regardless of context (got code or text that remotely mentions the word "explode", as in PHP's `explode` function? "Sorry, can't comment on texts alluding to dangerous practices such as involving explosives." I mean, WHAT?!?!).
- Bing Copilot has web searching, but a context limit of 5 messages, so it's only usable for quick and short things.
- The same goes for Perplexity.
- Mixtral is very hallucination-prone (i.e. it does not stay coherent).
- Llama has been the best of all (via DDG's "AI Chat" feature), although it sometimes glitches (i.e. starts to output repeated strings ad æternum).
As you can see, I've tried almost all of them. In summary, while it's good to have such tools, they should never replace human intelligence... or, at least, they shouldn't...
The problem is that dev companies generally focus on "efficiency" over "efficacy", demanding the shortest deadlines while expecting near perfection. Very understandable demands, but humans are humans, not robots. We need our time to deliver; we need to carefully walk through all the steps before finally deploying something (especially big things), or it becomes XGH programming (eXtreme Go Horse). And machines can't do that perfectly, yet. For now, LLM-driven development is XGH: really fast, but far from coherent about the big picture (be it a platform, a module, a website, etc.).
> null

As a programmer, I see lots of null characters going on (a run of `A`s in base64 decodes to 0x00 null bytes).
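Why runs of `A` mean null bytes: each base64 character encodes 6 bits, and `A` is index 0 in the base64 alphabet, i.e. six zero bits. A quick check with Python's standard library:

```python
import base64

# 'A' = six zero bits, so four 'A's = 24 zero bits = three 0x00 bytes.
print(base64.b64decode("AAAA"))       # -> b'\x00\x00\x00'

# Going the other way: six null bytes = 48 zero bits = eight 'A' characters.
print(base64.b64encode(b"\x00" * 6))  # -> b'AAAAAAAA'
```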
Yes. It's cryptii.com, a site containing tools for ciphering/deciphering (Caesar cipher, ROT-13, Vigenère and so on). That's because their ad, at the top right of the page, is so small it's almost unnoticeable. No popup ads, no flashing ads, no crowded ad sections, just a single, small ad in the header. Sites like that (with static, small, non-intrusive ads) deserve to have ads allowed.
Are you depressed?
I guess so. Yeah, I am.
Do you know anyone NOT depressed?
IMHO, those who haven't gazed into the abyss yet. For "if you gaze long enough into an abyss, the abyss gazes also into you". And the abyss is part of our reality, the dark emptiness that fills us all: scientifically (any atom is ~99.9% empty space), esoterically (the primordial waters, Tohu Va-bohu, Nuith, Chaos, the Qlippoths, the Yin, Shakti, and so on) and philosophically (nihilism and absurdism). Everyone will gaze into the abyss someday; the light will and must gaze into darkness. She's inevitable. For She is everywhere and nowhere.
The problem goes beyond social media accounts. Modern life makes us have digital things, "apps". As much as I benefit from it (I'm a programmer), I can't help but recognize how dangerous this digital dependence and requirement is. Not only do our entire lives become bits and bytes across gazillions of platforms, they're also out of our real control: from advertising platforms to hackers, our online information practically waits to fall into third-party hands.
How much of our information is now inside the training data of major AI models (as much as I like some aspects of AI, that's a fact), such as GPT-4, Claude Sonnet and, especially, Google's Gemini, whose company is responsible for more than 90% of the search engine market while also being responsible for our smartphones' brains, not just Android but things embedded in Apple's ecosystem as well?
But people only notice how far our digital footprint goes when there's something serious, such as the risk of persecution by the government. People decide to delete their accounts hoping that their data will be magically erased and, as a programmer, I say: no, our data remains. There's no

`DELETE FROM users WHERE id = your_id`

there's actually an

`UPDATE users SET deleted = CURRENT_TIMESTAMP WHERE id = your_id`

and that's not the same thing (it just marks your account as deleted, but all the data remains for whatever period they wish, not to mention periodic database backups that preserve your data in the hands of that platform)... and not to mention how your data could already have been assimilated through platform integrations (APIs) by third-party partners such as advertisers. There's no way to force the erasure. Yeah, there are laws such as the GDPR's "right to be forgotten", but there's a Brazilian saying, "O que os olhos não veem o coração não sente" (what the eyes can't see, the heart can't feel): a platform can "confirm the account deletion" yet keep the data without anyone's knowledge. It's worse: there are laws that require companies to keep the data for some time (here in Brazil, for example, companies need to keep data for five years, because the courts might need it to solve an investigation).
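That soft-delete pattern can be demonstrated end-to-end with SQLite (the `users` schema here is invented for illustration; real platforms vary, but the UPDATE-instead-of-DELETE idea is the same):

```python
import sqlite3

# Sketch of account "deletion" as an UPDATE, never a DELETE.
# Schema and data are made up for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, deleted_at TEXT)")
db.execute("INSERT INTO users (email) VALUES ('you@example.com')")

# What "delete my account" actually executes:
db.execute("UPDATE users SET deleted_at = datetime('now') WHERE id = 1")

# The app's UI hides you by filtering on the flag...
visible = db.execute("SELECT * FROM users WHERE deleted_at IS NULL").fetchall()
# ...but the row (and your email) is still physically in the table:
stored = db.execute("SELECT email FROM users WHERE id = 1").fetchone()
print(visible, stored)  # -> [] ('you@example.com',)
```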
So, I don't like to be a harbinger of doom, but our digital traces will never entirely disappear from the internet... especially if you're thinking of evading incoming persecution from a new government. Online data persists farther than we can tell. And this goes way beyond social media platforms: it also includes your apps such as, I dunno, your Starbucks account? Your Amazon account? Everything is data that can be analyzed as big data and traced back to each person's preferences, including political preferences... I'm sorry to say it, but I need to transmit this knowledge as a developer.