[-] Smokeydope@lemmy.world 3 points 3 days ago

Nice post Hendrik thanks for sharing your knowledge and helping people out :)

[-] Smokeydope@lemmy.world 2 points 3 days ago

I once got KoboldCpp working with their collection of TTS models plus the WavTokenizer system. Here's the wiki page on it.

It may not sound as natural as a commercial voice model, but it may be enough to whet your appetite if other solutions feel overwhelmingly complicated.

[-] Smokeydope@lemmy.world 1 points 4 days ago* (last edited 3 days ago)

Wow, this is some awesome information, Brucethemoose, thanks for sharing!

I hope you don't mind if I ask some things. Tool calling is one of those things I'm really curious about. Sorry if this is too much; please don't feel pressured, you don't need to answer everything, or anything at all. Thanks for being here.

I feel like a lot of people, including myself, only vaguely understand tool calling, how it's supposed to work, and what simple practice exercises exist to try it out via scripts and APIs. What's a dead simple Python script someone could cook up to tool call within the OpenAI-compatible API?
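Something like this minimal sketch is roughly what I imagine (I'm assuming a local OpenAI-compatible server at http://localhost:5000/v1 and a made-up get_weather function, so the endpoint, model name, and tool are all placeholders, not anything from a real guide):

import json
from openai import OpenAI

# Point the client at a local OpenAI-compatible server (placeholder URL).
client = OpenAI(base_url="http://localhost:5000/v1", api_key="none")

def get_weather(city: str) -> str:
    # Stand-in for a real tool; normally this would hit an API or a script.
    return f"It is sunny and 22C in {city}."

# Describe the tool so the model knows it exists and how to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]
response = client.chat.completions.create(model="local", messages=messages, tools=tools)
message = response.choices[0].message

# If the model asked for the tool, run it and feed the result back for a final answer.
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)
    messages.append(message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(model="local", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(message.content)

Is that the right general shape, or am I missing something about how the engine side handles it?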

In your own words what exactly is tool calling and how does an absolute beginner tap into it? Could you clarify what you mean by 'tool calling being built into their tokenizers'?

Would you mind sharing some sources where we can learn more? I'm sure huggingface has courses but maybe you know some harder to find sources?

Is TabbyAPI an engine similar to Ollama, llama.cpp, etc.?

What are exl2, exl3, etc.?

[-] Smokeydope@lemmy.world 4 points 5 days ago

Yes, it would have been awesome of them to release a bigger one for sure :( At the end of the day they are still a business that needs a product to sell. I don't want to be ungrateful, complaining that they don't give us everything. I expect some day all these companies will eventually clam up and stop releasing models to the public altogether, once the dust settles and monopolies are entrenched. I'm happy to be here in an era where we can look forward to open-licence models released every few months.

30
submitted 6 days ago* (last edited 6 days ago) by Smokeydope@lemmy.world to c/localllama@sh.itjust.works

It seems Mistral finally released their own version of Small 3.1 2503 with CoT reasoning patterns baked in. Before this, the best CoT finetune of Small was DeepHermes with DeepSeek's R1 distill patterns. According to the technical report, Mistral baked in their own reasoning patterns for this one, so it's not just another DeepSeek distill finetune.

HuggingFace

Blog

Magistral technical research paper

[-] Smokeydope@lemmy.world 3 points 6 days ago

Devstral was released recently, trained specifically with tool calling in mind. I haven't personally tried it out yet, but people say it works well with VS Code + Roo.

1
submitted 6 days ago* (last edited 6 days ago) by Smokeydope@lemmy.world to c/selfhosted@lemmy.world

Setting up a personal site on local hardware has been on my bucket list for a long time. I finally bit the bullet and got a basic website running with Apache on an Ubuntu-based Linux distro. I bought a domain name, pointed it at my IP, got SSL via Let's Encrypt for HTTPS, and added header rules until Security Headers and Mozilla Observatory gave it a perfect score.

Am I basically in the clear? What more do I need to do to protect my site and local network? I'm so scared of hackers and shit I do not want to be an easy target.

I would like to make a page about the hardware it's running on, since I intend to have it run entirely off solar power like solar.lowtechmagazine, and I wanted to share the technical specifics. But I heard somewhere that revealing the internal state of your server is a bad idea since it can make exploits easier to find. Am I being stupid for wanting to share details like the computer model and the software running it?


[-] Smokeydope@lemmy.world 17 points 6 days ago* (last edited 6 days ago)

The thing is that even if there isn't much energy in plastic to be extracted, there's still enough energy in it to make a viable food source. Now, consider the humble koala and its primary food source, fucking eucalyptus leaves. Eucalyptus is such a dogshit food source that koalas had to spend evolutionary time and energy just to spec into it, to the point that they pretty much can't eat anything else. Combine that with the fact that eucalyptus leaves are so devoid of nutrients that the koala has to spend all day, every day, just snacking on them to not die of malnutrition.

Why? Why would a species even bother with this flim-flam if eucalyptus sucks that bad as a food source? The answer is: food scarcity. Because eucalyptus grows everywhere koalas live, and because nobody else is bothering to tap into the food source, this sets up an ecological niche by pretty much guaranteeing that any animal that successfully finds a way to make it work will have unlimited amounts of food/energy, just from the fact that there's so damn much of it and nothing else wants to or can touch it. Sure, koalas might have paid the price by sacrificing some brain wrinkles, but who needs higher intelligence when you have leaves to snack on and sex to make babies.

A similar thing happened with trees and mushrooms. In the deep evolutionary history of our planet, trees were once the apex forms of life, with forest covering pretty much the whole planet. This is because nothing knew how to eat the woody stems for a good couple million years. Most of the coal that we dig up today is actually the preserved remains of these unbroken-down trees from the Carboniferous period, which just lay there, never rotting, until the earth's tectonic movement buried the tree corpses deep enough in the crust for the carbon to compress into hard rock or get squeezed and heated into liquid. The great change in the era happened when our humble mycelium bois finally figured out how to eat wood, causing them to essentially become the new apex life for a time by taking advantage of an unlimited and untapped food source (trees).

I suppose my point is to not underestimate the willingness of life to find new food sources. Microorganisms don't need much excuse, just a slight amount of selective pressure and a couple million/billion/trillion generations of evolutionary trial and error. For bacteria that takes maybe a couple of years; I forget how quickly modern microorganism colonies turn over new generations, but it's FAST. Add some science nerds who love to play God/intelligent evolution with CRISPR tech and gene tagging, and yeah, for sure we'll get plastic-eating microbes figured out. Then begins the Pandora's box of plastic rotting when we don't want it to.

28
submitted 1 week ago* (last edited 1 week ago) by Smokeydope@lemmy.world to c/localllama@sh.itjust.works

Hello. Our community, c/localllama, has always been and continues to be a safe haven for those who wish to learn about the creation and local usage of 'artificial intelligence' machine learning models to enrich their daily lives and provide a fun hobby to dabble in. We come together to apply this new computational technology in ways that protect our privacy and build upon a collective effort to better understand how this can help humanity as an open source technology stack.

Unfortunately, we have recently been receiving an uptick in negative interactions from those outside our community. This is largely due to the current political tensions caused by our association with the popular and powerful tech companies who pioneered modern machine learning models for business and profit, as well as unsavory techbro individuals who care more about money than ethics. These actors continue to create animosity toward the entire field of machine learning and everyone associated with it, through their illegal harvesting of private data to train base models and very real threats to disrupt the economy by destroying jobs through automation.

There are legitimate criticisms to be had: the cost of creating models, how the art they produce is devoid of the soulful touch of human creativity, and how corporations are attempting to disrupt lives for profit instead of enriching them.

I did not want to be heavy-handed with censorship/mod actions prior to this post, because I believe that echo chambers are bad and genuine understanding requires discussion between multiple conflicting perspectives.

However, a lot of the negative comments we receive lately aren't made in good faith, with valid criticisms of the corporations or technologies grounded in an intimate understanding of them. No, instead it's base-level mudslinging by people with emotionally charged vendettas making nasty comments of no substance. Common examples are comparing models to NFTs, name-calling our community members blind zealots for thinking models could ever be used to help people, and spreading misinformation with cherry-picked unreliable sources to manipulatively exaggerate environmental impact and resource consumption.

While I am against echo chambers, I am also against our community being harassed and dragged down by bad actors who just don't understand what we do or how this works. You guys shouldn't have to be subjected to the same brain rot antagonism with every post made here.

So I'm updating the guidelines by adding some rules I intend to enforce. I'm still debating whether or not to retroactively remove infringing comments from previous posts, but be assured any new posts and comments will be moderated based on the following guidelines.

RULES: Rule: No harassment or personal character attacks against community members. I.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Reason: More or less self-explanatory; personal character attacks and childish mudslinging against community members are toxic.

Rule: No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Reason: This is a piss-poor whataboutism argument. It claims something that is blatantly untrue while attempting to discredit the entire field by stapling the animosity everyone has toward crypto/NFTs onto ML. Models already do more than cryptocurrency ever has. Models can generate text, pictures, and audio. Models can view/read/hear text, pictures, and audio. Models may simulate aspects of cognitive thought patterns to attempt to speculate or reason through a given problem. Once they are trained, they can be copied and locally hosted for many thousands of years, which factors into the equation of initial energy cost versus power consumed over time.

Rule: No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Reason: There are grains of truth to the reductionist statement that LLMs rely on mathematical statistics and probability for their outputs. The same can be said for humans, the statistical patterns in our own language, and how our neurons come together to predict the next word in the sentence we type out. It's the intricate complexity in the process and the way information is processed that makes all the difference. ML models draw on an entire college course's worth of advanced mathematics and STEM concepts to create high-dimensional matrices that plot the relationships between pieces of information, with intricate hidden layers made of perceptrons connecting billions of parameters into vast abstraction mappings. There were also major innovations and discoveries made in the 2000s, which we didn't have in the early days of computing, that made modern model training possible. All of that is a little more complicated than what your phone's autocorrect does, and the people who make the lazy reductionist comparison just don't care about the nuances.

Rule: No implying that models are devoid of purpose or potential for enriching people's lives.

Reason: Models are tools with great potential for helping people, through the creation of accessibility software for the disabled and by enabling doctors to better heal the sick through advanced medical diagnostic techniques. The perceived harm models are capable of causing, such as job displacement, is rooted in our flawed late-stage capitalist society's pressure for increased profit margins at the expense of everyone and everything.

If you have any proposals for rule additions or wording changes I will hear you out in the comments. Thank you for choosing to browse and contribute to this space.

25
submitted 2 weeks ago* (last edited 2 weeks ago) by Smokeydope@lemmy.world to c/localllama@sh.itjust.works

WOAH

26
submitted 2 weeks ago* (last edited 2 weeks ago) by Smokeydope@lemmy.world to c/asklemmy@lemmy.world

So it's been almost 10 years since I've swapped computer parts, and I am nervous about this. I've never done any homelab-type thing involving big, powerful parts, just dealt with average mid-range consumer-class parts in standard desktop cases.

I do computational work now and want to convert a desktop PC into a headless server with a beefy GPU. I bit the bullet and ordered a used Tesla P100 16GB. Based on what I'm reading, a new PSU may be in order as well, if nothing else. I haven't actually read the labels yet, but online info on the desktop model indicates it's probably around a 450-watt PSU.

The P100 power draw is rated at 250 W maximum. The card I'm using now draws 185 W maximum. I'm reading that 600 W would be better for just-in-case overhead. I plan to get this 700 W one, which I hope is enough overhead to cover an extra GPU if I want to take advantage of Nvidia CUDA with the 1070 Ti in my other desktop.

How much does the rest of the system use on average with a Ryzen 5 2600 six-core in an AM4 motherboard and like 16 GB of DDR4 RAM?
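My rough napkin math so far, with every number being a guess from spec sheets rather than a measurement (65 W TDP for the 2600, ~75 W lumped together for motherboard, RAM, drives, and fans):

# Back-of-the-envelope PSU load estimate; every figure here is an assumption.
cpu_w = 65               # Ryzen 5 2600 TDP
p100_w = 250             # Tesla P100 rated maximum
gtx1070ti_w = 180        # 1070 Ti rated maximum, if I add it later
board_ram_drives_w = 75  # motherboard, 16 GB DDR4, drives, fans (rough guess)

one_gpu = cpu_w + p100_w + board_ram_drives_w
two_gpus = one_gpu + gtx1070ti_w
print(one_gpu, two_gpus)  # roughly 390 W and 570 W at full load

If those guesses are anywhere close, ~390 W with one card and ~570 W with both would leave the 700 W unit some headroom, but please correct me if I'm off.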

When I read up on powering the P100, though, I stumbled across this Reddit post of someone confused about how to get it to connect to a regular consumer Corsair PSU. Apparently the P100 uses a CPU (EPS) power connector instead of a PCIe one? But you can't use the regular CPU power output from the PSU. According to the post, people buy adapter cables that take two GPU (PCIe) cables in and put out one CPU-style connector for these cards.

Can you please help me with a sanity check and help me understand what I've gotten myself into? I don't exactly understand what I'm supposed to do with those adapter cables. Do modern PSUs come with multiple GPU power outputs these days, so that I need to run two parallel lines into that adapter?

Thank you all for your help on the last post; I'm deeply grateful for all the input I've gotten here. I'll do my best not to spam-post with my tech concerns, but this one has me really worried.


26
submitted 2 weeks ago* (last edited 2 weeks ago) by Smokeydope@lemmy.world to c/localllama@sh.itjust.works

So, I've never really used RAG before. I kept hearing about it and understood it in concept, so I wanted to play around with it. I play a tabletop map-drawing game called DELVE that has lots of rules to keep track of, requiring lots of manual page flips on my slow ereader.

I wanted to see if it was possible to teach my local model about DELVE through my engine's basic RAG system, which works by providing a big text file and doing keyword searches in it based on user input. The engine pulls in local chunks of the text document around important keywords as context. This is different from more complex and sophisticated versions of RAG that work off fancy vector-space math or something like that. Whatever, kobold is my engine of choice and I'm happy to have something that half works and is simple enough to grasp.

So the first step was converting the game manual PDF to a text file. I did this using Calibre's ebook-convert tool. Then I looked through it and tried to clean up some glaring issues; inevitably there's going to be jank when converting PDF unicode or tables.

sudo apt install calibre

ebook-convert delve.pdf delve.txt

Once it was cleaned up a little, I threw it into Kobold Lite's Context > TextDB tab and enabled it.

It's cool how such a simple process manages to let the model leverage local datasets. I was shocked when it correctly recited proper game rules.
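If anyone is curious what that keyword-chunk lookup amounts to, here's a rough Python sketch of how I understand it. This is just my own approximation of the idea, not KoboldCpp's actual TextDB code, and the chunk size and file name are made up:

import re

def load_chunks(path, chunk_size=500):
    # Split the converted text file into fixed-size character chunks.
    text = open(path, encoding="utf-8").read()
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def retrieve(chunks, query, top_k=3):
    # Score each chunk by how often the query keywords appear in it.
    keywords = [w.lower() for w in re.findall(r"\w+", query) if len(w) > 3]
    scored = []
    for chunk in chunks:
        lowered = chunk.lower()
        score = sum(lowered.count(k) for k in keywords)
        if score:
            scored.append((score, chunk))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]

chunks = load_chunks("delve.txt")
for chunk in retrieve(chunks, "How to play delve?"):
    print(chunk[:200])

The best-scoring chunks then get pasted into the prompt as extra context, which as far as I can tell is basically all the engine is doing.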

Full input/output shown in screenshot.

Input: How to play delve?

Output:

Okay, I need to figure out how to explain how to play DELVE based on the provided info snippets. Let's start by breaking down the key components mentioned.

First, the materials required are pencil, eraser, grid paper or blank paper, notepad for resources and trade goods, a deck of standard playing cards without jokers initially, and tokens for various purposes like dwarves, enemies, etc. The starting setup involves drawing a stairwell at the top center as the entrance. Resources (hearts ♥) and Trade Goods (diamonds ♦) start at 20 each. Then there's mention of Soldiers being placed at the entrance. Jokers are removed from the deck at first but added later at depth 5.

The turn structure seems to involve the following steps: prepare, explore, retreat. Each turn has phases like exploring, checking threats, downtime. There's talk about expanding the hold by adding rooms, handling threats through combat or avoidance, managing resources and trade goods.

The Bloodrunes are optional challenges that offer boons but also transport to a nightmare layer upon choosing them. Each rune has specific effects. For example, the Rune of Nightmares immediately starts the player in a nightmare layer with rewards for defeating ancient monsters.

There's a part about using cards instead of dice. Drawing suits corresponds to D4 results, while colors correspond to D2. That's important because the game uses these rolls frequently.

Rooms have various functions. Smelters let you convert ore into metal, Armories provide soldiers, Quarries generate stone, etc. Each room has costs and benefits. Some advanced rooms like Deep Delve or Burglar Alarms have special abilities.

Threats include monsters, thieves, and other events. Combat uses Strength and Defense with armor effects. Thieves steal half your trade goods unless you have burglar alarms. Monsters can cause damage or drain resources.

Trade is done by visiting the Marketplace on certain turns where you exchange goods. The Marketplace deck has different cards affecting trade values. You can sell excess goods or buy needed items at varying rates.

Ancient Ones are powerful enemies encountered as you delve deeper. They have high stats and unique abilities. Beating them gives significant rewards but poses risks if not prepared.

Endgame conditions involve reaching depth 10, accumulating wealth, or being overrun by threats. If you win, you get to set up a new hold with better starting conditions.

I need to structure this explanation step-by-step, covering setup, turn phases, actions in each phase, room building, handling threats, trading, and endgame conditions. Also mention the Bloodrunes as optional challenges with their own rules. Highlight how card draws replace dice rolls for D4 and D2. Maybe note that grid paper isn't mandatory, but it helps visualize the hold layout.

How to Play DELVE: Step-by-Step Guide

  1. Setup

    Materials Needed:
    Pencil/eraser, grid/notebook paper, notebook (for tracking resources), standard playing cards (Jokers removed initially), tokens (coins, dice, etc.).
    Start Your Hold:
    Draw a central stairwell on the top row of grid paper as the entrance.
    Resources:
    Begin with 20 Resources (♥) and 20 Trade Goods (♦) in your notebook. Add 5 Soldiers (tokenized) at the entrance.

  2. Deck & Game Layers

    Remove Jokers from the deck. At Depth 5, re-add them for "Legendary Finds."
    Use cards instead of dice:
    Roll 1D4: Suit = ♠=1, ♣=2, ♦=3, ♥=4.
    Roll 1D2: Color = Black=1, Red=2.

  3. Turn Structure

    Prepare Phase:
    Spend resources to build rooms (e.g., Smelter, Armory) or upgrade defenses. Manage soldiers and prepare for threats.

    Explore Phase:
    Draw two cards from the deck: choose one to reveal a room, threat, or resource. Build rooms by placing them on grid paper (cost varies by type).

    Retreat Phase:
    Resolve combat, theft, or other threats. Sold-out rooms generate income (e.g., Quarry → Stone).

  4. Optional Challenges: Bloodrunes

    Place a Bloodrune (e.g., "Rune of Nightmares") on the first row before starting. Each rune triggers immediate effects (e.g., starting in a nightmare layer) but offers boons like combat bonuses.

  5. Rooms & Upgrades

    Basic Rooms:
    Smelter (3♥): Convert ore to metal.
    Armory (5♥): Add soldiers (1 per turn).
    Quarry (2♥): Gather stone.
    Advanced Rooms:
    Deep Delve (10♥): Explore deeper layers faster.
    Burglar Alarms (7♥): Mitigate thief losses.

  6. Threats

    Combat: Roll strength vs. enemy defense (armor reduces damage). Soldiers protect rooms.
    Thieves: Steal half your Trade Goods unless you have Burglar Alarms.
    Ancient Ones: Powerful bosses at deeper levels (e.g., "The Devourer") with unique abilities and high rewards.

  7. Trading & Marketplace

    Visit the Marketplace every 3 turns by drawing cards: adjust supply/demand for resources. Sell excess goods or buy needed items at fluctuating rates.

  8. Victory Conditions

Win by reaching Depth 10, accumulating 200+ Trade Goods, or surviving until the deck runs out. Lose if overrun by threats, depleted of resources, or defeated in combat.

  9. Bloodrune Example: Rune of Nightmares

    Start immediately on a nightmare layer without access to the Marketplace. Defeat ancient monsters for powerful rewards but risk doom.

Tips:

Use grid paper to visualize your hold’s layout.  
Balance room upgrades with threat mitigation.  
Bloodrunes add difficulty but enhance replayability.  

Enjoy delving into the depths! 🌋⚔️🗡️

43

Do I need to worry about upgrading the motherboard along with the GPU if it's old, or will it work okay just buying a new GPU?

12
submitted 3 weeks ago* (last edited 3 weeks ago) by Smokeydope@lemmy.world to c/localllama@sh.itjust.works

Here's a link to the paper; the relevant paragraph starts around the end of page 8. Thank you Hendrik! https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf

36
submitted 4 weeks ago* (last edited 4 weeks ago) by Smokeydope@lemmy.world to c/trees@lemmy.world

Come on bro just swipe the credit card

[-] Smokeydope@lemmy.world 123 points 2 years ago* (last edited 2 years ago)

The pocket of air that was where you teleported now gets displaced at a very decent fraction of the speed of light, while the pocket of space you once occupied becomes an almost pure vacuum. The air moves so fast it creates a sonic boom that ruptures eardrums. Then a few atoms of air collide with such incredible force that the atoms split, causing a small-scale nuclear explosion.

[-] Smokeydope@lemmy.world 87 points 2 years ago* (last edited 2 years ago)

My elderly parents in their 60s use Linux Mint daily and have never had an issue with it (admittedly, I did have to set it up for them). I just set up desktop shortcuts to their websites for them and turned on automatic updates. The hardest part isn't using an alternative OS like Mint or Pop, it's getting an average person to figure out how to install it. Getting into your BIOS to boot into the installation drive, re-partitioning your hard drive to free up space for dual booting or nuking Windows off altogether, those are the hardest parts for any first-timer IMO. After you've done it a dozen times it's no problemo, but the first time is nerve-wracking, at least it was for me.

[-] Smokeydope@lemmy.world 227 points 2 years ago* (last edited 2 years ago)

This is a copy/pasted message I wrote up on another thread. As long as there are people in the comments shilling Kagi, I will shill my preferred engines. At least my suggestions will bring awareness to free-as-in-freedom projects. I hope to god people paying $10/month just to not get datacucked by search engines will also learn something and save their money.

SearX/SearXNG is a free and open-source, highly customizable, and self-hostable metasearch engine. SearX instances act as a middleman: they query other search engines for you, stripping out all the spyware ad crap and never letting your connection touch those engines' servers. Of course you have to trust the SearX instance host with your query information, but again, if you are that paranoid, just self-host.

I personally trust some FOSS-loving sysadmin who hosts social services for free out of altruism, who also accepts hosting donations, and whose server is located on the other side of the planet, with my query info over Google/Alphabet any day.

It's nice to be able to email and have a human conversation with your search engine provider, who's just a knowledgeable everyday joe who genuinely believes in the project and freely dedicates their resources to it. Consider sending some cash their way to help with upkeep if you like the services they provide; they will probably appreciate and make better use of that $10 than Kagi would.

Here's a list of all public SearX instances; I personally prefer to use paulgo.io. All SearX instances are configured differently to index different engines. If one doesn't seem to give good results, try a few others.

Did I mention it has bangs like DuckDuckGo? If you really need Google, like for maps and business info, just use !!g in the query.

search.marginalia.nu is a completely novel search engine written and hosted by one dude that prioritizes indexing lighter websites with little to no JavaScript, as these tend to be personal websites and homepages that have poor SEO and that the big search engines won't index well. If you remember the internet of the early 2000s and want a nostalgia trip, this one's for you. It's also open source and self-hostable.

Finally, YaCy is another completely novel search engine that uses peer-to-peer technology to power a big web crawler which prioritizes indexing based on user queries and feedback. Anyone can download YaCy and devote a bit of their computing power to run their own local instance and help out a collective search engine. Companies can also download YaCy and use it to index their private intranets.

They have a public instance available through a web portal. To be upfront, YaCy is not a great search engine for what most people usually want, which is quick and relevant information within the first few clicks. But it is an interesting use of technology, and it shows what a true, honest-to-god community-operated search engine looks like, untainted by SEO scores or corporate money-making shenanigans.

I hope this has been informative for those who believe there are only a few options to pick from; I know these options are largely unknown to most people.

[-] Smokeydope@lemmy.world 87 points 2 years ago* (last edited 2 years ago)

Sometimes I think I made the right decision to just get a huge hard drive and download all my favorite entertainment in DRM-free formats: movies, music, games, books. I saw this coming a mile away a decade ago. The only thing that will really hurt me is if/when Steam inevitably goes full corporate cuck and starts going hard on the DRM, locking down my library.

[-] Smokeydope@lemmy.world 82 points 2 years ago* (last edited 2 years ago)

Now please unremove the shroom community as the next priority. Empowering open-minded people with the option and knowledge to heal themselves through the use of psychedelics (and other kinds of mushrooms that can potentially help fight diseases such as cancer) that they can grow themselves without big pharma, and giving them a community to share their advice and experiences, is the right thing to do.

[-] Smokeydope@lemmy.world 92 points 2 years ago

Wow, that was actually a cute story. Not sure how legit, but nice feels nonetheless.
