this post was submitted on 19 May 2025
1548 points (98.1% liked)
Microblog Memes
It still scrambles things, removes context, and can overlook important things when it summarizes.
That is why the "review" part of the comment you reply to is so important.
Yeah, that's why you give it examples of how to summarize. But I'm a machine learning engineer, so maybe it helps that I know how to use it as a tool.
It doesn't know what things are key points that make or break a diagnosis and what is just ancillary information. There's no way for it to know unless you already know and tell it that, at which point, why bother?
You can tell it because what you're learning has already been learned. You are not the first person to learn it. Just quickly show it those examples from previous text or tell it what should be important based on how your professor tests you.
These are not hard things to do. It's autocomplete; show it how to teach you.
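The "show it examples" approach described above is essentially few-shot prompting. A minimal sketch of what that looks like in Python, with no particular LLM API assumed (the example passages and the instruction wording are illustrative, not from the thread):

```python
def build_fewshot_prompt(examples, new_text):
    """Build a few-shot summarization prompt: each example pairs a
    source passage with the kind of summary you actually want back."""
    parts = ["Summarize the passage, keeping the diagnostically key points.\n"]
    for passage, summary in examples:
        parts.append(f"Passage: {passage}\nSummary: {summary}\n")
    # The final passage is left without a summary for the model to complete.
    parts.append(f"Passage: {new_text}\nSummary:")
    return "\n".join(parts)

examples = [
    ("Patient presents with fever, stiff neck, and photophobia.",
     "Key points: fever, nuchal rigidity, photophobia (classic meningitis triad)."),
]
prompt = build_fewshot_prompt(
    examples, "Patient reports chest pain radiating to the left arm.")
```

The resulting string would then be sent to whatever model you use; the examples do the work of telling it what "important" means for your material.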
In order to tell it what is important, you would have to read the material to begin with. Also, the tests we took in class were in preparation for the board exams which can ask you about literally anything in medicine that you are expected to know. The amount of information involved here and the amount of details in the text that are important basically necessitate reading the text yourself and knowing how the information in that text relates to everything else you've read and learned.
Trying to get the LLM to spit out an actually useful summary would be more time-consuming than just doing the reading to begin with.
Off topic, but since you mentioned you are an ML engineer:
How hard is it to train a GPT at home with limited resources?
For example, I have a custom use case and limited data. I am a software developer proficient in Python, but my experience is in REST frameworks and web development.
It would be great if you could guide me on training at a small scale locally.
Any guides or resources would be really helpful.
I am basically planning hobby projects where I can train on my own data, such as my chats with others, and then perform functions. For example, I own a small business and we take a lot of orders on WhatsApp: around 100 active chats per month, with each chat having 50-500 messages. It might be small data for an LLM, but I want to explore the capabilities.
I saw there are many approaches, like fine-tuning, one-shot models, etc., but I didn't find a good resource that actually explains how to do things.
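Whatever fine-tuning route you end up taking, the first concrete step is turning the chat exports into training pairs. A minimal sketch, assuming a made-up in-memory chat format (sender/text tuples) and the common prompt/completion JSONL layout; the field names and the `customer`/`owner` labels are illustrative assumptions, not a fixed spec:

```python
import json

def chats_to_jsonl(chats, path):
    """Convert chat logs into prompt/completion pairs for fine-tuning.

    `chats` is a list of conversations; each conversation is a list of
    (sender, text) tuples. Adjacent customer -> owner turns become one
    training pair: the customer message is the prompt, the reply is the
    completion. Returns the number of pairs written.
    """
    n = 0
    with open(path, "w", encoding="utf-8") as f:
        for messages in chats:
            # Walk adjacent message pairs within one conversation.
            for (s1, t1), (s2, t2) in zip(messages, messages[1:]):
                if s1 == "customer" and s2 == "owner":
                    f.write(json.dumps({"prompt": t1, "completion": t2}) + "\n")
                    n += 1
    return n

chats = [[("customer", "Do you deliver on Sundays?"),
          ("owner", "Yes, between 10am and 2pm."),
          ("customer", "Great, I'd like two boxes."),
          ("owner", "Noted! They'll ship Sunday morning.")]]
count = chats_to_jsonl(chats, "orders.jsonl")  # 2 pairs written
```

At ~100 chats/month this is small data, so full training from scratch is off the table; the usual low-resource route is parameter-efficient fine-tuning (e.g. LoRA via Hugging Face's `peft` with `transformers`) of a small open model, and for this volume a retrieval-based setup (RAG) over the raw chats may well outperform fine-tuning entirely.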