[-] tal@lemmy.today 3 points 9 hours ago

cultural wasteland

https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_historical_population

According to this, Nevada only had 110k people statewide in 1940.

In 1940, New York City had 7.5 million.

Gotta have people to produce cultural output.

[-] tal@lemmy.today 12 points 9 hours ago* (last edited 9 hours ago)

These are not official state foods. They are what the source website has decided to appoint as the favorite food for each.

https://web.archive.org/web/20190622015744/https://www.cookingchanneltv.com/recipes/packages/50-state-foods

This is a list of official state foods:

https://en.wikipedia.org/wiki/List_of_U.S._state_foods

EDIT: Corrected link; the source page is down, and I had originally linked to the wrong page. I used archive.org to get to the original.

[-] tal@lemmy.today 8 points 14 hours ago

I got 1000 games, 200 of which are GOG offline installers.

Nothing but food and bills now as I wait for it all to collapse.

While I'll believe that you have solid storage longevity, prepping for societal collapse by archiving 1000 video games seems kind of unorthodox.

[-] tal@lemmy.today 1 points 15 hours ago

I think I watched part of Pretty Woman at one point, lost interest, never finished it. Wouldn't have known the names of anyone in it, though.

[-] tal@lemmy.today 0 points 20 hours ago

Richard Gere?

https://en.wikipedia.org/wiki/Richard_Gere

Richard Tiffany Gere (/ɡɪər/ GEER;[1][2] born August 31, 1949) is an American actor. He began appearing in films in the 1970s, playing a supporting role in Looking for Mr. Goodbar (1977) and a starring role in Days of Heaven (1978). Gere came to prominence with his role in the film American Gigolo (1980), which established him as a leading man and a sex symbol.

Ah.

[-] tal@lemmy.today 1 points 22 hours ago

Also responding here to a private message, in hopes that some of the information might be useful to others:

To be honest, I understood about half of it haha.

rubs chin

So, I'm not sure what bits aren't clear, but if I had to guess as to terms in my comments, you can mostly just search for and get a straightforward explanation, but:

inpainting

Inpainting is when you basically "erase" part of an already-generated image that you're mostly happy with, and then generate a new image, but only for that tiny bit. It's a useful way to fine-tune an image that you're basically happy with.
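As a toy illustration of the idea (this is a sketch of the masking concept, not a real diffusion pipeline), inpainting amounts to keeping every pixel outside the mask and regenerating only the masked region:

```python
import random

def toy_inpaint(image, mask, seed=0):
    """Toy inpainting sketch: keep unmasked pixels of a 2D grayscale
    'image' (list of lists) as-is, and regenerate only masked ones.
    A real inpainting model would synthesize the masked region from the
    prompt and the surrounding context; here we just fill it with noise."""
    rng = random.Random(seed)
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, pixel in enumerate(row):
            if mask[y][x]:                       # masked: regenerate
                new_row.append(rng.randint(0, 255))
            else:                                # unmasked: keep original
                new_row.append(pixel)
        out.append(new_row)
    return out

image = [[10, 20], [30, 40]]
mask = [[0, 1], [0, 0]]      # "erase" only the top-right pixel
result = toy_inpaint(image, mask)
```

Everything you were happy with survives untouched; only the erased bit changes.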

“Image-to-image”.

That's an Automatic1111 term, I think. Oh, Automatic1111 is a Web-based frontend to run local image generation, as opposed to ArtBot, which appears to be a Web-based frontend to Horde AI, which is a bunch of volunteers who donate their GPU time to people who want to do generation on someone else's GPU. I'm guessing that ArtBot got it from there.

Automatic1111 was widely used, and IMHO is easier to start out with, but ComfyUI, which has a much steeper learning curve but is a lot more powerful, is displacing it as the big Web UI for local generation.

Basically, Automatic1111, as it ships without extensions, has two "tabs" where one does image generation. The first is "text-to-image". You plug in a prompt, you get back an image. The second is "image-to-image". You plug in an image and a prompt and process that image to get a new image. My bet is that ArtBot used that same terminology.
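A toy sketch of the difference between the two tabs (names and blending math are illustrative, not any frontend's actual implementation): text-to-image starts from pure noise, while image-to-image starts from your input image with noise blended in, so the result stays related to the input.

```python
import random

def toy_generate(prompt, init_image=None, strength=0.75, size=4, seed=0):
    """Toy sketch of the txt2img/img2img distinction (the prompt is
    ignored here; a real model would condition on it).
    txt2img: start from pure noise.
    img2img: start from the input image with noise blended in;
    'strength' controls how much gets replaced (0.0 = return the
    input unchanged, 1.0 = pure noise)."""
    rng = random.Random(seed)
    noise = [rng.random() for _ in range(size)]
    if init_image is None:                  # the "text-to-image" tab
        return noise
    # the "image-to-image" tab: blend the input toward noise
    return [(1 - strength) * p + strength * n
            for p, n in zip(init_image, noise)]

txt2img = toy_generate("a dog")
img2img = toy_generate("a dog", init_image=[0.5] * 4, strength=0.0)
```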

prompt

This is just the text that you're feeding a generative image AI to get an image. A "prompt term" is one "word" in that.

Stable Diffusion

This is one model (well, a series of models). That's what converts your text into an image. It was the first really popular one. Flux, which I referenced above, is a newer one. It's possible for people who have enough hardware and compute time to create "derived models" -- start from one of those and then train on additional images and associated terms to "teach" them new concepts. Pony Diffusion is an influential model derived from Stable Diffusion, for example.

A popular place to download models -- the ones that are freely distributable -- for local use is civitai.com. That also has a ton of AI-generated images and shows the model and prompts used to generate them, which IMHO is a good way to come up to speed on what people are doing.

Horde AI -- unfortunately but understandably -- doesn't let people upload their own models to the computers of the people volunteering their GPUs, so if you're using that, you're going to be limited to using the selection of models that Horde has chosen to support.

Models have different syntax. Unfortunately, it looks like ArtBot doesn't provide a "tutorial" for each or anything. There are guides for making prompts for various "base" models, like Stable Diffusion and Flux, and generally you want to follow the "base" model's conventions.

SD

A common acronym for "Stable Diffusion".

sampler

So, the basic way these generative AIs work is by starting with what amounts to an image full of noise -- think of a TV just showing static. That static is randomly-generated. On computers, random numbers are usually generated via pseudo-random number generators. These PRNGs start with a "seed" value, and that determines what sequence of random numbers they come up with. Lots of generative AI frontends will let you specify a "seed". That will, thus, determine what static you're starting out with.

You can have a seed that changes each generation, which many of them do by default -- and I think that ArtBot does, looking at its Web UI, since it has a "seed" field that isn't filled in by default. IMHO, this is a bad default, since if you do that, each image you generate will be totally different -- you can't "refine" one by slightly changing the prompt to get a slightly-different image.
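The seed business can be demonstrated with Python's standard-library PRNG (standing in for the noise generator an image frontend would use): the same seed always reproduces the same "static", and a different seed gives you different static.

```python
import random

def starting_static(seed, n=8):
    """Generate the initial 'static' -- n pseudo-random noise values --
    from a given seed, the way fixing a seed pins down the starting
    image in a generative AI frontend."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

a = starting_static(seed=42)
b = starting_static(seed=42)   # same seed: identical static
c = starting_static(seed=43)   # different seed: different static
```

This is why reusing a seed lets you refine an image: the starting point stays the same while you tweak the prompt.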

Anyway, once they have that "static" image, then they perform "steps". Each "step" takes the existing image and uses the model, the prompt, and the sampler to determine a new state of the image. You can think of this as "trying to see images in the static". They just repeat this a number of times, however many steps you have them set to run. They'll tend to wind up with an image that is associated with the prompt terms you specified.

An easy way to see what they're doing is to run a generation with a fixed seed set to 0 steps, then one set to 1 step, and so forth.
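That step loop can be sketched in miniature (a toy stand-in, not a real sampler: the "model prediction" here is just a fixed target image, and each step moves partway toward it):

```python
import random

def toy_denoise(target, steps, seed=0):
    """Toy step loop: start from seeded noise, and on each step move the
    image a fixed fraction of the way toward 'target' (standing in for
    whatever the model + prompt + sampler would predict at that step).
    A real sampler's update rule is far more involved."""
    rng = random.Random(seed)
    image = [rng.random() for _ in target]     # the initial static
    for _ in range(steps):
        image = [pixel + 0.5 * (t - pixel)     # one denoising step
                 for pixel, t in zip(image, target)]
    return image

target = [1.0, 0.0, 1.0]
after0 = toy_denoise(target, steps=0)   # still pure static
after5 = toy_denoise(target, steps=5)   # much closer to the target
```

Comparing the 0-step and 5-step outputs mirrors the experiment above: with a fixed seed, more steps means the static has been "resolved" further toward an image.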

You seem super knowledgeable on the topic, where did you learn so much?

Honestly, I don't know all that much, because for me, this is a part-time hobby. Probably the most-familiar people you can access are on subreddits on Reddit dedicated to this stuff. I'm trying to bring some of it over to the Threadiverse.

  • Civitai.com is a good place to see how people are generating images, look at their prompt terms.

  • Here and related Threadiverse communities, though there's not a lot of talk on here, mostly people showing off images (though I'm trying to improve that with this comment and some of my past ones!). !stable_diffusion@lemmy.dbzer0.com tends more towards the technical side. !aigen@lemmynsfw.com has porn, but not a lot of discussion, though I remember once posting an introduction to use of the Regional Prompting extension for Automatic1111 there.

  • Reddit's got a lot more discussion; last I looked, mostly on /r/StableDiffusion, though the stuff there isn't all about Stable Diffusion.

  • There are lots of online tutorials talking about designing a prompt and such, and these are good for learning about a particular model's features.

Some stuff is specific to one particular model or frontend, and some spans multiple, and while there's overlap today, that information isn't exactly nicely and neatly categorized. For example, "negative prompts" -- prompt terms that the model tries to avoid rather than include -- are a feature of Stable Diffusion, and are invaluable there, but Flux doesn't support them. DALL-E, a commercial service, doesn't support negative prompts. Midjourney, another commercial service, does. Commercial services also aren't gonna tell everyone exactly how everything they do works. Also, today this is a young and very fast-moving field, and information that's a year old can be kind of obsolete. There isn't a great fix for that, I'm afraid, though I imagine that it may slow down as the field matures.

[-] tal@lemmy.today 2 points 1 day ago* (last edited 1 day ago)

It does look like they have at least one Flux model in that ArtBot menu list of models, so you might try playing around with that and see if you're happier with the output. I also normally use 25 steps with Flux rather than 20, and the Euler sampler, both of which it looks like it can do.

EDIT: Looks like for them, "Euler" is "k_euler".

[-] tal@lemmy.today 2 points 2 days ago* (last edited 13 hours ago)

I'm not familiar with Artbot.

investigates

Yes, it looks like it supports inpainting:

https://tinybots.net/artbot/create

Look down in the bottom section, next to "Image-to-image".

That being said, my experience is that inpainting is kind of time-consuming. I could see fine-tuning the specific look of a feature -- like, maybe an image is fine except for a hand that's mangled, and you want to just tweak that bit. But I don't know if it'd be the best way to do this.

  • I don't know if this is actually true, but I recall reading that prompt term order matters for Stable Diffusion (assuming that that is the model you are using; it looks like ArtBot lets you select from a variety of models). Earlier prompt terms tend to define the scene. While I've tended to do this, I haven't actually tried to experiment enough to convince myself that this is the case. You might try sticking the "dog" bit earlier in the prompt.

  • If this is Stable Diffusion or an SD-derived model and not, say, Flux, prompt weighting is supported (or at least it is when running locally on Automatic1111, and I think that that's a property of the model, not the frontend). So if you want more weight to be placed on a prompt term, you can indicate that: adding additional parentheses will increase the weight of a term, or you can provide a numeric weight. For example: "A cozy biophilic seaport village. In the distance there are tall building and plants. There are spaceships flying above. In the foreground there is a cute ((dog)) sitting on a bench." or "A cozy biophilic seaport village. In the distance there are tall building and plants. There are spaceships flying above. In the foreground there is a cute (dog:3) sitting on a bench."

  • In general, my experience with Stable Diffusion XL is that it's not nearly as good as Flux at taking in English-language descriptions of relationships between objects in a scene. That is, "dog on a bench" may result in a dog and a bench, but maybe not a dog on a bench. The prompts I use with Stable Diffusion XL tend to be lists of keywords rather than English-language sentences. The drawback with Flux is that it's heavily weighted towards creating photographic images, and I'm guessing, from what you submitted, that you're looking more for a "created by a graphic artist" look.
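As I understand the Automatic1111-style weighting convention, each layer of parentheses multiplies a term's weight by 1.1, and "(term:N)" sets the weight explicitly -- treat those exact numbers as my assumption. A toy parser for that syntax:

```python
import re

def term_weight(term):
    """Toy parser for SD-style prompt weighting, modeled on the
    Automatic1111 convention as I understand it: each layer of
    parentheses multiplies a term's weight by 1.1, and '(term:N)'
    sets the weight explicitly. The exact multiplier is an
    assumption; real frontends handle far more syntax than this."""
    # Explicit numeric weight, e.g. "(dog:3)" -> ("dog", 3.0)
    m = re.fullmatch(r"(\(+)?([^():]+):([\d.]+)(\)+)?", term)
    if m:
        return m.group(2), float(m.group(3))
    # Otherwise count paren layers: "((dog))" -> ("dog", 1.1 ** 2)
    depth = 0
    while term.startswith("(") and term.endswith(")"):
        term = term[1:-1]
        depth += 1
    return term, round(1.1 ** depth, 3)
```

So "((dog))" works out to roughly 1.21x emphasis, while "(dog:3)" pins it to 3x directly.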

EDIT: Here's the same prompt you used fed into stoiquoNewrealityFLUXSD35f1DAlphaTwo, which is derived from Flux, in ComfyUI:

Here it is fed into realmixXL, which is not derived from Flux, but just from SDXL:

The dog isn't on the bench in the second image.

[-] tal@lemmy.today 3 points 2 days ago

https://www.pewpewtactical.com/glock-18-sale-cant-have-one/

The Glock 18 is a full-sized automatic pistol chambered in 9mm capable of 1,200 rounds a minute.

https://www.firequest.com/AJ299.html

Glock 9MM 100rd Drum - Fits Glock 17/18/19/26

It's all about the DPS.

[-] tal@lemmy.today 2 points 2 days ago* (last edited 2 days ago)

Long Glock is loooong.

I think the scale on that model might be off a bit or something.

[-] tal@lemmy.today 15 points 2 days ago

All animals are equal, but some animals are more equal than others.

— George Orwell, Animal Farm

21
submitted 2 weeks ago* (last edited 2 weeks ago) by tal@lemmy.today to c/comicstrips@lemmy.world

Print shows Uncle Sam asleep in a chair with a large eagle perched on a stand next to him; he is dreaming of conquests and annexations, asserting his "Monroe Doctrine" rights, becoming master of the seas, putting John Bull in his place, and building "formidable and invulnerable coast defenses"; on the floor by the chair are jingoistic and yellow journalism newspapers.

Caption:

Uncle Sam's Dream of Conquest and Carnage -- Caused by Reading the Jingo Newspapers

Puck, November 13, 1895

Note that I downscaled the image to half source resolution to conform to lemmy.today pict-rs resolution restrictions; it's still pretty decent resolution.

12
submitted 2 weeks ago by tal@lemmy.today to c/comicstrips@lemmy.world

Illustration shows Uncle Sam using a magnifying glass to see in his left hand a diminutive man labeled "Rumor Monger" yelling "Panic, National Disaster, Failures, [and] Ruin" into a megaphone labeled "Wall Str."

Caption:

The Wall Street Rumor-monger

Uncle Sam -- Well! Well! Will this nuisance ever learn that the country governs Wall Street; not Wall Street, the country?

23
submitted 2 weeks ago by tal@lemmy.today to c/comicstrips@lemmy.world

Illustration shows an old man labeled "Republican Reactionary" and an old woman labeled "Democratic Reactionary" standing together, looking up at a dirigible labeled "Progressive Policies".

Caption:

Set in their ways

"Well, the young folks may go if they want to, but they'll never get you and me in the breakneck thing."

Source: https://www.loc.gov/resource/ppmsca.27734/

Puck, May 10, 1911.

29
submitted 2 weeks ago* (last edited 2 weeks ago) by tal@lemmy.today to c/comicstrips@lemmy.world

Illustration shows a man labeled "Workingman" bent over under the weight of an enormous dinner pail labeled "Tariff for Graft Only".

Caption:

The Fullest Dinner Pail

Source: https://www.loc.gov/resource/ppmsca.26274/

Puck, May 27, 1908

Note that the dinner pail was the analog of what we'd call the lunchbox today; dinner was, at one point, the mid-day meal.

15
submitted 2 weeks ago* (last edited 2 weeks ago) by tal@lemmy.today to c/comicstrips@lemmy.world

Illustration shows Uncle Sam in a tree, chased there by the Russian Bear which is standing at the base of the tree; Uncle Sam has dropped his rifle labeled "U.S. Duty on Russian Sugar."

Caption:

As the tariff-war must end

Uncle Sam (to Russia) -- Don't shoot! I'll come down!

Source: https://www.loc.gov/resource/ppmsca.25550/

Puck, July 31, 1901

30
submitted 2 weeks ago by tal@lemmy.today to c/news@lemmy.world
11
submitted 2 weeks ago* (last edited 2 weeks ago) by tal@lemmy.today to c/youshouldknow@lemmy.world

db0 set up an AI image generator bot both on the Threadiverse and Mastodon some time back for anyone to use. All one needs to do is mention it in a comment followed by the text "draw for me" and then prompt text, and it'll respond with some generated images. For example:

@aihorde@lemmy.dbzer0.com draw for me An engraving of a skunk.

Caused it to reply back to me with:

Here are some images matching your request

Prompt: An engraving of a skunk.

Style: flux

The bot has apparently been active for some time, and it looks like few people were aware that it existed or used it -- I certainly wasn't!
I don't know whether it will work in this community, as this community says that it prohibits most bots from operating here. However, I set up a test thread over here on !test@sh.itjust.works to try it out, where it definitely does work; I was exploring some of how it functions there, and if you're looking for a test place to try it out, that should work!

It farms out the compute work to various people who are donating time on their GPUs via AI Horde.

The FAQ for the bot is here. For those familiar with local image generation, it supports a number of different models.

The default model is Flux, which is, I think, a good choice -- it takes English-like sentences describing a picture, and is pretty easy to use without a lot of time spent reading documentation.

A few notes:

  • The bot disallows NSFW image generation, and if it detects an attempt, it'll impose a one-day tempban on its use, to try to make it harder for people to search for loopholes that would generate them.

  • In my brief testing, there appears to be some kind of per-user rate limit. db0 says that he does have a rate limit on Mastodon, but wasn't sure whether he put one on Lemmy, so you might only be able to generate so many images so quickly.

  • The way one chooses a model is to change the "style" by ending the prompt text with "style: stylename". Some of these styles entail use of a different model; among other things, it's got models specializing in furry images; there's a substantial furry fandom crowd here. There's a list of supported styles here with sample images.

db0 has encouraged people to use it in that test post, and in another thread where we were discussing this, said to have fun. I wanted to post here to give it some visibility, since I think that a lot of people, like me, have been unaware that it has been available. Especially for people on phones or older computers, doing local AI image generation on GPUs really isn't an option, and this lets folks who do have GPUs share them with those folks.

135
submitted 3 weeks ago by tal@lemmy.today to c/cassettefuturism@lemm.ee

Original post by Crul@lemm.ee:

Source: Photo by Sandstein - File:Epson HX-20 in case - MfK Bern.jpg - Wikimedia Commons

Wikipedia: Epson HX-20

The Epson HX-20 (also known as the HC-20) was the first "true" laptop computer. It was invented in July 1980 by Yukio Yokozawa, who worked for Suwa Seikosha, a branch of Japanese company Seiko (now Seiko Epson), receiving a patent for the invention.

Seen on Functional object - Object, Epson, Epson portable computer, 1980-1989

102
submitted 4 weeks ago by tal@lemmy.today to c/cassettefuturism@lemm.ee

https://lemm.ee/post/65824884 for details.

Moderators interested in migrating to a new community on another instance might want to consider selecting an instance and doing so sooner rather than later so that users here have time to see a migration post here and subscribe to the new community.

46
submitted 1 month ago by tal@lemmy.today to c/news@lemmy.world
126
submitted 1 month ago by tal@lemmy.today to c/news@lemmy.world
135
submitted 1 month ago by tal@lemmy.today to c/world@lemmy.world

tal

joined 2 years ago