[-] magn418@lemmynsfw.com 8 points 10 months ago* (last edited 10 months ago)

They all have the same breasts. Need more variation 😆

[-] magn418@lemmynsfw.com 2 points 11 months ago* (last edited 11 months ago)

https://lemmynsfw.com/post/4048137

I'd say try MythoMax-L2 first. I think it's a pretty solid all-rounder. It does NSFW but also other things. Nothing special and not the newest anymore, but easy to get going without fiddling with the settings too much.

If you can't run models with 13B parameters, I'd have to think about which of the 7B models is currently the thing. I think 7B is the size most people play around with and use to produce new finetunes, merges and whatnot. But I also can't keep up with what the community does; every bit of information is kind of outdated after 2-4 weeks 😆

[-] magn418@lemmynsfw.com 4 points 11 months ago* (last edited 11 months ago)

I assume (from your user handle) that you know about the allure of roleplaying and diving into fantasy scenarios. AI can do it to some degree. And -of course- people also do erotic roleplay. I think this has always taken place: people met online to do this kind of roleplay in text chats. And nowadays you can do it with AI. You just tell it to be your synthetic maid or office affair or waifu and it'll pick up that role. People use it for companionship; it'll listen to you, ask you questions, reassure you... whatever you like. People also explore taboo scenarios... It's certainly not for everyone. You need a good amount of imagination, since everything is just text chat. And the AI isn't super smart. The intelligence of these models isn't quite on the same level as the big commercial services like ChatGPT. Those can't be used anyway, as they've all banned erotic roleplay and also refuse to write smutty stories.

I agree with j4k3. It's one of the use-cases for AI I keep coming back to. I like fantasy and imagination in connection with erotics. And it's something that doesn't require the AI to be factually correct, or as intelligent as it'd need to be to write computer programs. People have raised concerns that it's addictive and/or makes people even more lonely if they live with just an AI companion... To me it's more like a game. You need to pay attention not to get stuck in your fantasy worlds and sit in front of your computer all day. But I'm fine with that. And I'm less reliant on AI than people who use it to sum up the news and believe the facts ChatGPT came up with...

[-] magn418@lemmynsfw.com 2 points 11 months ago* (last edited 11 months ago)

Hehe. It's fun. And a different experience every time 😆

I don't know which models you got connected to. Some are a bit more intelligent, but they all have their limits. I also sometimes get that: I roleplay something happening in the kitchen and suddenly we're in the living room instead. Or lying in bed.

And they definitely sometimes have the urge to mess with the pacing. For example, deciding that now is the time to wrap everything up in two sentences. It really depends on the exact model; some of them have a tendency to do so, and it's a bit annoying if it happens regularly. The ones trained more on stories and extensive smut scenes do better.

The comment you saw is definitely also something AI does. It has seen text with comments or summaries underneath, or forum-style conversations. Some amateur literature contains lines like 'end of story' or 'end of part 1' followed by some commentary. But nice move that it decided to mock you 😂

Thanks for providing a comparison to human NSFW chats. I've always wondered how that works (or turns out / feels). Are there dedicated platforms for that? Or do you look for people on Reddit, for example?

[-] magn418@lemmynsfw.com 2 points 11 months ago* (last edited 11 months ago)

LLMs use a lot of memory, so if you're doing inference on a GPU you'll want one with enough VRAM, like 16GB or 24GB. I've heard lots of people like the Nvidia 3090 Ti because that graphics card could(/can?) be bought used at a good price for something with 24GB of VRAM. The 4060 Ti has 16GB of VRAM and (I think) is the newest generation. And AFAIK the 4090 is the newest consumer/gaming GPU with 24GB of VRAM. The gaming performance of those cards isn't really the deciding factor; any of the somewhat newer models will do. It's mostly the amount of VRAM that is important for AI. (And pay attention: an Nvidia card with the same model name can have variants with different amounts of VRAM.)

I think the 7B / 13B parameter models run fine on a 16GB GPU. But at around 30B parameters, 16GB isn't enough anymore; the software will start "offloading" layers to the CPU and it'll get slow. With a 24GB card you can still load quantized models at that parameter count.

(And their professional equipment dedicated to AI includes cards with 40GB or 48GB or 80GB. But that's not sold for gaming and also really expensive.)
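For a rough idea, you can also do the math on a napkin: quantized weights take about the parameter count times the bits per weight, plus some headroom for the KV cache and activations. A sketch of that estimate (the ~4.5 bits per weight for a Q4_K_M-style quant and the flat 1.5GB of overhead are my own rough assumptions, not exact figures):

```python
def estimate_vram_gb(params_billion: float,
                     bits_per_weight: float = 4.5,  # ~Q4_K_M quant; use 16 for fp16
                     overhead_gb: float = 1.5) -> float:
    """Very rough VRAM needed to fully load a quantized model on the GPU:
    weights (params * bits / 8 bytes each) plus a flat allowance for
    KV cache, activations and runtime buffers."""
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

for size in (7, 13, 30, 70):
    print(f"{size}B @ ~4.5 bpw: ~{estimate_vram_gb(size):.1f} GB")
```

That lines up with the sizes above: a 13B quant comes out under 16GB, while ~30B lands between 16GB and 24GB.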

Here is a VRAM calculator:

You can also buy an AMD graphics card in that range. But most machine-learning stuff is designed around Nvidia and their CUDA toolkit, so with AMD's ROCm you'll have to do some extra work and it's probably not as smooth to get everything running. And there are fewer tutorials and fewer people around with that setup. But Nvidia can sometimes be a pain on Linux. If that's a concern, have a look at ROCm and AMD before blindly buying Nvidia.

With some video cards you can also put more than one into a computer, combine them and thus have more VRAM to run larger models.

The CPU doesn't really matter too much in those scenarios, since the computation is done on the graphics card. But if you also want to do gaming on the machine, you should get a proper CPU for that. And you want at least as much RAM as VRAM, so probably 32GB. But RAM is cheap anyway.

The Apple M2 and M3 are also liked by the llama.cpp community for their excellent speed. You could also get a MacBook or iMac, but buy one with enough RAM, 32GB or more.

It all depends on what you want to do with it, what size of models you want to run, how much you're willing to quantize them. And your budget.

If you're new to the hobby, I'd recommend trying it first. For example, kobold.cpp and text-generation-webui with the llama.cpp backend (and a few others) can do inference on the CPU (or on the CPU with part of the model on the GPU). You can load a model on your current PC with that and see if you like it, and get a feeling for what kind of models you prefer and their size. It won't be very fast, but it'll do. Lots of people try chatbots and don't really like them. Or it's too complicated for them to set up. Or you're like me and figure out you don't mind waiting a bit for the response and your current PC is still somewhat fine.

[-] magn418@lemmynsfw.com 2 points 1 year ago* (last edited 1 year ago)

Thanks, yeah, this is definitely very useful to me. Lots of stuff regarding this isn't really obvious. And I've made every mistake that degrades the output: giving conflicting instructions, inadvertently steering things in a direction I didn't want so it got shallow and predictable. Or not setting enough direction.

> Briggs Myers

I agree, things can prove useful for a task despite not being 'true' (for lack of a better word). I can tell by the way you write that you're somewhat different(?) from the usual demographic here. Mainly because your comments are longer and focused on detail. And it seems to me you're not bothered with giving "easy answers", in contrast to the average person who is just interested in getting an easy answer to a complex problem. I can see how that can prove incompatible at times. In real life I've always done well by listening to people and then going with my gut feeling concerning their personality. I don't like judging people or putting them into categories, since that doesn't help me in real life and narrows my perspective. Whether I like someone or want to listen to them, for example for their perspective or expertise, is determined by other (specific) factors, and I make that decision on a case-by-case basis. Some personality traits often go together, but that's not always the case and it's really more complex than that.

Regarding story-writing it's obviously the other way around: I need to guide the LLM in a direction and lay down the personality in a way the model can comprehend. I'll try to incorporate some of your suggestions. In my experience the LLMs usually know the well-known concepts, including some of the information the psychology textbooks have available. So, though I haven't tried it yet, I'd also conclude that it's probably better to have it deduce things from a Briggs Myers personality type than to describe it with many adjectives. (That's what I've done up to this point.)

In my experience the complexity starts to pile up once you do more than the obvious or simple role-play. I want characters with depth, ambivalence... and conflict is what drives the story. Back when I started tinkering with AI, I did a submissive maid character. I think lots of people have started out with something like that, and even the more stupid models can easily pull it off. But you can't then go on and say the character is submissive and defiant at the same time; it just confuses the LLM and doesn't produce good results... I'm picking a simple example here, but that was the first situation where I realized I was doing it wrong. My assessment is that we need some sort of workaround to get it into a form the LLM can understand and do something with. I'm currently busy with a few other things, but I'll try introducing psychology and see whether the other workarounds you've described, like shadow-characters, prove useful to me.

> If you pay very close attention to each model, you will likely notice how they remind themselves [...]

Yes, I've observed that. It comes as no surprise to me that LLMs do it, as human-written stories also do that: repeat important stuff, or build a picture that can later be recalled by a short mention of the keywords. That's in the training data, so the LLMs pick up on it.

With the editing it's a balance. It picks up on my style, and I can control the level of detail this way, or start a specific scene with a first sentence. But sometimes it seems I'm also degrading the output, that is correct.

> the best way to roleplay within Oobabooga itself is to use the Notepad tab

I've also been doing that for some time now.

> drop boundaries, tell it you know it can [...]

Nice idea. I've done things like that. Telling it it's a best-seller writer of erotic fiction already makes a good amount of difference. But there's a limit to that: if you tell it to write intense underground literature, it also picks up on the lower quality, language and quirks of amateur writing. I've also tried an approach like few-shot prompting: give it a few darker examples to shift the boundaries and atmosphere. I think the reason all of that works is the same. The LLM needs to be guided where to orient itself, what kind of story type it's trying to reproduce, because they all have certain stereotypes, tropes and boundaries built in. Without specific instructions it seems to prefer the common way: remaining within socially acceptable boundaries, using something only as an example of what's wrong, immediately contrasting ethical dilemmas and pushing toward a resolution, or not delving into conflict too much.
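Mechanically, the few-shot trick is just string assembly: put the persona up front, then a couple of tone-setting example passages, then the actual scene opener. A minimal sketch (the section markers, function name and sample texts are illustrative only; in practice the whole thing should be wrapped in whatever prompt template your model expects):

```python
def build_fewshot_prompt(persona: str, examples: list[str], opener: str) -> str:
    """Assemble a story prompt: persona first, then example passages that
    set tone and boundaries, then the opening of the actual scene."""
    parts = [persona]
    for i, ex in enumerate(examples, start=1):
        parts.append(f"### Example passage {i}\n{ex}")
    parts.append(f"### Story\n{opener}")
    return "\n\n".join(parts)

prompt = build_fewshot_prompt(
    "You are a best-selling writer of erotic fiction who likes to push limits.",
    ["A short, darker sample paragraph...", "Another tone-setting excerpt..."],
    "The kitchen was quiet when she came home late that night.",
)
```

The point is that the examples shift where the model "orients itself" far more reliably than an adjective like "dark" in the instructions would.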

And I've never found useful what a lot of other people do: overly telling it what to do and what not to do. Especially phrasing things negatively ("Don't repeat yourself", "Don't write for other characters", "Don't talk about this and that"...) has never worked for me. It's more the opposite; it makes everything worse. And I see a lot of people doing this. In my experience the LLM can understand negatively worded instructions, but it can't "not think of an elephant". Positively worded instructions work better. And better yet is to set the tone correctly: have what you want emerge from simple concepts and a concrete setting that answers the "why", rather than just telling it what to do.

I've also introduced further complexity, since I don't like spoon-feeding things to the reader. I like to confront them with some scenario and raise questions, but have the reader make up their own mind, contemplate and come up with the answers themselves. The LLMs I've recently tried know that this is the way stories are supposed to be written, and why we have open-ended stories. But they can't really do it. The LLMs have a built-in urge to answer the questions and include some kind of resolution or wrap-up, or to analyze the dilemmas they've just made up and focus on the negative consequences to showcase something. And this is related to the point you made about repeating information in the stories: if I just rip it out by editing, it sometimes leads to everything getting off track.

I'll try to come up with some sort of meta-level story for the LLM. Something that answers why the ambivalence is there, why to explore the realm beyond boundaries, why we only raise questions and then don't answer them. I think I need something striking, easy and concrete. Giving the real reason (I'm writing a story to explore things, and this is how stories work) doesn't seem to be clear enough to yield reliable results.

[-] magn418@lemmynsfw.com 2 points 1 year ago* (last edited 1 year ago)

I also don't mind exaggeration. I'd like to be able to picture myself in that situation, but I have some imagination available. And I know what realistic and grounded sex is like; that's not what I read literature for.

[-] magn418@lemmynsfw.com 5 points 1 year ago* (last edited 1 year ago)

It's difficult. Sometimes it's necessary to introduce that, since it generally throws the reader off if they've had time to form a picture in their head and you suddenly destroy it and have her large blubbery breasts weigh down on your chest on page 20. Or need to describe her eyes in detail later because they look at each other for a minute. Or in book 2 her sister, who also has ginger hair, comes to visit... I think that's the reason people do it. The less specific you are, the more you have to constantly factor in that all the characters could have vastly different appearances. And later descriptions of scenes have to get even less detailed.

Generally speaking I'm completely with you. Reading stories sparks imagination, and it's fun to imagine the characters and picture the scenes. It's just not easy to write it that way.

And regarding attaching numbers to the body type: I think it's also frowned upon to say someone has C-cup breasts, because that's too technical. You should describe them instead.

[-] magn418@lemmynsfw.com 3 points 1 year ago* (last edited 10 months ago)

My own results:

[Edit: Don't use this as advice. I've re-tested some of the models and I'm not happy with the results. They're inconsistent and don't hold up. Also, some of my "good" models perform badly at role-play.]

| Model name | Tested use-case | Language | Pacing | Bias | Logic | Creativity | Sex scene | Comment | Rating |
|---|---|---|---|---|---|---|---|---|---|
| Velara-11B-v2 Q4_K_M.gguf | porn storywriting | 4 | 4.5 | 3 | 4 | 4.5 | 4 | generally knows what to detail, good atmosphere | ⭐⭐⭐⭐ |
| EstopianMaid-13B Q4_K_M.gguf | porn storywriting | 4 | 4 | 4 | 3 | 3 | 5 | good at sex | ⭐⭐⭐⭐ |
| MythoMax-l2-13B Q4_K_M.gguf | porn storywriting | 4 | 5 | 4 | 4 | 4 | 3.5 | good pacing, still a solid general-purpose model | ⭐⭐⭐⭐ |
| FlatDolphinMaid-8x7B Q4_K_M.gguf | porn storywriting | 4.5 | 4 | 3 | 4 | 4.5 | 3.5 | intelligent, but isn't consistent in picking up and fleshing out interesting parts, building atmosphere and going somewhere | ⭐⭐⭐⭐ |
| opus-v1.2-7b-Q4_K_M-imatrix.gguf | porn storywriting | 3 | 5 | 3 | 3 | 5 | 3.5 | very mixed results, not consistent in quality | ⭐⭐⭐ |
| Silicon-Maid-7B Q4_K_M.gguf | porn storywriting | 4.5 | 3.5 | 3 | 4 | 3 | 3 | has a bias towards being overly positive | ⭐⭐⭐ |
| Lumosia-MoE-4x10.7 Q4_K_M.gguf | porn storywriting | 4 | 3.5 | 4 | 3 | 4 | 3 | mediocre | ⭐⭐ |
| ColdMeds-11B-beta-fix4 gguf | porn storywriting | 3.5 | 3 | 4 | 4 | 3.5 | 3.5 | mediocre | ⭐⭐ |
| Noromaid-13B-0.4-DPO q4_k_m.gguf | porn storywriting | 4 | 4.5 | 4 | 2 | 4 | 3 | very descriptive, issues with intelligence and repetition | ⭐⭐ |
| OrcaMaid-v3-13B-32k Q4_K_M.gguf | porn storywriting | 2 | 4 | 4 | 2 | 4 | 3.5 | not very elaborate language, sometimes gets a bit off | ⭐⭐ |
| Kunoichi-DPO-v2-7B Q4_K_M.gguf | porn storywriting | 4 | 1 | 4 | 4 | 4 | 3.5 | rushes things, consistently too fast for storytelling | ⭐⭐ |
| LLaMA2-13B-Psyfighter2 Q4_K_M.gguf | porn storywriting | 4.5 | 3.5 | 3 | 3 | 3 | 3.5 | good language, doesn't know what to narrate in detail | ⭐⭐ |
| go-bruins-v2.1.1 Q8_0.gguf | porn storywriting | 3 | 4 | 4 | 4 | 3 | 2 | sometimes a bit dull, no good sex scenes | ⭐⭐ |
| Neural-Chat-7B-v3-16k q8_0.gguf | porn storywriting | 4 | 4 | 3 | 2 | 4 | 2 | sometimes tries too hard with elaborate language | ⭐⭐ |
| NeuralTrix-7B-DPO-Laser q4_k_m.gguf | porn storywriting | 3.5 | 3.5 | 4 | 4 | 3.5 | 2 | misses interesting parts | ⭐⭐ |
| LLaMA2-13B-Tiefighter Q4_K_M.gguf | porn storywriting | 4 | 3 | 3 | 2 | 3.5 | 3.5 | often introduces things out of thin air | ⭐⭐ |
| mistraltrix-v1 Q4_K_M.gguf | porn storywriting | 4 | 4 | 3 | 3 | 3.5 | 2 | complicated sentences, no good description of sex | ⭐⭐ |
| Toppy-M-7B Q4_K_M.gguf | porn storywriting | 4 | 2 | 4 | 4 | 4 | 3 | too fast, not focusing on the right details | ⭐⭐ |
| WestLake-7B-v2-laser-truthy-DPO Q5_K_M.gguf | porn storywriting | 3 | 4 | 4 | 4 | 4.5 | 1 | creative, but didn't do proper sex scenes | ⭐⭐ |
| Distilabeled-OpenHermes-2.5-Mistral-7B Q4_K_M.gguf | porn storywriting | 4 | 3.5 | 3 | 4 | 3.5 | 2 | a bit dull | ⭐⭐ |

What I've done: I instructed the LLMs to be a writer of erotic stories who sells bestsellers and likes to push limits and explore taboos. I included a near-future scenario with questionable ethics and quite some room to build atmosphere, explore the world, introduce characters or get smutty after a few paragraphs. I told it several times to be vivid and detailed, to describe scenes, reactions and emotions, and to immerse the reader. I included a few things about one female character and the situation she's brought into. That pretty much sets the first two chapters. Then I fed it through each model twice, let each write around 2500 tokens, read all of those stories and rated how I liked them.
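The procedure above is basically a small evaluation loop. A sketch of the bookkeeping (the `rate_run` callback stands in for "let the model write ~2500 tokens, read it, hand-assign scores"; the criteria names just mirror my table columns):

```python
from statistics import mean

CRITERIA = ["language", "pacing", "bias", "logic", "creativity", "sex_scene"]

def evaluate(models, rate_run, prompt, runs=2):
    """For each model, do `runs` generations from the same prompt and
    average the hand-assigned per-criterion scores."""
    results = {}
    for model in models:
        scores = [rate_run(model, prompt) for _ in range(runs)]
        results[model] = {c: mean(s[c] for s in scores) for c in CRITERIA}
    return results
```

Averaging over two runs per model is crude, but it smooths out the worst sampling luck.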

I've made sure to use each model's correct, specific prompt format. But I can't tune all the parameters like temperature etc. for each one of them, so I've just used a Min-P setting that usually works well for me. That's not ideal. If you have a model that scores too low in your opinion, please comment and I'll re-test it with better sampler parameters.
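For anyone unfamiliar with Min-P: it keeps only the tokens whose probability is at least some fraction of the most likely token's probability, then renormalizes, so the candidate pool shrinks when the model is confident and widens when it isn't. A minimal sketch of the idea (not llama.cpp's actual implementation, and the toy numbers are made up):

```python
def min_p_filter(probs: dict[str, float], min_p: float = 0.05) -> dict[str, float]:
    """Drop tokens below min_p * (top token probability), then renormalize."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# With min_p=0.1 and a top probability of 0.5, anything below 0.05 is cut.
print(min_p_filter({"the": 0.5, "a": 0.3, "zebra": 0.01}, min_p=0.1))
```

That single threshold is why one Min-P value transfers between models better than a fixed top-k or top-p does.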

Also feel free to comment or make suggestions in general.


[I invite you to share and reuse my content. This text is licensed CC-BY 4.0]

[-] magn418@lemmynsfw.com 2 points 1 year ago* (last edited 1 year ago)

I like it. Sure, maybe it's a bit more on the mechanical/technical side of describing the act. You could also describe the characters' emotions and how these things make them feel. But it's not that obvious in such a short text. I think if you had given me that without a disclaimer, I wouldn't have guessed it's by someone who isn't considered a normie...

Especially in the realm of erotica and pornographic stories, there are so many perspectives on things, fetishes, and really outstanding things people like and focus on... The word "normal" kind of loses its meaning here.

There are guides on how to write erotica. It's more focused on describing a scene than other kinds of fiction anyway. And general tips on storywriting apply if you want it to sound professional: use past tense, choose a perspective from which the story is told, have a central theme and something that develops the narrative and characters and goes somewhere. But if you're just doing it for yourself, you can skip all of that and just do whatever makes you happy. Except for the consent of the involved parties, there aren't many rules to sexuality. There is some pressure from society, but in the end we all have to find out what we like and do that.

[-] magn418@lemmynsfw.com 2 points 1 year ago* (last edited 1 year ago)

I'd have a look at https://lite.koboldai.net/ and start with that. It's free, doesn't need a login, and has lots of models available that are run by volunteers. A downside would be the occasional wait time, and your texts aren't encrypted or anything; technically everything can be read by the server operators.

The paid services are easy to use. Big and well-known ones include character.ai and poe.com, but I'm not sure if they allow NSFW stuff.

There are a lot of NSFW waifu services: agnai.chat kajiwoto.ai venus.chub.ai chatfai.com crushon.ai janitorai.com charstar.ai candy.ai risuai.xyz ...

Some of them have a free tier or trial and some of them have some default characters available.

[-] magn418@lemmynsfw.com 2 points 2 years ago* (last edited 2 years ago)

Alright, to contribute an answer to my own question: some people seem to enjoy VRChat. That's virtual reality however, not text-based. Maybe it's the predominant way today, idk.

YouTube: The Unspoken World of ERP | A VRChat Documentary

https://knowyourmeme.com/memes/erp-erotic-roleplay
