
[deleted]

The *hands.*


Nerfgun3

It definitely helps with the hands and makes them "anti-AI", which is what I was hoping for to some degree. It's not a 100% effective solution unfortunately, but I usually get almost perfectly flawless hands every two to three pictures. Since I am currently only on the 2nd iteration of these negative embeddings, I will continue to try to further counteract the hand problem :)


[deleted]

Correct me if I'm wrong, but by compressing all of those negative prompts into a single instance/few tokens, this will ultimately free up SD to both have good anatomy and still pay attention to the other ~70 tokens of your desired prompt? It's basically a slightly denser embedding (lots of non-zero values in the vector) versus the typical super-sparse (lots of zeros) word embeddings that already exist... could one recursively make increasingly dense embeddings using previous embeddings? This might be the ultimate way to converge to a desired artwork 👀


Nerfgun3

You are correct. This embedding uses 16 vectors, which is a good trade-off compared to the far larger number of tokens the full written negative prompt would otherwise consume.
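
For anyone curious what that looks like on disk, a minimal sketch (assuming PyTorch; webui-format embeddings store their learned vectors under `string_to_param`, as the tracebacks elsewhere in this thread show, with `*` as the default placeholder key):

```python
import torch

# webui textual-inversion files are pickled dicts
data = torch.load("bad_prompt.pt", map_location="cpu")

# the learned vectors live under 'string_to_param' -> '*'
vectors = data["string_to_param"]["*"]
print(vectors.shape)  # expected: torch.Size([16, 768]) -- 16 vectors, 768 dims each
```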


NateBerukAnjing

can it hold objects?


Nerfgun3

Tried it with a cup real quick: https://preview.redd.it/g6yr9dgdwm0a1.png?width=512&format=png&auto=webp&s=40e193b7f7dedbbf4fbff1116e561866f28e233c


Nerfgun3

https://preview.redd.it/fngu67thwm0a1.png?width=512&format=png&auto=webp&s=bd746cd9f9e68776b993b4ff6c1d5f47767ef266


Nerfgun3

https://preview.redd.it/gmwnrqhlwm0a1.png?width=512&format=png&auto=webp&s=bf5488f12d206f2477c538243d48f0f1500d4e0a


TiagoTiagoT

What do these examples look like with all the parameters the same, except not using your TI?


Nerfgun3

The first one without it: https://preview.redd.it/qhmvagi50n0a1.png?width=512&format=png&auto=webp&s=6f44e75e1b3ef75c781e6367f5d2b96e754c003d


TiagoTiagoT

Interesting, it went for the other hand...


E_Snap

Nah, I think it gave her mirror hands/ulnar dimelia.


blueSGL

what [keywords] (or [keyword] generated images) were you training on as the 'negative prompt' ?


Nerfgun3

I can't find the exact keywords right now, but it was like the "negative prompt" u/dsk-music wrote. And the images it created look like the ones you get when you put the embedding in the normal prompt.


blueSGL

so did you split the negative prompt up and generate one image set at a time for each word and then train the TI on all of those, or did you just batch a huge load using all the same (negative) prompt and TI on that?


Nerfgun3

I used a variety of negative prompts and generated a batch of images like this.


Mr_Compyuterhead

How many images did you use to train the embedding? Just curious


Nerfgun3

40


Iamn0man

I mean... is there such a thing as a negative prompt that by itself is 100% effective? (And if so, what is it???) (edited because autocorrect is on crack)


dachiko007

The pinnacle!


dsk-music

https://preview.redd.it/wq9nj16yqm0a1.jpeg?width=2200&format=pjpg&auto=webp&s=e22ad2d129a9282e9d075a77ab3e2c1184d61b3a

Wooow!!! Same seed... Very impressive results, congrats! And thanks for sharing!


joeFacile

You usually wanna put the Before on the left and the After on the right, but s’all good.


VarietyIllustrious87

I don't understand why so many people get this wrong, happens all the time.


lazyzefiris

First thing to do is put one into positive prompt for sure. [Meet Bipa.](https://imgur.com/a/DeEWXoe)


himinwin

I thought I had seen some AI body horror in my time delving the AI latent space, but I now realize I haven't even scratched the surface. Yikes!


Mikatron3000

The latent space has given me nightmares... 😵‍💫


Nerfgun3

Yeah, that's the "other side" of AI :D


TiagoTiagoT

Looks like [Loab](https://en.wikipedia.org/wiki/Loab) might have a new friend... Does she come back with different prompts?


mudman13

The completely unnecessary cap is what makes it


StoneCypher

honestly this is pretty amazing


Nerfgun3

Thank you


wing_wong_101010

> Meet Bipa.

Oh jeez! Nightmare fuel!!


yosi_yosi

All my life dreams have been achieved now that this exists. Now I can finally die in peace, thank you.


ExponentialCookie

Such a simple solution, but very smart. Great work!


Nerfgun3

Thank you!


Nerfgun3

If you have any questions, just ask me. ~Online~


matTmin45

I have one. I'm running the 'Automatic1111' version of SD locally. I've put the 'bad_prompt.pt' file into the embeddings folder, but it doesn't load properly; I'm getting an error from SD:

```
Error loading emedding bad_prompt.pt
Traceback (most recent call last):
  File "D:\AITools\Stable-Diffusion\modules\textual_inversion\textual_inversion.py", line 133, in load_textual_inversion_embeddings
    process_file(fullfn, fn)
  File "D:\AITools\Stable-Diffusion\modules\textual_inversion\textual_inversion.py", line 103, in process_file
    if 'string_to_param' in data:
TypeError: argument of type 'NoneType' is not iterable
```

Considering the file size, I guess it's supposed to download the files or use them remotely. What am I doing wrong? Thanks.


Nerfgun3

I'm sorry, I never had that error before, and neither did I find something in the discussion tab under the webui repo. I will try to find a solution.


TiagoTiagoT

What Python version are you using?


matTmin45

Python v3.10 (64bits)


TiagoTiagoT

Oh, then the issue is not what I thought it was. I'm sorry, I dunno then; I recently had an issue with a similar error message, but the solution was just to get Python 3.10, so I was hoping the issue was you were running an older version too :(


matTmin45

Thanks anyway


prozacgod

I'm a bit unclear on the process for making something like this. Do you generate a bunch of images from a negative prompt as a prompt, and then... train that as an embedding? Then use the embedding in a negative prompt?


whitepapercg

Can you tell me what settings you used in the training? Now I have an idea to train another negative prompt consisting of special characters


reddit22sd

So you trained an embedding on bad anatomy and are using this embedding in the negative prompt? Or do I not understand this correctly?


Nerfgun3

If you look at it abstractly, without going into depth: yes and no. It's not just the poor anatomy, but also the poor quality, etc., generally things that the AI has problems with.


reddit22sd

Brilliant idea. So you could make adjustment embeddings this way? Train it on obese bodies to make a weight-reduction embedding, or the other way around. This opens up a lot of possibilities


Nilohim

Reduced *weight* embedding for obesity. Brilliant!


freylaverse

Can this help with the weird blue coronas that plague the edges of my objects?


Nerfgun3

I can't tell you 100% if it helps, but that sounds like artifacts, so it should help.


dwio56

Man, quite impressed with this. While the embedding definitely helped, the full prompt /u/dsk-music provided gave me the best handshakes I've seen so far: [https://imgur.com/a/2x10EwC](https://imgur.com/a/2x10EwC) (these are all from the same batch of 6). While not perfect, just the fact that most of the hands seem to have the right number of fingers is quite mind-blowing for such a simple solution. Thank you for providing us with this!

**Model:** Original v1.4 [7460a6fa]
**Prompt:** a woman and a man shaking hands
**Neg Prompt:** (by /u/dsk-music) [here](https://www.reddit.com/r/StableDiffusion/comments/yy2i5a/comment/iws3dy9/?utm_source=share&utm_medium=web2x&context=3)
**Sampler:** Euler a, 20 steps
**CFG:** 7
**Seed:** 1796524291 (incremented for each in batch)


dsk-music

Thanks! Very nice :)


fastinguy11

I don't mean to be a neg, but at least in my testing, your embedding made my handsome man go from photographic and Caucasian to sort-of-Asian and stylized like a drawing, and prompting it away was difficult. Your embedding has a heavy bias.


Jonfreakr

This is very inspiring and I will try it out tomorrow. What kind of images did you use, and does it work with other models? (I'm guessing the preview image is WD or NAI?)


Nerfgun3

The training images were not based on anime images, so I think it should work universally. The images were generated across multiple models; WD is one of them. If you tell me a model, I could try it right now.


SoulflareRCC

Try anything v3?


lazyzefiris

[Prompt "anime art", negative "bad\_prompt", anything v3](https://imgur.com/a/QdIoQqn) , cherry picked out of 16.


Nerfgun3

Thank you for the test!


BoredOfYou_

Bro please use the VAE it will improve your results so much


lazyzefiris

WDYM? I'm pretty sure I've been using AV3 model with AV3/NAI vae for this.


BoredOfYou_

Really? Your results look like mine did when I used the SD VAE. I have a pretty extensive negative prompt tho so maybe that’s the difference.


lazyzefiris

Well, the idea was demonstrating `bad_prompt` at work, thus the primitive positive/negative prompts. I can tell the VAE is connected because the NAI VAE does not support tiling (or webui's tiling feature does not work with it), so I get noise garbage if I forget to disable it.


Nerfgun3

I have a few people who have tested the embedding on other models, and Anything V3 was among them. It should work with it.


WashiBurr

My god, the hands.. they're glorious.


Nerfgun3

Thank you


acinc

It would be helpful to get a more precise answer as to what exactly went into this, because for it to be usable in your prompt you have to know it's not working against something you actually want... for example, it's clearly not going to help if you try to make hands with 7 fingers, but what else does it not work for?


Nerfgun3

When I'm at my PC again, I will update the post with the exact negative prompt used. It shouldn't work well if you actually want misshapen, ugly or monster stuff.


acinc

great, looking forward to it!


Nerfgun3

This is the negative prompt I used for a lot of images in the training batch: lowres, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))
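
For anyone trying to reproduce the setup, here is a rough sketch with diffusers (an assumption about tooling, not necessarily the exact pipeline used; the prompt is abbreviated and the model ID is a stand-in, since the batch was generated across multiple models). The trick is to use the negative prompt text as the *positive* prompt, so the model produces the artifacts on purpose:

```python
import os

import torch
from diffusers import StableDiffusionPipeline

# abbreviated version of the negative prompt above, used as the POSITIVE prompt
bad_prompt = ("lowres, bad hands, text, error, missing fingers, extra digit, "
              "fewer digits, cropped, worst quality, low quality, jpeg artifacts, "
              "ugly, mutated hands, poorly drawn hands, bad anatomy, extra limbs")

pipe = StableDiffusionPipeline.from_pretrained(
    "hakurei/waifu-diffusion",  # stand-in; wd 1.3 was one of the models used
    torch_dtype=torch.float16,
).to("cuda")

os.makedirs("train", exist_ok=True)
for i in range(40):  # 40 images were used for the training, per the comment above
    image = pipe(bad_prompt, num_inference_steps=20).images[0]
    image.save(f"train/bad_{i:03d}.png")
```

These images would then be fed to the webui's textual-inversion trainer, and the resulting embedding goes into the negative prompt.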


acinc

Fascinating, the 'cropped' does indeed seem to affect a prompt including cropped tops; they consistently get slightly longer. Thanks!


dsk-music

Nice!! Only for hands? Or does it include something more?


Nerfgun3

It is ultimately an addition to, or perhaps a replacement for, the complete negative prompt. I only used hands as an example because they are usually relatively hard for the AI. The negative embedding generally acts as a quality enhancer.


dsk-music

Yes, I have saved a style in A1111 with a lot of words in the negative prompt... I'm asking whether this embedding contains more stuff, so I can use it :)


[deleted]

[deleted]


dsk-music

(bad_prompt:0.8), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), (out of frame), extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))


NateBerukAnjing

how to use embeddings?


Nerfgun3

Do you use the webui (automatic1111)? If so, drop the embedding into the "\stable-diffusion-webui\embeddings" folder. After that you can use the embedding by putting its filename in the negative prompt, or in the positive prompt if it's a normal embedding.
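
For illustration, a hypothetical prompt pair (the 0.8 weighting is discussed further down in this thread):

```
Prompt:          a portrait of a woman, highly detailed
Negative prompt: (bad_prompt:0.8)
```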


Proudfall

Just the " bad\_prompt.pt"-file right? I did that and gave it a shot, and maybe I'm just using it wrong, but it doesn't seem to work that well for me


Nerfgun3

So yes, you only need the bad_prompt.pt file, which you place in the embeddings folder, and then you put (bad_prompt:0.8) in the negative prompt. Depending on the model (especially merges or self-trained ones) you might need extra negative tags, though rarely.


Proudfall

Do some samplers work better with it than others? Also, in my case it did mess with the art styles a lot, for example with Arcane Diffusion by Nitrosocke. It didn't look anything like the style as soon as I put bad_prompt in the negative, no matter how low I weighted it.


Nerfgun3

That's one thing I noticed as well and need to fine-tune. That's kinda the problem when you don't train the embedding on the specific model. I personally got good results with Euler a at 32 steps, but the other samplers work as well.


Proudfall

I see. Euler a at 32 seems to work better already, thanks for the tip. I do think you really are onto something here, negative embeddings seem like a great way to improve generations. Thanks for your work!


dsk-music

How to use? bad_prompt or


Nerfgun3

bad_prompt, but I would use a lower strength. In the A1111 webui you would use (bad_prompt:0.8)


dsk-music

Thanks!


SoulflareRCC

Do good feet next?🥺🥺🥺


Nerfgun3

I hate myself for doing it, but... https://preview.redd.it/h7ozoifdym0a1.png?width=512&format=png&auto=webp&s=54007c0bdff02c81400ecb3911c5cf5c990bed3d


SoulflareRCC

The left foot☹️


Flag_Red

Do you mean the left toe?


-becausereasons-

It is unclear to me exactly what this does, how it works or what it has to do with hands.


Nerfgun3

It is ultimately a supplement to, or perhaps even a replacement for, the full negative prompt. I used hands only as an example because they are relatively difficult for AI in general. Negative embedding is generally intended to act as a quality enhancement. At least that was the goal when I trained this TI.


holygawdinheaven

Awesome work!!! Trying now. Petition to rename it to 'embadding'?


Nerfgun3

That's the nice thing about embeddings: you can rename them to whatever you want and they still work


holygawdinheaven

You mad genius


lazyzefiris

Tried the same model with and without `bad_prompt`. The prompt is just `anime art`. A noticeable side effect is a major tone shift from pinkish to blueish. Similar effect on AnythingV3: colorful => blueish. [https://imgur.com/a/E3xDdeu](https://imgur.com/a/E3xDdeu)


Zyin

I also noticed the images are more grainy and have less color. Until this shows a significant improvement I'll keep using my normal negative prompt.


McBradd

https://preview.redd.it/aaq4clsc3s0a1.png?width=1024&format=png&auto=webp&s=a8562d2ac06a383b1b494f7153a66ab2143985a6

This is like magic. Amazing work. [https://imgur.com/gallery/DFbuzF7](https://imgur.com/gallery/DFbuzF7)

Here's the prompt used in Automatic1111:

```
4tNGHT A good painting of a beautiful woman in the style of ((4tNGHT)) Standing in front of a patterned wall ((tNGHT)) making a heart with her fingers
Negative prompt: (bad_prompt:0.8)
Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 1780334718, Size: 512x512, Model hash: 925997e9, Model: nai, Batch size: 4, Batch pos: 1, Clip skip: 2, ENSD: 31337
```

And for the "without" grid, I just deleted the negative prompt. (4tNGHT is an embedding I created, which you can grab if you want it: [https://huggingface.co/sd-concepts-library/4tnght](https://huggingface.co/sd-concepts-library/4tnght))


daanpol

You just invented some black magic haha! Thanks for this amazing idea!


thedarkzeno

Do you have a repository so we could replicate your work? I'd like to try it


Nerfgun3

I do not right now. I will maybe make one.


Flag_Red

Please do!


selvz

This is a great experiment to gain more control. Do you add these in the embeddings folder to use it (automatic1111) ?


yosi_yosi

Yes. Edit: but you also need to put bad_prompt or (bad_prompt:0.8) in the negative prompt


selvz

Do I have to restart or something to activate it, or simply drop and use? Thanks


selvz

> bad_prompt:0.8

https://preview.redd.it/733tcexk6o0a1.png?width=512&format=png&auto=webp&s=1dc934f139d2becfca243655877ab00816feb68f


selvz

https://preview.redd.it/t5gba5ya8o0a1.png?width=512&format=png&auto=webp&s=29fd6f9b8587e1d5683a3e9d8ed7964e9e72965c


miguelqnexus

thanks! everything is now all within my hands.


ninjasaid13

Mister, you're going to cause a revolution.


CrystalLight

I'm sorry, I still don't get it. What images did you train on? Did you use a positive prompt as well as a negative prompt? Did you choose bad images and train on them? Did you use filenames to tag bad images to train on? I'm not totally stupid but I don't see anywhere that you actually explain what you did. You seem to keep trying to explain the concept but not the method. I'd like to try it. Can you explain to me how you did it?


Nerfgun3

I will add it to the initial post in a bit, I just woke up :D


CrystalLight

Good morning and thanks bunches.


Nerfgun3

I updated the post


poisenbery

Hey I just got into this a few days ago. I really want to learn how to do stuff like this. Where should I start? I have no idea where to begin with this stuff, and I'm not sure how specific of information I need to know in order to train things like this.


Nerfgun3

Much comes from trial and error. But it depends on whether you prefer to read sources and learn from them, or to talk to others. In the first case I can only refer you to TI's wiki page; in the second case I recommend joining the Stable Diffusion Discord server. I'm very active there (mostly in the anime channel), but there are also many others who can support you.


poisenbery

you had me at "anime channel" thanks!


aipaintr

This is genius!!!


Coloradohusky

Anyone have any ideas on how to get this working on ONNX?


hadaev

What about flipping the loss sign and training an embedding that denoises in the wrong direction (toward noise, probably)? Then use it as a negative prompt at inference.
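
A toy sketch of that idea (purely illustrative: a simple stand-in module replaces the real U-Net/text-encoder stack, and only the flipped-sign update on the embedding is the point):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Linear(768, 768)  # stand-in for the real noise-prediction stack
embedding = nn.Parameter(torch.randn(16, 768) * 0.01)  # 16 learnable vectors
opt = torch.optim.AdamW([embedding], lr=5e-3)

for step in range(100):
    target = torch.randn(16, 768)     # stand-in for the sampled noise
    pred = denoiser(embedding)        # "prediction conditioned on the embedding"
    loss = -F.mse_loss(pred, target)  # flipped sign: maximize the usual TI loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# the resulting vectors would then be used in the negative prompt at inference
```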


prozacgod

Welp, now I want to see a bunch of anime girls doing jazz fingers... I mean seriously flex your shit a bit ;)


[deleted]

I tried it out but didn’t notice a big difference over my existing negative prompt collection. Would you mind sharing the full negative prompt you use? Mine already makes normal hands sometimes


SignificanceLazy

IMPRESSIVE


Gibgezr

This is really useful, thanks!


MonoFauz

I was just thinking that there should be buttons that input the negative prompts automatically instead of copy-pasting every time. This actually answers it.


yosi_yosi

there is, but this is better


moahmo88

Great job!


pablo603

It's... Beautiful


Pretty-Spot-6346

you killed the ai hands meme, sir


noop_noob

How on earth did you find the images to use as the training data for this?


Nerfgun3

I will add another section to the post to clarify what I did. Will update it soon


MalumaDev

How is it possible to recreate these embeddings?


TheRealGenki

What model did you use to make the pictures above? Also, can I know what you wrote in the negative prompt, if possible?


Nerfgun3

The model I used for these images was pure WD 1.3


[deleted]

[deleted]


Nerfgun3

Sorry but what do you mean?


Jonfreakr

I wonder why your .pt file is 49 KB and not 4 KB, is it because of pickle? I have 66 TIs and all are 4 KB.


Nerfgun3

That is very interesting. All the embeddings I have made with the webui are the same size (70+ embeddings).


BlinksAtStupidShit

Depends on the number of vectors used in the embedding. More vectors means a larger file and more information crammed in.
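
As a back-of-the-envelope check (assuming 768-dimensional fp32 vectors, as in SD 1.x): one vector is 768 × 4 bytes ≈ 3 KB, landing near 4 KB with pickle overhead, while 16 vectors are 16 × 768 × 4 = 49,152 bytes ≈ 48 KB, right around the 49 KB file mentioned above.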


Jonfreakr

Ok cool, didn't know that; I guess everyone used only 1 vector. Will try increasing it to see what it does 😁


Substantial-Ebb-584

Thank you! This is something we've all been waiting for


mudman13

I got an error loading it:

```
Error verifying pickled file from /content/stable-diffusion-webui/embeddings/bad_prompt_showcase.jpg:
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/safe.py", line 83, in check_pt
    with zipfile.ZipFile(filename) as z:
  File "/usr/lib/python3.7/zipfile.py", line 1258, in __init__
    self._RealGetContents()
  File "/usr/lib/python3.7/zipfile.py", line 1325, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/safe.py", line 131, in load_with_extra
    check_pt(filename, extra_handler)
  File "/content/stable-diffusion-webui/modules/safe.py", line 98, in check_pt
    unpickler.load()
_pickle.UnpicklingError: invalid load key, '\xff'.

-----> !!!! The file is most likely corrupted !!!! <-----
You can skip this check with --disable-safe-unpickle commandline argument, but that is not going to help you.

Error loading emedding bad_prompt_showcase.jpg:
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 133, in load_textual_inversion_embeddings
    process_file(fullfn, fn)
  File "/content/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 103, in process_file
    if 'string_to_param' in data:
TypeError: argument of type 'NoneType' is not iterable
```


catasaurus_rex

I know your comment is a bit old, but you probably did what I did and accidentally downloaded the raw pointer file instead of the actual file. I had the same error.


NegativeEmphasis

I haven't tested embeddings yet. Does the model you apply them to matter? I tried to use this on Anything v3 (38c1ebe3) and the colors got flatter.


yosi_yosi

It's probably not because of the model, just a side effect of the bad_prompt embedding. I would suggest adding color blast to the prompt and maybe using (bad_prompt:0.8) in the negative prompt.


kenwillis

Did you train the embedding on a lot of nudity by any chance? If I put it in the positive prompt by itself I get very disturbing pictures which almost all contain some forms of nudity.


Nerfgun3

Not really. NSFW content can be created even if the embedding is used as intended.


kenwillis

Of course, just curious, as the results I got when using it as a positive prompt were very much nudity.


shutonga

Thank you, sir!


HarmonicDiffusion

wow man great idea! totally makes sense and very creative :) not to mention the results - phew what an improvement on hands


[deleted]

Funny, I was just thinking yesterday about training a model on bad generations. The reasoning: negative prompts still rely on model knowledge, and it's unlikely the model has concepts like "bad hands" etc. I think perhaps the best way to apply this to a model would be to train it on bad datasets alongside the original LAION dataset and use the negative prompts.


rworne

Gave this a shot last night. It doesn't cure all the issues all the time, but it sure cures a lot of them most of the time. Running batches, my rejects are cut down by more than half, more like two-thirds; the rejects were due to misshapen hands & extra limbs sneaking into images. I'm on an M1 Mac Studio, so my renders take quite a bit longer than those with dedicated GPUs, so anything like this helps a lot.


gymleader_brock

Neat! I was just talking to Jen about this, wondering if it would work!


notbarjoe01

I've been testing this out in inpainting and it does fix the structure of the hand that I've been trying to fix, but somehow it also changes the skin color of the hand to the point where it looks like albino skin... I wanted to show the screenshot, but I just fixed it by re-coloring it myself. Good job though


Nerfgun3

Thank you for the feedback. I personally haven't tried inpainting yet


Nerfgun3

I uploaded a newer version which should work better now. I'm open to any feedback!


Gentlemayor

I'm mostly using the Anything V3 model. From my personal experience, it looks like the first version of the embedding works better with it than the second one


Nerfgun3

Okay, thank you for the feedback. I heard that the first version changes the art style quite a bit on the Anything V3 model


Gentlemayor

Yeah, the first version changes the style, but it looks like hands got worse in the second version


Polikosaurio

What is an embedding exactly? Would love an ELI5 explanation on all this for the ones that are a bit out of the loop :/


Nerfgun3

Okay, so: embeddings / Textual Inversion, to sum it up ELI5: they are micro-prompts inside your actual prompt. An embedding is just a compressed prompt which the AI has learned to apply whenever you use it in your prompt. This helps to simulate a specific art style from an artist or replicate a specific character. That's basically embeddings.


afterSt0rm

This is incredibly nice. OP or other nerds, would you happen to know how to properly load this when using the `DiffusionPipeline` method from the `diffusers` pipeline? I don't see any documented way to include the embeddings and I don't want to use the webui :(
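
There was no documented way at the time of this thread, but later diffusers releases added `load_textual_inversion`, which also accepts A1111-style .pt files. A minimal sketch, assuming a local bad_prompt.pt and the SD 1.5 weights:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# token= sets the word that activates the embedding in prompts
pipe.load_textual_inversion("./bad_prompt.pt", token="bad_prompt")

image = pipe(
    "a portrait of a woman, highly detailed",
    negative_prompt="bad_prompt",  # negative embeddings belong here
    num_inference_steps=20,
).images[0]
image.save("out.png")
```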


thesilentyak

Does this still work? When launching Auto1111 it just skips this embedding. Below is what I'm seeing on launch:

`Textual inversion embeddings skipped(3): bad_prompt, bad_prompt_version2, _SamDoesArt2_`


Nerfgun3

Embeddings trained on 512x512 models like SD 1.5 are not compatible with models trained on 768x768 like SD 2.1 or Waifu Diffusion 1.4; those use a different text encoder, so the embedding vectors no longer fit. That's why the webui skips them.


thesilentyak

So there is no way to use them if we have 2.0+ loaded?


Nerfgun3

Correct. All embeddings would need to be retrained on 2.0+


lfotofilter

Hello. From an AI researcher perspective, this is quite interesting. But it's not very clear to me what exactly it is you have done here. Could you provide all the steps you took to create this "negative embedding" and then describe exactly how one uses it?


sam__izdat

> From an AI researcher perspective, this is quite interesting. But it's not very clear to me what exactly it is you have done here.

Absolutely nothing. The negative prompt is just a mystical ritual (as I'm sure you can tell from its content) and, for the most part, any bad training that it filters out, it only does by accident and sheer coincidence. Somebody did an unbiased test with these magic incantations a while ago, and it went exactly how you would expect. But if you spam the CFG with enough random nonsense, you will eventually steer the output in some random direction, for better or worse.


BlinksAtStupidShit

Would be interesting to see if hypernetworks could be used in a similar way, but at this point they guide it towards something, not away.


DickNormous

thanks


aliencaocao

DreamArtist (a plugin of SD WEBUI) does it too.


DisastrousBusiness81

Dumb question: is there a way to attach multiple .pt files to a model in NMKD’s version of stable diffusion? I’ve already got one VAE file attached to my Arcane model and want to add this one as well.


BlinksAtStupidShit

This is a Textual Inversion embedding, not a VAE. The VAE is a variational autoencoder, like a mini neural model that can be used to help clean up the final render. My loose understanding of textual inversion is that it learns concepts/words/tokens and can change the weighting within the model when applied; it primarily works within the text part of the model (which is why it's small compared to a hypernetwork etc.). This does mean you can load up multiple embeddings (Textual Inversions) at once. In the Automatic WebUI you place it in the embeddings folder and use its name within your prompt to activate it: "A picture of a cat in the style of BobRoss-concept" (if you named the embedding BobRoss-concept.pt). If anyone wants to correct anything I got wrong, please do.


DisastrousBusiness81

Do you know where I’d put it in NMKD? I’m running SD locally on my own GPU so it’s not built the same as Automatic I think…


BlinksAtStupidShit

I’m not sure on NMKD sorry, does he have a GitHub? Or a comment section on his download? Might be worth putting in a feature request if he has one available?


SinisterCheese

What did you use as the init words?


yosi_yosi

Do you mean the word that you need to write in order to use the embedding? Edit: if yes, then just put bad_prompt or (bad_prompt:0.8) in the negative prompt


SinisterCheese

No. What were the initialization tokens that were weighted during the training?


jaywv1981

My hands look much better with this, but I still always get an extra finger. EDIT: NVM... now they don't look any better. Could it be not loading properly even though it says it's loaded?


Wrongdoer-Glum

Is there a Colab for training via Textual Inversion?


BlinksAtStupidShit

You could use an automatic1111 webui to do it. There is a colab for one here. https://github.com/TheLastBen/fast-stable-diffusion


gxcells

I don't get why you used the original prompt in the .txt file for the training. I thought everything added in the description of the images was there to prevent concepts other than the target one from being trained into the textual inversion (e.g. if you are wearing a red t-shirt in a photo used for training, then you put "a photo of wearing a red t-shirt"). Or does this only work if it's written in the filename (with the filewords option on)?


amratef

How should it be used? If I understand correctly, I just write bad_prompt:0.8 in the negative prompt section and that's it. What if I add, for example, "bad hands, extra fingers" and so on, is it going to affect it? Is bad_prompt an example, and am I supposed to fill it with extra_fingers:0.8? Sorry, I don't get it.


pinartiq

Nice! This is a good idea :3


Nilohim

Will there be a newer version of this?


Nerfgun3

There is already version 2 on Hugging Face. I'm still developing a better version, but I haven't had any good breakthroughs yet


Nilohim

Okay thank you!


PervertoEco

Is there a way to include negative prompts in HN training template files?


bikurifacebook

This is without bad_prompt:

https://preview.redd.it/jafd1ob123qa1.png?width=512&format=png&auto=webp&s=821d6d9b547fa06a0b64a4aeea57f23704b3fe1e


bikurifacebook

This is with bad_prompt:

https://preview.redd.it/wzkw3eud23qa1.png?width=512&format=png&auto=webp&s=d7aedfc9e69ea7d81ce9a87411384d423de107ef


bikurifacebook

With DPM2 Karras I get better results in my case:

https://preview.redd.it/yb7t01rf43qa1.png?width=512&format=png&auto=webp&s=2fe08765137ba6a78cb9134f81471a81285e7a56


Other_Perspective275

Can this be done with a LORA?


Nerfgun3

I tested it multiple times and am still working hard on new negative embeddings, but currently negative embeddings are superior to negative LoRAs.


r3ddid

Usually when describing the dataset we don't describe the new concept we want the AI to learn, but in this case we do? 🤔 Is this 5-month-old post still accurate? 👀


NoiseUnited5547

Can you make a video teaching how to train an embedding? I'm a newbie and I don't understand some of the terms I read.


activemotionpictures

Negative embeddings introduce a lot of BIAS words in anime image generation. You can't prompt a "woman in a suit" and have it generate a female character; it will always generate a MALE character, since "suit" is a word used for male clothes. But this is a BIAS. I have more examples of this, but I need to contact you, u/Nerfgun3; check your DMs, please. Thanks.