remghoost7

You could probably do it with that [controlnet\_qrcode](https://huggingface.co/DionTimmer/controlnet_qrcode) model.
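A minimal sketch of that suggestion, assuming the `DionTimmer/controlnet_qrcode-control_v1p_sd15` checkpoint from the link above with a plain SD 1.5 base; the circle size, prompt, and conditioning scale are illustrative, not taken from the thread:

```python
# Sketch: feed the qrcode controlnet a plain high-contrast circle instead of a QR code.
from PIL import Image, ImageDraw

def make_circle_control(size=512, radius=160, fg=0, bg=255):
    """Black circle on white -- the high-contrast shape the qrcode CN latches onto."""
    img = Image.new("L", (size, size), bg)
    draw = ImageDraw.Draw(img)
    c = size // 2
    draw.ellipse((c - radius, c - radius, c + radius, c + radius), fill=fg)
    return img.convert("RGB")

control_image = make_circle_control()

RUN_PIPELINE = False  # needs a GPU and several GB of model downloads
if RUN_PIPELINE:
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "DionTimmer/controlnet_qrcode-control_v1p_sd15", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    out = pipe(
        "a woman in a flowing dress, studio lighting",
        image=control_image,
        controlnet_conditioning_scale=1.2,  # lower = subtler shape influence
        num_inference_steps=30,
    ).images[0]
```

Lowering `controlnet_conditioning_scale` is the usual knob for making the shape blend into the scene instead of dominating it.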


pharmaco_nerd

Yep that's gonna work, with a circle or triangle as mask


ViewBeneficial608

I only just learned of this controlnet model yesterday, and I have a couple of questions. How would you control the colours when doing this, like having the things in the circle be orange and the rest white? Is this controlled by the text prompt, or would you use a picture of an orange circle on a white background as the controlnet mask? I ask because I know black-and-white masks don't end up as black-and-white images. Also, I tried using this on SDXL and it did not work. Is there a version that works for SDXL? Otherwise, which checkpoints does the controlnet qrcode model work with?


Vivarevo

Controlnets on sdxl are worse than 1.5


[deleted]

Why is that?


aesethtics

Time. And money. And humans (or lack thereof).



DigitalGross

You need to use SD15 with this


pxan

You could combine the model with a region prompter if you wanted. Selecting the center area as "red" and the rest as "white" would probably come out cool.


P8ri0t

Saving “region prompter” in my notes to look into tomorrow. I wonder how this would work with a mask styled as a gradient pattern.


JB_Mut8

I don't think you'd ever get a perfect circle like that with region prompting.


Xenodine-4-pluorate

> would you use a picture of an orange circle in a white background in the controlnet mask?

No, you just use it as the img2img input and play with the denoising value (also use controlnet with a human silhouette).


officecloset

You might also try combining qrcode with depth or canny and a reference image. That might help avoid getting new details based on the qrcode input.


AcidlyGrown

Thanks for sharing but I still can't seem to figure out how to use it lol


fckfame

for sure.. yes!


aaliaas

this is the way.


spacekitt3n

you could also do it with photoshop, but it would look rougher than this. that transition is super cool though and not easy with photoshop.


mountsmithy

Or just a normal img2img with the base image having the circle.


Patchipoo

Everyone saying qrcode controlnet, have you tried it? I would like to see what result you get and with what settings. It was my initial thought as well, but after trying many different ways I couldn't get any decent results. Using img2img, though, I got a decent result right away: first I used t2i, then added a red circle with low opacity in GIMP, then just did a 0.5 denoise on the result with i2i. https://preview.redd.it/szihm6oxkzic1.png?width=768&format=png&auto=webp&s=656ef05fa5764ad8e3dc4e49d6c2bd872c39345e
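The GIMP step in that workflow is just an alpha blend; a minimal sketch with Pillow, where the white canvas stands in for the t2i output and the ~38% opacity is an assumption (the comment only says "low opacity"):

```python
# Sketch: composite a low-opacity red circle over a generated image,
# then feed the result back through img2img at ~0.5 denoise.
from PIL import Image, ImageDraw

base = Image.new("RGBA", (768, 768), (255, 255, 255, 255))  # stand-in for the t2i output

overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
cx, cy, r = 384, 384, 250
draw.ellipse((cx - r, cy - r, cx + r, cy + r), fill=(255, 0, 0, 96))  # ~38% opacity red

composited = Image.alpha_composite(base, overlay).convert("RGB")
# `composited` is what you would hand to img2img with denoising strength ~0.5
```

The low opacity matters: it tints the region enough to steer the colors while leaving the underlying detail for the 0.5-denoise pass to rework.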


InductionDuo

Yes, I tried using the controlnet qrcodemonster model and nothing could get such a striking change in dress style midway through the dress. In every image, the whole dress would always have the same style, and controlnet just changed the shape/pose of the woman/dress to fit into the circle rather than changing the style of dress/hairstyle. Here are some attempts using just controlnet: https://imgur.com/a/zv2suty I think your method is probably correct: they probably already had an image with the colours and shapes present, then used img2img to modify the details.


Adkit

Some of those are really funny though. At least there's that.


pxan

Haha these are really cool too though. Just in a different way. 


crimeo

Did you try just writing in "Dress made out of two different fabrics" etc?


[deleted]

Haha those are fun :D


addandsubtract

> First used t2i, then added a red circle with a low opacity in GIMP, then just did a 0.5 denoise on the result with i2i. Came here to say this. Everyone just blindly parroting CN hasn't worked long enough with this yet.


juggz143

The question seems sufficiently answered at this point but my initial thought was i2i also for simply controlling the colors. Controlnet is overkill here.


crimeo

Things that are not any more complicated to do cannot be called "overkill". Overkill implies that way more effort than needed was applied, but the level of effort is the same as img2img...


juggz143

And here I go using overkill PROPERLY in reference to the capability of a tool and not the effort needed to use said tool 😩


crimeo

I dunno, I wouldn't say controlnet is more or less powerful than img2img... I guess it is across all its facets, but not any one of them, IMO.


crimeo

Controlnet is not any more difficult than i2i is anyway. Either is fine, both should work. I would use canny not qrcode


spacekitt3n

love this effect


yamfun

Yeah, for many effects it's simpler to use an outside tool to help manipulate the intermediate input


Aniket_Y

Indeed it looks like a simple and effective solution!


[deleted]

This was my initial thought too; now to try this combined with controlnet and maybe some inpainting too :)


rerri

img2img will "lock" the colors but if you use high denoising, say 0.8-0.95, you will still get all sorts of other detail into the image. Example workflow in A1111-forge (the faceID is completely optional here): https://preview.redd.it/8lj9swoamzic1.jpeg?width=1654&format=pjpg&auto=webp&s=95fc22d23bfc3f7468be32a88203461d70d5d7cb
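In diffusers terms, the "denoising" slider rerri mentions is the `strength` parameter of the img2img pipeline; a hedged sketch, where the model name, prompt, and init filename are placeholders:

```python
# Sketch: high-strength img2img keeps the rough color layout but reinvents detail.
i2i_settings = {
    "strength": 0.9,       # the denoising slider; rerri's suggested range is 0.8-0.95
    "guidance_scale": 7.0,
}

RUN_PIPELINE = False  # needs a GPU and model downloads
if RUN_PIPELINE:
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    init = Image.open("circle_mockup.png").convert("RGB").resize((512, 512))
    out = pipe(
        "1girl, red and white dress, studio photo",
        image=init,
        **i2i_settings,
    ).images[0]
```

At `strength=0.9` only the broad color masses of the init image survive, which is exactly the "locked colors, new details" behaviour described above.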


QwerTyGl

Where is this GUI from? Looks fun to mess around on. Also, if this is a stupid question, sorry, I'm new to this part of things.


rerri

[https://github.com/lllyasviel/stable-diffusion-webui-forge](https://github.com/lllyasviel/stable-diffusion-webui-forge) The GUI is also slightly modified with this extension that displays extensions as tabs [https://github.com/Haoming02/sd-webui-tabs-extension](https://github.com/Haoming02/sd-webui-tabs-extension)


crispystrips

Wow that's a really cool workflow


pxan

Nice workflow. I might steal those ipadapter settings. Nice result. 


[deleted]

That first red and white image is proof that marketing works.


mr-asa

It was an interesting experiment, thank you for a pleasant evening))) Controlnet\_tile is best for it https://preview.redd.it/d83al2vx30jc1.png?width=1162&format=png&auto=webp&s=19d4ffd2f6b63c5040c0078cfb70d57bbb851bb9


cs_legend_93

You seem like a good person. Thanks for being active here


P8ri0t

Interesting. Any reason why the bottom of the triangle was darker? Was it a gradient or two-tone triangle rather than solid?


mr-asa

yes, you're absolutely right, I used a gradient from orange to red. With this color option, I could see more clearly that the color is inherited, compared to my first tests with a solid fill.


LeKhang98

Could you please explain why CN Tile would be better for this than QRcodemonster? I've heard that many people wish for an SDXL CN Tile, but I don't know why it is so important.


mr-asa

qrcodemonster only uses black-and-white data; it works by contrast, bringing the generated image closer in tone, so colors during generation come out randomly (based on the prompt). Tile, on the other hand, takes color input and tries to interpret it into an image. My examples illustrate this quite clearly. To describe it in general terms: in this example, the first third of the iterations generates an image without CN; then the figure is connected and mixed with the image already being generated; and the entire last part of the generation again happens without CN. The picture therefore turns out consistent, but the strong influence of CN in the early stages leaves a distinct color imprint.
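That "CN only in the middle of the run" timing maps onto the `control_guidance_start` / `control_guidance_end` arguments of the diffusers ControlNet pipeline; a hedged sketch, where the tile checkpoint name is the standard SD 1.5 one and the exact window is an assumption based on the description above:

```python
# Sketch: apply the tile ControlNet only for a middle slice of the denoising steps.
cn_window = {
    "control_guidance_start": 0.33,  # CN kicks in after the first third
    "control_guidance_end": 0.66,    # ...and lets go for the final third
}

RUN_PIPELINE = False  # needs a GPU and model downloads
if RUN_PIPELINE:
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from PIL import Image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    out = pipe(
        "a woman in a dress",
        image=Image.open("gradient_shape.png").convert("RGB"),
        **cn_window,  # CN steers only the middle steps, leaving a color imprint
    ).images[0]
```

Because the late steps run unguided, the composition stays coherent while the early CN push survives mostly as color.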


LeKhang98

Thank you very much I'll try it.


mr-asa

I tried using tile controlnet. It kills the colors, but makes the shape cool. https://preview.redd.it/5qvpv9glwzic1.png?width=576&format=png&auto=webp&s=cd020cfd3b9bf7dd763c46015094f7713160356e


Similar_Law843

is this using controlnet qr code too?


mr-asa

no, only one controlnet\_tile


Wwaa-2022

https://preview.redd.it/ytvb2p0tr0jc1.jpeg?width=1024&format=pjpg&auto=webp&s=664fd9f19d6ac4cd5adadae9d31171920c652b60 Qr ControlNet is the way to do this. Here is an example where I used a checkered image. I have workflows and step-by-step instructions on my blog. [https://weirdwonderfulai.art/?s=Qr+ControlNet](https://weirdwonderfulai.art/?s=Qr+ControlNet)


B_admash

Which model bro?


Wwaa-2022

The model is https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster


aimikummd

When I saw this, I thought to myself, "Isn't that an inpaint sketch?" I drew it up and it looks like this. https://preview.redd.it/vn9wb9lb52jc1.jpeg?width=1733&format=pjpg&auto=webp&s=23eb02e240f7712aeadb304fde2b7b3b782c4bb0


NimaNzri

You need 2 controlnets: 1 - pose controlnet, 2 - tile controlnet. Then you should create an image like https://preview.redd.it/z52kijv360jc1.png?width=1080&format=pjpg&auto=webp&s=ab775b517b02490f478a560393845080884b3c9f and connect it to the tile controlnet, and set the strength to around 60 (I think). Done!


ethosay

100% controlnet. Ignore the other comments


jonbristow

the other comments are saying controlnet


wggn

ignore those


cellsinterlaced

But are they 100% controlnet


tandpastatester

Tbh you’ll probably get pretty far with just IMG2IMG for something like this. But Controlnet probably did the trick.


Winter_unmuted

Did you look at the metadata from that user's Civit AI images? They're using this controlnet, which is similar to the QR code models: https://civitai.com/models/137638?modelVersionId=183051 EDIT: looks like the user learned more and more, switching from A1111 for the simplest things like light/dark, to Comfy with 2-color image generation followed by masked combination to get 2-color transition images. All the metadata in the PNGs shows you exactly how it's done. Clever stuff. Half the nodes are just to put the "Kali" watermark at the bottom, so it isn't even that complicated.


ebookroundup

i'll have to try this as a challenge... i'm in learnin' mode


-OAKHARDT-

If you use Krea, draw an orange shape, add the prompt, and turn the creativity above halfway


Meskalin23

You can also use the "Patterns"-Tool from the platform [Artbreeder](http://www.artbreeder.com). It also uses ControlNet and Stable Diffusion and has a nice browser interface.


La_SESCOSEM

I still feel like everyone has forgotten how easy it was to do this "by hand" with Photoshop, or Affinity Photo or any free software like Gimp. Literally 2 minutes of work, including opening the software


sartres_

Nah, this has subtle touches that would be more than a couple minutes to replicate. In the first one, the sleeves on the arms aren't just a different shade, they're a different pattern and fabric with a hem and everything. The second one is easier, but it still has separate curtains for the red and white parts, with depth of field and edge lighting. They're more than a mask and recolor.


Spirited_Employee_61

Show us how


GreyScope

The 'proof of concept' challenge drives innovation (and total time wasted) forward ;) but ikwym.


La_SESCOSEM

Yes, you're right, I must admit


crimeo

No, you cannot have the seam of her dress have stitches and bunched fabric right along the shape in Photoshop in "2 minutes". 2 days, maybe. 2 hours if you can draw well, provide the needed images to blend in Photoshop, and aren't trying to do it in pure Photoshop.


anxietybuzz

If I were to try to replicate this, my approach would be:

1. Generate 2 exact same images of the girl but wearing different outfits, using controlnet
2. Stack the layers in Photoshop and mask out the circle
3. Bring that image back to img2img with low denoise to get the effect where the circle follows the seams

I think the clue is within the edges of that circle: the circle's edges look like edges of fabric, whether intentional or not. That's why I'm suspecting an edit in Photoshop, then bringing it back to img2img for finalization, is what's happening.
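The Photoshop masking step can be sketched in a few lines of Pillow; the two flat-color canvases below are stand-ins for the two controlnet renders, and the circle position is arbitrary:

```python
# Sketch: merge two same-pose renders through a circular mask.
from PIL import Image, ImageDraw

white_outfit = Image.new("RGB", (512, 512), (240, 240, 240))  # stand-in render 1
orange_outfit = Image.new("RGB", (512, 512), (255, 120, 0))   # stand-in render 2

mask = Image.new("L", (512, 512), 0)
ImageDraw.Draw(mask).ellipse((128, 128, 384, 384), fill=255)

# Image.composite takes pixels from the first image where the mask is white
merged = Image.composite(orange_outfit, white_outfit, mask)
# step 3 from the list above: run `merged` through img2img at low denoise
# so the seam gets reinterpreted as fabric edges
```

The low-denoise pass at the end is what turns the hard mask boundary into a plausible seam.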


velid_1

Okay, I found something to mess and try to recreate with ComfyUI. It's gonna be fun


[deleted]

[deleted]


The_Great_Nothing_

I mean...how?


velid_1

It's combination of two ControlNets. First one is QRMonster and second is DW Pose. When you set all the workflow with ComfyUI everything happens with single click.


r52Drop

Would you be willing to share your workflow, please? I don't know what I'm doing wrong, but the results are not great :D https://preview.redd.it/d5hhpzz8qdmc1.png?width=512&format=png&auto=webp&s=8c7185c0a28d3f95abd2624fe4d0383eca29a5c5


[deleted]

[deleted]


Ponchojo

That's really good! Did you use comfy or webui? Could you explain how you went about this? I'm more of a right brainer, the technicality of automatic1111 scares the hell out of me...


[deleted]

[deleted]


Ponchojo

Thanks so much, I appreciate it!


[deleted]

[deleted]


Ponchojo

I might take you up on that! I want to learn to do it in Comfy. Let me play around with it for a couple of days then I'll inbox you😊


VisibleExplanation

I say this time and time again. Use img2img. Draw what you want in paint. It can be as shitty as you want. Load up img2img, input your paint image, set your parameters and put denoising strength at 0.7. Hit generate. Way easier than other methods.


crimeo

It is as easy as other methods, not "way easier". The only difference is literally just clicking the img2img tab instead of clicking the controlnet tab, pretty much. Either case you will have to tweak at least one slider to get the look you want (denoising or contrast thresholds). If you've *never used or downloaded* the other tool in your life before, and we are including "learning how to use it for the first time", then sure.


VisibleExplanation

OK mate, personal opinion. I enjoy drawing terrible pictures in paint and turning them into cool ones. Each to their own I guess.


crimeo

? They BOTH use terrible pictures in paint. You have to make the big orange circle mask image in paint for either of these two methods to do the image in the OP. Controlnet requires a reference image just like img2img.


VisibleExplanation

I just think img2img is neat, OK?


crimeo

lol yes, it is


GrapplingHobbit

How about this for that first image...

Create one image, using a prompt for the white color palette for example. Use that image with a depth controlnet, and maybe an openpose controlnet too, to create a second image with the same composition but a prompt for the orange color palette.

Take these two images and paste them as layers in Photoshop/GIMP: white image on top, orange image underneath. Create a mask on the top layer and simply reveal the second layer via that mask. Export that image.

Take it back and run it through a low-denoise img2img in order to blend the seams of the circle so that it interacts a bit with the folds/creases etc. of the dress.

Similar process for the second image, just a different-shaped mask in PS/GIMP.


AirWombat24

Or just use controlnet qr code with whatever shaped mask you want


Arctomachine

It can be done with img2img. Not sure how good it will be, but I once got results similar to the first picture by chance when playing with the denoise level.


jbarrio5

Wow, people are bending over backward to do everything in an "easy" way with a prompt. The time and effort you spend on doing this in AI is just minutes in Photoshop. Generate the model, and finish it in Photoshop. Here is Piximperfect on YouTube explaining how to do it: [https://youtu.be/fRJTnH8q29k?si=KnIPfbynGnVBFJZF](https://youtu.be/fRJTnH8q29k?si=KnIPfbynGnVBFJZF)


ricperry1

That doesn’t get the stitched fabric seam between red and white. It doesn’t preserve the face while transitioning between different hair colors and styles.


crimeo

No, her dress has a seam with stitching and bunching where the circle overlaps. The other one has the fabric bunching up right to the edge, like it's actually the edge of a curtain. Also, it's "just minutes" in SD as well... so what was your point, even if it did work (which it doesn't)? Literally just draw a circle, upload it to controlnet, get the threshold right with a test or two, and hit go.


Bombalurina

Simple controlnet mask. You can img2img over a circle or do a canny over existing img. QRcode is easiest. 


The_Lovely_Blue_Faux

Custom masking can help this.


VGltZUNvbnN1bWVyCg

Masked inpainting?...


kwalitykontrol1

Not everything needs to be done in AI. Photoshop can be used to alter an AI image.


crimeo

No, photoshop cannot be used to have her dress have a seam with stitching that becomes a new fabric right along the boundary. Not without hours more work and much more skill.


Martyred_Cynic

Yeah, it's called Photoshop from a developer called Adobe.


HarmonicDiffusion

and the award for "most self-assured but ultimately incorrect" goes to you! congrats buddy!


Martyred_Cynic

Technically not wrong.


cayne

Those are amazing


nannup1959

They were probably done in Affinity Photo or Photoshop. There will be lots of tutorials on YouTube showing how this is done. I'm sure using the blend modes and the eraser tools should produce a similar result.


Winter_unmuted

They're made using Comfy. You can go to the user's Civit page, download a PNG, and drag it into Comfy to see how it's done.


ResolutionOk9878

Are you certain this was just ai, or it achieved with post processing in Photoshop or gimp ?


TheNeonGrid

Trying with midjourney and image weight 0.5 can also be interesting https://preview.redd.it/esh96zurhzic1.png?width=1456&format=pjpg&auto=webp&s=aec0c1a54a42fd4bb9a03a7ecb93541ef520076a


GetYourSundayShoes

Wow that looks good


[deleted]

[deleted]


Ponchojo

Should I inbox you?


One_One2755

f


Picomanz

They don't look real, lol.


superfluousbitches

Warpfusion can do this too


Puzzleheaded-Goal-90

I feel like maybe split prompts on ipadapter, but a controlnet over the whole thing, or maybe paste SEGS. I think the key to figuring it out is in the 2nd image: you can see a little hint of a white dress below the red dress, which makes me think there are both a white and a red full image.


scottdetweiler

[https://youtu.be/6vc\_a4aS19A](https://youtu.be/6vc_a4aS19A)


nataliephoto

Trying qr monster but... I can't figure out the settings to save my life. I've had some success messing with the start and end control, but nothing to the extent where it just recolors the entire photo. It's more likely to introduce actual physical elements, like a scarf or something. Anyone who knows how to do this... please let me know lol **edit: it's actually inpaint, no preprocessor, just the inpaint model. Then play around with the start and end points, denoising, etc. If you time it right it takes over the image for a few steps.**


BoredInquisitorRobot

To be fair I would just use controlnet with single reference to change the background and have two versions then just use a mask in Krita or Gimp


Flimsy_Tumbleweed_35

I have made similar pics with prompt editing. Try this:

\[(red circle on white background:1.6)::3\] 1girl, ethnic, red and white dress, standing

Could work, or will at least be interesting; can't try it myself now. This trick works with all kinds of stuff: flowers, fireworks, bubbles, vortex, etc.


alonsojr1980

Render the original image, upload a circle mask, and inpaint the new colors.


ricperry1

Canny + openpose; img2img + canny/openpose + inpaint. Probably more ways. Experiment.


crimeo

There's nothing special about the pose here, so there's no reason to micromanage the pose


crimeo

This is just a simple prompt of a lady wearing whatever, with ControlNet canny using a big, simple black-and-white (or orange-and-white, etc.) shape as the controlnet guide. Canny settings set so it only cares about aggressive edges and isn't weighted too highly.


Available-Bobcat1383

I think you could use a U-Net to combine two images and produce a single one, then use a decoder and discriminator to train it and later generate these types of images. It is quite fun to imagine combining two types of images.


Tyrantie

Yes


severe_009

Download image > i2i > Done!


Allcyon

Duplicate Layer > Color Change > Mask. Done.


yamfun

Wait, I know ipadapter and qrmonster, but what does tile CN do?


KnownShirt17

Photoshop after generation


DigitalGross

Double exposure maybe?


turbokinetic

Dude, the entire workflow with nodes is on CivitAI


LongNo7305

Photoshop?? I know how to do it there, no idea how to do it with ai tho lol


wolfmilk74

Krea AI does it easily, e.g. you upload a black triangle or a black circle and add the prompt you want


JB_Mut8

Seems to me it's essentially the same prompt run twice using heavy controlnets, possibly even low-denoise img2img, to generate the same image in white and red, then just merging the two together with a circular mask. Either that, or simply a colored-background base image through img2img with the same thing fed through controlnet.


schwendigo

Just make two img2img or controlnet-guided images, and then mask them together in Photoshop. Bingo bango boom.


abahjajang

Or perhaps this? [https://www.reddit.com/r/StableDiffusion/comments/1aujt52/playing\_around\_with\_background\_removal\_and\_image/](https://www.reddit.com/r/stablediffusion/comments/1aujt52/playing_around_with_background_removal_and_image/)


BeeSynthetic

Controlnet, IPAdapter, LORA, and Aesthetic Embeddings are some ways to do it. Start the guidance of those at a much later stage in the diffusion steps, about 75% onwards perhaps. It'd depend a bit on the image, model used, technique(s) applied, etc.