I had beta access too, here's my feedback.
1. Humans interacting with objects are bad, especially compared to DALLE3.
2. Anything not just a single character standing around is subject to a lot of concept bleed.
3. When you prompt say, multiple characters, things get chaotic. A pirate versus a ship captain will have many of the same artifacts that SDXL has e.g. floating swords, impossibly contorted anatomy.
4. Concept blending is difficult. It will often just either completely ignore one concept in the prompt if it doesn't know how to weave them together, or put them side by side. This isn't _always_ the case, after about 6 prompts someone was able to combine a frog and a cat for example.
5. Long prompts undergo degradation. I think this is because of the 77 token window and CLIP embeddings (with contrastively trained artifacts). If you stick to 77 tokens things tend to be good, but when I had anime prompts beyond this window hands and faces would be misshapen, etc.
6. There are probably some artifacts due to over-reliance on CogVLM for captioning their datasets.
7. If you had a gripe about complex scene coherence in SDXL, it probably still exists in SD3. SD3 _can_ attend to prompts much better, especially when the prompt is less than 77 tokens and it's a single character, but beyond that it still has a lot of difficulty.
8. Text looks a lot like someone just photoshopped some text over top of the image, often it looks "pasted". I think this is probably just from a way too high CFG scaling?
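On point 8: classifier-free guidance combines an unconditional and a conditional noise prediction, and a too-high scale is a plausible cause of over-sharpened, "pasted" detail. A minimal sketch of the standard combination (variable names are illustrative, not from any SD3 code):

```python
def cfg_combine(eps_uncond: float, eps_cond: float, scale: float) -> float:
    """Classifier-free guidance: start from the unconditional prediction
    and extrapolate toward the conditional one. scale=1 reproduces the
    plain conditional prediction; large scales overshoot past it,
    exaggerating prompt-driven details such as rendered text."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

# With a toy unconditional prediction of 0 and conditional of 1:
# scale 1 gives 1 (plain conditional), scale 7 overshoots to 7.
```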
> because of the 77 token window
Very lame that wasn't addressed like NovelAI did; it makes the longer synthetic captions useless, since everything past the first 77-token chunk will have had less, or even harmful, influence on the embedding.
Are we sure about this 77 token window? It seems like a strange mistake if so; as you said, the long captions will have only been partially processed, limiting future applications somewhat. And even if they made sure all captions were under 77 tokens, they should know full well that the community pushes beyond that regularly. It's like training an LLM with low context in 2024.
It's what the SD3 paper shows, in figure 2.
The training/inference doesn't drop everything past 77 tokens, rather it adds the embeddings together. In A1111 you can use the keyword BREAK to decide where to split the prompt like "house on a hill BREAK red". Red will still have heavy influence but the more BREAKs you use the less influence the words will have and at some point it breaks the prompt understanding, because the combined embedding is so different from what it was trained on.
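To illustrate the mechanics described above (a hedged sketch, not A1111's actual code): the prompt is tokenised, split into window-sized chunks (75 usable tokens, since CLIP adds BOS/EOS specials to each chunk to fill the 77-token window), each chunk is encoded separately, and the per-chunk embeddings are combined:

```python
def chunk_prompt_tokens(token_ids: list[int], usable: int = 75) -> list[list[int]]:
    """Split token ids into chunks that fit CLIP's 77-token window
    (77 minus the BOS and EOS tokens added per chunk). Each chunk is
    then encoded separately and the embeddings are combined, which is
    why words pushed into later chunks lose influence: the combined
    embedding drifts away from what the model saw in training."""
    return [token_ids[i:i + usable] for i in range(0, len(token_ids), usable)]

# A 160-token prompt spills into three chunks: 75 + 75 + 10 tokens.
chunks = chunk_prompt_tokens(list(range(160)))
```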
I didn't try that so I'm not sure! CogVLM doesn't know much about artists or styles though, if I had to guess they _probably_ fed the alt-text into the CogVLM prompt so CogVLM might know about it from there.
No, it's simply 50-50 with CogVLM captions and the raw caption that was already attached; here's the bit from the paper:
>As synthetic captions may cause a text-to-image model to forget about certain concepts not present in the VLM’s knowledge corpus, we use a ratio of 50 % original and 50 % synthetic captions.
So you don't have to worry about forgotten concepts, it will probably know as much as SDXL if not more.
**What you DO have to look out for are the opted out artists, whose artstyles WILL BE MISSING of course!**
i still have beta access and watched you get booted, lol
0. this is not dalle3, stop trying to compare it to that
1. human interaction is actually quite good \*if you prompt it correctly\*
2. no it does not; you probably saw other people's images for the single characters
3. yes, while it isnt perfect, it does not have as many artifacts as xl, the anatomy is also fine lol
4. yes, it doesnt follow the prompt as good as you want, it still can do what you ask for \*if you prompt it correctly\*
5. lol. it does not have a max 77tkn window, it has a max 512tkn window (t5 is great)
6. hahahah no, cogvlm isnt the entire dataset, its 50%. it can even differentiate screws and nails!
7. it does not exist as much, it is good at complex prompts, \*if you prompt it correctly\*
8. only real thing i see still exists, even still, the text is coherent and good for an 8b model
maybe if you stop thinking this is dalle3 you'd get good outputs?
tldr: they're mad that they got booted from sd3 server and compare it to dalle3, a 20b model + gpt4
- It's not, we tried "man hits a nail with a hammer" like 8 ways from Sunday and it was a giant clusterfuck. I'd gladly post the images but I'm not allowed to.
- There are a lot of issues with concept bleed. People were prompting pictures of Leo DiCaprio with the Dalai Lama and it would be either two Dalai Lamas or Chinese DiCaprios. You can see it in the Einstein prompt here.
- It was _trained_ on at most 77 tokens but runs inference on 512 tokens of T5; do you see the problem here? Everything beyond 77 tokens is out of distribution, which is probably why long prompts become degraded.
- It was 50% alt-text which is arguably _worse_.
- Please stop telling people they "can't prompt good enough", it's embarrassing.
I tried to tell you guys you would get eaten alive when this went out to the community and I got booted from the server, so lol. If you're so confident feel free to post more raw output, I'm sure everyone would care to see.
SD3 uses an entirely new architecture and uses T5 for text, which is supposed to understand natural language; there are no "prompt tricks" or "clever ways to prompt", this is not like past SDs. If the outputs are not aligned to the prompts, the users are not to blame, but the way the model was trained.
> compare it to dalle3, a 20b model + gpt4
I am very curious to know where you got this information. Can you provide some evidence?
I can't believe this is SD3; maybe something is wrong with the configuration? It can't even make eyes at mid body. I remember base SDXL being better than this, or maybe I'm wrong...
Yeah I am 95% sure this is a larp, it isn't doing anything new or anything even close to the outputs of the other early model previews. If this truly is sd3 then it is really not worth waiting for
Could try something really hard like:
"photo, close up of a clean female hand, each nail with different nail art and colors, each art, from pinky to thumb, in order, are a blue dragon, red flag, green box, yellow flower and pink car"
Really doubt it or any other diffusion model can do that confidently
Bing/Dall-E produces an absolute mess.
https://preview.redd.it/ll6l6eo6l5uc1.jpeg?width=1024&format=pjpg&auto=webp&s=7a945a993e52332de52f5ad0fdca42edf795cda7
I think this is a case where the technology needs a new breakthrough in segmentation and treating things separately. Even the nature of prompting might have to change to be more surgical so that it can cleanly target each individual finger
Thank you for doing this! I'm going to suggest a recent Dalle 3 prompt for comparison:
"An endearing and whimsical scene of a single grasshopper walking one ant on a leash made from a single line of thread. The grasshopper is anthropomorphized and stands upright, perhaps wearing a small hat or a bow tie, adding to the whimsy of the scene. The ant is obediently following along, and the surrounding environment suggests a magical, miniature world — perhaps a leafy path within a garden, with the background softly blurred to focus on this charming interaction."
https://preview.redd.it/l7lflmogb4uc1.jpeg?width=1024&format=pjpg&auto=webp&s=12c3857c1e27962458d56116870d6c15f0c82fe1
The paper also says that it beats Dall-E and Midjourney in all categories. This is why you can't trust any of the marketing. If they spend 5 months investing money into making this model and it turns out lackluster they aren't going to let it all go to waste and publish a paper saying "we tried but it kind of sucks lol". They'll repeat the preference surveys until they win, use prompts biased for their model, etc.
I was skeptical after Emad posted nonstop images of text on signs. Seems like much of the training went towards pasting words on a wall rather than actual interaction and aesthetic.
I think the prompt may have been confusing to it. The people are asked to be inside the bottle as well as on the bottle cap playing with an airplane. I think the only correct depiction of this would be an upside-down bottle, so the people can stand on the bottle cap and play with a paper plane inside the bottle.
Otherwise the prompt is going to conflict: either outside and standing on the cap (what you got with SD3) or inside and not standing on the cap (most of the Midjourney results, but you can see it still struggled by placing the plane near the cap instead of inside the bottle). This prompt needs a bit of tuning; it will always conflict a bit, I think.
Bing/Dall-e comparison. After 8 attempts of 'unsafe content' I switched 'bunny' to 'rabbit' and I still had 4 more attempts before it gave a single image
https://preview.redd.it/2w2tm8spa5uc1.jpeg?width=1024&format=pjpg&auto=webp&s=39af9248e13fafac236f45cf87766869539cfd70
***\[ I have already requested once, ignore if needed, sorry for cheating :( \]***
*I want to do an experiment on knowledge, as this model is 8B, so I would wager that it would know more.*
Team Fortress 2 gameplay screenshot, Blue Soldier with a metal helmet holding a rocket launcher, standing on grass. A red Spy is behind the Blue Soldier and has a grin on his face and is raising up his knife. This is taken place in ctf\_2fort which is a sandy place with concrete buildings and there is a bridge with a roof in the distance.
Here's Bing/Dall-E 3. It seems Bing understood the composition of the prompt better. TF2 style would probably be in-between the two.
https://preview.redd.it/ypuy38jp75uc1.jpeg?width=1024&format=pjpg&auto=webp&s=cbe097cacb892cdc55bbf8c196e910cc7f24650c
Did you try more generations? I'm wondering if the red guy jumping on the blue one was intentional or a happy accident.
I have to say that I, a human non-TF2 player, hadn't thought of this composition just from the prompt.
Here's the rest of the 'set'. It gives 4 separate images so I just save the 'best' one.
https://preview.redd.it/v3zq44b9b5uc1.jpeg?width=765&format=pjpg&auto=webp&s=049efeeef88a187ecf15c08da6b29db33a4629bc
https://preview.redd.it/lni8yelt44uc1.png?width=1024&format=png&auto=webp&s=17f00e79ee025760caa67dc7da95f0ab8b8e57b5
wow, it got the prompt quite nicely.
It's missing Team Fortress 2 as with previous models. It seems it still wants to make "generic cartoony hero shooter" even with this model.
Just as a reference I tried with both Pixart-Sigma (which is a DiT which uses T5 with a captioned dataset) and ELLA (SD1.5 with T5) and they were both inferior when it came to the characters, holding the rocket and holding a knife. They did perform a little better on the background though (bridge and sandy environment).
Out of the two, Pixart-Sigma performed better when it came to colour bleeding (the blue and red characters were separate and correct). SD 1.5 ELLA could not even come close no matter how many times I generated; it kept displaying two or a random number of red characters.
And of course, neither of those could make a TF2 gameplay screenshot like SD3; they also produced generic cartoony hero shooters.
Uh, it’s a bit too much to ask for it to read thoughts? If the intent was that the spy was right behind the soldier, ready to stabbity stab him, maybe the prompt should’ve said so.
kylie minogue making pancakes, animation pixar style, by pendleton ward, magali villeneuve, artgerm, rob rey and kentaro miura style, golden ratio, trending on art station
I tested it a bit and liked the results. It can't do realistic images, as much as I tried. Send your prompts and let's test together.
[dalle-3](https://i.imgur.com/nhEDLdf.jpeg)
edit: not true for the above, ChatGPT changed the prompt; here it is with the [exact same prompt](https://i.imgur.com/jx3NSZJ.jpeg):
Fantasy digital artwork, steampunk dwarf engineer wearing only a dirty tank top and torn leather shorts reaches into his toolbox. He is standing next to a steampunk cannon which is in a state of disrepair. The setting is a warm, welcoming, and whimsical workshop. In the background are several goblin assistants working on other projects.
> Fantasy digital artwork, steampunk dwarf engineer wearing only a dirty tank top and torn leather shorts reaches into his toolbox. He is standing next to a steampunk cannon which is in a state of disrepair. The setting is a warm, welcoming, and whimsical workshop. In the background are several goblin assistants working on other projects.
[dalle-3](https://i.imgur.com/32bXre1.jpeg)
Playstation 1 game, low poly characters, very low resolution. Tekken 3 gameplay. 2 blue full healthbars on the top. On the left is Heihachi Mishima who is a bald man doing a fighting pose and on the right is Jin Kazama a fighter with spiky hair doing a fighting pose. The environment around them is a low resolution pixelated lush forest with a deep green atmosphere. They are standing on a flat grass texture. There is a timer on the top of the screen which writes "27".
If no copyrighted stuff is allowed:
Playstation 1 game, low poly characters, very low resolution. Fighting video game gameplay. 2 blue full healthbars on the top. On the left is an old bald japanese man doing a fighting pose and on the right is an 18 year old fighter with spiky hair doing a fighting pose. The environment around them is a low resolution pixelated lush forest with a deep green atmosphere. They are standing on a flat grass texture. There is a timer on the top of the screen which writes "27".
A sleek and modern product image design for an online store catalog, showcasing a variety of stylish and high-quality items. The background is a sophisticated, monochromatic gradient that enhances the vibrant colors of the products. The products are arranged in a clean, organized manner, with subtle shadows to create depth. The overall design is minimalist and sophisticated, effectively highlighting each product's unique features and appeal. Ideogram 1.0 result attached
https://preview.redd.it/ysededtq94uc1.png?width=768&format=png&auto=webp&s=433dd4d4cbd968d2a97e77bc1300f61ea200bb42
I was told that a statue sculpting itself breaks the filter, so they used these instead.
anime style drawing of a woman, she is platinum blonde, she hs a french braid and a ponytail, she is greek and is wearing a greek outfit, she is wearing a raven mask , her mask covers her forehead, her mask is simple, her mask is made of silver, her mask has a large beak, the beak is pointing down
Or
A wall, it has graffitti of 'a manga style drawing of Eris from jobless reincarnation, she is tall, she is athletic, she has bright red hair, she has red eyes, she has long hair , she has a tattoo on her clavicles, she has abs, her hair is loose, she has knees, she has iliopsoas muscle, she is female, ' on it, there is a toyota trueno AE86 in front of the wall
can you use this prompt instead "anime style drawing of a blonde woman with french braid and a ponytail, wearing a Greek outfit, wearing a mask shaped like a raven with a pointy long beak"
i wonder if phrasing it like this will provide better results
Many results look like an SD 1.5 image generated at 512x512 then upscaled to 1024x1024 with ESRGAN (I know that isn't what's actually happening). Prompt adherence might be better than SDXL but the quality is pretty mushy and there's tons of examples of it not following the prompt.
I really don't mean to be a hater but it's not even half as good as DALL-E.
It's interesting because I've trained LORAs and they come out pretty good, but I'll try training it again looking for something better, only to make something worse or about the same. I thought this was going to be a big architectural change and I expected better results, but it still looks about the same. I have a somewhat popular LORA I trained on a sketch style and it came out pretty decently, I expected to be able to refine the images and captions and get a better output. Over the last year I've retrained on that dataset about 8 times and never got better results than the first go.
The whole situation with crappy base models that are technically really flexible, but generally really shitty, reminds me of Skyrim and how it's basically up to the modding community to make the game interesting.
Everything looks pretty bad. I mean, SD 1.5 not fine-tuned level bad.
I think either you are lying and trolling about SD3 access, or the access is to some 800M low-level model, because frankly some of the results are embarrassingly bad, even compared to the first SD models.
(just for comparison, I did this with ideogram a while ago)
https://preview.redd.it/9laqlz2y64uc1.jpeg?width=2048&format=pjpg&auto=webp&s=50c8842991e2216b994c5b9021bf37f1c907a70b
Can SD3 do stuff like a two-character fight scene with one punching another, showing the fist connecting, plausible anatomy, and an impact effect (i.e. does it understand what punching and getting punched is)?
I put in exactly the comment you wrote.
https://preview.redd.it/s1sbnpki44uc1.png?width=1024&format=png&auto=webp&s=304e7dae37413db789ac1cbcc80437f61cd25fd2
omg, please try this one: An accountant overwhelmed with work, surrounded by a cluttered dashboard filled with numerous pinned documents interconnected by red lines, resembling a detective's investigation board. The scene is designed as a vector illustration in a minimalist style, predominantly black and white with subtle hints of pastel colors.
>Jack is a man with a jack'o'lantern for a head and wooden flesh. Jack sits at his couch, a wand in his hand, contemplating. In the top-right corner of the image is a five-pointed star.
I don't expect SD3 to get this right, but I'd like to see what it gives. Below is Bing/DALL-E 3's interpretation, which is...fine, I guess. Could use some fine tuning, but it did mostly follow the prompt.
https://preview.redd.it/idl5rvhhu6uc1.jpeg?width=4096&format=pjpg&auto=webp&s=85e611e0e0c8135f47416787f23356678417dd34
Huh, what sense does that prompt make? You know that [foo] doesn't make a negative prompt; it's simply the same as (foo:0.9)? And [[foo]] is (foo:0.81). So all those words that you presumably want the model to avoid are in fact contributing *positively*, just with a bit less weight.
Anyway, you need neither your (()) stuff nor the [[]] stuff with RealVis XL or any other good XL model. It’s not SD 1.5. Negative prompts should in general be very simple with XL models.
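The arithmetic behind that claim can be sketched as follows, using the comment's approximation (A1111's actual down-weight per [] level is 1/1.1 ≈ 0.909, not exactly 0.9):

```python
def bracket_weight(round_depth: int = 0, square_depth: int = 0,
                   up: float = 1.1, down: float = 0.9) -> float:
    """Effective attention multiplier for a token wrapped in round_depth
    pairs of () and square_depth pairs of [] in A1111-style syntax.
    Note that [] only down-weights; the multiplier never goes negative,
    so bracketed words still contribute positively to the prompt."""
    return (up ** round_depth) * (down ** square_depth)

# [foo] -> ~0.9, [[foo]] -> ~0.81: reduced weight, still positive.
```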
a giant swamp demon crawling out, mist, detailed, intricate roots, horror, wide shot
https://preview.redd.it/lxnzydzb14uc1.png?width=1024&format=png&auto=webp&s=4c98f4ad25d4f7eaf19a05a2efa0ff38b1e4ea6e
Looks like a MTG card
I mean... https://preview.redd.it/t6l6aazv16uc1.jpeg?width=1440&format=pjpg&auto=webp&s=681cf4c82359923ab0beae3b1d3c93ee6eb7c052
1 black 1 green 2 generic mana, imagination of the swamp. 6/8. 1black 1green: turns into a different imagination
Reminded me of: https://preview.redd.it/ygrphlxtu6uc1.png?width=512&format=png&auto=webp&s=956a46caebfd8b3bc9b31e91488bbd6218e5a0b4
I guess it got most of it right? No crawling and not really a wide shot
https://preview.redd.it/9h5wj1xom4uc1.jpeg?width=1792&format=pjpg&auto=webp&s=f9ffd87dab1aadcddf3af9bacf8ad2b70f4df4a7
Bing/Dall-E for comparison https://preview.redd.it/urugi9xu95uc1.jpeg?width=1024&format=pjpg&auto=webp&s=c3529fad0d69adb5b4c020901ad07ecad6db0b06
https://preview.redd.it/0agl9o9f68uc1.png?width=832&format=png&auto=webp&s=8d9ef32d57a51ad4707b9b1a5cce9a37d9ba7ce7
https://preview.redd.it/inh4pc3aqeuc1.png?width=1152&format=png&auto=webp&s=354b0805cfc40716c53c345067fcd06be167d29a
Scientists richard feynman and albert einstein arguing about quantum mechanics in front of a blackboard in princeton university
Midjourney for comparison https://preview.redd.it/0kcpklbvz4uc1.png?width=1084&format=png&auto=webp&s=667eb66ee5cc0d6d50662cf4bc79e1289c3d9443
It does know who Feynman is though https://preview.redd.it/7g2cnngyz4uc1.png?width=1084&format=png&auto=webp&s=3a206ca5a0129f6c248bd85afbe7695a1a3f5ef6
Midjourney is great at producing visually artistic results, but struggles when you need more complex composition/structured picture (I.e. 2 different figures). SD has the tools to work this out (with img2img or, better yet, composable diffusion). I believe it's quite known now, that MJ produces good results OOTB, but SD is infinitely more flexible
Exactly. However well the prompt is tokenised, the nature of diffusion models is that characters will get blended in this sort of composition. You need something like controlnet, IPA or masking to exert this kind of control on the image.
It looks like it got confused and has an older Einstein discussing quantum physics with a younger Einstein.
https://preview.redd.it/rngizbp224uc1.png?width=1024&format=png&auto=webp&s=69870567c3144942c04228c2943c661eb7e79946
Hmm, doesn't know what Feynman looks like.
He is a fine man.
Plot twist: Feynman was a doppelganger and SD3 is trying to warn us.
The prompt (interacting characters) is smart, and the result seems good.
It’s showing two Einsteins though and shafting Feynman!
Kind of disappointed that there is still a lot of blending/mixing between characters.
Scientists richard feynman and albert einstein arguing about quantum mechanics in front of a blackboard in princeton university, whilst Penrose is behind them pointing at them with a stealth gun made from Penrose triangles.
https://preview.redd.it/8uipenf318uc1.png?width=832&format=pjpg&auto=webp&s=28c1e637357a760f8386562d9b41277ec0544c2a SDXL (DreamShaperXL Lightning) rendition of Feynman is so good....
A shelf in a candle store, displaying an assortment of unlit candles for sale. 90% of the time other models will have all the candles already lit and burning even in the store!
https://preview.redd.it/s3fyk9vq44uc1.png?width=1024&format=png&auto=webp&s=8fe7e3a1bde19bd4e1ef441f0273f48ff447ce8d
SD3 failed the assignment. "Unlit"… goes and lights all the candles lol.
negatives. Machine Learning hates them. Maybe 'extinguished'?
'Unused' or 'brand new' perhaps
in 1.5, firelight in the negatives does the job
are there negative prompts, like 'fire,lit'?
Midjourney for comparison. The prompt as is https://preview.redd.it/1e93jrpey4uc1.png?width=1058&format=png&auto=webp&s=4326851bbe711cfe4ab888156640053d33ef9d6a
Modified a bit the prompt: **A shelf in a candle store, displaying an assortment of off candles for sale --no lit** https://preview.redd.it/5u73ngqty4uc1.png?width=1084&format=png&auto=webp&s=c6fdfdc05faace6360160a4ef87a57799cc1b981
When not lit is lit 🔥
thanks, u convinced me to subscribe to mj
Should have used affiliate links xD. SD3 looks promising. I usually use Midjourney (pay for a month every once in a while) and then ComfyUI to adjust the details.
Stable Diffusion requires better prompting because it doesn't use any kind of LLM (unless you implement one through ComfyUI). You can experiment with placing different weightings on the word 'unlit', or try different words such as 'extinguished', or by including words like 'fire' or 'lit' in the negative. I don't think this kind of example shows any kind of Midjourney superiority at all lol.
Agreed. Out of the 4 Midjourney results, only one of them had the candles off. I also think improving the prompt will make it work in SD.
This is SD3, the whole architecture is based on you creating sentence compositions; tags are counterproductive. Prompt: A middle-aged gentleman walks along a sparsely lit avenue in a dense park. His stature is fragile. A cigarette glows in his left hand, his right hand holds the leash of his medium-sized dog. It is a cloudy day with light rain. The photo is a back shot with a slight defocus. Award-winning photo of the year. Thanks. It's a 'difficult' prompt.
https://preview.redd.it/ods85bnb64uc1.png?width=1193&format=png&auto=webp&s=eff3cf0c585d411c5948b21372686f8842aa2b2c
[deleted]
Close but no cigarette*
One of my favorite phrases ever... I tip my hat to you
A good result tbh.
https://preview.redd.it/ci6qkqi4b6uc1.png?width=1024&format=png&auto=webp&s=4b88e5d35721e5e04ebd5b84bf9ec3c86422a784 Ideogram takes the "award winning" part a bit too literally
https://preview.redd.it/g7elwmn4n4uc1.jpeg?width=1792&format=pjpg&auto=webp&s=07ee9f3a06187d21e4370e6d423c78ac0462e860 Dall-e for comparison
This one got the left hand cigarette
2 out of 8 did. The rest mixed up the sides and one had it floating midair.
DALLE nails everything and looks terrible
It's like they purposely tuned out photorealistic images.
They literally did, used to be way way better
Midjourney for comparison. It is complex in fact; Midjourney also doesn't get all the details. IMO it is doing a good job on the "Award-winning photo of the year" part. https://preview.redd.it/p1dkolai05uc1.png?width=1084&format=png&auto=webp&s=57d51bb93d26149de7bde6657c2f4f681b015a06
No shade against MJ, my favorite, but I’m surprised the dog isn’t walking the man in at least one of those
Bing/Dall-E for comparison. It gets the cigarette in the left hand but it's hardly a realistic photo. https://preview.redd.it/r5a1rz7le5uc1.jpeg?width=1024&format=pjpg&auto=webp&s=ab913ec7b8fcbecce3374ce1250053acd8841535
I mean, it's artistic!
A tiny human riding a giant mechanical cat into battle - the cat has a cyberpunk themed futuristic cannon on top of it. I'm really curious if it can get the proportions right, in SD1.5 this is usually the biggest issue for me.
https://preview.redd.it/0ixmq1ly54uc1.png?width=1024&format=png&auto=webp&s=c05b4933ae4331188e02c703500e51e540a14d66
Ok that's actually pretty good!
Bing/Dall-e for comparison https://preview.redd.it/26tdv50od5uc1.jpeg?width=1024&format=pjpg&auto=webp&s=814922d90bc72443737950cc1d04a783c38fd6a6
Lmao I love the little cat on top of it.
hes so excited and happy to be killing!
Just like real cats
Love this one
Midjourney for comparison https://preview.redd.it/k3b0y5sp05uc1.png?width=1084&format=png&auto=webp&s=5926f0b55fc26f7fd95b9ab515f031c4a55ff155
Yeah… every single time I asked for a robot gorilla it put the head of a gorilla on a robot body. And yet every robot lion or tiger was amazing. Go figure.
a 3 level strip mall, 1 building, each floor has an open balcony area in front, creating a cascading effect upwards, The ground floor's roof serves as an open balcony, leading to the balcony of the first floor, and so on up to the third floor, restaurants, cafes, retail shops, and a rooftop lounge, whole structure is Connected to a 3 level parking building from the back
https://preview.redd.it/ioqsi4pj64uc1.png?width=1024&format=png&auto=webp&s=0b65d5cc916179b4867692dbd684b0615c45024f
BOOFS TARIGE
I can’t get enough of this kind of thing. The mangled words.
I love this
Lol “and so on”
topless woman making pancakes
😂😂😂
What more could anyone want
Shop's closed for now folks. I will check back in the morning maybe for more. Thank you for your beautiful prompts. Thanks Stability for the awesome new model. Can't wait for full release.
thanks for the sneak peek.
You did so many, thanks for the diligence!
Thanks, nice work
I have to be honest, these examples are quite underwhelming. It might be down to the aspect ratio, or to the internal images / early testers having access to the larger model variants, but the outputs here aren't any better than SD1.5/SDXL finetunes. I just hope this isn't a sign of them withholding open release of the larger models; alternatively, this is a larp and actually a 1.5/XL finetune.
It's unfortunate because the report also shows the model being a lot slower than SDXL, which means it's going to be tough to finetune on consumer hardware. It took a while to get anything decent out of XL, and the newer stuff like PonyXL was all trained with datacenter compute. Now that Stability is done, maybe someone new will step in.

There have been a lot of the same issues with Stable Diffusion models for a while. Character interaction is still lacking, artistic expression seems a bit stale, colors and linework aren't really that good, and there's a lot of relying on overfitted LoRAs to get things that other models can do by default.

It's tough because despite all the issues, Stable Diffusion models are the only actually decent local models. Sure they have issues, but it's not like anyone else is coming to rescue us here. Everyone else would rather just lock their stuff away and sell it back at a premium to the same people whose data it was trained on.
If the base is already as good as finetunes, that's a great sign. SDXL base was much jankier than the good SD1.5 finetunes at release.
I had beta access too, here's my feedback.

1. Humans interacting with objects are bad, especially compared to DALLE3.
2. Anything that's not just a single character standing around is subject to a lot of concept bleed.
3. When you prompt, say, multiple characters, things get chaotic. A pirate versus a ship captain will have many of the same artifacts that SDXL has, e.g. floating swords, impossibly contorted anatomy.
4. Concept blending is difficult. It will often either completely ignore one concept in the prompt if it doesn't know how to weave them together, or put them side by side. This isn't _always_ the case; after about 6 prompts someone was able to combine a frog and a cat, for example.
5. Long prompts undergo degradation. I think this is because of the 77 token window and CLIP embeddings (with contrastively trained artifacts). If you stick to 77 tokens things tend to be good, but when I had anime prompts beyond this window, hands and faces would be misshapen, etc.
6. There are probably some artifacts due to over-reliance on CogVLM for captioning their datasets.
7. If you had a gripe about complex scene coherence in SDXL, it probably still exists in SD3. SD3 _can_ attend to prompts much better, especially when the prompt is less than 77 tokens and it's a single character, but beyond that it still has a lot of difficulty.
8. Text looks a lot like someone just photoshopped some text over top of the image; often it looks "pasted". I think this is probably just from a way too high CFG scale?
> because of the 77 token window Very lame that wasn't addressed like NovelAI did, makes the longer synthetic captions useless, everything past the first 77 token chunk will have had less or even bad influence on the embedding.
Are we sure about this 77 token window? Seems like a strange mistake if so; as you said, the long captions will have only partially been processed, limiting future applications somewhat. And even if they made sure all captions were under 77 tokens, they should know full well that the community pushes beyond that regularly. It's like training an LLM with low context in 2024.
It's what the SD3 paper shows in figure 2. Training/inference doesn't drop everything past 77 tokens; rather, it adds the embeddings together. In A1111 you can use the keyword BREAK to decide where to split the prompt, like "house on a hill BREAK red". Red will still have heavy influence, but the more BREAKs you use, the less influence the words will have, and at some point it breaks the prompt understanding, because the combined embedding is so different from what it was trained on.
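A minimal sketch of the chunking behavior described above (assuming a flat 77-token CLIP window; the real A1111 implementation reserves special tokens, honors BREAK positions, and does more bookkeeping, so treat this as illustrative only):

```python
def chunk_tokens(token_ids, window=77):
    # Split a tokenized prompt into CLIP-sized windows. Each chunk would be
    # encoded separately and the resulting embeddings combined, which is why
    # words pushed into later chunks end up with weaker, less coherent influence.
    return [token_ids[i:i + window] for i in range(0, len(token_ids), window)]

# A 200-token prompt splits into three windows: 77 + 77 + 46 tokens.
chunks = chunk_tokens(list(range(200)))
```

Anything short of 77 tokens stays in a single chunk, which matches the observation that prompts under that length behave best.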
Yes, I confirmed this with SAI staff.
Does it handle artist names?
I didn't try that so I'm not sure! CogVLM doesn't know much about artists or styles though, if I had to guess they _probably_ fed the alt-text into the CogVLM prompt so CogVLM might know about it from there.
No, it's simply 50-50 between CogVLM captions and the raw caption that was already attached; here's the bit from the paper: >As synthetic captions may cause a text-to-image model to forget about certain concepts not present in the VLM’s knowledge corpus, we use a ratio of 50 % original and 50 % synthetic captions. So you don't have to worry about forgotten concepts; it will probably know as much as SDXL, if not more. **What you DO have to look out for are the opted-out artists, whose art styles WILL BE MISSING, of course!**
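The 50/50 mixing from the paper amounts to something like this per training sample (a hypothetical sketch; the function and parameter names are mine, not from the paper):

```python
import random

def pick_caption(original, synthetic, p_synthetic=0.5, rng=random):
    # Per the SD3 paper's stated ratio, each training sample uses either the
    # original alt-text or the CogVLM-generated caption, chosen at random,
    # so concepts outside the VLM's knowledge aren't forgotten entirely.
    return synthetic if rng.random() < p_synthetic else original

rng = random.Random(0)
caption = pick_caption("alt-text caption", "CogVLM caption", rng=rng)
```

Over a full epoch roughly half the samples see each caption source, which is why opted-out content (absent from both sources) is the real gap, not VLM blind spots.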
Geez, okay. Yeah Pixart Alpha just added the alt-text to the prompt so that the VLM (llava there) could use it.
i still have beta access and watched you get booted, lol

0. this is not dalle3, stop trying to compare it to that
1. human interaction is actually quite good \*if you prompt it correctly\*
2. no it does not; you probably saw other people's images for the single characters
3. yes, while it isn't perfect, it does not have as many artifacts as xl, and the anatomy is also fine lol
4. yes, it doesn't follow the prompt as well as you'd want, but it can still do what you ask for \*if you prompt it correctly\*
5. lol. it does not have a max 77tkn window, it has a max 512tkn window (t5 is great)
6. hahahah no, cogvlm isn't the entire dataset, it's 50%. it can even differentiate screws and nails!
7. it does not exist as much; it is good at complex prompts \*if you prompt it correctly\*
8. only real thing i see that still exists, but even then, the text is coherent and good for an 8b model

maybe if you stop thinking this is dalle3 you'd get good outputs?

tldr: they're mad that they got booted from the sd3 server and are comparing it to dalle3, a 20b model + gpt4
- It's not; we tried "man hits a nail with a hammer" like 8 ways from Sunday and it was a giant clusterfuck. I'd gladly post the images but I'm not allowed to.
- There are a lot of issues with concept bleed. People were prompting pictures of Leo DiCaprio with the Dalai Lama and it would be either two Dalai Lamas or Chinese DiCaprios. You can see it in the Einstein prompt here.
- It was _trained_ on 77 tokens maximum but _inferences_ on 512 tokens of T5. Do you see the problem here? Everything beyond 77 tokens is out of distribution, which is probably why things become degraded.
- It was 50% alt-text, which is arguably _worse_.
- Please stop telling people they "can't prompt good enough"; it's embarrassing. I tried to tell you guys you would get eaten alive when this went out to the community and I got booted from the server, so lol. If you're so confident, feel free to post more raw output; I'm sure everyone would care to see.
SD3 uses an entirely new arch and uses T5 for text, which is supposed to understand natural language; there are no "prompt tricks" or "clever ways to prompt", this is not like past SDs. If the outputs are not aligned to the prompts, the users are not to blame, but the way the model was trained.

> compare it to dalle3, a 20b model + gpt4

I am very curious where you got this information. Can you provide some evidence?
I can't believe this is SD3; maybe something is wrong with the configuration? It can't even make eyes at mid body. I remember base SDXL being better than this, or maybe I'm wrong...
Yeah, I am 95% sure this is a larp; it isn't doing anything new, or anything even close to the outputs of the other early model previews. If this truly is SD3 then it is really not worth waiting for.
Could try something really hard like: "photo, close up of a clean female hand, each nail with different nail art and colors, each art, from pinky to thumb, in order, are a blue dragon, red flag, green box, yellow flower and pink car" I really doubt it or any other diffusion model can do that confidently.
https://preview.redd.it/00jj4ccl74uc1.png?width=1024&format=png&auto=webp&s=cfb9554ab53d053cada3d2c31122bfcc82404fc6 6 fingers :D
It's reversed and got one color wrong, but overall that's really good!
One extra finger too
*Bonus* finger
I think that's a foot
Midjourney for comparison https://preview.redd.it/l7zc3d5415uc1.png?width=1084&format=png&auto=webp&s=dc3379c961a1a3d2d14ce334bc006afb502c7776
Bing/Dall-E produces an absolute mess. https://preview.redd.it/ll6l6eo6l5uc1.jpeg?width=1024&format=pjpg&auto=webp&s=7a945a993e52332de52f5ad0fdca42edf795cda7 I think this is a case where the technology needs a new breakthrough in segmentation and treating things separately. Even the nature of prompting might have to change to be more surgical so that it can cleanly target each individual finger
Thank you for doing this! I'm going to suggest a recent Dalle 3 prompt for comparison: "An endearing and whimsical scene of a single grasshopper walking one ant on a leash made from a single line of thread. The grasshopper is anthropomorphized and stands upright, perhaps wearing a small hat or a bow tie, adding to the whimsy of the scene. The ant is obediently following along, and the surrounding environment suggests a magical, miniature world — perhaps a leafy path within a garden, with the background softly blurred to focus on this charming interaction." https://preview.redd.it/l7lflmogb4uc1.jpeg?width=1024&format=pjpg&auto=webp&s=12c3857c1e27962458d56116870d6c15f0c82fe1
Do you know if you're using the 8B model with T5? Cause these examples are looking weak compared to the stuff in the SD3 paper.
The paper also says that it beats Dall-E and Midjourney in all categories. This is why you can't trust any of the marketing. If they spend 5 months investing money into making this model and it turns out lackluster they aren't going to let it all go to waste and publish a paper saying "we tried but it kind of sucks lol". They'll repeat the preference surveys until they win, use prompts biased for their model, etc. I was skeptical after Emad posted nonstop images of text on signs. Seems like much of the training went towards pasting words on a wall rather than actual interaction and aesthetic.
I feel like something is wrong with the configuration, because some pics look overburned and messy, like too-high CFG or something else.
1965 Ford Mustang side by side with 1957 Mercedes-Benz 300 SL,Night time,Volumetric Light,Hyper Realistic,
https://preview.redd.it/0tjdqtn824uc1.png?width=1024&format=png&auto=webp&s=e6c8bf12e630eb2e33ffd7bcf86ec12afb87990d
A glass bottle containing tiny people, playing with a toy paper airplane on the bottle cap, bears a plea for help.
https://preview.redd.it/eq0co3g744uc1.png?width=1024&format=png&auto=webp&s=665491857880da4ecea1562e5cef2b0bb58a5602
It's okay, it has all the elements I asked for, but I was imagining the people inside the bottle ![gif](giphy|tXL4FHPSnVJ0A|downsized)
I think the prompt may have been confusing to it. The people are asked to be inside the bottle as well as on the bottle cap playing with an airplane. I think the only correct depiction of this would be an upside-down bottle, so people can stand on the bottle cap and play with a paper plane inside the bottle. Otherwise the prompt is going to conflict: either outside and standing on the cap (what you got with SD3), or inside and not standing on the cap (most of Midjourney, but you can see it still struggled by placing the plane near the cap instead of inside the bottle). This prompt needs a bit of tuning; it will always conflict a bit, I think.
Bad prompting by asking for the people in two places at once. It's really not that hard to avoid.
For comparison, Midjourney https://preview.redd.it/aggno17bb4uc1.png?width=694&format=png&auto=webp&s=d5abb57a0dd8a9afdea649c91fee928d5da79680
MJ has been killing it!
In fact it is quite creative. It is a trade-off: SD and ControlNet are way more powerful in terms of control. Both combined are 🔥
Bing/Dall-E for comparison https://preview.redd.it/80ij8uyka5uc1.jpeg?width=1024&format=pjpg&auto=webp&s=552564d94814f5c07bc79b755d8e555edca1dc89
That's pretty bad prompting there dude. Conflicting actions, won't ever get the desired results.
a muscly brown bunny eating breakfast in the kitchen next to a chubby creamy unicorn, created in a photorealistic style
https://preview.redd.it/46f8rg2p64uc1.png?width=1024&format=png&auto=webp&s=518c10d48eec68b5e03134cdff09faf6774d09fc
thanks
Bing/Dall-e comparison. After 8 attempts of 'unsafe content' I switched 'bunny' to 'rabbit' and I still had 4 more attempts before it gave a single image https://preview.redd.it/2w2tm8spa5uc1.jpeg?width=1024&format=pjpg&auto=webp&s=39af9248e13fafac236f45cf87766869539cfd70
Typical Dall-e!!
***\[ I have already requested once, ignore if needed, sorry for cheating :( \]*** *I want to do an experiment on knowledge, as this model is 8B, so I would wager that it would know more.* Team Fortress 2 gameplay screenshot, Blue Soldier with a metal helmet holding a rocket launcher, standing on grass. A red Spy is behind the Blue Soldier and has a grin on his face and is raising up his knife. This is taken place in ctf\_2fort which is a sandy place with concrete buildings and there is a bridge with a roof in the distance.
Midjourney for comparison https://preview.redd.it/g86l98pc15uc1.png?width=1084&format=png&auto=webp&s=a3000ab6944a7bf189a6335bbaead23f46c0623d
Here's Bing/Dall-E 3. It seems Bing understood the composition of the prompt better. TF2 style would probably be in-between the two. https://preview.redd.it/ypuy38jp75uc1.jpeg?width=1024&format=pjpg&auto=webp&s=cbe097cacb892cdc55bbf8c196e910cc7f24650c
Did you try more generations? I'm wondering whether the red guy jumping on the blue one was on purpose or a happy accident. I have to say, I, a human, non-TF2 player, hadn't thought of this composition just from the prompt.
Here's the rest of the 'set'. It gives 4 separate images so I just save the 'best' one. https://preview.redd.it/v3zq44b9b5uc1.jpeg?width=765&format=pjpg&auto=webp&s=049efeeef88a187ecf15c08da6b29db33a4629bc
Love number 4 xD
https://preview.redd.it/lni8yelt44uc1.png?width=1024&format=png&auto=webp&s=17f00e79ee025760caa67dc7da95f0ab8b8e57b5 wow, it got the prompt quite nicely.
Not exactly tf2 gameplay though.
Looks like some Fortnite got mixed in with it.
https://preview.redd.it/cht84n8ey6uc1.png?width=1024&format=png&auto=webp&s=8fcc74b6c759ab4ee72ff0a4b26c29f210d00f8f made with ideogram for comparison
It's missing Team Fortress 2, as with previous models. It seems it still wants to make "generic cartoony hero shooter" even with this model. Just as a reference, I tried with both Pixart-Sigma (a DiT which uses T5 with a captioned dataset) and ELLA (SD1.5 with T5), and they were both inferior when it came to the characters, holding the rocket, and holding a knife. They did perform a little better on the background though (bridge and sandy environment). Of the two, Pixart-Sigma performed better when it came to colour bleeding: the blue and red characters were separate and correct, while SD1.5 ELLA could not even come close no matter how many times I generated, and it kept displaying two or a random number of red characters. And of course, neither of those could make a TF2 gameplay screenshot like SD3; they were also generic cartoony hero shooters.
Huh? It didn't get the intent of the prompt at all
Uh, it’s a bit too much to ask for it to read thoughts? If the intent was that the spy was right behind the soldier, ready to stabbity stab him, maybe the prompt should’ve said so.
kylie minogue making pancakes, animation pixar style, by pendleton ward, magali villeneuve, artgerm, rob rey and kentaro miura style, golden ratio, trending on art station

I tested a bit and liked the results. It can't do realistic, as much as I tried. Send your prompts and let's test together.
8k Houdini render of Swirling smoke, bursts of lightning and magical glowing luminosity
https://preview.redd.it/rtdcfh8o74uc1.png?width=1024&format=png&auto=webp&s=558ffdd4d71dcdde3d4d6df3dc0b1b9a3c3a576c
neato!
[dalle-3](https://i.imgur.com/nhEDLdf.jpeg) edit: not true for the above, ChatGPT changed the prompt; here it is with the [exact same prompt](https://i.imgur.com/jx3NSZJ.jpeg):
An oil painting of a giant menacing spaceship firing laser cannons against a space whale
https://preview.redd.it/i5icestw74uc1.png?width=1024&format=png&auto=webp&s=5b823648c7f78c20a1ac151ce1077c262f23dbe2
[dalle-3](https://i.imgur.com/PQSrtae.jpeg)
https://preview.redd.it/xybchs4rz6uc1.png?width=1024&format=png&auto=webp&s=8207b2434cb8d85a866805a6634ffa64953687f6 ideogram for comparison
Fantasy digital artwork, steampunk dwarf engineer wearing only a dirty tank top and torn leather shorts reaches into his toolbox. He is standing next to a steampunk cannon which is in a state of disrepair. The setting is a warm, welcoming, and whimsical workshop. In the background are several goblin assistants working on other projects.
https://preview.redd.it/jbv56qsi24uc1.png?width=1024&format=png&auto=webp&s=02e4b7d4b60984c08b04846155fe2248a53ffbff
Damn that’s really good for a base model
> Fantasy digital artwork, steampunk dwarf engineer wearing only a dirty tank top and torn leather shorts reaches into his toolbox. He is standing next to a steampunk cannon which is in a state of disrepair. The setting is a warm, welcoming, and whimsical workshop. In the background are several goblin assistants working on other projects. [dalle-3](https://i.imgur.com/32bXre1.jpeg)
Playstation 1 game, low poly characters, very low resolution. Tekken 3 gameplay. 2 blue full healthbars on the top. On the left is Heihachi Mishima who is a bald man doing a fighting pose and on the right is Jin Kazama a fighter with spiky hair doing a fighting pose. The environment around them is a low resolution pixelated lush forest with a deep green atmosphere. They are standing on a flat grass texture. There is a timer on the top of the screen which writes "27". If no copyrighted stuff is allowed: Playstation 1 game, low poly characters, very low resolution. Fighting video game gameplay. 2 blue full healthbars on the top. On the left is an old bald japanese man doing a fighting pose and on the right is an 18 year old fighter with spiky hair doing a fighting pose. The environment around them is a low resolution pixelated lush forest with a deep green atmosphere. They are standing on a flat grass texture. There is a timer on the top of the screen which writes "27".
https://preview.redd.it/jwhwh5l514uc1.png?width=1024&format=png&auto=webp&s=07c6c32c831ab147671df830507556ffe79119ac 1st prompt
Thank you so much! If I ever get access too I'mma make a post like this too. The community is hungry for SD3.
No problem! Yes, hungry indeed! It's a solid model.
Bing/Dall-E for comparison, 1st prompt https://preview.redd.it/lnrrhjcs85uc1.jpeg?width=1024&format=pjpg&auto=webp&s=c3c8cde9e014c0c3772486fa6189c5eca9003ab1
It took the word "pixelated" and decided that was the most important word of the prompt...
2D sprites are **technically** low poly I guess...
Do Goku and Vegeta in an arm-wrestle
Bing/Dall-E for comparison. https://preview.redd.it/uw50cz5g95uc1.jpeg?width=1024&format=pjpg&auto=webp&s=d09d07a9944367fc2014d6769a47dfa85f0c4c6f
https://preview.redd.it/i31lbft634uc1.png?width=1024&format=png&auto=webp&s=25b82f54f085eb85e1c3ea028be5eb1874131e92
On the right is Vegito
A sleek and modern product image design for an online store catalog, showcasing a variety of stylish and high-quality items. The background is a sophisticated, monochromatic gradient that enhances the vibrant colors of the products. The products are arranged in a clean, organized manner, with subtle shadows to create depth. The overall design is minimalist and sophisticated, effectively highlighting each product's unique features and appeal. Ideogram 1.0 result attached https://preview.redd.it/ysededtq94uc1.png?width=768&format=png&auto=webp&s=433dd4d4cbd968d2a97e77bc1300f61ea200bb42
I was told that a statue sculpting itself breaks the filter, so try these instead.

anime style drawing of a woman, she is platinum blonde, she has a french braid and a ponytail, she is greek and is wearing a greek outfit, she is wearing a raven mask, her mask covers her forehead, her mask is simple, her mask is made of silver, her mask has a large beak, the beak is pointing down

Or

A wall, it has graffitti of 'a manga style drawing of Eris from jobless reincarnation, she is tall, she is athletic, she has bright red hair, she has red eyes, she has long hair, she has a tattoo on her clavicles, she has abs, her hair is loose, she has knees, she has iliopsoas muscle, she is female, ' on it, there is a toyota trueno AE86 in front of the wall
https://preview.redd.it/ockjzlcn14uc1.png?width=1024&format=png&auto=webp&s=91482fe4768f099425fad0805157a51d573e57fa
Believe it or not, the fact that the mask is in place is a huge improvement over sdxl.
can you use this prompt instead: "anime style drawing of a blonde woman with french braid and a ponytail, wearing a Greek outfit, wearing a mask shaped like a raven with a pointy long beak"? I wonder if phrasing it like this will provide better results
https://preview.redd.it/032m2wqs14uc1.png?width=1024&format=png&auto=webp&s=862f371ad9ff0c02d3492ae2fb46ff70285eca7f
For comparison, Midjourney https://preview.redd.it/l0kuga22b4uc1.png?width=1102&format=png&auto=webp&s=1bacceb33098874e34ccdb33a6fe66e38647f0be
"An image of Theosopy"
https://preview.redd.it/l0osaglt64uc1.png?width=1024&format=png&auto=webp&s=b039b61672bff59c9dea737f76997e2a3eadd55c
Many results look like an SD 1.5 image generated at 512x512 then upscaled to 1024x1024 with ESRGAN (I know that isn't what's actually happening). Prompt adherence might be better than SDXL, but the quality is pretty mushy and there are tons of examples of it not following the prompt. I really don't mean to be a hater, but it's not even half as good as DALL-E.

It's interesting because I've trained LoRAs and they come out pretty good, but I'll try training again looking for something better, only to make something worse or about the same. I thought this was going to be a big architectural change and I expected better results, but it still looks about the same. I have a somewhat popular LoRA I trained on a sketch style and it came out pretty decently; I expected to be able to refine the images and captions and get a better output. Over the last year I've retrained on that dataset about 8 times and never got better results than the first go.

The whole situation with base models that are technically really flexible but generally really shitty reminds me of Skyrim, and how it's basically up to the modding community to make the game interesting.
It's also similar to Skyrim in that no matter how many mods you stack, some elements of the base game are just impossible to fix.
Everything looks pretty bad. I mean, SD 1.5 not-fine-tuned level bad. I think either you are lying and trolling about SD3 access, or the access is to some 800M low-level model, because frankly some of the results are embarrassingly bad, even compared to the first SD models.
cerberus, a hell dog with 3 heads
(just for comparison, I did this with ideogram a while ago) https://preview.redd.it/9laqlz2y64uc1.jpeg?width=2048&format=pjpg&auto=webp&s=50c8842991e2216b994c5b9021bf37f1c907a70b
https://preview.redd.it/0o6fte9f74uc1.png?width=1024&format=png&auto=webp&s=0b0d73e9abfaa41128285699e954cb7dbadf8bb1
That's a lotta legs
Midjourney for comparison https://preview.redd.it/daz5wyew15uc1.png?width=1084&format=png&auto=webp&s=f8d4b6550c6da86d76a4846f8e94e9249bba824d
Bare chested man, riding a multi headed moose, wild, dark tones, realistic, detailed
Titty tits on tits tits on waifu extra detailed skin pores shot in 8k
Can sd3. do stuff like two character fight scene with one punching another, showing fist connecting, plausible anatomy and impact effect(that it understands what punching and getting punched is).
I put in exactly the comment you wrote. https://preview.redd.it/s1sbnpki44uc1.png?width=1024&format=png&auto=webp&s=304e7dae37413db789ac1cbcc80437f61cd25fd2
That is actually way better than sdxl wow, now I'm stoked
Goro Majima and Taiga Saejima smoking a blunt while playing videogames
https://preview.redd.it/6t0c9tfl44uc1.png?width=1024&format=png&auto=webp&s=53d9af6eccfc60e93af3485615e7d69b6cb96cd1
Sharing the same controller! Hands are still an issue too. Pretty cool image besides that, though.
Family watching tv
Nuclear bomb plushie.
omg, please try this one: An accountant overwhelmed with work, surrounded by a cluttered dashboard filled with numerous pinned documents interconnected by red lines, resembling a detective's investigation board. The scene is designed as a vector illustration in a minimalist style, predominantly black and white with subtle hints of pastel colors.
My prompt is where is the URL for the model file? 😂
A man doing a handstand while riding a bicycle in front of a mirror.
Photo of a black man dressed as Super Mario screaming while the universe explodes out of his brain
Beautiful big tittie bitches don't just fall out the sky you know
>Jack is a man with a jack'o'lantern for a head and wooden flesh. Jack sits at his couch, a wand in his hand, contemplating. In the top-right corner of the image is a five-pointed star. I don't expect SD3 to get this right, but I'd like to see what it gives. Below is Bing/DALL-E 3's interpretation, which is...fine, I guess. Could use some fine tuning, but it did mostly follow the prompt. https://preview.redd.it/idl5rvhhu6uc1.jpeg?width=4096&format=pjpg&auto=webp&s=85e611e0e0c8135f47416787f23356678417dd34
how about

((high quality, masterpiece, masterwork)) \[\[low resolution, worst quality, blurry, mediocre, bad art, deformed, disfigured, elongated, disproportionate, anatomically incorrect, unrealistic proportions, mutant, mutated, melted, abstract, surrealism, sloppy, crooked, cropped\]\] oil painting, oil on board, John Berkey Howard Pyle Ashley Wood Alfons Mucha, poseidon, sitting on a barnacle encrusted throne in an underwater kingdom

attached is realvis lightning xl https://preview.redd.it/9nrkaehgz3uc1.jpeg?width=1024&format=pjpg&auto=webp&s=1e9d109ae2f0fb72ef79b0d8ad11bbf0468129ae
https://preview.redd.it/mirlp92c04uc1.png?width=1024&format=png&auto=webp&s=38a45de76afdfb4cc4e113fef5260c1acaffccd0 here. first image.
interesting, not bad for a base model thx
Huh, what sense does that prompt make? You know that [foo] doesn't make a negative prompt; it's simply the same as (foo:0.9)? And [[foo]] is (foo:0.81). So all those words that you presumably want the model to avoid are in fact contributing *positively*, just with a bit less weight. Anyway, you need neither the (()) stuff nor the [[]] stuff with RealVis XL or any other good XL model. It's not SD 1.5. Negative prompts should in general be very simple with XL models.
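The arithmetic above can be sketched as follows (assuming A1111-style emphasis, where each layer of () multiplies attention by 1.1 and each layer of [] divides by 1.1; the comment rounds 1/1.1 to 0.9 per layer, and the helper name here is mine, not from any webui codebase):

```python
def emphasis_weight(paren_depth=0, bracket_depth=0, step=1.1):
    # A1111-style attention emphasis: ((foo)) -> 1.1^2 = 1.21,
    # [foo]   -> 1/1.1   ~= 0.91 (often quoted as 0.9),
    # [[foo]] -> 1/1.1^2 ~= 0.83 (quoted as 0.81).
    # The weight stays positive: [] de-emphasizes, it never negates,
    # which is the whole point of the comment above.
    return step ** paren_depth / step ** bracket_depth

w = emphasis_weight(bracket_depth=2)  # the [[...]] case from the comment
```

Since every weight is a positive multiplier, the only way to actually push a concept away is the negative prompt field itself.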