Diffusion models have a very limited understanding of language compared to modern LLMs like GPT-4 or Claude.
https://huggingface.co/docs/transformers/model_doc/t5
Most likely they use something like Google's T5 here. It's basically only meant to translate sentences into something a diffusion model understands. Even ChatGPT is just going to formulate a prompt for a diffusion model in the same way; it isn't going to inherently give the model any more contextual understanding.
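To make that concrete, here's a rough sketch of what that text-encoder step looks like with Hugging Face's `transformers`. The model name `t5-small` is just a stand-in; real pipelines vary (Stable Diffusion uses CLIP text encoders, while Imagen-style models use T5). The point is that the prompt becomes a block of embeddings, not anything the diffusion model "reasons" about:

```python
# Sketch: turning a prompt into conditioning embeddings with a
# T5-style encoder. "t5-small" is an example checkpoint, not
# necessarily what any given image generator actually uses.
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")

inputs = tokenizer("a cat riding a bicycle", return_tensors="pt")
# last_hidden_state has shape (batch, seq_len, d_model); a diffusion
# model cross-attends to these vectors as its only view of the prompt.
embeddings = encoder(**inputs).last_hidden_state
```

Everything the image model "knows" about your sentence is in that fixed-size embedding sequence, which is why subtle compositional or geometric instructions get flattened out.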
The simple answer is that they're simply not there yet when it comes to understanding complex concepts. I suspect the most impressive images of impossible concepts they can drum up come mostly from chance or from sheer volume of attempts.
Nevertheless, these models are trained on broad yet shallow data. As such, they're glorified tech demos meant to whet the appetite of businesses, in the hope of generating high-value customers who will further tune a model for a specific purpose. If you haven't already, I suggest you do the same: curate a very specific dataset with very clear examples. The models can already reproduce the warping of different types of lenses, so I think it would be very doable to train one to better reflect the curving geometry you're looking for.
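If you go the curated-dataset route, the common layout for Hugging Face's fine-tuning scripts is an image folder plus a `metadata.jsonl` mapping each file to a caption. A minimal sketch (the filenames and captions here are hypothetical placeholders for your own lens-distortion examples):

```python
# Sketch: writing a metadata.jsonl for the Hugging Face "imagefolder"
# dataset layout, used by the diffusers text-to-image training scripts.
# The entries below are made-up examples; replace them with your own
# curated image/caption pairs.
import json
import os

os.makedirs("train", exist_ok=True)

examples = [
    {"file_name": "fisheye_001.jpg",
     "text": "street scene through a fisheye lens, strong barrel distortion"},
    {"file_name": "fisheye_002.jpg",
     "text": "interior shot with curved horizon, wide-angle lens geometry"},
]

with open("train/metadata.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The more consistent your captions are about the specific geometry you want, the better the fine-tuned model can associate that language with the visual effect.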