Ah, noted. Thx.
The more I look at prompts, the more perplexed I become about what's going on.
3/4 of that prompt+style looks vague, repetitive, impossibly subjective, ambiguous ad absurdum, or contradictory.
For example: no background, city background (if order matters and "no" comes first...?)
Or: golden ratio (but the output dimensions are not golden)
Or: Depth of field (this is colloquially understood to mean shallow focus, but is technically understood as the opposite: a depth of focus ranging from a specific distance to infinity, which, if you look further into the topic, becomes a balance between the resolution of the lens and the medium. —Before anyone tries to kick me for being didactic, recall that the "I" in AI stands for intelligence)
Or: trending on artstation (this sounds like it means something but must operate as an incantation)
Then there's seasoning like:
highly detailed and intricate, hyper maximalist, elite, glow (how do modifiers work? Is "turn glow to 11" useful? According to the overall premise of this tech, it should be)
From here things get weirder.
Consider "a solo tall strong woman": why is "solo" needed? It must be implied by "a," according to the rules of inference we depend on to make this model work. When you prompt "an elephant," you expect the model to start with the entire universe (a field of random noise) and chip away at everything that doesn't fit an elephant. So what can a "solo elephant" mean?
My own limited experience with building prompts showed me that tokens may be extraneous, in the sense that a token's presence or absence makes no difference for a given prompt, yet becomes significant if you change another part of the prompt.
In arithmetic, consider the two expressions:
2 / 1
2 / (1 + 1)
In the first case the 1 token doesn't affect the result. But in the second it does.
But if the 2 is changed to 0, then the 1 doesn't matter in either case.
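The analogy above can be run directly. A toy sketch (the expressions are taken straight from the arithmetic example; `evaluate` is just a thin wrapper I'm introducing for illustration) showing how the same token can be inert or significant depending on its context:

```python
def evaluate(expr: str) -> float:
    # eval is acceptable here: we only pass our own fixed expressions
    return eval(expr)

# The trailing 1 is inert: dividing by 1 changes nothing.
print(evaluate("2 / 1"))        # 2.0

# The same 1, in a different context, now matters.
print(evaluate("2 / (1 + 1)"))  # 1.0

# Change the 2 to 0, and the 1 is inert in both expressions.
print(evaluate("0 / 1"))        # 0.0
print(evaluate("0 / (1 + 1)"))  # 0.0
```

The point being: whether the 1 "does anything" isn't a property of the 1 itself, but of the whole expression around it, which is exactly how extraneous prompt tokens seem to behave.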
If prompts are computational, then how does the grammar work?
And if prompts are not computational (spells) then working with the models is magic.
The parable of The Sorcerer's Apprentice comes to mind.
This is why I find this tech perplexing.
The more I look at prompts, the more strange it all seems to become, like dreams.
This is a very awkward juncture for computer science, because so far the whole point of the field has been about predictability. But AI is weird, not just in the sense of its logic, but because the meanings of the transformations are completely subjective.
The Turing test turns out to be a delineation of the test givers, not the machine (entity) under test. Do you think that the output looks like Santa typing at a laptop in a snow globe? You're human!