I feel like the comment made by the author's friend captures a lot of my feelings on AI art. AI art is often extremely detailed, but that detail is often just random noise. It's art that becomes worse and makes less sense the more carefully you look at it.
Compare that to human art. When a person makes a highly detailed artwork, that detail rewards looking closely. It forms a cohesive, intentional vision and incentivizes the viewer to spend the time and effort to take it all in. A person could spend hours looking at Bruegel's Tower of Babel paintings, or Bosch's "The Garden of Earthly Delights".
Overall, I've never felt the need to spend the time looking closely at AI art, even AI art that I couldn't tell was AI right away. Instead of rewarding close inspection with more detail, AI art punishes the viewer who looks closer by giving them undecipherable mush.
The gate picture has the same problem as the cat one that he didn't filter out. There's a lot going on, and the lighting does seem somewhat inconsistent IMO, but it's also just generally weird: why are there cats of all different sizes, why do some of the smallest cats have the same coloring as the biggest ones while others don't, what's going on with the arms of the two darker cats on the right, why aren't the sides of the throne symmetric, etc.
Everything is consistent in terms of "the immediate pixels surrounding it" but the picture as a whole is just "throw a LOT at the wall."
It passes the "looks cool" test but fails the "how likely would a human be to paint that particular composition" test.
The cat picture really shows the "noisy detail" problem with AI art. There's a lot going on in the area directly above the cat with a crown, as well as on the armrests and the upper areas of the wall background. But it's all random noise, which makes it both exhausting and distracting. A human artist would probably either make those areas less detailed or give them a more consistent pattern. Either would let those parts fade into the background, which would help draw our focus to the cats and the person.
There are other, more general issues too. The front paw of the big cat on the left is twisting unnaturally. The cat on the right with the pendant thing looks like it only has one front paw. The throne looks more like a canopy bed than a throne, with the curtains and the weird top area. The woman's face is oddly de-emphasized, despite being near the center of the piece.
Most of these things are subtle, and can be hard to articulate if you aren't looking closely. But the picture reeks of AI art, and it doesn't surprise me that the author was able to identify it as such right away.
Reading this comment chain is kinda confusing to me. Are y'all really not aware that there are countless works of art predating AI art that look literally just the same?
Missing symmetry is super frequent in hand-drawn art, and adding too much detail that ultimately detracts from the art is something any aspiring artist has done at some point.
I get that you don't like to look at pictures like that, but it's really not unique to AI art.
My point wasn't that you can't find art made by a human that has the characteristics of AI art. My point was more that it's just bad art.
Asking an LLM for its thought process only generates hallucinations. Spotting AI images is the same. Those subtleties are justifications ex post facto, not necessarily the actual cues that trip our BS detectors.
Maybe the author's friend is just way better than me at this, but I tried applying her advice to some of the other images and I don't feel like it would have helped me.
Looking at the human impressionist painting "Entrance to the Village of Osny" that lots of people (including me) thought was AI, there seem to be lots of details which don't really make sense. The road seems to seamlessly become the wall of a house on the right side, for instance. On the other hand, even looking closely, I couldn't see any details in the cherub image that would give anything away.
It’s impressionist. It’s not supposed to make sense in the sense that it’s an accurate reflection of reality; it’s supposed to make sense in that you can understand why the details were drawn in the way they were because someone put thought and intention into them.
I guess a good example is something like the physical characteristics of the lines drawn by the strokes of a paintbrush. You can often see that they all align in generally the same direction, and have a sort of "fingerprint".
I was able to tell because the distant houses are placed in a nonsensical formation in the AI image, but in the human image they make sense (they're more of a 'swoosh').
The arms look, er, not very feminine.
This is why I'm baffled when people want to put this kind of stuff on their Behance/Artstation profile.
Can AI art be useful? Sure but I'd argue only in the pursuit of something else (like a cute image to help illustrate a blog article), and certainly not for art's sake.
Posing it as "ART" means that the intent is for viewers to linger upon the piece, and the vast majority of AI art just wilts under scrutiny like that.
It's exactly what isn't captured in the training data. The AI knows what the final texture of an oil painting looks like, but it doesn't know whether what it's creating is even possible from the point of view of physical technique. Likewise, it doesn't see the translation from mental image to representation of that image that a human goes through. It's always working off the final product.
That makes it sound like impressionism. But the phony details have a more intense bullshitting quality, like the greebles on a Star Wars spaceship.
There's a lot of thought that goes into things like the greebles on a spaceship, like the shape language, the values and hues, etc.
Impressionism might seem "random" like what a model would output, but the main difference is the human deciding how that "randomness" should look.
The details on a model generated art piece are meaningless to me, no one sat down and thought "the speckles here will have to be this value to ensure they don't distract from the rest of the piece."
That's more what I look at when I digest art, the small, intentional design choices of the person making it.
Hmm? Impressionism is noted for an extreme lack of detail that is still suggestive of something specific, because the artist knows what details your brain will fill in. (8-bit pixel art is impressionistic :-) )
Yes, a suggestive smudge, a vague mark, as the artist in the article said (the one talking about the "ruined gate" picture). That's like an honest communication between artist and viewer, "this mark stands for something beyond the limit of my chosen resolution". It's like a deliberately non-committal expression, like saying "I don't know exactly, kinda one of these". In contrast, we have in AI art misleading details that contain a sort of confabulated visual nonsense, like word salad, except graphical. Similar to an LLM's aversion to admitting "I don't know".
You can't really draw many conclusions from this test, since the AI art has already been filtered by Scott to be pieces that Scott himself found confusing. So what do any of the numbers at the end really mean? "Am I better than Scott at discerning AI art from human art?" is about the only thing this test answers.
If you didn't filter the AI art first, people would do much better.
I had the same thought, but a counterargument is that the human art has also been filtered to be real artist stuff rather than what a random person would draw.
It's still impressive that pleasant AI art is possible.
The point isn't to compare random AI art with random human art. The overarching sentiment lately has been that AI art feels bad and has this "Fake" quality to it.
This survey is refuting that argument. AI art can be used in media just like human art and people can't really tell (or care if they can't tell the difference).
> This survey is refuting that argument. AI art can be used in media just like human art and people can't really tell (or care if they can't tell the difference).
Sure, as long as the AI art - and the human art it might be presented with - is presented without context and in low resolution.
Fine art is a matter of nuance, so in that sense I think it does matter that a lot of the "human art" examples are aggressively cropped (the Basquiat is outright cut in half) and reproduced at very low quality. That Cecily Brown piece, for example, is 15 feet across in person. Seeing it as a tiny jpg is of course not very impressive. The AI pieces, on the other hand, are native to that format; there's no detail to lose.
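As a rough back-of-the-envelope on what that downscaling throws away (assuming a ~1024-pixel-wide web reproduction; the survey's actual image sizes aren't stated):

    # How much of a 15-foot-wide painting survives in a small web jpg?
    canvas_width_in = 15 * 12        # ~180 inches of physical canvas
    jpeg_width_px = 1024             # assumed web-resolution reproduction
    ppi = jpeg_width_px / canvas_width_in
    print(f"{ppi:.1f} pixels per inch of canvas")  # ~5.7 ppi
    # A decent print runs ~300 ppi, so the jpg carries under 2% of the
    # linear resolution - nearly all surface-level detail is simply gone.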
But those details are part of what makes the human art interesting to contemplate. I wouldn't even think of buying an art book with reproductions of such low quality - at that point you do lose what's essential about the artwork, what makes it possible to enjoy.
That's a great point. In a similar vein, I routinely see people post photos on social media taken with their phone, side by side with a photo taken by a high-end camera, saying "I bet you can't tell the difference; expensive cameras are a waste of money when phones are so good".
Well, of course: you're comparing 1.5-megapixel compressed JPEGs. If you display those photos on a large monitor - let alone print them - the differences will be immediately obvious.
Yes, but sometimes that’s moving the goalposts when the purpose was to post a picture on the web.
This article has an implicit premise that the ultimate judge of art is “do I/people like it” but I think art is more about the possibilities of interpretation - for example, the classics/“good art” lend themselves to many reinterpretations, both by different people and by the same person over time. When humans create art "manually" all of their decisions - both conscious and unconscious - feed into this process. Interpreting AI art is more of a data exploration journey than an exploration of meaning.
That's one of my problems with AI art. AI art promises to bring your ideas to life, no need to sweat the small stuff. But it's the small details and decisions that often make art great! Ideas are a dime a dozen in any artistic medium, it's the specific way those ideas are implemented that make art truly interesting.
I couldn't agree more; I love what you said in your other reply: "AI art punishes the viewer who looks closer"
I feel like AI art promises to raise the baseline of art available to people who want some artwork for some purpose - from stick figures drawn in MS Paint to something reasonably artful. The sort of thing that would previously have been filled by a Google image search, ripping something off DeviantArt, or just browsing a stock image / clip art website until you find something "good enough". I think part of the problem with AI art discussions is we do these side-by-sides with "real art" while glossing over all the places where "art" is used all the time and doesn't need to rise to the level of "something that will be displayed in a museum".
When quality AI images are created, like the ones in the post, that description doesn't really apply. If you hang out in those Discords, you'll see people obsessing over details and inpainting things that don't look like what they wanted. The high end of results is very specific in its implementation.
Eh. That’s an artificial goalpost. Realistically, it’s a tool in the toolkit.
There does not need to be intentionality for people to interpret it. Humans have interpreted intentionality behind natural phenomena like the weather and the constellations since prehistory, and continue to do so.
And I contest the original claim that AI art has no intentionality. A human provided a prompt, adjusted that prompt, and picked a particular output, all of which is done with intent. Perhaps there is no specific intent behind each individual pixel, but there is intent behind the overall creation. And that is no different to photography or digital art, where there is often no specific intent behind each individual pixel, as digital tools modify wide swathes of pixels simultaneously.
The Rorschach test is quite literally an example of people finding meaning in randomness.
Agreed. AI art subtracts intentionality.
It would have been interesting to know how much time most people spent per picture, because consider the quoted comment from the well-scoring, art-interested person mentioned:
"The left column has a sort of door with a massive top-of-doorway-thingy over it. Why? Who knows? The right column doesn't, and you'd expect it to. Instead, the right column has 2.5 arches embossed into it that just kind of halfheartedly trail off."
You can find this in almost every AI-generated picture. In the picture that people liked most, the AI-generated cafe by the canal, the legs on the chairs make little sense. Not as bad as in non-curated AI art, but still, no human would paint like this. Same for the houses in the background. If you spend, say, a minute per picture, you almost always find these random things in AI art, even if the image is stylized; unlike human art, it has a weird uncanniness to it.
I agree that the cafe had tells, just like the city street. But Gauguin also ended up in my AI bin. With the latter I feel the cropping was very unfavourable.
Even though I was warned of the cropping, I didn't think the works would be cut that badly. Since I was working under the assumption that good specimens of each category would be chosen, the cut Gauguin didn't make it.
But in the end I'd convinced myself that Osny also had tells apart from the composition. So what do I know?
AI art can be hard to identify in the wild. But it still largely sucks at helping you achieve specific deliverables. You can get an image, but it's pretty hard to actually make specific images in specific styles. Yes, we have LoRAs. Yes, we have ControlNets (to varying degrees), IPAdapter (to a lesser degree), face adapters, and whatnot. But it's still frustrating to get something consistent across multiple images, especially in illustrated styles.
AI art is good if you need something in a general ballpark and don't care about the specifics.
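For what it's worth, the consistency tooling mentioned above looks roughly like this in practice - a minimal sketch using Hugging Face's diffusers library, where the LoRA path, edge-map file, and prompt are hypothetical placeholders (the two model IDs are real public checkpoints):

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # ControlNet conditions generation on a structural input (here, Canny
    # edges), which is one way to pin composition across multiple images.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # A LoRA nudges the base model toward one style - the "specific images
    # in specific styles" problem. The path is a placeholder.
    pipe.load_lora_weights("./loras/my-illustration-style")

    edges = load_image("./reference_canny_edges.png")  # precomputed edge map
    image = pipe(
        "a cafe by a canal, illustrated style",  # hypothetical prompt
        image=edges,
        num_inference_steps=30,
    ).images[0]
    image.save("out.png")

Even with all of that pinned down, two runs with different seeds can still drift apart stylistically, which is exactly the frustration described above.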
Yep, that's why you see AI art a lot as generic blog hero/banner images.
From the article:
So maybe some people hate AI because they have an artist's eye for small inadequacies and it drives them crazy.
This is it 100%.
When somebody draws something (in an active fashion), there is a significantly higher level of concentration and thought put towards the final output.
By its very nature, GenAI mostly uses an inadequately descriptive medium (e.g. text), and the user then must WAIT until an output that roughly matches their vision "pops" out. Can you get around this? Not entirely, though you can mitigate it through inpainting, photobashing, layering, ControlNets, LoRAs, etc.
However, want to wager a guess what 99% of the AI art slop that people throw up all over the internet doesn't use? ANY OF THAT.
A conventional artist has an internal visualization that they are constantly mentally referring to as they put brush to canvas - and it shows in the finer details.
It's the same danger that LLMs have as coding assistants. You are no longer in the driver's seat - instead you're taking a significantly more passive approach to coding. You're a reviewer with a passivity that may lead to subtle errors later down the line.
And if you need any more proof, here's a GenAI image attached to _Karpathy_'s (one of the founding members of OpenAI) Twitter post on founding an AI education lab:
https://x.com/karpathy/status/1813263734707790301
Generative AI is so cool. My wife (a creative director) used it to help design our wedding outfits. We then had them embroidered with those patterns. It would have been impossible otherwise for us to have that kind of thing expressed directly. It’s like having an artist who can sketch really fast and who you can keep correcting till your vision matches the expression. Love it!
I don’t think there have been any transformative AI works yet, but I look forward to the future.
It's unsurprising to me that AI art is often indistinguishable from real artists' work, since famous art is famous for some reason other than technical skill. Certainly there are numerous replica painters who are able to make marvelous pieces.
Anyway, I’m excited to see what new things come.
Not sure I understand the article. The author specifically chose art from humans and AI that he found difficult to categorize as human or AI art. Doesn't the fact that people had a 60% success rate suggest that they are a little better at seeing the difference than he was himself?
(What am I missing? This is not like "take 50 random art objects from humans and AI", but "take the most human-like AI art, and the least obvious art from humans".)
Eh, this is pretty unfair. That's a test of how good humans are at deceiving other humans, not a test of how hard it is to distinguish run-of-the-mill AI art from run-of-the-mill human art in real life.
First, by their own admission, the author deliberately searched for generative images that don't exhibit any of the telltale defects or art choices associated with this tech. For example, they rejected the "cat on a throne" image, the baby portrait, and so on. They basically did a pre-screen to toss out anything the author recognized as AI, hugely biasing the results.
Then, they went through a similar elimination process for human images to zero in on fairly ambiguous artwork that could be confused with machine-generated. The "victorian megaship" one is a particularly good example of such chicanery. When discussing the "angel woman" image, they even express regret for not getting rid of that pic because of a single detail that pointed to human work.
Basically, the author did their best to design a quiz that humans should fail... and humans still did better than chance.
Also impressionism is probably one of the most favorable art styles for AI. The lack of detail means there are fewer places for AI to fuck up.
A street with cafe chairs and lights, that's like an entire genre of impressionist paintings.
I think it's fair. It's the same thing humans do with their own art. You don't release the piece until you like it. You revise until you think it's done. If a human wants to make AI art, they aren't just going to drop the first thing they generated. They're going to iterate. I think it's just as unfair to include the worst generations, because people are going to release the highest quality they can come up with.
> I think it's fair. It's the same thing humans do with their own art.
No, hold on. The key part is that you have a quiz that purports to test the ability of an average human to tell AI artwork from human artwork.
So if you specifically select images for this quiz based on the fact that you, the author of the quiz, can't tell them apart, then your quiz is no longer testing what it promised to. It's now a quiz of "are you incrementally better than the author at telling AI and non-AI images apart?" Which is a lot less interesting, right?
I'm not saying the quiz has to include low-quality AI artwork. It also doesn't need to include preschoolers' doodles on the human side. But it's one thing to have some neutral quality bar, and another thing altogether to choose images specifically to subvert the stated goal of the test.
I don't see why you wouldn't use the highest quality possible for both.
But they didn't do this at all. They picked the most human-like AI images (usually high quality), and the most AI-like human images (usually mid).
The anime pictures are particularly poor and look much worse than commercial standard work (e.g. https://pbs.twimg.com/media/FwWPeNhXoAQZGW8?format=jpg&name=...) -- but of course those would be too easy to classify, wouldn't they? I wouldn't fault anyone for thinking the provided examples are AI.
> They picked the most (…) the most AI-like human images
Why do you think so? I didn't see that explicitly claimed in the post (or did I miss it?)
It's my opinion, but... him saying he "[took] prestigious works that had survived the test of time" isn't so believable, when he starts off with something from /r/ImaginaryWarhammer and immediately follows it up with a piece from "an unknown Italian Renaissance painter".
Part of it is he's handicapped by having to avoid famous pieces -- but you can still easily find work that outshines these examples. For digital fantasy, art for card games like Magic: the Gathering. For anime, the art for gachapon games is wonderful. For landscapes, he chose a relatively weak Hudson River School painting, and many have more striking composition and lighting that seem very hard to mistake for AI (e.g. https://collectionapi.metmuseum.org/api/collection/v1/iiif/1...).
Based on what I've empirically seen out in the world, most people posting AI art are not using the same filtering as the author of this test. Plus, the human choices used probably skew more towards what people think of as classic AI art than all human art as a whole does.
The test was interesting to read about, but it didn't really change my mind about AI art in general. It's great for generating stock images and other low engagement works, but terrible as fine art that's meant to engage the user on any non-superficial level.
> It's the same thing humans do with their own art.
How so? Humans distributed all those "I filtered them out because they were too obvious" AI ones that aren't in the test too. So they passed someone's "is this something that should get released" test.
What we aren't seeing is human-generated art that nobody would confuse with a famous work - which of course there is a lot of out there - but IMO it generally looks "not famous" in very different ways. More "total execution issues" vs detail issues.
I appreciate this survey for how thought-provoking it is. Ironically, I'd say the survey is itself art - and not a piece of art that AI in its current state could ever pull off. Maybe that's when the AI art Turing test will truly be passed: when AI is capable of curating such a survey.
For me, what really distinguished the more obvious human art is that it had a story. It was saying something more than the image itself. This is why Meeting at Krizky stands out as obviously human, and so does The Wounding of Christ, whereas muscular man does not.
As with other commenters, I'm surprised the author liked the big gate so much. To me it was one of the easier AI pieces just by virtue of its composition. It's a big gate with no clear reason for being there; there are no characters the gate means something to. It's just a big gate. Obvious slop. The Paris scene, on the other hand, did convince me. It does a pretty good job of capturing a mood - it feels a bit Lowry, but more French Impressionist.
I think this has parallels to good character writing. A few words of dialogue or action can reveal complex inner beliefs and goals. The absence of those can feel hollow. It's why "have the lambs stopped screaming?" is more compelling than "somehow, Palpatine returned".
To some extent, we have already had this competition between human-made high art and human-made generic slop for hundreds of years. The slop has always been more popular, to the chagrin of those who consider high art superior. I don't blame anyone for consuming slop. I do. It's fun.
This is a bit of a ramble but I honestly appreciate that this survey genuinely adds another perspective to the question of what art is. Sorry if that sounds extremely pretentious. But then again, I like slop.
66% here. I was pretty much scrolling through and clicking on first instinct instead of looking in any detail.
Interestingly I did a lot better in the second half than the first half - without going through and counting them up again I think somewhere around 40% in the first half and 90% in the second half. Not sure if it's because of the selection/order of images or if I started unconsciously seeing commonalities.
Even with this hand-picking I got 70%, and I'm nowhere near an expert on either AI or human art, having dabbled in it for a day or two back when DALL-E and Midjourney first became popular. I'm sure someone who's into the image generation field could score 80%+ consistently, even over a larger dataset just as hand-picked as this one.
Telltale signs of AI:
- Cliches. "Punk Robot" is a good example; a human artist capable of creating that would've somehow been more creative.
- Facial expressions and limbs, obviously - very uncanny valley.
- If there's an accurate geometric repeating pattern, it's extremely unlikely to be AI. Something like "Giant Ship" is AFAIK still practically impossible to generate.
- Weird unfinished parts of a world. See "Leafy Lane": why is there a vantablack hole at the end of the road? It's not physically impossible, but it makes little sense and a human wouldn't put it there in that painting.
Likewise, I was at 70% and am by no means an expert on art, although I do generate a lot of AI art for my work.
Like all AI-or-not tests, this one fails to keep a similarly high quality threshold for both kinds, so it ends up wasting time rather than fostering appreciation of either kind of art.
The curator was selecting human output for overlap with AI flaws/artifacts that are likely to confuse at a glance. He wasn't selecting randomly above a high quality threshold for both kinds, as implied.
Typically AI is boring, takes the easy way out upon further inspection, and likes lone straight lines and front-on face shots - and it just so happens he found many old human examples of this, with large perspective/lighting flaws as well.
I don't care what the point of art is agreed by consensus to be, or whether elephant-made art is distinguishable from a 5th grader's art.
The Turing test was "obsolete" before ELIZA's time. The solution was: it doesn't matter to me, because I'm using it as if it were human.
"The average participant scored 60%" and "many of them can’t tell AI art" cannot be true at the same time. One is data and the other is just an insistence, so it has to be the insistence that has to be wrong here.
> So maybe some people hate AI because they have an artist's eye for small inadequacies and it drives them crazy.
Did the author really need hard data to accept this?
So much of the value of art - which Scott has actually endowed on these AI-generated pieces - is the knowledge that other people are looking at the same thing as you.
I think what gave AI away the most was mixed styles. If one part of the painting is blurred, and another part is very focused, you can tell it's AI. People don't do that.
I got all of Jack Galler's pictures wrong though. The man knows how to do it.
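That blur heuristic could even be mechanized - a toy sketch of my own (not anything from the article), scoring per-region sharpness with the classic variance-of-Laplacian focus measure in OpenCV:

    import cv2
    import numpy as np

    def sharpness_map(path: str, grid: int = 4) -> np.ndarray:
        # Variance of the Laplacian per tile: higher = more in focus.
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        h, w = gray.shape
        scores = np.zeros((grid, grid))
        for i in range(grid):
            for j in range(grid):
                tile = gray[i*h//grid:(i+1)*h//grid,
                            j*w//grid:(j+1)*w//grid]
                scores[i, j] = cv2.Laplacian(tile, cv2.CV_64F).var()
        return scores

    s = sharpness_map("painting.jpg")    # hypothetical input file
    print(s.max() / (s.min() + 1e-9))    # large ratio = mixed focus?

Whether a large ratio actually separates AI from human work is an open question - plenty of human paintings use deliberate depth-of-field effects too.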
Duchamp rolling in his grave about this post!
Can you elaborate on why?
Wringing our hands so much about the art as Art. This thing in general where people feel the need to validate or justify AI art in the same terms one does other art is like the antithesis of anything you could remotely call the avant garde. To, in fact, take surveys about it, calculate conclusions... To care about what people think at all, about what they simply see with their retinas, as if art is somehow for them. It all smacks of precisely the kind of thing he hated, or at least tried to distinguish himself from.
> By this standard, I submit that Sam Altman is the greatest artist of the 21st century.
That was below the belt!
I feel like I've seen some stuff like this circulating around, mostly used to mic drop anti-AI people like "100% of people against AI art can't even accurately point out AI art."
I'm an anti-AI person, and this misses the point entirely. I'm not diminishing the technology--I think it's amazing that you can generate this kind of stuff in moments; it's truly incredible. The fact that it's so convincing is the point though. It's true that I can't tell whether another human was trying to communicate something to me in digital art, or if it's just AI generated, but up until a very short while ago I could always assume. I can't anymore, so now whenever I see an image online, I have to consider if it's AI or not before interpreting it artistically, and since I can't reliably do that, I can't interpret it artistically at all. It's like we suddenly found a way to destroy all art online. It's... honestly abominable.
The only way I can explain people getting 98% accuracy on this is being familiar with the handful of AI artists submitting their work for this competition.
It's a google form with no apparent time limit. It wouldn't surprise me if some people could do this (think of it like how older special effects in TV/movies look dated), but most likely they did an image search on each one and got one wrong.
Easy to defeat. AI can't come up with ambiguous art:
https://en.wikipedia.org/wiki/Ambiguous_image
There is a strategic feature to it based on retrospection.
Diffusion Illusions: Hiding Images in Plain Sight
https://arxiv.org/pdf/2312.03817
That is cool, but it's mostly merging, which is a partial solution, and the strategy is defined by the developer. The results aren't good enough.
Not "out of the box" maybe, but yes, it can. It can even do it in ways which humans find impossible.
Proof: https://www.youtube.com/watch?v=FMRi6pNAoag&list=LL&index=74...
Same as the other comment. It's merging, not construction from a self-generated strategic plan to cause ambiguity.
So were you just saying that LLMs aren't AGI?
Or was there something more to it, specifically related to ambiguity/illusions?
Why wouldn't AI be able to retrospect?
I didn't say it can't retrospect. What it can't do is retrospect as a human mind; it can only read the interpretation a human mind has of its retrospection, and the human mind can't fully explain its own way of thinking. So it doesn't have a useful model of the human mind, which it would need for the strategy. And strategy is a whole complex feature that would use overlapping models for the ambiguity.
Previous submission: https://news.ycombinator.com/item?id=42202288 (1 comment)
I don't think that is an AI art Turing test.
An AI art Turing test would be interactive, with me telling it what to draw and what changes to make, to see whether what is producing the art is human or AI.
This species of test would also need a multi-day turnaround period on each image. And/or a video stream of the work being drawn.
"Changes" are an interesting one, honestly as a professional artist who has to pay her rent, there is certain complexity of change beyond which I am likely to say "look, we're going to need to renegotiate the budget on this if you want this much of a change from the sketch you already approved", or even "no".
It's interesting that the impressionist-styled pieces mostly fooled people. I think this is because the style requires getting rid of one of the hallmarks of AI imagery: lots and lots of mostly-parallel swooshy lines, at a fairly high frequency. Impressionism's schtick is kind of fundamentally "fuck high-frequency detail, I'm just gonna make a bunch of little individual paint blobs".
One of the other hallmarks of AI imagery was deliberately kept out of this test: there are no shitposts. Well, there's one, as an example of "the DALL-E house style". It's a queen on a throne made of a giant cat, surrounded by giant cats, and it's got a lot of that noodly high-frequency detail that looks like something, but it is also a fundamentally goofy idea. Nobody's gonna pay Michael Whelan to paint the hell out of this, and yet here it is.
I feel like the “test” is ruined by inclusion of “ai artists” which is to say people who are dedicating all their time and effort to deliberately filter, prompt engineer and tweak AI to get a result that looks like it’s not AI. I’m sure that if the first pass of any of those works was included instead it would have been a completely different result.
I love that the green hill image ended up being AI - that was my favorite.
Oh come on. I guess I missed the part in the "Turing test" where a human filters out 99.999% of the machine's output prior to the test.
They did the same with human-produced art.