Saturday, April 29, 2017

Ain't but a good nothing man bad photography feeling

"If I knew how to take a good photograph, I'd do it every time." said Robert Doisneau long ago. True dat. But isn't dat part of photography's charm? Some photos work and some don't, and at the point of exposure it's hard to know which will be which. Make a photo too "good" and you might kill your darling. Spice it up with imperfection and you could do the same. Even if you stage the thing like Doisneau there's generally no telling until later how it turned out. Sometimes it's years later. Evaluation gets downright murky when you consider variables like context, reproduction quality, sequencing, intention, appropriation, or whether the viewer just went through a bad breakup, or forgot to feed the cat earlier, or is just plain sick of the color blue, or whatever. Yes, "good" is a goddamn mystery and that's how it should be hallelujah. 

But don't tell that to the computer programmers and stock photo companies. Both worlds operate under clearly delineated rules regarding "good". Pair them up and you might get something like the Everypixel Aesthetics Test, a plug-in evaluative tool which measures a stock photo's "awesome" rating on a scale of 1 to 100. Just drag and drop any photo into the site and its algorithm returns a number. Then you toss the photos with low numbers and Presto —only the nuggets remain! Suddenly, winnowing out the "good" photos is as easy as reading a kitchen thermometer. Doisneau, you missed out.
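
For anyone who wants to play along at home, here's a rough Python sketch of that winnowing workflow. To be clear, everything in it is hypothetical: Everypixel's real tool is drag-and-drop and keeps its algorithm to itself, so the score_photo() function below is just a stand-in oracle that hands back a number. The point is how trivially the "toss the low numbers" step falls out once a machine gives you a score.

```python
# Toy winnowing: keep only the photos a pretend oracle calls "awesome" enough.
# score_photo() is a hypothetical stand-in, not Everypixel's actual algorithm.
import random
from pathlib import Path


def score_photo(photo: Path) -> float:
    """Return a made-up 'awesome' rating from 0 to 100 for a photo."""
    rng = random.Random(photo.name)  # same file name, same verdict, every time
    return round(rng.uniform(0, 100), 1)


def winnow(folder: str, threshold: float = 80.0) -> list[Path]:
    """Score every jpg in a folder and keep the ones above the threshold."""
    keepers = []
    for photo in sorted(Path(folder).glob("*.jpg")):
        score = score_photo(photo)
        print(f"{photo.name}: {score}% chance of being awesome")
        if score >= threshold:
            keepers.append(photo)
    return keepers


if __name__ == "__main__":
    nuggets = winnow("contact_sheet")  # hypothetical folder of jpgs
    print(f"{len(nuggets)} nuggets remain")
```

Run it on a folder of jpgs and the nuggets announce themselves. If only.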

I know, I know, the test is silly. But still incredibly tantalizing for someone like me who doesn't know how to take a good photo every time. No sooner had Karl sent me a link to the beta version, along with a DP Review blurb, than I was feeding images into the machine.

What would the algorithm think of, say, Daisuke Yokota?



According to Everypixel this image has a 12.1% chance of being "awesome". 



Eggleston?



Odds of awesome: 79.8%. Hmm. Not bad. 

Catherine Opie with a mustache, on the other hand?



The computer's not feeling it. Just 0.7% awesome. 

How about Todd Hido?



Worst so far, 0.2% chance of being awesome. 

Darnit if this thing ain't harder to pin down than a harpooned hippo on a banana tree. How about Kendall Jenner holding a Pepsi? By my own reckoning, and judging by the recent backlash against this scene, the odds of this image being awesome should be quite low. Can we get computer confirmation?



Everypixel disagrees. 97.4% chance of being awesome!

But hold on. Change the scene just slightly...



...and the photo returns a near opposite result, 0.0% chance of awesome. 

Wha? Does not compute. Then again I'm not a computer. But from where this human sits, the algorithm appears to judge arbitrarily. A dartboard, coin flip, or international panel of judges might return similar verdicts. Perhaps the program follows some digital Potter Stewart litmus, "I can't define awesome but I know it when I see it." I honestly doisneau. Nor do I know how to make a good photo any better than before, and I don't think Everypixel's creators know either. 

In situations like this there's only one thing to do. Feed some porn into the machine.



Is this a great photo? Even without a computer I'd say nope. The composition is terrible. Why are all the faces cut off, and what's with the big vacant space to the right? And the whole thing suffers from overexposure. 

Everypixel agrees: big fat 0.0% chance of awesome. 



I should note that this shot isn't a total loss. It generates some positive keywords: Togetherness, Relaxation, and Group of People, for example. I'd think that when considered as an online jpg, the tag Alone And Naked might also apply. But for some reason it's not included in the list. It doesn't matter. Despite all the great keywords —Lifestyles?— they're not enough to return an "awesome" verdict. And I agree with the computer on this one. Looking at this photo now it seems hard to remember what made it so thrilling just a few short minutes ago. It was 100% awesome then! But now it's just kinda, meh, whatever. 

Any porn image easily belies the myth that a computer can evaluate aesthetics. Because a photograph is more than a list of keywords. Its power depends sometimes on emotion, mood, libido, and a thousand other human variables — things no algorithm can measure. With porn, duh. But the same is true about any photo. 

As suggested earlier, context plays a role too. A photo from an old science experiment might be 0.0% awesome if it's archived in a drawer in an institutional setting. Put that same photo in a book by Sultan and Mandel, and it turns out it was 100% awesome all along. But beware. If you forget your copy of Evidence in the YMCA shower stall, the photo slinks back again to 40% awesome, if you can unstick the pages. 

I'm not the first to gripe about the Everypixel ratings. Many readers of the DP Review article had the same initial impulse as me: toss photos into it and see what happens. And like me, many commenters questioned the results. This one sums it up: "tl;dr: If your goal is art, this is not your rating site. If your goal is to sell stock images, it might be." At this point it's probably good to take a step back and remember the Everypixel algorithm was designed only to judge stock photos. Fine art and porn are different animals entirely, requiring different levels of bestiality, discourse, and intercourse.


U.S.A.'s Most Wanted Painting

Still, the question remains, what exactly is "awesome"? Is there any way to measure it? Several years back Vitaly Komar and Alexander Melamid applied the question to paintings. Their Most Wanted Paintings project used professional market research to determine which paintings were "good" and "bad" according to general aesthetic preference. As with the Everypixel algorithm, quality was broken down into a list of "good" metrics —for example preferred size of painting, sharp angles vs. curves, and preferred season. The compiled results, organized by country, are perhaps unsurprising. People in America like medium-sized pastoral scenes, and dislike small abstractions. Fair enough. Whether or not that's a scientific measurement of "good" is another question.

Komar and Melamid also studied songs using the same research methods. They polled musical taste, then created songs to match general preferences. Surprise, surprise, turns out people really don't like to hear bagpipes, kids singing, accordion, wildly fluctuating tempos, or songs that last forever. Komar and Melamid's Most Unwanted Song incorporates all of these elements and more. By all accounts it should be terrible, and a computer algorithm might rate it poorly. 


Vitaly Komar and Alexander Melamid, 1984

But the thing is, The Most Unwanted Song is actually pretty interesting. Some (like myself) might even call it "good". It's got a bit of everything, bouncing through all sorts of motifs, rhythms, and styles over 21+ minutes. Perhaps that's why The Most Unwanted Song has become a staple of underground radio over the past twenty years. I've played it on my own show, and just last week I heard it on KWVA while driving. The song gets around, at least on the left side of the dial. In almost every way it's more enjoyable than its terrible partner generated by the same methods, The Most Wanted Song. I dare anyone out there to like it. So we're back to square one. What is good music? Who knows?

Of course, polling human taste is slightly different than using a computer algorithm. A musical version of Everypixel which attempted to identify "good" music based on digital sound tests would likely return ridiculous results. It might claim, for example, that Grammy winning songs are 100% awesome, or that all Auto-Tuned songs are awesome, or that John Cage or Harry Partch are 0% awesome. As with photos, the aesthetic variables surpass the capability of computers, at least for now.

One key aspect of The Most Unwanted Song's "goodness" is its originality. The song marches to the beat of its own competing drummers. I know of no other song which sounds remotely similar. Originality and authenticity are highly valued in all the arts, photography included. But they might be hard for a computer program to measure. An algorithm which blindly assigns "awesome" to original works without taking other factors into account might end up valuing just about anything that's merely different. Strawberries on bubblewrap? Old album covers turned into poetry? Cutout photos of a giant spider and termite mounds?


Pattern of Activation (jumping spider, termite cathedral mounds, growth potential), 2015, Katja Novitskova


Judging music as an art form may be even more problematic than judging photography, because "bad" music is so easy to enjoy. Anyone can be entertained by Mrs. Miller or Sam Sacks. By any objective standard these outsider songs are awful, but that's exactly what makes them "good". I regularly improve songs by running them through an MP3 reverser. They're better almost every time. Everyone loves to sing along to bad songs on the radio alone in the car when no one's listening. It feels "good". And don't get me started on Blues music, which ain't nothing but a good man feelin' bad, even when the MP3 is reversed.
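
For the record, the "MP3 reverser" is nothing exotic. Here's a minimal sketch using the pydub library (which needs ffmpeg installed underneath it to read and write MP3s); the filenames are placeholders, and whether the result counts as an improvement is left to the listener.

```python
# Reverse an MP3 end to end with pydub (requires ffmpeg on the system).
from pydub import AudioSegment

song = AudioSegment.from_mp3("bad_song.mp3")  # placeholder filename
song.reverse().export("bad_song_reversed.mp3", format="mp3")
```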

The thing about music is that most people have an inner barometer. Play someone a song and they'll tell you within a few seconds if it's "good" or not. Show those same people a photo —any photo, but especially a fine art photo— and they'll have no idea. So if music has issues, photography's are even worse, on the human front as well as in the digital realm.

But that hasn't kept folks from trying to formulate "good" photos. Ken Rockwell has given it a shot. So have various others. Just last week, Mike Johnston weighed in. By his reasoning "Cool with a warm accent" is one avenue to photo goodness. Could be. Depends. I've seen this book kicking around bookstores recently. The title cuts right to the chase: Read This If You Want To Take Great Photographs. I haven't read it, so I can't comment in depth, but I suspect a book called Use This Title If You Want To Sell Books would be more likely to fulfill its promise. Or maybe a title like Togetherness, Relaxation, And Group of People, the cover helped along with a naked couch scene?

le Flamant rose, Camargue, 1947, Robert Doisneau

Thinking about what a "good" photo is or isn't, I'm reminded of the first and only photo class I took about twenty-five years ago. One of the last assignments was to take a "bad" photo on purpose. A bad photo? Why, that's easy. You shake the camera during exposure, or set the meter wrong, or crop out the subject, or mis-develop the film. There are all sorts of ways to screw up. 

I think you can guess what happened. That assignment produced the most interesting photographs of the entire class, the photographic equivalent of outsider music. Were they "good"? Hard to say, but they were 100% awesome to us in that moment.

The good/bad equation hasn't changed much since the advent of computers. Making a good photo now is just as hard as it was during Doisneau's lifetime. It's as futile as trying to winnow out good people from bad ones. How do you draw a line in the sand through a person? Such a clean dichotomy is ridiculous, the province of racists, xenophobes, or the poor lonely simpleton in the White House. As elections sometimes show, good things happen to bad men and women regularly, which they may indeed feel good about. Religions have never successfully explained that one, nor why bad things happen to good people. As for good photos which fare worse over time, it's best not to judge unless you're a machine, which was Doisneau's point all along.

Would a "good" person inject porn filth into blog post, knowing that post was likely to be shared with young children during family prayer that evening? Would he release the drivel early Saturday morning during the news cycle's cellar, then tweet and hype it like crazy on social media? Would a "good" person do that? Isn't that something a bad hombre would do? And if that person knew how to write a good post, wouldn't he do it every time? Goodness knows.

1 comment:

Chris Arts said...

Thought provoking - one of the more logical essays I've read on what makes a photo 'good'. We all know awesomeness when we see it, but why is it so hard to describe? Zen and the art of photography.