One of the hardest and most lucrative tasks in photography is editing.
It is hard because it is not a science with clear, established rules. Trends, fashion, moods, seasons, local culture: many factors can interfere in the process of finding the best image.
It is lucrative not because it is a profession that pays well (some, like photographers, are not paid to find their best images in a shoot), but because it can find the gold nugget in an incessant stream of photos.
While it is an extremely subjective exercise, we do all seem to agree on most of the basic foundations: composition, lighting, subject, contrast, and focus all have to be balanced in a coordinated resonance. And it seems to work, since we all seem to agree on the majority of good to great images. We all like them.
Regardless, it is a time-consuming, labor-intensive activity. In the microstock, RF, or commercial stock world, it is the only non-automated, high-cost division. An editor can kill or save a shoot, not unlike a book editor.
Enter science and computers: how can we automate editing? How do we teach a computer to tell a good image from a bad one? How do we turn such a subjective task into one a computer can understand?
Since 2005, professors at Penn State University have been working on that task, with limited but interesting success. The result is Acquine, an automated photo editing engine. To simplify the process to its extreme: the system does not judge an image by its cultural significance, nor by the beauty of a human subject. It is purely a “dumb” editor, concerned only with colors, light, form, and contrast, rather than with identifying any object or person in an image.
It will not be judging the World Press images anytime soon.
On the surface, it is very simple. Upload an image and it grades it anywhere from 1 to 100. The rules applied are not publicly divulged. The result is then shown publicly, along with a tool for a human being to rate the image as well. The assumption is that this information is used to compare the computer's and the human's reaction to an image and adjust the algorithm.
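Acquine's internals are not public, so the following is a purely hypothetical sketch of what such a calibration loop could look like: a scorer grades an image 1 to 100 from a handful of numeric features, then nudges its weights toward the human rating. Every name, the linear model, and the feature values here are assumptions for illustration only.

```python
# Hypothetical sketch, NOT Acquine's actual algorithm: a linear aesthetic
# scorer calibrated against human ratings by simple gradient steps.

def machine_score(features, weights):
    """Weighted sum of image features, clipped to the 1-100 scale."""
    raw = sum(w * f for w, f in zip(weights, features))
    return max(1.0, min(100.0, raw))

def adjust_weights(weights, features, human_score, lr=0.1):
    """One step pulling the machine score toward the human rating."""
    error = human_score - machine_score(features, weights)
    return [w + lr * error * f for w, f in zip(weights, features)]

# Toy feedback loop: features might encode contrast, simplicity, etc.
weights = [50.0, 30.0, 20.0]          # assumed starting weights
features = [0.8, 0.4, 0.6]            # hypothetical normalized measurements
for _ in range(200):
    weights = adjust_weights(weights, features, human_score=72.0)
```

After enough human ratings, the machine's grade converges toward the human consensus for images with similar features, which is the whole point of publishing the human rating tool alongside the automated score.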
It is a slow, long process, and the results are somewhat predictable. For example, the computer seems to give high ratings to photographs with a frame. Why? Because people like the delimitation of a frame; it is a well-known preference. It also seems to prefer black-and-white images over color, probably because they are easier to analyze and simpler in contrast and definition. Many computer-assisted object recognition cameras work only in B/W for the same reason. Finally, it seems to favor simple compositions without too much content or color variation. I wouldn't be surprised if object shots got high ratings.
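The heuristics inferred above (a delimiting frame, low tonal variation) can be made concrete with a toy scorer. This is not Acquine's rule set; the features, weights, and thresholds below are all invented for demonstration, operating on a tiny grayscale "image" given as a 2-D list of 0-255 values.

```python
# Toy illustration of frame and simplicity heuristics; all numbers are
# arbitrary assumptions, not Acquine's published behavior.

def has_border_frame(img, tol=10):
    """True if all edge pixels share roughly one value, i.e. a solid frame."""
    edge = img[0] + img[-1] + [row[0] for row in img] + [row[-1] for row in img]
    return max(edge) - min(edge) <= tol

def tonal_variation(img):
    """Standard deviation of pixel values: a crude 'busyness' measure."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

def toy_score(img):
    """Higher score for a clean frame and a simple, low-variation scene."""
    score = 50.0
    if has_border_frame(img):
        score += 25.0                     # people like a delimiting frame
    score += max(0.0, 25.0 - tonal_variation(img) / 4)   # reward simplicity
    return min(100.0, score)
```

A flat, uniform test card would score the maximum here, which matches the article's guess that plain object shots would rate highly.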
This is a very new and unexplored field. After computer-assisted keywording with content recognition, automated editing is another El Dorado of the photography workflow. Imagine if all you had to do was shoot, leaving keywording and editing to a computer that would automatically select your best images for upload. The time savings and the reduction in cost, especially for photo agencies that deal with large volumes of images, would be tremendous.
In searches, it could quickly retrieve the “right” images from libraries of millions. In fact, one of the professors has been hired by Google. It could help during a shoot by telling a photographer whether the image they are seeing is even worth taking; here again, another professor from this project has been hired by Kodak. It could be embedded in a program like Photoshop to not only select the best images but also advise on the changes that would bring an image closer to perfect.
We are still a very long way from seeing this applied in our daily lives, as it is still very much in its infancy, but make no mistake about it: it's coming our way. You can see and play with Acquine here. And you can learn more about it here.