It started quite innocently. A family picture for Mother's Day to share with close ones and the world. Just before sending, a quick alteration here and there to make everyone look better, remove distractions, and voilà. Posted on social media as well. Except this mother is not your everyday mom. She's a queen in waiting, and the world out there has become extremely sensitive to disinformation. At least part of the world.

PHOTOGRAPH: PRINCE AND PRINCESS OF WALES/INSTAGRAM

The story of Kate Middleton's Photoshop mishap resonated further than most royal handouts and has probably taken root as a landmark in the history of photography. Why? Because it has many layers.

First and foremost, let's reset our perspective: photography has been editable ever since it became reproducible. From the moment of capture to the darkroom, there are many occasions to edit an image. With digital technology, editing has become not only easier but also more common. Every photo app on everyone's cellphone comes with editing features begging to be used. On the latest phones, it is done automatically, using AI. So it is no surprise that someone would use them.

The issue here is: who (and maybe why)?

If you and I edit our family pictures and post them on social media, no one cares. If it's the queen in waiting or any other public figure, it's a crime. But only if it's not declared. It is a crime because a public figure, especially a head of state, can influence people's perception of the world and affect their decisions. It can alter the course of history. Obviously, less so if it's Kate Middleton than if it's Vladimir Putin. But still. So we expect a higher standard of behavior from them: do not lie or attempt to deceive. Some more than others. In this case, we expect the royal family to behave like royals and have the decorum not to falsify the information they share with us.

Screenshot of AP kill notice

The very good news is that, in a world of increasingly threatening deepfakes, our detection barriers work fine. The alteration was discovered by AP and other news outlets and publicly exposed. Apologies were given; case closed. Well, somewhat.

This could have gone much better if its creator had openly revealed the modification. If the image had carried the information that part of it had been modified to eliminate a distracting element, it would have gone through as smoothly as a letter through the post. No one would have blinked. So the error here was not in modifying the image; it was in not telling us. We have seen this before, and we will see it again. The solution resides not in no one ever retouching an image (that will not happen) but in us, the consumers, being informed when an edit is made, where, and how. Then we can judge whether the photo is deceptive or not. In the case of Kate Middleton, it clearly wasn't.

To navigate the flood of AI-generated and hybrid images (those perplexing blends of reality and AI) in our current landscape, where news is often manipulated for political or commercial advantage, it is crucial for us to discern the differences. Just as we need to know what is in the food we eat, images need labels clearly identifying what kind of image it is, who took it, whether it was edited, and where. Then we can decide to consume it. Or not.
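To make the labeling idea a bit more concrete: image files already carry a weak, unsigned form of it in their EXIF metadata, which editing software often stamps with its own name. The sketch below (Python with the Pillow library; the file name is hypothetical) simply surfaces those hints. It is illustrative only, since EXIF can be stripped or forged; robust labels would bind the edit history to the file cryptographically, which is what initiatives like the C2PA's Content Credentials set out to do.

```python
# Minimal sketch: read whatever editing hints an image's EXIF metadata
# still carries. EXIF is easily stripped or forged, so this is a weak
# stand-in for real, cryptographically signed provenance labels.
from PIL import Image

EXIF_SOFTWARE = 305  # standard EXIF tag: software that processed the image
EXIF_DATETIME = 306  # standard EXIF tag: date/time of last modification

def describe_edits(path: str) -> str:
    """Report the editing hints, if any, left in the file's EXIF data."""
    exif = Image.open(path).getexif()
    software = exif.get(EXIF_SOFTWARE)
    modified = exif.get(EXIF_DATETIME)
    if not software and not modified:
        return "No editing metadata found (never written, or stripped on upload)."
    parts = []
    if software:
        parts.append(f"processed with {software}")
    if modified:
        parts.append(f"last modified {modified}")
    return "Image " + ", ".join(parts) + "."

# Hypothetical file name, for illustration only.
print(describe_edits("family_photo.jpg"))
```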
