
Trusting the Image: The Rise of AI-Generated Photo Editing

The Clean Up feature in Apple’s latest software update caught my attention this weekend, and I decided to try it out. As I delved deeper, I discovered that this feature uses generative artificial intelligence (AI) to remove unwanted elements from a photo, a tool that’s available in several countries. But what does this mean for our trust in images and videos?

Country          Availability
Australia        Available since December for Apple customers with compatible hardware and software
New Zealand      Available
Canada           Available
Ireland          Available
South Africa     Available
United Kingdom   Available
United States    Available

The Clean Up feature is just one example of the AI-generated photo editing tools that have become increasingly popular. These tools let users remove distracting elements from their photos directly in the smartphone’s default photo app, with no separate app required. However, this raises important questions about the trustworthiness of photographs and videos.

  • Removing distracting elements can be attractive, but it can also be used to deceive.
  • The use of AI to edit photos or create new images entirely raises pressing questions around the trustworthiness of photographs and videos.
  • These tools can be used to remove watermarks, alter evidence, and even create fake receipts.

These tools have become so widespread and easy to use that they’ve opened up new avenues for deception. For example, someone might use the Clean Up feature to remove watermarks from a photo, making it less obvious that the image has been tampered with. Others might use these tools to alter evidence, such as editing a photo of a damaged item to make it look like it was in good condition before shipping.

“If advances in tech are eroding our trust in pictures and even video, we have to rethink what it means to trust our eyes.”

The use of AI-generated photo editing tools raises fundamental questions about the nature of visual proof. If a photo might be edited, zooming in can sometimes reveal anomalies where the AI has stuffed up. It’s also often easier to manipulate one image than to convincingly edit multiple images of the same scene in the same way, so asking to see multiple outtakes showing the same scene from different angles can be a helpful verification strategy.

To verify the authenticity of an image, we need to rely on multiple approaches, including manual verification techniques such as fact-checking and researching the context. For example, if someone presents a fake receipt, we might ask whether the restaurant even exists, whether it was open on the day shown on the receipt, whether the menu offers the items allegedly sold, and whether the tax rate matches the local area’s. Trustworthy systems that can automate these mundane checks are likely to grow in popularity as the risks of AI editing and generation increase.
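The manual receipt checks described above could, in principle, be automated. The sketch below is a hypothetical illustration, not any real verification service: the business registry, menu, and tax rate are assumed inputs that a real system would fetch from external sources.

```python
from datetime import date

def verify_receipt(receipt, known_restaurants, menu_prices, local_tax_rate):
    """Return a list of failed plausibility checks for a claimed receipt.

    A simplified, hypothetical checklist mirroring the manual questions:
    does the restaurant exist, was it open that day, are the items on
    the menu at those prices, and does the tax match the local rate?
    """
    failures = []

    # Does the restaurant even exist?
    if receipt["restaurant"] not in known_restaurants:
        failures.append("unknown restaurant")

    # Was it open on the day shown? (opening days per venue, 0 = Monday)
    open_days = known_restaurants.get(receipt["restaurant"], set())
    if receipt["date"].weekday() not in open_days:
        failures.append("closed on that day")

    # Does the menu offer the items allegedly sold, at those prices?
    for item, price in receipt["items"]:
        if menu_prices.get(item) != price:
            failures.append(f"item not on menu at that price: {item}")

    # Does the tax charged match the local rate applied to the subtotal?
    subtotal = sum(price for _, price in receipt["items"])
    expected_tax = round(subtotal * local_tax_rate, 2)
    if abs(receipt["tax"] - expected_tax) > 0.01:
        failures.append("tax rate mismatch")

    return failures

# Example with made-up data: a receipt that passes every check.
receipt = {
    "restaurant": "Cafe Aurora",
    "date": date(2025, 3, 14),  # a Friday
    "items": [("flat white", 4.50), ("croissant", 3.00)],
    "tax": 0.75,
}
registry = {"Cafe Aurora": {0, 1, 2, 3, 4}}  # open weekdays only
menu = {"flat white": 4.50, "croissant": 3.00}

print(verify_receipt(receipt, registry, menu, local_tax_rate=0.10))  # []
```

Each check is independent, so a real system could report every failed question at once rather than stopping at the first inconsistency.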
Regulators also play a crucial role in ensuring that people don’t misuse AI technology. In the European Union, Apple’s plan to roll out its Apple Intelligence features, which include the Clean Up function, was delayed due to “regulatory uncertainties”.

The use of AI-generated photo editing tools has the potential to revolutionize the way we create and edit images, but it also raises important questions about the trustworthiness of visual evidence. As we continue to rely on these tools, we need to rethink what it means to trust our eyes and develop new strategies for verifying the authenticity of images and videos.
