By now I am pretty sure all of you have already heard of ChatGPT, Midjourney or one of the other artificial intelligences you can now talk to or make use of. Where it got interesting for me was in early 2023, when Adobe released a Photoshop beta version with AI features and staggering capabilities. Over the past months I have been using it (for professional work and private projects), so I want to show you some examples where it was useful to me and also how it works.
One thing is for sure, this will vastly disrupt image editing workflows.
Removing Unwanted Objects
For me, this is the most useful feature. You might have already used content-aware fill in the past, and if you did, I guess you feel similarly about it: sometimes okay-ish, but most of the time not really.
We will use this picture as an example. When I took it, I wasn’t even sure if I should bother to press the shutter button, as there are many distracting elements in the background that are, at first sight, not easy to get rid of.
But let’s first see how things work with content-aware fill. I marked the pole and the car behind it and let it do its job; this is what it came up with:
The car is gone, but there are plenty of issues. Somehow that advertising poster was used as a sample area, and the buildings in the back also look odd. Looking more closely, you would also notice that the lines on the ground are wrong and don’t fit the perspective. Fixing this properly with content-aware fill would take some time; probably around two hours.
Now the new AI-based generative fill is something else entirely. It can analyze a scene with regard to light direction, shadows, size relations, depth of field, perspective, and content. I again mark the pole and the car, just like before, and all I do now is write “remove car and pole” in the prompt box:
And this is what the new generative fill comes up with:
You usually get three options to choose from, but if there is none that you are happy with you can let the AI generate more. In 99% of cases I was already very happy with one of these first three options though.
Compared to the content aware fill we see huge differences: perspective and lighting look flawless, the buildings have been completed in a way that makes sense and everything simply looks real.
I did clean up some further areas; the only one that didn’t work right away was removing the license plate. That took two steps, but in total I needed only about five minutes to get to this point. It would have taken hours without this new technology.
This one I didn’t even expect to work out well. I told the AI to remove the pole as well as the lock and it was even able to generate something as complex as an out-of-focus bike wheel.
Removing this rainbow artefact was also easily possible by just marking it and entering “remove lens flare”. Another useful application I can think of would be removing sweat stains on a shirt, an otherwise tedious and time-consuming task.
Extending the Canvas
I am not sure if any other AI offers this feature yet, but I was also amazed by it. You take an existing picture, extend the canvas size, and let the Photoshop AI fill in the blanks around your picture.
This is a good time to tell you I wasn’t super happy with my wedding photographer. There were plenty of pictures with overly tight framing where something was missing, so we will use one of those as an example.
The car is slightly cut off on the right side and the framing is also too tight at the bottom, so we load this picture into Photoshop and extend the canvas by 20% in all directions:
We again mark the outer area:
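As a quick sanity check on what “extend the canvas by 20% in all directions” means in pixels, here is a small Python sketch. The function name and the 6000×4000 example frame are my own illustration, not anything from Photoshop; it simply mirrors the arithmetic behind the Canvas Size step:

```python
def extended_canvas(width: int, height: int, factor: float = 0.2):
    """Return the new canvas size and the offset at which the original
    picture sits, after extending the canvas by `factor` of the original
    size on every side (the blank border is what the AI then fills in)."""
    border_w = round(width * factor)   # extra pixels left and right
    border_h = round(height * factor)  # extra pixels top and bottom
    new_size = (width + 2 * border_w, height + 2 * border_h)
    offset = (border_w, border_h)      # top-left corner of the original
    return new_size, offset

# A hypothetical 6000x4000 frame extended by 20% in all directions:
size, offset = extended_canvas(6000, 4000)
print(size)    # (8400, 5600)
print(offset)  # (1200, 800)
```

Worth noting: at 20% per side the new canvas holds almost twice as many pixels as the original, so nearly half of the final image is AI-generated.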
When it comes to extending the canvas I had the best results by not giving any prompt, so I just clicked on generate and was presented with the following three options:
Let me first say, I think all three results are astonishing. I decided to go with the second one, as the wall on the right looked best to me and this is the final picture (or the picture as it should have been) after some minor adjustments:
For weddings, reportage, or whenever something has been cut off at the edge, this feature can easily turn an otherwise unusable picture into an actually good one.
Now if you look at a 100% crop from this picture you can see the seam between the original and the generated part, but if that is the only complaint…
I used this feature on a lot of pictures from my archives to see how well it works, a few of which I will share with you here, moving from easy to difficult – at least from my point of view; maybe the AI sees things differently.
With that repeating background this might not look that impressive at first sight, but the AI recognized the pattern of having a vertical pillar between every seven columns of windows, and that I find quite astonishing.
Things are starting to get more interesting here: a defocused, complex background with a lot of perspective distortion. Here I first noticed that the AI has some issues with hands. Hands often look distorted and wrong, sometimes even with too many or too few fingers, so I went with this option, where you only see the arm but not the hand.
Here I find the result spectacular: an unusual, complex out-of-focus background, and I cannot see anything wrong with the generated part.
Adding New Elements
So far we have talked about removing unwanted parts and extending existing ones, but elements can also be added. Personally, I don’t have much use for this application, but I can see people using it a lot for stock photos or advertising purposes.
What if we want to see some people walking through this tunnel? I marked the area and told the AI to “add a couple walking”:
At first sight, one of the three options looks pretty convincing actually:
Now if you look more closely, you start to see some real issues here:
As mentioned, this AI is not (yet) doing a perfect job with human proportions.
Here, too, I looked for a few possible candidates in my picture archive.
In this scene I always found the sky a bit messy, so I marked it and told the AI to “add dramatic sunset sky”. A pretty convincing result…
Talking about my wedding pictures again: this picture shows my wife only, because the photographer didn’t understand the concept of a “silhouette”, which is why I was never in it.
I marked the area next to my wife and told the AI to “add silhouette of a groom next to the bride”. Neither my wife nor I think that guy looks like me, but if someone was looking for a stock photo like this, I think it would pass.
I am pretty sure I have only scratched the surface of what is possible with this new technology in Photoshop, but it has already changed my workflow forever: generative fill has immediately become my go-to option for removing unwanted objects, and by extending the canvas I could save some pictures with imperfect framing that had really bothered me.
Now when it comes to adding objects that weren’t in the scene to begin with, this not only raises the question of whether we are still talking about photography or rather digital art; it may also raise further moral and ethical questions. But I will quote another article of mine here: every picture that has ever been taken or will ever be taken can be (re)created in Photoshop today.
AI just made Photoshop a lot easier to use and more accessible.
To me, this AI-based processing is the biggest leap in the history of Photoshop, if not in digital editing in general. And I am sure this journey has only just begun.