Game Changer

The subject of artificial intelligence - AI - is dominating the news lately, with headline writers making a beeline for over-dramatic comparisons: the Terminator movies, where Skynet took over the world's defence systems and turned them against humanity; The Matrix, where AI took over everything and turned it against humanity; or I, Robot, where the worker robots rebelled and turned against humanity. You get the picture.

In reality, from what I've seen so far, AI has a long way to go before it turns against humanity. It does, however, have the capacity to render some tech jobs obsolete in the immediate future, and its application has become a major part of my workflow.

I was asked to revisit a £2m house in the swanky Ramside Park development in Durham to get some sunny exteriors during the summer months. The house was shot in winter initially, and the external shots looked a bit grim. The owners were away for a few days, and they'd left the gates open for me to do my thing. I turned up on a Friday afternoon, and immediately noticed a Range Rover the size of a tank parked right outside the front door. Great start. I had two choices: abandon the job and return the following week, or just shoot it as is. Every single shot had the car dominating the view - including the aerial views taking in the Ramside Hall golf course behind the property. This is a good 2.5-hour return drive, so I took the shots.

When I loaded the images on my desktop Mac, my thoughts turned to some YouTube videos I'd seen about a feature called "generative fill" in the beta rollout of Adobe Firefly AI - the culmination of the 10-year project known as Adobe Sensei. Beta versions of software are released so that public testers can try new features and flag bugs in real-world use before the final version is released to everybody. I thought I'd give it a bash, as the results in the videos had people literally screaming with joy. Once I'd downloaded the beta version of Photoshop, I opened the image, quickly drew a selection around the car, and clicked the generative fill prompt which appeared. A progress bar took about 20 seconds to complete, and the corrected image silently appeared.

My jaw literally dropped. This would have taken hours to produce myself using traditional retouching techniques, or cost a fair few quid using an overseas editing service. It's not absolutely perfect, but it will absolutely do for a web-sized image. I used the same process for all the shots and got the same miraculous results every time - saving hours and removing the need for yet another revisit.

Since then, I've used the feature to remove plugs and wires from walls and from behind tables, to remove a huge hamster run from a living room, and to eliminate building sites that ruined aerial shots, as well as for many smaller retouching jobs. It's not always perfect - getting rid of a simple notice board on a kitchen wall resulted in a man-sized door handle taking its place no matter what I did - but generally, it has been a game changer. Editing houses could be looking at vastly reduced workloads once the tech is rolled out widely. It will inevitably be monetised as an additional subscription to the Adobe CC suite, but I shall be subscribing without hesitation.

Another AI service in the news is the ChatGPT phenomenon from the OpenAI platform, which responds to all manner of requests using a model trained on vast swathes of the internet. I'll not bleat on about all the uses it has, but for my workflow, it has slashed a good couple of minutes per image processed by creating some very specific JavaScript code that stacks image layers, renames them, runs an action depending on the number of layers, and so forth - as soon as the image is exported from Adobe Lightroom into Photoshop. It took a few tries, as you need to give it extremely specific instructions, but it's something I don't have the knowledge or time to do myself, and outsourcing the task would be costly. This is the snippet of code:

var doc = app.activeDocument;
var numLayers = doc.layers.length;

if (numLayers === 2) {
    app.doAction("2 Layer Image", "PROPERTY ACTIONS");
} else if (numLayers === 3) {
    app.doAction("3 Layer Image", "PROPERTY ACTIONS");
} else if (numLayers === 4) {
    app.doAction("4 Layer Image", "PROPERTY ACTIONS");
} else if (numLayers === 5) {
    app.doAction("5 Layer Image", "PROPERTY ACTIONS");
} else if (numLayers === 6) {
    app.doAction("6 Layer Image", "PROPERTY ACTIONS");
} else if (numLayers === 7) {
    app.doAction("7 Layer Image", "PROPERTY ACTIONS");
} else if (numLayers === 8) {
    app.doAction("8 Layer Image", "PROPERTY ACTIONS");
} else {
    // Ignore single layer files
}
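As an aside, because the action names only differ by the layer count, those eight near-identical branches could be collapsed into a single check. Here's a minimal sketch of that idea - the `actionNameFor` helper is my own hypothetical addition, not part of the snippet ChatGPT produced, and the "PROPERTY ACTIONS" set name is taken from the code above:

```javascript
// Hypothetical helper: map a layer count to the matching action name,
// or return null for files the script should ignore.
function actionNameFor(numLayers) {
    if (numLayers >= 2 && numLayers <= 8) {
        return numLayers + " Layer Image";
    }
    return null; // single-layer files (and anything over 8) are skipped
}

// Inside Photoshop this would replace the whole if/else chain:
// var name = actionNameFor(app.activeDocument.layers.length);
// if (name !== null) { app.doAction(name, "PROPERTY ACTIONS"); }
```

The behaviour is the same; it just means adding a "9 Layer Image" action later would only need the upper bound changing rather than another branch.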

I'm looking at other ways I can use scripts to speed things up even further, as every minute of time-saving helps. At the moment I can go from bed to work to bed most days of the week, and clawing some personal time back would be lovely.

I've got today off - no work at all, so I'm doing this blog task and nothing else. Time to ask my smart speaker to play some music, catch up on the first episode of Star Trek: Strange New Worlds on my smart TV, and reflect on when Chinese AI will turn our power system off and coerce our nuclear defences to turn against us. Until then, have a lovely day.
