As AI tools continue to evolve at an exponential rate, creative practitioners of all disciplines are both curious about their potential and concerned about their longer-term implications. And with increasingly high-profile case studies to stoke debate, it’s fair to say that artificial intelligence has enjoyed its share of the spotlight in 2022.
For instance, Jason Allen’s Théâtre D’opéra Spatial, created entirely in the AI-powered generative art tool Midjourney, sent shockwaves through the creative community when it won first place in the Colorado State Fair’s fine art category. The work’s origins were made clear on entry, putting the ball firmly in the judges’ court.
Philosophical concerns such as whether machine-generated imagery meets the definition of ‘fine art’ are at the extreme end of the debate for many creative professionals. But news stories like this invite predictions for what AI can and will become for all of us in the future.
“Right now, AI is your creative co-pilot,” says David Wadhwani, Adobe’s chief business officer, in his keynote at Adobe MAX, Adobe’s annual global creativity conference. There, several innovations were announced regarding Sensei – Creative Cloud’s built-in AI. Using intelligent masks and powerful presets, Sensei can now speed up once laborious image-editing tasks considerably.
“AI can automate mundane tasks, but it’s not solely about productivity: it’s about possibility,” adds Wadhwani. “Creating with AI can feel like dreaming out loud.”
Be transparent about AI’s role
Whereas ‘creative co-pilots’ such as Adobe Sensei provide background support, other tools put AI and machine learning front and centre in the image creation process. It’s clear that AI played a pivotal part in creating Théâtre D’opéra Spatial, but it would also never have existed in that award-winning form without human input.
“The artist spent a massive amount of time training and curating his prompts,” points out MDavid Low, head of experience design at Atypical Digital. Low ran a session at Adobe MAX entitled Leveraging AI to Extend Your Creative Toolkit, in which he considers several prominent AI tools on the market, what they can help you achieve, and the wider ethical implications of doing so.
One example is Cosmopolitan’s recent AI-themed issue, which broke new ground with the world’s first AI-generated magazine cover – a collaboration with award-winning director Karen X. Cheng. After extensive experimentation, Cheng entered the following complex text prompt into another leading AI art generator, DALL·E 2, to produce the final image: “Wide-angle shot from below of a female astronaut with an athletic feminine body walking with swagger toward camera on Mars in an infinite universe, synthwave digital art”.
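Cheng’s prompt hints at how programmatic this kind of workflow can be. As an illustrative sketch only – not a record of how the Cosmopolitan cover was actually produced – the same prompt could be submitted through OpenAI’s Python SDK. The `build_request` helper and the `OPENAI_API_KEY` guard are conveniences added here, and the live call only runs when a key is present:

```python
import os

# Karen X. Cheng's published cover prompt, verbatim from the article.
PROMPT = (
    "Wide-angle shot from below of a female astronaut with an athletic "
    "feminine body walking with swagger toward camera on Mars in an "
    "infinite universe, synthwave digital art"
)


def build_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Bundle the keyword arguments for one image-generation call."""
    return {"model": "dall-e-2", "prompt": prompt, "n": n, "size": size}


# Only attempt a live call when an API key is available in the environment.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY automatically
    response = client.images.generate(**build_request(PROMPT))
    print(response.data[0].url)  # URL of the generated image
```

In practice, artists like Cheng iterate many times, tweaking wording, style keywords (“synthwave digital art”) and composition cues (“wide-angle shot from below”) between calls – which is exactly the curation work Low describes.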
Clearly, crafting the perfect text prompt is a creative skill in its own right. “When cameras first came out, no one considered that art. It was just pushing a button,” says Low. “AIs and neural networks can generate outstanding imagery. Almost limitless. It’s like dancing with an AI: a magic ‘ah ha’ moment.”
Make ethical creative choices
The Economist followed Cosmopolitan with its own AI-generated cover in June, this time created with Midjourney. “All of this raises a bunch of questions about ethics, legality and job security,” adds Low. “Especially for those of us who are used to doing stuff by hand.”
While DALL·E 2 has the edge in rendering more literal, photorealistic imagery, Midjourney is the more artistic of the two. In his session, Low challenges both tools to simulate the experience of speaking live on stage at Adobe MAX. After exploring a wide variety of outcomes, from abstract and atmospheric to strikingly specific, he settles on DALL·E 2’s close-up render of a man at a lectern – only to fall foul of the tool’s upload restrictions on photorealistic faces, designed to combat the risks associated with advanced deepfake tech.
Choosing the right platform is an ethical choice in itself: another AI tool, Stable Diffusion, draws on a vast open-source data set of over two billion images and has no such restrictions. “Stable Diffusion shows us where the edges of the darkness are,” suggests Low. “Ethically sourced datasets help, but using these tools ethically helps even more.”
Launched in 2019 by Adobe in partnership with the New York Times and Twitter, the Content Authenticity Initiative (CAI) is tackling such issues head-on. “Content credentials mean we always know whose story we are experiencing,” explains Scott Belsky, Adobe’s chief product officer, in his Adobe MAX keynote.
The CAI bakes information about an artwork’s provenance into the file, recording not just who made it, but crucially, how they did so. “For generative art, that means telling what was made by a human, and what by a machine,” adds Belsky. “In a world of social media misinformation, this can really help.”
Find out how AI can assist your process
For many creative professionals, AI already plays a day-to-day role as a near-invisible time-saving assistant. For designers working with photography, for instance, the new Adobe Sensei developments represent a powerful behind-the-scenes boost to slash the time required for mundane tasks.
In a practical session entitled Editing Techniques and Tools to Improve Your Photos, Adobe digital imaging evangelist Julieanne Kost explores Adobe Sensei’s contributions to the image editing process. In it, she walks through several of her Lightroom projects – from precise colour-correction of landscape shots to advanced portrait retouching.
For the latter in particular, Adobe Sensei can handle some serious heavy lifting in the background. Load up a portrait photo and Lightroom not only detects the presence of a person, but also auto-generates masks for all the most important areas for retouching: face skin, body skin, eyebrows, the whites of the eyes, the iris and pupil, plus lips, teeth, and hair.
“Transform night into day; move shadows; change the weather. It’s all possible with the latest advances in generative tech,” concludes David Wadhwani. “But AI should always enhance human creativity, not replace it.”