These places aren't real.
Experiments with AI generated landscapes
I’ve recently been interested in the growth of generative AI art and the impact it might have on photography. AI-generated imagery is often easy to identify, but most of the imagery we consume is in little 512x512 squares on our phones. So... for a week, I posted only AI-generated photography to my Instagram (@kylefrost). No one noticed.
It’s been a fun little experiment. I’ve been building a library of images, and turned it into a microsite called ‘These places aren’t real’.
What is “AI photography”?
I used a few different tools to achieve the best results, including DALL-E, Midjourney, and Stable Diffusion. These generative AI tools let you enter a set of keywords/phrases (and additional parameters) that the “AI” interprets to output an image. Depending on the platform, you can then create variations, remix, and upscale the output image to your liking. While these platforms are often used to create digital art in various styles, I’ve been focused on trying to create photorealistic landscape photography. There’s definitely a level of technique required to understand how the AI interprets various words and the relationships between them in order to get the best results.
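To make the keywords-plus-parameters idea concrete, here’s a tiny sketch of how a prompt might be assembled programmatically. The function name and structure are my own invention for illustration; the `--ar` (aspect ratio) and `--no` (exclude elements) flags are Midjourney-style parameters, and other tools expose similar knobs under different names.

```python
def build_prompt(keywords, aspect_ratio=None, exclude=None):
    """Join descriptive keywords into one prompt string, then append
    Midjourney-style parameters: --no to suppress unwanted elements,
    --ar to set the aspect ratio. Purely illustrative."""
    parts = [", ".join(keywords)]
    if exclude:
        parts.append("--no " + ",".join(exclude))
    if aspect_ratio:
        parts.append("--ar " + aspect_ratio)
    return " ".join(parts)

prompt = build_prompt(
    ["foggy Scottish highlands", "moody light", "35mm film photograph"],
    aspect_ratio="3:2",
    exclude=["fantasy", "illustration"],
)
print(prompt)
# foggy Scottish highlands, moody light, 35mm film photograph --no fantasy,illustration --ar 3:2
```

In practice the “technique” is mostly in choosing the keywords themselves — style references like “35mm film photograph” pull the output toward realism, while exclusions fight the tools’ fantastical defaults.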
Most of what I’ve been posting is 100% AI generated. However, you can also use processes called ‘outpainting’ and ‘inpainting’, which are similar to Photoshop’s Content-Aware Fill. A few of the images I used on Instagram are below – if you view this on the web, you can likely spot the oddities and artifacts that don’t seem quite right, but it’s pretty hard to tell on your phone. More photos are available here.
This last one is a photo that my partner Sarah expanded with outpainting. I was almost completely cropped out of the original shot, but as you can see, DALL-E did an impressive job filling in the rest of the image.
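Under the hood, inpainting and outpainting both boil down to handing the model an image plus a mask that says which pixels to regenerate and which to keep. The exact conventions vary by tool, so treat this as a toy sketch of the idea rather than any real API:

```python
# Toy sketch of inpainting inputs (conventions vary by tool).
# The model receives the original image plus a mask: pixels marked 1
# are regenerated, pixels marked 0 are kept as-is.

WIDTH, HEIGHT = 512, 512

def make_mask(region):
    """Build a WIDTH x HEIGHT mask where pixels inside `region`
    (x0, y0, x1, y1) are 1 (regenerate) and all others 0 (keep)."""
    x0, y0, x1, y1 = region
    return [
        [1 if x0 <= x < x1 and y0 <= y < y1 else 0 for x in range(WIDTH)]
        for y in range(HEIGHT)
    ]

# Inpainting: mask the right-hand third of the frame, e.g. to paint
# out an artifact or a cropped-off person.
mask = make_mask((341, 0, 512, 512))

# Outpainting is the same idea on a larger canvas: the original photo
# sits on one side, and everything outside it is masked "to generate".
```

That’s why DALL-E could rebuild the cropped-out part of Sarah’s shot: the untouched pixels anchor the generation, and the model only has to invent what’s under the mask.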
What is AI good at?
It’s quite good at landscapes. Landscape photography already has elements that create a lot of visual noise, like mountainsides, rocks, snow, grass, and trees. Because these elements already tend to blend together, they’re easier to fake. Incorporating elements like fog, haze, and clouds also softens detail, making images harder to identify as generated (as seen in the “Scottish” landscape below). Sometimes it takes a bit of fiddling with the prompt to dial things back to ‘believable’, as these tools often bias toward creating more “fantastical” mountain landscapes.
It’s not great at getting ‘exactly’ what you want
While generators are great at “generating”, it takes some skill to get exactly what you’re looking for. They sometimes have trouble with humans and faces, although you can get around this by providing reference images. Humans in motion can get a bit wonky as well. AI isn’t going to put this season’s trail shoes or the newest version/color of your puffy jacket on a well-known athlete and output a photorealistic shot of that person running in the French Alps. Yet. But it can get close…
With the pace of innovation in the space, I don’t think it will be long before you can input a studio shot of your product + model and output them in any scene you want. As a test, I tried to create a few Pit Vipers shots, since their sunglasses are a bold, unique design. I had mixed success, although I think some of this is due to me slowly learning how to coax exactly what I want out of Midjourney.
How might AI be utilized in the outdoor/travel industry?
If you’re in marketing/social, it would be irresponsible to not learn how to utilize these tools. I imagine within the next 12 months, it’s going to be a suggested/required skill on every job application. It’ll be heavily used in marketing content — think of the blog posts that currently use the same 5 pictures people grab from Unsplash (you know the ones). With content teams across the industry getting slimmer and slimmer, I think we’re more likely to see a turn to AI as budgets continue to be slashed. And at the end of the day, does it really *matter* what the hero image is for the latest listicle?
As a photographer, I’m not sure what the full impact will be yet. As I said above, there will continue to be demand for in-season photography featuring specific gear, specific people, and specific places. Much like digital cameras and editing tools, it’s likely to become just “part of the process”.
Some form of AI, whether integrated into editing tools like Lightroom or utilized separately, is likely to become an unavoidable part of your workflow.
I think studio and product photography is likely to be highly impacted/disrupted by AI because it is so formulaic. It seems within reason that I could take pictures of a product with my iPhone, then feed them through a generative AI to place the product in any background scene I can imagine.
The technology is rapidly getting better at re-creating real places accurately, so pure ‘landscape’ photographers are probably at risk eventually. However, I think there is such power in seeing an incredible image that you *know* was a moment captured in a real place, somewhere in the world.
I could see destination marketing organizations eventually training their own AI models using a dataset of local imagery — so they could spin up fake (but accurate) photos for marketing purposes any time they want.
I’ll leave you with a few more complicated questions to ponder.
If the goal of photography is to make you *feel* something, and that is accomplished through the combination of imagery shared (often on Instagram) and the associated story/caption provided…does it really matter where the image came from? I’m sure there will be a controversy about the “ethics” of AI imagery and the value of “realness”, but I’ve been on Instagram long enough to remember when it was a major faux pas to post anything that wasn’t taken with your phone camera. Times change — is Instagram a vehicle for “pure” imagery (which we already spend a ton of time editing), or for “stories”?
Which takes more creativity — imagining and creating a complex scene with AI or taking the millionth photo with the exact same framing of Taft Point/that one barn in the Tetons? It’s surprisingly hard to think up/compose things from scratch in your head.
For better or worse, I think social media has actually made it harder to identify what might be fake. We’ve been exposed to so many incredible photographers visiting insane places around the world that we’ve become desensitized to incredible images. People aren’t surprised if I post a crazy picture from the “alps” because they know I’ve been in that area and I already take pretty good photos. I’ve now completely destroyed their trust but 🤷‍♂️. Whoops.
Who gets “credit” for these images? If I’m an art director providing a detailed shot list to a photographer, I don’t get the credit for the images they take. It definitely takes some skill to coax great images out of these programs, but there is an element of interpretation and “creativity” on the AI side…is Midjourney the “photographer”?
I’m working on a new landing page for Here & There. If you’ve been enjoying reading my newsletter, I’d love to get a short quote from you about why! You can leave a comment or just respond to this email.
Also, thank you to those of you that have been sharing this newsletter recently, it’s amazing to see new subscribers coming in and passing things along 🙏.