Adobe is working on a camera app designed to take your smartphone photography to the next level.
Within the next year or two, the company plans to release an app that marries the computing smarts of modern phones with the creative controls that serious photographers often desire, said Marc Levoy, who joined Adobe two years ago as a vice president to help spearhead the effort.
Levoy has impeccable credentials: He was previously a Stanford University researcher who coined the term “computational photography” and helped lead Google’s respected Pixel camera app team.
“What I did at Google was to democratize good photography,” Levoy said in an exclusive interview.
“What I’d like to do at Adobe is to democratize creative photography, where there’s more of a conversation between the photographer and the camera.”

If successful, the app could extend photography’s smartphone revolution beyond the mainstream abilities that are the focus of companies like Apple, Google and Samsung.
Computational photography has worked wonders in improving the image quality of small, physically limited smartphone cameras.
And it’s unlocked features like panorama stitching, portrait mode to blur backgrounds and night modes for better quality in low light.

Camera app ‘dialogue’ with the photographer

Adobe isn’t making an app for everyone, but instead for people willing to put in a bit more effort up front to get the photo they want, an approach matched to the enthusiasts and pros who often are already customers of Adobe’s Photoshop and Lightroom photography software.
Such photographers are more likely to have experience fiddling with traditional camera settings like autofocus, shutter speed, color, focal length and aperture. Several camera apps, like Open Camera for Android and Halide for iPhones, offer manual controls similar to those on traditional cameras.
Adobe itself offers some of those manual controls in the camera built into its Lightroom mobile app. But with its new camera app, Adobe is headed in a different direction: more of a “dialogue” between the photographer and the camera app while the photo is being taken, to arrive at the desired shot.
Adobe is aiming for “photographers who want to think a little bit more intently about the photograph that they’re taking and are willing to interact a bit more with the camera while they’re taking it,” Levoy said.
“That just opens up a lot of possibilities. That’s something I’ve always wanted to do and something that I can do at Adobe.”

In contrast, Google and its smartphone competitors don’t want to confuse their more mainstream audience. “Every time I would propose a feature that would require more than a single button press, they would say, ‘Let’s focus on the consumer and the single button press,’” Levoy said.

Adobe camera app features and ideas

Levoy won’t yet be pinned down on his app’s features, though he did say Adobe is working on a feature to remove distracting reflections from photos taken through windows.
Adobe’s approach adds new artificial intelligence methods to the challenge, he said. “I would love to be able to remove window reflections,” Levoy said. “I would like to ship that, because it ruins a lot of my photographs.”

But there are plenty of other areas where Levoy expects improvements.

One is “relighting” an image to get rid of problems like harsh shadows on faces. The iPhone’s lidar sensor, or other ways of building a 3D “depth map” of the scene, can help inform the app where to make such scene illumination decisions.
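Levoy didn’t say how Adobe would implement relighting, but the role a depth map can play is easy to sketch. The toy example below, written in Python with NumPy and using invented function names, estimates surface normals from a depth map and brightens surfaces that face a virtual fill light; it illustrates the general idea, not Adobe’s method.

```python
# Minimal sketch: use a depth map to add a synthetic "fill light" that softens
# harsh shadows. Illustrative only -- not Adobe's (or anyone's) actual method.
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Estimate per-pixel surface normals from an H x W depth map."""
    dz_dy, dz_dx = np.gradient(depth)
    # A surface normal is proportional to (-dz/dx, -dz/dy, 1), then normalized.
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def relight(image: np.ndarray, depth: np.ndarray,
            light_dir=(0.3, -0.5, 0.8), strength=0.3) -> np.ndarray:
    """Brighten surfaces facing a virtual fill light (Lambertian shading).

    image: H x W x 3 float array in [0, 1]; depth: H x W float array.
    """
    normals = normals_from_depth(depth)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Lambert's cosine term: how directly each surface faces the fill light.
    shade = np.clip(normals @ light, 0.0, 1.0)[..., None]
    return np.clip(image * (1.0 + strength * shade), 0.0, 1.0)
```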
Another is a new approach to “superresolution,” the computational generation of new pixels to try to offer higher-resolution photos or more detail when zooming digitally. Google’s Super Res Zoom combines multiple shots to this end, as does Adobe’s AI-based image enhancement tool, but the multiframe and AI approaches could be melded, Levoy said. “Adobe is working on improving it, and I’m working with the people who wrote that,” he said.
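Levoy didn’t detail how the two approaches would be combined. As a rough illustration of the multiframe half of the idea, the Python sketch below (with invented names, not Google’s or Adobe’s actual algorithm) scatters several slightly shifted low-resolution frames onto a finer grid and averages them, which is how extra frames can recover detail that any single frame misses.

```python
# Minimal sketch of the multiframe idea behind "superresolution": several
# slightly shifted low-res frames are placed onto a finer grid and averaged,
# so detail that falls between pixels in one frame is captured by another.
import numpy as np

def merge_frames(frames, shifts, scale=2):
    """Merge aligned low-res frames into one higher-res image.

    frames: list of H x W grayscale float arrays.
    shifts: per-frame integer (dy, dx) offsets on the high-res grid,
            e.g. produced by a separate alignment step.
    scale:  upsampling factor of the output grid.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))   # sum of samples per output pixel
    weight = np.zeros_like(acc)              # how many samples landed there

    # Coordinates of each low-res pixel, mapped onto the high-res grid.
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        hy = np.clip(ys * scale + dy, 0, h * scale - 1)
        hx = np.clip(xs * scale + dx, 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)      # scatter samples onto the grid
        np.add.at(weight, (hy, hx), 1.0)

    out = acc / np.maximum(weight, 1.0)      # average where samples exist
    # Fill grid cells no frame covered by copying the nearest coarse pixel.
    empty = weight == 0
    nearest = np.repeat(np.repeat(frames[0], scale, 0), scale, 1)
    out[empty] = nearest[empty]
    return out
```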
He also sees promise in merging several shots into one digital photo montage with the best elements of each photo, for example, making sure everybody is smiling and nobody is blinking in a group shot. It’s difficult technology to get working reliably: “Google launched it in Google Photos a long time ago. Of course we de-launched it after people started posting all kinds of horrible creations,” Levoy said.

Computational video, applying the same tricks to video that are now common with photos, “has barely been scratched,” Levoy said. For example, he’d like to see an equivalent of Google’s Pixel Magic Eraser feature to remove distractions from videos, too.
Video is only getting more important, as the rise of TikTok illustrates, he said.

Another idea is photos that adapt to the screens where people view them. People naturally prefer more contrast and richer colors when seeing photos on small phone screens, but the same photo on a laptop or TV can look garish. Adobe’s DNG file format could allow viewer-based tweaks to dial such adjustments up or down to suit the presentation, Levoy said.
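Levoy didn’t specify how those viewer-based tweaks would be stored or applied. As a loose illustration of the concept, the Python sketch below scales a contrast and saturation boost by a per-display strength value; the presets and function names are invented, and this is not how DNG or Adobe’s software actually encodes such adjustments.

```python
# Minimal sketch of a display-aware "look": the same photo gets a stronger or
# weaker contrast/saturation boost depending on the screen it is shown on.
import numpy as np

# Invented presets: how strongly to apply the punchy "phone look" per display.
DISPLAY_STRENGTH = {"phone": 1.0, "laptop": 0.5, "tv": 0.3}

def apply_look(image: np.ndarray, display: str,
               contrast=0.25, saturation=0.35) -> np.ndarray:
    """Scale a baseline contrast/saturation boost by the target display.

    image: H x W x 3 float array in [0, 1].
    """
    k = DISPLAY_STRENGTH[display]
    out = image.astype(float)

    # Contrast: push values away from mid-gray, more so on small screens.
    out = 0.5 + (out - 0.5) * (1.0 + k * contrast)

    # Saturation: push colors away from their per-pixel gray value.
    gray = out.mean(axis=2, keepdims=True)
    out = gray + (out - gray) * (1.0 + k * saturation)

    return np.clip(out, 0.0, 1.0)

# The same capture rendered for different screens:
# phone_view = apply_look(photo, "phone")
# tv_view    = apply_look(photo, "tv")
```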
Levoy also imagines a mixture of real images and synthetic images like those generated by OpenAI’s DALL-E AI system, a technology he calls “amazing.” Adobe has a strong interest in creativity, and AI-generated images could be prompted not just with text but with your own photos, he said.