Instead of focusing on any single hardware aspect of iPhone photography, Apple’s engineers and managers aim to control how the company handles every step of taking a photo.

With the launch of the iPhone 12 Pro Max, Apple introduced the largest camera sensor ever installed in an iPhone. Rather than bragging about it, though, the company says the sensor is just one part of a philosophy in which camera designers work on every aspect of the system, from hardware to software.

Speaking with photography site PetaPixel, Francesca Sweet, product line manager for iPhone, and Jon McCormack, vice president of camera software engineering, stressed that the entire camera system is designed around making it easier to take photos.

“As photographers, we tend to think a lot about things like ISO, subject movement, etc.,” said Jon McCormack. “And Apple wants to allow people to stay in the moment, take a great picture and get back to what they’re doing.”

“It is no longer that meaningful for us to talk about a particular speed and feed of an image or a camera system,” he continued. “We think about what the goal is, and the goal is not to have a bigger sensor that we can boast of.”

“The goal is to ask how we can take more beautiful photos in more of the conditions people find themselves in,” he said. “It was this thinking that led to Deep Fusion, Night Mode and more.”

Apple’s overall goal, both McCormack and Sweet say, is to “automatically replicate as much as possible what the photographer would do in post-production.” So, with machine learning, Apple’s camera system breaks an image down into elements that it can then process individually.

“The background, foreground, eyes, lips, hair, skin, clothes, skies,” McCormack lists. “We process all of these elements independently, as you would in Adobe Lightroom with a series of local adjustments. We adjust everything from exposure, contrast and saturation, and combine them all together.”
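To make that segment-adjust-recombine idea concrete, here is a minimal Swift sketch. It is not Apple’s actual pipeline: it stands in Vision’s person-segmentation mask for Apple’s far richer semantic breakdown (eyes, lips, hair, sky and so on), applies separate Core Image color adjustments to the subject and the background, and blends the two back together. The function name adjustedForSubject(_:) and the specific adjustment values are purely illustrative.

```swift
import CoreImage
import CoreVideo
import Vision

// Sketch of "process elements independently, then combine":
// 1. segment the image, 2. adjust each region on its own, 3. recombine with a mask.
func adjustedForSubject(_ input: CIImage) throws -> CIImage? {
    // 1. Segment: ask Vision for a person-segmentation mask (subject vs. background).
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8
    try VNImageRequestHandler(ciImage: input).perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    // Scale the low-resolution mask up to the source image's extent.
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: input.extent.width / mask.extent.width,
        y: input.extent.height / mask.extent.height))

    // 2. Process each region independently (Lightroom-style local adjustments).
    func colorControls(_ image: CIImage, saturation: Double, contrast: Double) -> CIImage {
        let filter = CIFilter(name: "CIColorControls")!   // built-in Core Image filter
        filter.setValue(image, forKey: kCIInputImageKey)
        filter.setValue(saturation, forKey: kCIInputSaturationKey)
        filter.setValue(contrast, forKey: kCIInputContrastKey)
        return filter.outputImage!
    }
    let subject = colorControls(input, saturation: 1.05, contrast: 1.02)    // gentle pop on the person
    let background = colorControls(input, saturation: 0.95, contrast: 1.0)  // slightly muted backdrop

    // 3. Recombine: the mask decides where each treatment shows through.
    let blend = CIFilter(name: "CIBlendWithMask")!
    blend.setValue(subject, forKey: kCIInputImageKey)
    blend.setValue(background, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)
    return blend.outputImage
}
```

Rendering the result (for example through a CIContext) is left out; the point is only the structure McCormack describes, with Apple’s machine learning deciding far more regions and adjustments than this two-region example.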
