Got a camera? If so, you have photos. A few years ago the number of photos in my collection from iPhone cameras topped the number I had shot through the years on SLRs and DSLRs. Which of today’s most popular cameras shoot the best photos and videos?
Frankly, even for an avid photographer and wannabe videographer, it’s becoming difficult to tell the difference between cameras. All the good ones are very good and even experienced photographers might be challenged to determine which photo came from which DSLR camera, or from which premium smartphone.
Experienced eyes might be able to differentiate good smartphone photos and videos from those of mid-range DSLRs, but the differences shrink each year. Check out the iPhone 7 Plus camera vs a $50,000 RED Weapon professional video camera and you’ll see what I mean.
Good photography is no longer all about physics.
The blend of photographer, camera, and lens gets disrupted when computational photography and sensors are thrown into the mix.
Computational photography or computational imaging refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film based photography, or reduce the cost or size of camera elements.
Today’s new crop of iPhones, the iPhone Xs and iPhone Xs Max (each with both wide-angle and telephoto lenses), can create photos with a single click and no consideration of exposure, focus, or other adjustments, which actually look better than the scene, object, or person in the photo.
I like this definition better.
Computational photography is the use of computer processing capabilities in cameras to produce an enhanced image beyond what the lens and sensor pick up in a single shot.
Think of things like artificial depth of field, automated panoramic capture, high dynamic range imaging, and, for video, image stabilization as being the basics of computational photography.
Apple designs the CPUs in its iPhones, and the new A12 Bionic found in the new iPhone Xs models and iPhone Xr helps process what the camera and sensor capture. Instead of capturing a single photo image, the camera can capture multiple images with different exposures and automatically blend them together to create a photo that is better than what the sensor captured.
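To get a feel for what that blending step involves, here is a toy sketch of exposure fusion in Python with NumPy. It is not Apple’s algorithm (which is proprietary and far more sophisticated); it simply weights each pixel of each frame by how well exposed it is, then averages. The function name and the Gaussian weighting are illustrative assumptions.

```python
import numpy as np

def blend_exposures(images):
    """Blend differently exposed frames of the same scene.

    Each pixel in each frame is weighted by its "well-exposedness"
    (closeness to mid-gray, via a Gaussian centered at 0.5), then the
    weights are normalized across frames and the frames are averaged.
    Pixel values are floats in [0, 1]. A toy version of exposure fusion.
    """
    stack = np.stack(images)  # shape: (num_frames, height, width)
    # Gaussian weight: highest for pixels near mid-gray (0.5)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Synthetic example: an underexposed and an overexposed "frame"
dark = np.full((2, 2), 0.1)
bright = np.full((2, 2), 0.9)
fused = blend_exposures([dark, bright])
# Both frames are equally far from mid-gray, so the result lands at 0.5
```

A real pipeline would also align the frames (the camera moves between shots) and blend per frequency band to avoid halos, but the core idea, weighting the best-exposed pixels from each frame, is the same.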
Cameras used to be about these buzzwords: photographer, film, aperture, shutter speed, exposure, lens, and developing. Smartphone cameras have made software the single most important component. iPhones once held the smartphone camera crown, but these days all premium smartphones have excellent cameras and very good computational capability, which explains why almost anyone with a new iPhone can take a good photo with little more than point and click.
The rest of the photo is handled by software. Apple’s hardware captures the image and software computes it into a photo.
That’s the new world order.