Transcript:
As the years go on, so does the number at the end of your phone's name. Just seven years ago, the world laid its eyes on devices like Apple's iPhone X and Samsung's Galaxy S8, all with new features that, so they said back then, would change phones forever. Now, in 2024, we're all playing with the iPhone 16 and the Galaxy S24, with arguably only incremental changes in the years between. One common complaint among consumers is the lack of camera improvement.
Welcome to the podcast. I'm your host, Joshua Carl, and today, we'll be taking a look through the many years of phone cameras, known for their tiny size and, at times, fake-looking portraits, and answer the age-old question: are phone cameras better than professional ones?
Grab your flux capacitor, and let's time-travel back a bit further than what you might expect: the year 2000. We might not be close to the introduction of the iPhone or Android smartphone, but we have a phone, alright – the J-SH04. Manufactured by SHARP, the same company that makes some of the appliances and TVs in your home nowadays, the J-SH04 was a cellular phone equipped with a 0.11-megapixel – yes, you heard that right, not even half a megapixel – camera that could send the pictures it took over a mobile network. While it may not be the first ever camera phone, it was the first mainstream one that was commercially successful and worked consistently.
Going forward about five years, we hit one of the first 2-megapixel camera phones: the Nokia N90 in 2005. This phone boasted a rotating screen atop a number pad, giving it a camcorder-like feel that made it popular with customers. It was also equipped with autofocus and an LED flash, bringing it that much closer to the professional cameras of the day, while at the same time still being very far away.
It's 2007 now, and it sort of feels like we're going forward to the past. The original iPhone, introduced in January 2007, boasted the same 2-megapixel camera but without any autofocus or flash. This is in contrast to the Nokia N95, which was now hitting 5 megapixels with video-recording capabilities built in. Funnily enough, the iPhone proved itself without the fancy camera, and its popularity took off anyway.
Now that we're officially in the smartphone era, the age of revolutionary tiny camera advancements slowed to a crawl. But with the introduction and imminent virality of social media networks like Facebook and YouTube, some convenience-focused changes were in order. Meet the front-facing "selfie" camera, popularized by the iPhone 4 in 2010. That model's front camera boasted 0.3 megapixels and could record video at 480p resolution, short of the 720p HD standard of the time, but a sensible place to start. This made early content creation much easier for those starting out or who just wanted to explore the art.
Not much happened in terms of "new" things in the 2010s; most of the progress was just more megapixels. But as we hit the latter half of the decade, a new kid hit the block: artificial intelligence.
What do you think when you hear “artificial intelligence?” Do you think of ChatGPT? Perhaps image generation tools like DALL-E and Apple’s Image Playground? Well, try to think a bit earlier, before any of these tools were created – before we ever coined the term “generative AI.”
2017 was a big year for this machine learning technology, as companies like Google and Apple started heavily implementing it in their new devices. Both the Pixel 2 and the iPhone X were equipped with a "portrait" mode that mimicked the look of a professional-grade DSLR camera with blurred, bokeh-filled backgrounds. This was, in the words of many, revolutionary for phone camera technology, as you could now get the same overall qualities of a three- to four-thousand-dollar camera in your pocket for a third of the price. There are, of course, caveats, especially given how new the technique was, and still is: the AI messes up a lot, and some modes, like the iPhone's "Stage Light" portrait effect, would remove body parts completely!
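For the curious, the core idea behind that portrait effect is simple to sketch out: figure out which pixels belong to the subject, blur everything else, and composite the two. Here's a minimal toy version in Python; the function names are made up for illustration, and the subject mask is assumed to be given (real phones estimate it with depth sensors or machine learning):

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur: average each pixel with its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fake_portrait(img, subject_mask, k=9):
    """Composite: keep the subject's pixels sharp, blur everything else."""
    blurred = box_blur(img, k)
    return np.where(subject_mask, img, blurred)
```

Real pipelines use a depth-aware, disc-shaped blur to imitate lens bokeh rather than a box blur, and the mask is soft rather than binary, but the keep-the-subject, blur-the-rest structure is the same.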
This technology has basically been cemented into the process of making phones to this day. And now, in the 2020s, we’ve hit the age of generative AI, which can create rather than just assist. Want a hotdog riding a surfboard? AI can do that now. Need help on a homework problem and want pointers? AI can help you out by creating an entire study guide.
It was only a matter of time until this technology made its way into cameras. The first thing that might come to mind is the "Apple Intelligence" feature that recently popped up on people's iPhones, at least those updated to iOS 18. But even before that, most phone manufacturers had created tools that use AI to "enhance" photos you've already taken. At this point, most phone cameras are around 24 to 48 megapixels, and the Samsung Galaxy S23 Ultra packs a 200-megapixel sensor, showing just how much camera technology has advanced. These tools use AI to adjust settings while combining multiple photos taken in quick succession to create "HDR" images, so nothing is under- or over-exposed. In a similar fashion, the Google Pixel now boasts a feature that can replace faces in a photo, taking from other photos captured at the same time. And then there's the Magic Eraser tool, where the user can take their finger, draw over whatever is unwanted in an image, and completely erase it from existence. It's a little scary how little we rely on the camera itself anymore, and how much on the post-production. But such is the way of innovation.
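That multi-frame HDR merge is less magic than it sounds. One classic recipe, which exposure-fusion approaches boil down to: give each pixel in each bracketed frame a weight based on how well-exposed it is, then blend the frames with those weights. A toy sketch in Python, assuming grayscale frames normalized to the range 0 to 1 (the function name and the Gaussian "well-exposedness" weight here are simplifications, not any phone vendor's actual pipeline):

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend bracketed exposures: pixels near mid-gray (0.5) are
    considered well-exposed and get the highest weight."""
    stack = np.stack(frames).astype(float)             # shape (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)      # normalize per pixel
    return (weights * stack).sum(axis=0)               # weighted blend
```

With an underexposed frame at 0.1 and an overexposed one at 0.9, the symmetric weights land the fused pixel exactly at 0.5; pair a well-exposed frame with a blown-out one and the well-exposed frame dominates the blend.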
And so we hit the present. Some new features here include a new button on the iPhone 16 line-up dedicated to camera controls, appropriately named…"Camera Control!" Phone manufacturers seem to be focusing on the cameras themselves again, making sure everything is consistent, like bumping the Ultra Wide camera on the iPhone 16 Pro to the full 48 megapixels, matching the main camera. At the same time, you could compare these new cameras to professional cameras like Canon's EOS series, specifically the mirrorless R series, which can come in a smaller form factor, closer to phone cameras. Will there be a point where taking pictures with the supercomputer in your pocket is simply better? We probably won't know for a while, but the lines are definitely blurring.
This is Joshua Carl, and thanks for tuning in!