Multi-camera instead of megapixels

Mobile phone photography has already moved past the great megapixel war, which no one could win because physical limits on sensor and smartphone size prevented further miniaturization. Now something resembling a competition is under way over who can fit the most cameras on a phone (1). In the end, what always matters is the quality of the photos.

In the first half of 2018, a little-known company called Light made quite a splash with two new camera prototypes, offering its multi-lens technology for other manufacturers' smartphone models. Although the company, as MT wrote at the time, had already introduced the sixteen-lens L16 model in 2015 (1), only in the last few months has multiplying the cameras in phones become popular.

Camera full of lenses

This first model from Light was a compact camera (not a phone), about the size of a phone, designed to deliver DSLR quality. It shot at resolutions up to 52 megapixels, offered a 35-150 mm focal length range, good low-light performance, and adjustable depth of field. All this was made possible by combining up to sixteen smartphone cameras in one body. None of the individual lenses differed from the optics found in smartphones; the difference was that they were gathered in one device.

2. Light's multi-lens cameras

During shooting, the image was recorded simultaneously by ten of the cameras, each with its own exposure settings. All the photographs taken this way were combined into one large photograph containing all the data from the individual exposures. The system allowed editing the depth of field and focus points of the finished photograph. Photos were saved in JPG, TIFF, or RAW DNG format. The L16 had no conventional flash, but photographs could be illuminated by a small LED built into the body.

The 2015 premiere had the status of a curiosity and did not attract much attention from the media or mass audiences. Given that Foxconn acted as Light's investor, however, further developments came as no surprise. In short, they were driven by growing interest in the solution among companies collaborating with the Taiwanese equipment manufacturer, whose customers include Apple and, in particular, Blackberry, Huawei, Microsoft, Motorola, and Xiaomi.

And so, in 2018, news appeared of Light's work on multi-camera systems for smartphones. It then turned out that the startup had collaborated with Nokia, which introduced the world's first five-camera phone at MWC in Barcelona in 2019: the 9 PureView (3), equipped with two color cameras and three monochrome cameras.

Light explained on the Quartz website that there are two main differences between the L16 and the Nokia 9 PureView. The latter uses a newer processing system to stitch together the photos from the individual lenses. In addition, Nokia's design uses different cameras from those Light originally used, with ZEISS optics to capture more light; three of the cameras capture only in black and white.

The array of cameras, each with 12-megapixel resolution, gives greater control over depth of field and lets users capture details normally invisible to a conventional phone camera. What's more, according to the published descriptions, the 9 PureView can capture up to ten times more light than other devices and can produce photos with a total resolution of up to 240 megapixels.

The abrupt start of multi-camera phones

Light is not the only source of innovation in this area. An LG patent dated November 2018 describes combining different camera angles to create a miniature movie, reminiscent of Apple's Live Photos or of the images from Lytro devices (which MT also wrote about a few years ago) that capture a light field with an adjustable field of view.

According to the LG patent, the solution can combine data sets from different lenses to cut objects out of an image (for example, for portrait mode or even a complete background replacement). Of course, this is just a patent for now, with no indication that LG plans to implement it in a phone. With the escalating smartphone photography war, however, phones with these features could hit the market faster than we think.

As we will see when reviewing the history of multi-lens cameras, dual-camera systems are not new at all. Fitting three or more cameras, however, is a development of the last ten months.

Among the major phone makers, China's Huawei was the fastest to bring a triple-camera model to market. Already in March 2018 it offered the Huawei P20 Pro (4), with three lenses: regular, monochrome, and telephoto. A few months later it introduced the Mate 20, also with three cameras.

However, as has happened before in the history of mobile technology, it took Apple boldly presenting its own solutions, in full view of all the media, for talk of a breakthrough and a revolution to begin. Just as the first iPhone in 2007 "launched" the market for already-known smartphones, and the first iPad (by no means the first tablet) opened the tablet era in 2010, so the multi-lens iPhone 11 models (5), presented in September 2019 by the company with the apple on its logo, can be considered the abrupt beginning of the multi-camera smartphone era.

The 11 Pro and 11 Pro Max are equipped with three cameras. The first has a six-element lens with a 26 mm full-frame-equivalent focal length and an f/1.8 aperture. The manufacturer says it features a new 12-megapixel sensor with 100% focus pixels, which may mean a solution similar to those used in Canon cameras and Samsung smartphones, where each pixel consists of two photodiodes.

The second camera has a wide-angle lens (13 mm focal length, f/2.4 aperture) with a 12-megapixel sensor. In addition to these modules there is a telephoto lens that doubles the focal length of the standard lens, with an f/2.0 aperture and a sensor of the same resolution as the others. Both the telephoto and the standard lens are equipped with optical image stabilization.

All versions offer a night mode, familiar from Huawei, Google Pixel, and Samsung phones. It is also a solution characteristic of multi-lens systems: the camera takes several photos with different exposure compensation and then combines them into one photo with less noise and better tonal dynamics.
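The merging step can be illustrated with a minimal sketch (a toy model, not any vendor's actual pipeline): it assumes the burst frames are already aligned and taken with identical settings, and simply averages them, which suppresses random sensor noise by roughly the square root of the frame count.

```python
import numpy as np

def stack_frames(frames):
    """Average an aligned burst of frames; random noise drops roughly as sqrt(N)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: the same scene plus independent sensor noise per frame.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(4, 4))           # the "true" scene
frames = [scene + rng.normal(0, 10, scene.shape)   # a noisy burst of 8 shots
          for _ in range(8)]

merged = stack_frames(frames)
single_err = np.abs(frames[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
print(merged_err < single_err)  # True: the merged frame is cleaner than any single shot
```

Real night modes additionally align the frames against hand shake and weight differently exposed shots, which is what recovers the extra tonal dynamics.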

The camera in the phone - how did it come about?

The first camera phone was the Samsung SCH-V200. The device appeared on store shelves in South Korea in 2000.

It could store twenty photos at a resolution of 0.35 megapixels. However, the camera had a serious drawback: it was poorly integrated with the phone. For this reason, some analysts consider it a separate device enclosed in the same case rather than an integral part of the phone.

The situation was quite different with the J-Phone, a handset that Sharp prepared for the Japanese market at the end of the last millennium. It took photos at a very low 0.11-megapixel quality, but unlike Samsung's offering, the photos could be transferred wirelessly and conveniently viewed on the phone's screen. The J-Phone was equipped with a color display showing 256 colors.

Camera phones quickly became an extremely trendy gadget - not, however, thanks to the Sanyo or J-Phone devices, but to the offerings of the mobile giants of the time, chiefly Nokia and Sony Ericsson.

The Nokia 7650, equipped with a 0.3-megapixel camera, was one of the first widely available and popular photo phones. The Sony Ericsson T68i also did well on the market. No phone before it could both receive and send MMS messages. However, unlike the previous models on this list, the camera for the T68i had to be bought separately and attached to the phone.

After these devices were introduced, the popularity of cameras in mobile phones began to grow at a tremendous pace - by 2003 they were already outselling standard digital cameras worldwide.

In 2006, more than half of the world's cell phones had a built-in camera. A year later, someone first had the idea of putting two lenses in a phone...

From mobile TV through 3D to better and better photography

Contrary to appearances, the history of multi-camera solutions is not so short. Samsung offered a dual lens in its B710 model (6) back in 2007. Although at the time more attention was paid to the phone's mobile-television capabilities, the dual-lens system made it possible to capture photographic memories with a 3D effect, viewable on the model's display without special glasses.

In those years 3D was all the rage, and camera systems were seen as an opportunity to reproduce the effect.

The LG Optimus 3D, which premiered in February 2011, and the HTC Evo 3D, released in March 2011, used dual lenses to create 3D photographs. They employed the same technique as the designers of "regular" 3D cameras, using two lenses to create a sense of depth in images, complemented by a 3D display for viewing the results without glasses.

However, 3D turned out to be only a passing fashion. With its decline, people stopped thinking of multi-camera systems as a tool for obtaining stereographic images.

At least, not as their main purpose. The first camera to offer two image sensors for purposes similar to today's was the HTC One M8 (7), released in April 2014. Its 4-megapixel UltraPixel main sensor and 2-megapixel secondary sensor were designed to create a sense of depth in photos.

The second lens created a depth map and included it in the final image. This enabled a background-blur effect, refocusing the image with a touch of the screen, and easy editing that keeps the subject sharp while changing the background even after shooting.

At the time, however, not everyone understood the potential of this technique. The HTC One M8 may not have been a market failure, but it was not particularly popular either. The next important milestone in this story, the LG G5, was released in February 2016. It featured a 16-megapixel main sensor and a secondary 8-megapixel sensor with a 135-degree wide-angle lens that the device could switch to.

In April 2016, Huawei, in collaboration with Leica, offered the P9 model with two cameras on the back: one to capture RGB color, the other to capture monochrome detail. It was on the basis of this model that Huawei later created the aforementioned P20.

In 2016 the iPhone 7 Plus was also brought to market, with two 12-megapixel cameras on the back with different focal lengths: the first at 23 mm and the second at 56 mm, ushering in the era of smartphone telephotography. The idea was to let the user zoom in without losing quality - Apple wanted to solve what it considered a major problem of smartphone photography, and developed a solution that matched consumer behavior. It also mirrored HTC's approach by offering bokeh effects using depth maps derived from data from both lenses.

The arrival of the Huawei P20 Pro at the beginning of 2018 meant the integration of all the solutions tested so far in one device with a triple camera. A variable-focal-length lens was added to the RGB-and-monochrome sensor system, and the use of artificial intelligence yielded much more than the simple sum of optics and sensors. In addition, there is an impressive night mode. The new model was a great success, and it, rather than the Nokia dazzling with its number of lenses or the familiar Apple product, turned out to be the breakthrough in market terms.

Samsung (8), a forerunner of the trend toward more than one camera on a phone, also introduced a triple-lens camera in 2018, in the Samsung Galaxy A7 model.

8. Samsung's dual-lens camera module

Here, however, the manufacturer chose these lenses: a regular one, a wide-angle one, and a third eye providing rather imprecise "depth information". In another model, the Galaxy A9, four lenses are offered in total: ultra-wide, telephoto, a standard camera, and a depth sensor.

That is a lot, because for now three lenses are still the standard. Besides the iPhone, flagship models such as the Huawei P30 Pro and the Samsung Galaxy S10+ have three cameras on the back - not counting, of course, the smaller front-facing selfie lens.

Google seems indifferent to all this. Its Pixel 3 had one of the best cameras on the market and could do "everything" with just one lens.

Pixel devices use custom software to provide stabilization, zoom, and depth effects. The results were not as good as they could have been with multiple lenses and sensors, but the difference was small, and Google's phones made up for the small gaps with excellent low-light performance. Recently, however, with the Pixel 4, even Google seems to have finally given in, although it still offers only two lenses: regular and telephoto.

Not only the rear camera

What does adding extra cameras to a smartphone give? According to experts, if they record at different focal lengths, set different apertures, and capture whole batches of images for further algorithmic processing (compositing), this provides a noticeable increase in quality compared with images obtained from a single phone camera.

Photos are sharper and more detailed, with more natural colors and greater dynamic range. Low-light performance is also much better.

Many people who read about the possibilities of multi-lens systems associate them mainly with bokeh portrait background blur, i.e. putting objects beyond the depth of field out of focus. But that is not all.

Cameras of this type perform an ever wider range of functions, including more accurate 3D mapping, augmented reality, and better recognition of faces and landscapes.

Already, with the help of applications and artificial intelligence, smartphone optical sensors have taken on tasks such as thermal imaging, translating foreign text from images, identifying star constellations in the night sky, and analyzing an athlete's movements. Multi-camera systems greatly enhance the performance of these advanced functions. And, above all, they bring it all together in one package.

The long history of multi-lens solutions shows varied lines of research, but the hard problem has always been the high demands on data processing, algorithm quality, and power consumption. In today's smartphones, which use more powerful image signal processors than before, energy-efficient digital signal processors, and even improved neural-network capabilities, these problems have been significantly reduced.

A high level of detail, great optical possibilities, and customizable bokeh effects are currently at the top of the list of requirements for smartphone photography. Until recently, to fulfill them, a smartphone user had to resort to a traditional camera. Not necessarily today.

With large cameras, the bokeh effect comes naturally: when the lens and aperture are large enough, analog blur appears wherever pixels fall out of focus. Mobile phones have lenses and sensors (9) that are too small for this to happen naturally (optically), so a software emulation process has been developed instead.

Pixels farther from the focus area or focal plane are artificially blurred using one of the many blur algorithms commonly used in image processing. The distance of each pixel from the focus area is best and most quickly measured using two photographs taken roughly 1 cm apart.
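The artificial blur itself can be sketched as follows (a toy grayscale example with a hypothetical depth map; real pipelines use disc-shaped kernels that mimic lens bokeh rather than the simple box average here):

```python
import numpy as np

def fake_bokeh(image, depth, focus_depth, strength=2.0):
    """Blur each pixel in proportion to its distance from the focal plane.

    image: HxW grayscale array; depth: HxW depth map in meters;
    focus_depth: the depth to keep sharp. The blur radius grows
    with |depth - focus_depth|.
    """
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            r = int(strength * abs(depth[y, x] - focus_depth))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()  # box blur of radius r
    return out

# Left half of the frame is "near" (in focus), right half is "far".
img = np.zeros((4, 8)); img[:, 4:] = 1.0
depth = np.ones((4, 8)); depth[:, 4:] = 3.0
result = fake_bokeh(img, depth, focus_depth=1.0)
print(np.allclose(result[:, :4], img[:, :4]))  # True: in-focus half untouched
```

Pixels on the focal plane get a radius of zero and pass through unchanged, while out-of-focus pixels are averaged over an ever larger neighborhood.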

With a fixed baseline and the ability to shoot both views at the same time (avoiding motion artifacts), it is possible to triangulate the depth of each pixel in the photograph (using a multi-view stereo algorithm) and thus obtain an excellent estimate of each pixel's position relative to the focus area.

This is not easy, but dual-camera phones simplify the process because they can take both photos at the same time. Single-lens systems must either take two consecutive shots (from slightly different angles) or use a different zoom.
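For two identical, parallel cameras the triangulation described above reduces to one relation: depth = focal length (in pixels) x baseline / disparity, where disparity is how far a scene point shifts between the two images. A small sketch with hypothetical phone-like numbers (real systems first match pixels densely to measure the disparity):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel cameras.

    focal_px: focal length expressed in pixels; baseline_m: distance
    between the two lenses in meters; disparity_px: horizontal shift
    of the point between the two images. Closer points shift more.
    """
    return focal_px * baseline_m / disparity_px

# Assumed values: ~2800 px focal length, 1 cm baseline (as in the text).
near = depth_from_disparity(2800, 0.01, 56.0)  # large shift -> near object
far = depth_from_disparity(2800, 0.01, 7.0)    # small shift -> far object
print(near, far)  # 0.5 4.0 (meters)
```

The per-pixel depths computed this way form exactly the depth map that the blur stage consumes.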

Is there a way to enlarge a photo without losing resolution? Yes: a telephoto (optical) zoom. The maximum real optical zoom currently available on a smartphone is the 5x of the Huawei P30 Pro.

Some phones use hybrid systems combining optical and digital zoom, which let you zoom in without apparent loss of quality. The aforementioned Google Pixel 3 uses extremely complex computational algorithms for this, so it is no surprise that it needs no extra lenses. However, quad-camera setups have already been implemented, so it seems difficult to do without additional optics.

The physics of a typical lens design makes it very difficult to fit a zoom lens into the slim body of a high-end smartphone. As a result, phone manufacturers have achieved at most 2x or 3x optical zoom with the traditional sensor-and-lens orientation. Adding a telephoto lens usually means a thicker phone, a smaller sensor, or the use of folded optics.

One way of extending the focal length is so-called folded optics (10). The camera module's sensor sits vertically in the phone and faces a lens whose optical axis runs along the body of the phone. A mirror or prism placed at the right angle reflects light from the scene into the lens and onto the sensor.

10. Folded optics in a smartphone

The first designs of this type featured a fixed mirror, suited to dual-lens systems such as the Corephotonics Falcon and Hawkeye products, which combine a traditional camera and a folded telephoto design in one unit. However, designs from companies such as Light, using movable mirrors to synthesize images from multiple cameras, are also starting to enter the market.

The complete opposite of telephoto is wide-angle photography. Instead of close-ups, a wide-angle view shows more of what is in front of us. Wide-angle photography was introduced as a second lens system on the LG G5 and subsequent phones.

The wide-angle option is especially useful for capturing exciting moments, such as being in a crowd at a concert, or places too large to fit into a narrower lens. It is also great for cityscapes, high-rise buildings, and other subjects that regular lenses simply cannot take in. There is usually no need to switch between "modes", as the camera switches as you move closer to or farther from the subject, which integrates nicely with the normal camera experience.

According to LG, 50% of dual camera users use a wide-angle lens as their main camera.

Currently, whole lines of smartphones come equipped with a sensor dedicated to monochrome photos, i.e. black-and-white ones. Their biggest advantage is sharpness, which is why some photographers prefer them.

Modern phones are smart enough to combine this sharpness with information from the color sensors to produce a frame that is, in theory, more accurately exposed. However, use of the monochrome sensor on its own is still rare. Where available, it can usually be isolated from the other lenses via the camera app's settings.

Because camera sensors do not register color by themselves, they require an array of pixel-sized color filters. As a result, each pixel records only one color - usually red, green, or blue.

The recorded pixels are then combined to create a usable RGB image, but there are trade-offs in the process. The first is the loss of resolution caused by the color matrix; moreover, since each pixel receives only a fraction of the light, the camera is not as sensitive as a device without a color filter array. This is where a monochrome sensor comes to the rescue of the quality-conscious photographer: it can capture and record all the available light at full resolution. Combining the image from the monochrome camera with the image from the primary RGB camera produces a more detailed final image.
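One simple way to sketch this fusion (a toy illustration of the principle, not any vendor's pipeline, and assuming the two frames are perfectly aligned) is to keep the color ratios of the RGB frame but force its luminance to follow the cleaner monochrome frame:

```python
import numpy as np

def fuse_mono_rgb(rgb, mono):
    """Toy mono+RGB fusion: keep RGB chroma, take luminance from mono.

    rgb: HxWx3 float array in [0, 1]; mono: HxW float array in [0, 1],
    assumed perfectly aligned with the RGB frame.
    """
    # Per-pixel luminance of the RGB frame (Rec. 601 weights).
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    # Scale each RGB pixel so its luminance matches the mono frame.
    scale = mono / np.maximum(luma, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

# A uniform gray RGB patch fused with a brighter mono patch gets brighter.
rgb = np.full((2, 2, 3), 0.25)
mono = np.full((2, 2), 0.5)
fused = fuse_mono_rgb(rgb, mono)
print(np.allclose(fused, 0.5))  # True: luminance now follows the mono frame
```

Real pipelines must also register the two offset views before fusing, which is exactly the alignment difficulty the next paragraph mentions.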

A second, monochrome sensor is perfect for this application, but it is not the only option. Archos, for example, does something similar to the regular monochrome approach but uses an additional higher-resolution RGB sensor instead. Since the two cameras are offset from each other, the process of aligning and merging the two images remains difficult, and the final image is usually not as detailed as the monochrome-based version.

Even so, the result is a clear improvement in quality over a picture taken with a single camera module.

A depth sensor, used among others in Samsung cameras, enables professional blur effects and better AR rendering using both the front and rear cameras. However, high-end phones are gradually replacing dedicated depth sensors, folding that task into cameras that can also detect depth, such as those with ultra-wide or telephoto lenses.

Of course, depth sensors will likely continue to appear in more affordable phones and in those that aim to create depth effects without expensive optics, such as the Moto G7.

Augmented reality - the real revolution

When the phone uses differences between images from multiple cameras to build a map of distances in a scene (commonly called a depth map), it can then use that map to power augmented reality (AR) apps - for example, to place and display synthetic objects on scene surfaces. If this is done in real time, the objects can come to life and move.

Both Apple with its ARKit and Android with ARCore provide AR platforms for multi-camera phones. 

One of the best examples of the new solutions emerging with the spread of multi-camera smartphones is the work of the Silicon Valley startup Lucid. In some circles it may be known as the creator of the VR180 LucidCam and the technological brains behind the revolutionary Red 8K 3D camera design.

Lucid's specialists have created the Clear 3D Fusion platform (11), which uses machine learning and statistical data to measure depth in images quickly and in real time. The method enables features previously unavailable on smartphones, such as advanced AR object tracking and in-air gesture control using high-resolution images.

11. Lucid Technology Visualization

From the company's point of view, the proliferation of cameras in phones is a hugely useful field for augmented reality: sensors embedded in ubiquitous pocket computers that run applications and are always connected to the Internet. Already, smartphone cameras can identify what we point them at and provide additional information about it. They let us collect visual data and view augmented reality objects placed in the real world.

Lucid's software can convert data from two cameras into 3D information used for real-time mapping and scene recording with depth information. This allows for the rapid creation of 3D models and 3D video games. The company used its LucidCam to explore expanding the range of human vision at a time when dual-camera smartphones were only a small part of the market.

Many commentators point out that by focusing only on the photographic aspects of multi-camera smartphones, we miss what the technology can actually bring. Take the iPhone, which uses machine learning algorithms to scan objects in a scene, creating a real-time 3D depth map of the terrain and objects. The software uses this to separate the background from the foreground and selectively focus on objects in the scene. The resulting bokeh effects are just tricks. Something else is important.

The software that performs this analysis of the visible scene simultaneously creates a virtual window onto the real world. Using hand-gesture recognition, users will be able to interact naturally with a mixed-reality world through this spatial map, with the phone's accelerometer and GPS data detecting and driving changes in how the world is represented and updated.

That is why adding cameras to smartphones - seemingly empty fun and a contest of who can offer the most - may eventually fundamentally change the machine interface and then, who knows, the ways humans interact with one another.

Returning to photography, however, many commentators note that multi-camera solutions may be the final nail in the coffin for many types of cameras, such as digital SLRs. Breaking the image-quality barrier means that only the highest-quality specialized photographic equipment will retain its raison d'être. The same may happen to video cameras.

In other words, smartphones equipped with sets of cameras of various types will replace not only simple point-and-shoot cameras but also most professional devices. Whether this will actually happen is still difficult to judge. For now, the multi-camera approach is considered a success.
