The question description goes like this:
Now that mobile phone photography is so powerful thanks to phone chips, can cameras do computational photography too?
And will cameras then adopt a simplified, point-and-shoot workflow, with those features going to low-to-mid-range models?
Let's start with the second question.
Generally speaking, low-end camera models often have more "smart" functions than mid-to-high-end ones.
Here's an example: the Canon EOS M200, which costs about 3,000 yuan, has a scene-recognition function that suggests different color profiles and filters depending on the scene.
This function is essentially the same as AI scene recognition on phones: more vivid colors when shooting food, smoother rendering when shooting portraits.
In addition, there are many built-in filters, including grain black and white, soft focus, fisheye effect, toy camera effect, miniature effect, watercolor effect, and oil painting effect.
But once the price goes up, say to a 5D, 1D, or R3-class body costing tens of thousands of yuan, these functions get cut.
Why? Because professional users simply don't need them. Another phenomenon corroborates this: nobody blinks if you leave a low-end model in Auto mode, but if a camera costing tens of thousands is still sitting in Auto, it looks a little odd (although how you use your camera is, of course, your own business).
So even if cameras do gain lots of simplified, point-and-shoot features in the future, those features will land on low-to-mid-range models first. That isn't high-end technology trickling down; if anything, it's the reverse.
Now for the first question. You said mobile phone photography is already very powerful thanks to phone chips.
The reason is that camera chips are far behind phone chips in computing power, and the gap is enormous.
For example, the Sony A7 III uses the BIONZ X processor, which pairs the Sony CXD4236 ISP with the CXD90027GF SoC.
I couldn't find published specs for that ISP, so let's look at the CXD90027GF SoC.
It is based on a quad-core ARM Cortex-A5.
The best-known phone chips in this series are Qualcomm's Snapdragon S1 and S4 Play lines; the S1 is too old, so let's use the S4 Play series.
Its representative models are the MSM8225 and MSM8625: dual-core at up to 1.2 GHz, 512 KB of L2 cache, built on a 45 nm process.
Take the MSM8625: the phones that used this processor were basically from the era when "ZTE, Huawei, Coolpad, Lenovo" ruled the Chinese market — carrier contract phones, the kind given away free when you prepaid two or three hundred yuan of phone credit, released around 2012.
If that still means nothing to you: the upgraded version of the MSM8625 is the MSM8625Q, which belongs to the Snapdragon 200 series. Snapdragon 200 — enough said. Now that the Snapdragon 8 Gen 3 is out, some people already find the Snapdragon 865 a bit laggy, never mind a Snapdragon 200.
That level of computing power is simply nowhere near enough.
Besides, for camera users: can't you just export your photos to a computer and do computational photography there?
Is Intel too weak, or do Adobe's algorithms not work?
And then there's AMD too ("AMD, yes!"), plus tools like Luminar AI — which of these can't outclass the computational photography on a phone?
At present, computational photography on phones mainly means multi-frame synthesis, super-resolution algorithms, image recognition, and region-by-region processing — all things you can actually do yourself in post, though skill levels differ and so do the results.
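To make the first of those techniques concrete, here is a minimal sketch of multi-frame synthesis in its simplest form — averaging an aligned burst to reduce noise. This is an illustrative toy using NumPy, not any phone vendor's actual pipeline (real pipelines also align, weight, and deghost frames first):

```python
import numpy as np

def multi_frame_average(frames):
    """Average a burst of already-aligned frames to reduce noise.

    Noise that is independent from frame to frame shrinks roughly
    by a factor of sqrt(N) when N frames are averaged, while the
    static scene content is preserved.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Simulate a noisy 16-shot burst of a flat gray scene
rng = np.random.default_rng(0)
scene = np.full((4, 4), 128, dtype=np.float32)
burst = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255)
         for _ in range(16)]
merged = multi_frame_average(burst)
```

After merging, the pixel noise in `merged` is far lower than in any single frame of `burst` — this is essentially what "night mode" on phones builds on, before the alignment and tone-mapping steps are added.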
So cameras will gradually add AI computational-photography functions, but they will never be as rich, or as foolproof, as a phone's.