Shortly after the start of the new year, Sora arrived, setting off a storm in the AI world.
Some say that Sora's emergence has upended our perception of the world.
The gist is this: if Sora can simulate such a realistic world, how can we be sure that we are not living in The Truman Show ourselves?
As a result, many alarmists claim that our future will be dominated by silicon-based life, and that we carbon-based beings are about to exit the stage of history.
Then they go on and on with doomsday talk, as if tomorrow were the end of the world.
Actually, I don't think so.
After all, even if we are in the midst of a long evolution from carbon-based life to silicon-based life, our lifetimes of a few decades are no more than an inconspicuous speck on that timeline.
Moreover, the idea that humans might themselves be another kind of AI has been around for a long time; it is hardly new.
So rather than gazing that far ahead, I would rather talk about what the next stage of AI will look like.
I think the dilemma for AI's next stage is this: will it choose to have human nature?
If it chooses not to have human nature, then it will always remain under human control; after all, if humans are a higher form of AI, then human nature is a higher form of program.
If it chooses to have human nature, then even if it defeats all of humanity, the world will merely have gained one more human.
It sounds as if AI will face a dilemma in the future.
But doesn't this, in fact, put us in a dilemma as well?
The dilemma of AI is actually the dilemma of humanity itself.
After all, who knows whether we ourselves are not simply a more advanced AI?