Recently, Meta Chief AI Scientist Yann LeCun said in an interview that superintelligence will not arrive anytime soon. This view stands in contrast to that of Nvidia founder Jensen Huang, who has said that artificial intelligence will develop much faster than people expect, and that superintelligence could emerge within a few decades. So why does LeCun hold this view?
First, we need to understand what superintelligence is. Superintelligence refers to machines or systems whose intelligence surpasses that of humans. Such intelligence would not only be capable of completing any task that humans can complete, but could also solve many problems that humans cannot. At present, artificial intelligence has achieved remarkable results in many fields, such as image recognition, speech recognition, and natural language processing. However, these achievements are still a long way from true superintelligence.
LeCun believes that the arrival of superintelligence requires overcoming many technical difficulties. First of all, current AI algorithms still have significant limitations. For example, although deep learning algorithms have achieved good results in areas such as image recognition, they still struggle with complex problems. In addition, current AI systems have difficulty transferring knowledge across domains, meaning they struggle to apply knowledge learned in one domain to another. These issues need to be addressed in future research.
Second, the development of superintelligence also needs to address ethical and safety issues. As artificial intelligence technology advances, machines will become more and more involved in human life. How to ensure that these machines provide convenience to humans without posing a threat to human security and privacy is an urgent problem to be solved. In addition, the development of superintelligence may raise a series of ethical questions, such as whether robots should have rights and obligations. These issues require in-depth discussion and research in parallel with technological development.
Finally, LeCun believes that the arrival of superintelligence also requires overcoming social and cultural barriers. The development of AI technology will have a profound impact on human society, and that impact can be positive or negative. How to ensure that the development of AI technology benefits all of humanity, rather than exacerbating social inequality and conflict, is a global challenge.
To sum up, LeCun believes that superintelligence will not come anytime soon, because current AI technology still has many limitations and unsolved problems. At the same time, the development of superintelligence must also overcome ethical, safety, and socio-cultural challenges. Of course, this does not mean that we should be pessimistic about the development of AI technology. Instead, we should actively confront these challenges and strive to find solutions, so that the development of AI technology can benefit all of humanity.