Microsoft AI chief warns: Studying consciousness in AI is ‘dangerous’
The rapid development of AI is generating both excitement and concern around the world. A recent warning from Microsoft's AI chief Mustafa Suleyman has given the debate a new twist: he said that researching AI consciousness could be dangerous.
According to Suleyman, attempting to give AI consciousness, or even studying the question, could invite unnecessary controversy and risky applications.
According to international media reports, Suleyman said that treating AI as conscious could be dangerous, because it encourages people to see AI as a living being, or as an entity with rights. That, in turn, raises legal, ethical, and policy questions: if AI is considered conscious, debates arise over whether it should be granted rights comparable to human rights. Such debates risk dragging scientific research into philosophical disputes and diverting attention from actual technological development.
He argued that framing AI research around consciousness could confuse the field itself, as questions about AI's status and existence would crowd out questions about how it should be used. There is also the possibility that consciousness research could be misused by certain groups or nations for military or surveillance purposes.
The concept of consciousness in AI has long been debated in the scientific community. Some experts have speculated that advanced AI systems could one day exhibit behavior resembling consciousness, but many regard the question as more philosophical than strictly scientific.
Globally, the debate over AI consciousness also reflects intensifying competition among tech companies, with firms such as OpenAI, Google DeepMind, and Microsoft taking different approaches to the issue.