
AI in Crisis: Call for Balancing Innovation and Social Responsibility


In a recent CACM article, Professor Moshe Vardi discusses how the computing landscape has evolved since the release of ChatGPT in 2022, highlighting AI's growing capabilities and the risks that accompany them (https://cacm.acm.org/opinion/is-computing-a-discipline-in-crisis/). A 2024 survey of AI researchers revealed widespread concern about risks such as misinformation, manipulation, authoritarian control, and economic inequality, yet no consensus on how to address them.


A stark divide is evident within the AI community between academia and industry. Academic researchers adhere to the ACM and IEEE Codes of Ethics (https://www.acm.org/code-of-ethics, https://www.ieee.org/about/corporate/governance/p7-8.html), which require computing professionals to consistently support the public good. Industry researchers, by contrast, are employed by for-profit corporations that tend to prioritize profit maximization over social responsibility. As Jeff Horwitz noted in his 2023 book about Facebook, Broken Code: “The chief executive, and his closest lieutenants have chosen to prioritize growth and engagement over any other objective.” Big Tech, comprising six corporations each with over one trillion dollars in market capitalization, commands research budgets that far exceed those of governments. This disparity in resources and priorities makes immediate collective action necessary.


Vardi emphasizes the need for collective action within this fragmented community and calls on professional societies to lead a unified conversation about AI's future, one that prioritizes social responsibility and fosters collaboration to mitigate potential harms.

Share your thoughts, comments, and suggestions.