
For many in the research community, it has been difficult to be optimistic about the impacts of artificial intelligence.
As authoritarianism grows around the world, AI-generated “slop” is crowding out legitimate media, while AI-generated deepfakes spread misinformation and amplify extremist messages. Amid intractable conflicts, AI is making warfare more precise and more deadly. AI companies are exploiting data labelers in the Global South and profiting from content creators around the world by using their work without license or compensation. The industry, with its huge energy demands, is also straining an already fragile environment.
Meanwhile, particularly in the United States, public investment in science appears to be redirected toward and concentrated on AI at the expense of other fields. And big tech companies are tightening their control over the AI ecosystem. In these ways and others, AI is making everything worse.
But this is not the whole story. Without denying the harm AI is doing to humanity, none of us should accept that harm as inevitable, least of all those in positions to influence science, government, and society. Scientists and engineers can steer AI onto a more beneficial path. Here is how.
Academia’s perspective on AI
A Pew study released in April found that 56 percent of AI experts (authors and presenters of AI-related conference papers) predict that AI will have a positive impact on society. That optimism does not extend to the scientific community at large: a 2023 survey of 232 scientists by the Center for Science, Technology and Environmental Policy Studies at Arizona State University found that concern about the use of generative AI in daily life outweighed enthusiasm by roughly three to one.
We have encountered this sentiment again and again. Our careers of diverse applied work have brought us into contact with many research communities: privacy, cybersecurity, physics, drug discovery, public health, public interest technology, and democratic innovation. In all of these areas, we have found strongly negative sentiment about the impacts of AI. The sentiment is so pronounced that we are often asked to play the role of the AI optimist, even though we spend most of our time writing about the need to improve the structures of AI development.
We understand why these audiences see AI as a destructive force, but this negativity raises a different concern: that those with the ability to direct AI’s development and shape its impact on society will write it off as a lost cause and opt out of the process.
Elements of a positive approach to AI
Many have argued that, to turn the tide, climate action needs to clearly chart a path toward positive outcomes. In the same way, scientists and technologists should anticipate, warn about, and help mitigate the potential harms of AI, but they should also shed light on the ways the technology can be used for good and inspire public action toward those ends.
There are myriad ways to leverage and reshape AI to improve people’s lives, distribute power rather than concentrate it, and even strengthen democratic processes. Many examples have come from the scientific community and should be celebrated.
Some examples: AI is overcoming communication barriers across languages, including in low-resource contexts such as marginalized sign languages and indigenous African languages. It is helping policymakers incorporate the viewpoints of diverse constituents through AI-assisted deliberation and legislative engagement. Large language models can scale individualized conversations that address climate-change skepticism, disseminating accurate information at a critical moment. National laboratories are building AI foundation models to accelerate scientific research. And across medicine and biology, machine learning is solving scientific problems such as predicting protein structures to aid drug discovery, work recognized with a Nobel Prize in 2024.
Although each of these applications is nascent and certainly imperfect, they all demonstrate that AI can be used to advance the public interest. Scientists should embrace, support, and expand such efforts.
A call to action for scientists
In our new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, we describe four key actions for policymakers committed to advancing AI toward the public good.
These apply to scientists as well. First, researchers should work to reform the AI industry so that it becomes more ethical, equitable, and trustworthy. We should collectively develop ethical standards for research that advances and applies AI, and we should support and draw attention to AI developers who adhere to those standards.
Second, we should resist harmful uses of AI by documenting negative applications and calling out inappropriate uses.
Third, we should responsibly use AI, applying its capabilities to improve society and people’s lives and to benefit the communities we serve.
And finally, we must advocate for the renewal of institutions to prepare them for the impacts of AI; universities, professional societies, and democratic organizations are all vulnerable to disruption.
Scientists have a special privilege and responsibility: we are close to the technology itself and therefore well positioned to influence its trajectory. We must work to create the AI-enabled world we want to live in. As the historian Melvin Kranzberg observed, technology “is neither good nor bad; nor is it neutral.” Whether the AI we create harms or benefits society depends on the choices we make today. But we cannot build a positive future without a vision of what it looks like.

