In the heart of Vatican City, a significant discussion unfolded on February 14, 2025, as Pope Francis’ adviser on artificial intelligence (AI), Father Paolo Benanti, voiced pressing concerns about the unregulated use of AI technology. The event, jointly organized by Australia’s embassies to the Holy See and to Italy, highlighted the potential dangers of AI, including the creation of bioweapons and the exacerbation of income inequality.
Father Benanti, a respected member of the Vatican’s Pontifical Academy for Life and a moral theology professor, joined a panel of experts to delve into the ethical and human rights challenges posed by AI. This gathering followed the Artificial Intelligence Action Summit in Paris, where over 60 countries, including the Vatican, pledged to develop AI ethically and transparently. Notably, the U.S. abstained from signing the final agreement.
The panel featured prominent figures such as Diego Ciulli from Google, Professor Edward Santow from Australia’s AI Expert Group, Professor Luigi Ruggerone from Intesa Sanpaolo Innovation Center, and Rosalba Pacelli, a deep learning expert. Together, they explored AI’s impact on global politics, the economy, and social interactions, emphasizing the need for ethical considerations in AI development.
Father Benanti warned that open-source AI models without controls pose significant risks, potentially enabling the development of harmful technologies like bioweapons. The panelists echoed concerns about AI’s role in widening the gap between rich and poor, with Ciulli highlighting AI’s influence on wealth generation and opportunity distribution.
Speaking from an economic perspective, Ruggerone noted that despite AI-driven productivity gains, wage increases remain elusive for most workers. Pacelli stressed the importance of collaborative regulation, pointing out the dangers of biased data selection in AI tools, which could harm marginalized communities.
The discussion also touched on the Vatican’s document, Antiqua et Nova, which underscores the human role in upholding principles of truth, justice, and peace in the face of AI advancements. Santow, a legal expert, emphasized the necessity of human accountability in AI-driven decisions, particularly in legal contexts.
Key Takeaways:
- Unregulated AI poses risks of bioweapons and increased inequality.
- Ethical AI development requires global cooperation and transparency.
- AI’s economic impact may not translate to equitable wage growth.
- Biased AI tools can harm marginalized groups, necessitating careful regulation.
- Human accountability remains crucial in AI decision-making processes.
This dialogue serves as a crucial reminder of the responsibilities accompanying AI advancements. As technology evolves, so must our ethical frameworks to ensure AI serves humanity positively and equitably.