A charter on the use of artificial intelligence has been created in Paris: ten commandments for journalists
The Paris Charter aims to safeguard the right to information by establishing guidelines for the ethical use and development of AI systems in news and information media. It is the product of three months of intense collaboration between AI and journalism specialists, media representatives, and journalism support organizations. The drafting committee, composed of 32 distinguished figures, was chaired by Nobel laureate Maria Ressa, with RSF coordinating the collaborative process.
We, as representatives of the media and journalism community, acknowledge the transformative implications of artificial intelligence (AI) for humanity. We champion global cooperation to ensure AI upholds human rights, peace and democracy, and aligns with our shared aspirations and values. The history of news and information is intertwined with technological progress. AI, spanning from basic automation to analytical and creative systems, introduces a new category of technologies with an unparalleled capacity to intersect with human thought, knowledge, and creativity. It represents a considerable shift in information gathering, truth seeking, storytelling, and the dissemination of ideas. As such, it will profoundly alter the technical, economic and social conditions of journalism and editorial practice.
AI systems have the potential, depending on their design, governance and application, to revolutionize the global information landscape. However, they also present a structural challenge to the right to information. The right to information flows from the freedom to seek, receive and access reliable information. It is rooted in the international legal framework, including the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Partnership for Information and Democracy.
This right underpins the fundamental freedoms of opinion and expression. The social role of journalism and media outlets, serving as trustworthy intermediaries for society and individuals, is a cornerstone of democracy and enhances the right to information for all. AI systems can greatly assist media outlets in fulfilling this role, but only if they are used transparently, fairly and responsibly in an editorial environment that staunchly upholds journalistic ethics. In affirming these principles, we uphold the right to information, champion independent journalism, and commit to trustworthy news and media outlets in the era of AI.
1. JOURNALISM ETHICS GUIDE THE WAY MEDIA OUTLETS AND JOURNALISTS USE TECHNOLOGY
Media outlets and journalists use technologies that enhance their capacity to fulfill their primary mission: ensuring everyone’s right to quality, trustworthy information. The pursuit and achievement of this goal should drive their choices regarding technological tools. The use and development of AI systems in journalism must uphold the core values of journalistic ethics, including truthfulness and accuracy, fairness, impartiality, independence, non-harm, non-discrimination, accountability, respect for privacy and for the confidentiality of sources.
2. MEDIA OUTLETS PRIORITIZE HUMAN AGENCY
Human decision-making must remain central to both long-term strategies and daily editorial choices. The use of AI systems should be a deliberate and well-informed decision made by humans. Editorial teams must clearly define the goals, scope, and usage conditions for each AI system. They must ensure cross-cutting and continuous oversight of the impacts of deployed AI systems, ensure their strict compliance with their usage framework, and retain the ability to deactivate them at any time.
3. AI SYSTEMS USED IN JOURNALISM UNDERGO PRIOR, INDEPENDENT EVALUATION
The AI systems used by the media and journalists should undergo an independent, comprehensive, and thorough evaluation involving journalism support groups. This evaluation must robustly demonstrate adherence to the core values of journalistic ethics. These systems must respect privacy, intellectual property and data protection laws. A clear accountability framework should be established for any failure to meet these requirements. Systems that operate predictably and can be readily explained are preferred.
4. MEDIA OUTLETS ARE ALWAYS ACCOUNTABLE FOR THE CONTENT THEY PUBLISH
Media outlets assume editorial responsibility, including in their use of AI in gathering, processing, or disseminating information. They are liable and accountable for every piece of content they publish. Responsibilities tied to the use of AI systems should be anticipated, outlined, and assigned to humans to ensure adherence to journalistic ethics and editorial guidelines.
5. MEDIA OUTLETS MAINTAIN TRANSPARENCY IN THEIR USE OF AI SYSTEMS
Any use of AI that has a significant impact on the production or distribution of journalistic content should be clearly disclosed and communicated, alongside the relevant content, to everyone receiving the information. Media outlets should maintain a public record of the AI systems they use and have used, detailing their purposes, scopes, and conditions of use.
6. MEDIA OUTLETS ENSURE CONTENT ORIGIN AND TRACEABILITY
Media outlets should, whenever possible, use state-of-the-art tools that guarantee the authenticity and provenance of published content, providing reliable details about its origin and any subsequent changes it may have undergone. Any content not meeting these authenticity standards should be regarded as potentially misleading and should undergo thorough verification.
7. JOURNALISM DRAWS A CLEAR LINE BETWEEN AUTHENTIC AND SYNTHETIC CONTENT
Journalists and media outlets strive to ensure a clear and reliable distinction between content derived from the physical capture of the real world (such as photographs, and audio and video recordings) and that which is created or significantly altered using AI systems. They should favor the use of authentic footage and recordings to depict actual events. Media outlets must avoid misleading the public in their use of AI technologies. In particular, they should refrain from creating or using AI-generated content mimicking real-world captures and recordings or realistically impersonating actual individuals.
8. AI-DRIVEN CONTENT PERSONALIZATION AND RECOMMENDATION UPHOLDS DIVERSITY AND THE INTEGRITY OF INFORMATION
In media outlets, the design and use of AI systems for automatic content personalization and recommendation should be guided by journalistic ethics. Such systems should respect information integrity and promote a shared understanding of relevant facts and viewpoints. They should highlight diverse and nuanced perspectives on various topics, fostering open-mindedness and democratic dialogue. The use of such systems must be transparent, and users should, whenever possible, be given the option to disable them to ensure unfiltered access to editorial content.
9. JOURNALISTS, MEDIA OUTLETS AND JOURNALISM SUPPORT GROUPS ENGAGE IN THE GOVERNANCE OF AI
As essential guardians of the right to information, journalists, media outlets and journalism support groups should play an active role in the governance of AI systems. They should be included in any global or international institutional oversight of AI governance and regulation. They should ensure that AI governance respects democratic values, and that diversity of people and cultures is reflected in the development of AI. They must remain at the forefront of knowledge in the field of AI. They are committed to examining and reporting on the impacts of AI with accuracy, nuance, and a critical mind.
10. JOURNALISM UPHOLDS ITS ETHICAL AND ECONOMIC FOUNDATION IN ENGAGEMENTS WITH AI ORGANIZATIONS
Access to journalistic content by AI systems should be governed by formal agreements that ensure the sustainability of journalism and uphold the long-term shared interests of the media and journalists. AI system owners must credit sources, respect intellectual property rights, and provide just compensation to rights holders. This compensation must be passed on to journalists through fair remuneration. AI system owners are also required to maintain a transparent and detailed record of the journalistic content used to train and feed their systems. Rights holders must make the repurposing of their content by AI systems conditional on respect for the integrity of the information and the fundamental principles of journalistic ethics. Journalists and media outlets collectively call for AI systems to be designed and used in such a way as to guarantee high-quality, trustworthy and pluralistic information.