Elvira Kadyrova in Tashkent
On 13-14 November 2025, the 25th OSCE-supported Central Asian Media Conference “Actioning Media Viability for Informed and Resilient Societies” (#CAMC2025) was held in Tashkent at the InterContinental Tashkent Hotel.
The event brought together dozens of participants—journalists, experts, lawyers, and civil society representatives from Central Asia (Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan) and Europe. It was organized by the Office of the OSCE Representative on Freedom of the Media (RFoM).
The conference aimed to identify the key challenges facing the media today amid digital transformation, the growing use of social media as a source of information, and geopolitical changes.
It is noteworthy that the Tashkent Forum not only marked its 25th anniversary, but also took place on the 50th anniversary of the OSCE’s founding document, the Helsinki Final Act. This document enshrines a shared commitment to peace, cooperation, and respect for human rights, including freedom of expression, media freedom, and the free flow of information across borders. And as noted during the opening session of the conference, this document remains as relevant today as it was half a century ago. Its spirit of dialogue, transparency, and accountability encourages us to respect, protect, and promote the media.
The conference opened with a high-level panel featuring Ambassador Jan Braathu (OSCE Representative on Freedom of the Media), Muzaffarbek Madrakhimov (Deputy Minister of Foreign Affairs of Uzbekistan), Ambassador Terhi Hakala (Special Envoy of the OSCE Chairperson-in-Office), and Sergey Sizov (OSCE Project Coordinator’s Office in Uzbekistan). The keynote address, “From value to values: viable media for trust, social cohesion and democracy,” was presented by Meera Selva, CEO of Internews Europe.
AI in Journalism and the Challenges of Big Tech
The first session, “Navigating challenges to public interest journalism in the age of Big Tech and AI,” proved one of the most discussed. Speakers from organizations including the Media Policy Institute, Maqsut Narikbayev University, the European University Institute, and Anhor.uz addressed the dilemma of AI: on the one hand, it accelerates editorial processes (content generation, translation, image processing); on the other, the region has no regulatory mechanisms for it. AI is increasingly used to create content, but without ethical frameworks, which threatens credibility and independence.
Notably, the use of AI ran as a background thread through most of the discussions in all sessions of the conference.
The evening session, “Ten tips on using AI tools responsibly in the newsroom” (moderated by Guido Keel, Senior Advisor at RFoM), was quite engaging and informative. The session presented technical solutions for using AI in newsrooms from Juan Carlos López Calvet (Schibsted News Media), Rustam Gulov (independent expert), and Fabian Lang (Deutsche Welle).
The discussion gave the audience a fresh perspective on AI, looking beyond its use for article generation.
Key points from the speakers:
- What surprised me most about AI? The rapid development of models: AI learns to program, generates poems, paintings, and videos from text, and even converses like an “assistant” (Fabian Lang, DW).
- What was disappointing? Bias, distortions, and errors: the quality of AI-generated material is still inferior to human thought and writing.
- What should never be allowed in a newsroom? Entering sensitive or private information into AI models.
- What is recommended for testing? Experts and programmers from Schibsted News Media and Deutsche Welle recommend using AI models to help authors analyze their content for usefulness and topic coverage. There are also a number of user-facing tools, such as chatbots that search for materials on a news site.
- What should everyone in the newsroom know? AI is just a tool in the hands of professionals; it’s always important to verify findings, disclose AI use to the audience, and create custom tools for journalists based on it.
The session’s overall conclusion: AI capabilities are growing daily, but its benefits should be harnessed without compromising ethics.
Legal protection of journalists
The second and third sessions combined the themes of legal protection and security. The sessions, titled “Media freedom policy and legislative frameworks to enable viable media in Central Asia” and “Safety of journalists as a prerequisite for free and independent journalism,” emphasized that the safety of journalists is a human right.
Among the recommendations voiced were the development of special laws to protect the rights of journalists, strengthening access to information, and combating online harassment (especially gender-based).
Eco-Reporting Guide
The morning began with parallel sessions: Session 4, “Engaging audiences: media literacy as a frontline response to disinformation,” and a presentation of the RFoM and UNESCO Handbook on Environmental Journalism (Session 5).
This practical guide for journalists, part of the UNESCO series on journalism education, was developed jointly by RFoM and UNESCO. It is designed to help journalists cover complex environmental topics in a way that engages the public, promotes transparency and accountability, and inspires constructive dialogue and action. The session covered key elements of the guide, including effective reporting techniques, storytelling methods tailored to environmental topics, and strategies for ensuring accuracy while engaging audiences. It also addressed the challenges of distortion and disinformation, including climate change denial, the impact of AI on environmental journalism, and editorial strategies for disseminating fact-based journalism on environmental topics.
Media Literacy as a Response to Disinformation
The session “Engaging audiences: media literacy as a frontline response to disinformation” focused on the role of media literacy as a crucial skill in an era of rapidly developing artificial intelligence and fake news. Here, media literacy is presented not simply as a tool for analyzing information, but as a fundamental freedom: the right to truthfulness in the face of information chaos.
In the context of rapid advances in AI, media literacy is becoming critical. It implies a conscious responsibility to information consumers—from journalists and content creators to ordinary users. Threats like deepfakes cannot be ignored: for example, the use of names and images of high-ranking officials in manipulated videos opens the door to the mass spread of disinformation. Citizens’ gullibility, especially in the digital environment, becomes a vulnerability, turning into an economic and social problem. The victims are most often young people who actively consume content on social media but do not always have the tools to verify it.
This issue goes beyond technology and touches on ethics: who is responsible for AI-generated content, and how can its misuse be prevented? To counter disinformation, broad societal engagement is needed, including the education sector. Building strong media literacy skills should be a priority, from school curricula to training.
However, the path to media literacy lies not only in developing critical thinking but also in digital and technological literacy. Users must understand how AI algorithms work, recognize signs of manipulation, and evaluate sources.
At the same time, it’s important to support quality journalism, even though it doesn’t always reach wide audiences. There’s a growing tendency for audiences to prefer information that aligns with their beliefs, which exacerbates polarization.
One effective solution is the creation of fact-checking platforms. Such initiatives, implemented jointly with the education system and the media, will help verify information in real time and prevent the spread of fake news.
“What Works for Media? Media Viability Manifesto in action”
As the session’s title suggests, the speakers described the “Media Viability Manifesto in action,” a strategic framework developed through extensive fieldwork and dialogue with media representatives worldwide (86 organizations from 55 countries contributed to its development).
The Manifesto pursues three goals: to provide conceptual clarity, strengthen strategic collaboration among multiple stakeholders, and align on the practical implementation of media viability objectives. The session explored how these values can be translated into practical strategies in the Central Asian context. Speakers and participants discussed ways to adapt the Manifesto to local realities, whether by strengthening business models, promoting cross-sector collaboration, or addressing policy challenges.
The speakers noted that the manifesto’s global solutions require adaptation to the national specifics of countries in the region.
In the context of business models, questions were raised about the financial security of media outlets, which can be ensured not only through grant support from international partners but also through private sector participation (for example, through the purchase of media services).
The thread of discussion again led the speakers to the ethics of using AI.
A small example of AI’s negative impact on media financial sustainability: when users enter search queries into Google, they often receive an AI-generated summary drawn from various sources. Because this answer satisfies their needs, users do not click through to the original source sites. Click-through rates (CTR) on media sites consequently decrease, which directly reduces advertising revenue and ultimately undermines financial sustainability.
Results and Prospects
In short, the conference emphasized that viable media is key to informed societies.
OSCE Representative on Freedom of the Media, Ambassador Jan Braathu, summing up the forum, emphasized that media viability faces a landscape of unprecedented complexity: technologies are changing every aspect of how stories are produced and how they reach audiences.
The proliferation of AI-powered content, the dominance of global tech platforms, and the algorithmic shaping of public attention all create pressures that threaten the viability of public interest journalism.
However, the CAMC conference also demonstrated broader opportunities: opportunities for learning, collaboration, and development.
The Ambassador called for the conference to be seen not only as a source of new knowledge and practical ideas, but also as a renewed commitment to supporting independent, resilient and trustworthy media across the region.
“As we move forward, we will prepare a set of key takeaways from this conference. They will serve as a roadmap to pursue the implementation of the ideas, strategies, and frameworks we have explored together,” Ambassador Braathu said. /// nCa, 15 November 2025