This Horizon Scenario Collection was developed as part of a STOA-commissioned foresight study on "Open Science and the Effects of Generative AI in Scientific Exchange in the EU". The scenarios presented here explore plausible futures at the intersection of Open Science and generative artificial intelligence, examining how these forces might reshape scientific exchange by the mid-2030s.
Each scenario was authored by domain experts representing diverse stakeholder perspectives across the research ecosystem. Contributors include researchers, policy advisors, ethics specialists, representatives from publishing and infrastructure organisations, funders, and journalists. These experts were invited to envision how current trends in Open Science and generative AI might unfold, considering both opportunities and risks, intended and unintended consequences.
The scenarios range from optimistic visions of enhanced collaboration and efficiency to cautionary tales of fragmentation, epistemic crisis, and inequity. Together, they form a rich tapestry of possible futures designed to inform evidence-based policy making and strategic planning at the European and global level.
Contributing authors: Julia Priess Buchheit, Ana Marušić, Alex Freeman, Gustav Nilsonne, Monya Baker, Katharina Miller, Alexei Grinbaum, Timothy M. Errington, Marie Alavi, Graham Smith, Mariette van den Hoven, and Aneta Pazik-Aybar.
Disclosure: This annexe is a collaborative effort by many co-authors. It was written at a time when GenAI was already embedded in many of our working processes, so we do not list every individual tool that was used. Instead, we use the Peters (2024) visuals to indicate where no artificial intelligence was used. The whole text was proofread and assessed by all co-authors before publication.

Peters, M. (2024). Transparent Use of Artificial Intelligence. https://mpeters.uqo.ca/logos-ai-en-peters-2023
The small logo at the beginning of each scenario indicates how it was written.
By the mid-2030s, peer review has become one of the first core mechanisms of scientific exchange to be fundamentally reshaped by the combined forces of Open Science and generative artificial intelligence. Traditional peer review—slow, opaque, and unevenly distributed—proved increasingly incompatible with AI-augmented research practices. In response, publicly funded research systems began to redesign peer review as a shared, open infrastructure embedded within the wider Open Science ecosystem, rather than a journal-based service.
Researchers no longer submit a standalone manuscript but an open research package comprising data, code, methodological documentation, and a narrative interpretation, often drafted with AI assistance and curated by humans. These packages are deposited in interoperable open repositories and become immediately accessible. Scientific exchange thus shifts from circulating finished texts to exposing research processes.
Before human judgment is applied, certified generative AI systems conduct a first layer of review, assessing statistical robustness, methodological consistency, documentation quality, and reproducibility, while mapping the work against the open literature. The result is a transparent, publicly visible diagnostic report that informs, but does not replace, human review. AI defines the conditions for meaningful human evaluation rather than making acceptance decisions.
Human peer review follows as a second layer, drawing on open, community-curated reviewer pools. Reviewers focus on interpretation, epistemic judgement, ethical considerations, and societal relevance, engaging both with the research package and the AI report. Reviews are typically signed, openly accessible, and updateable, thereby reinforcing accountability and recognition of reviews as scholarly contributions.
Scientific exchange becomes faster and more iterative. Review is continuous rather than a single decisive event, allowing corrections, updates, and partial re-evaluations as research evolves. Trust shifts away from journal prestige towards transparent workflows, traceable human–AI interactions, and open review histories. Integrity is grounded in process transparency rather than symbolic authority.