Section outline

  • Powered by the GraspOS project, this course addresses critical topics for advancing a responsible research assessment (RRA) system that fully embraces Open Science (OS) principles, including:  

    • Open infrastructures for responsible research assessment
    • Transparency and inclusivity of the assessment processes
    • Recognition of contributions to Open Science 

    This training catalogue aims to provide stakeholders with a guide to indicators, tools, services, and infrastructure, to further assist their use in implementing OS-aware RRA approaches. 

    For each session, you can access the webinar videos, links to the websites, presentation materials, links to related articles, and more.

    • Introduction to Responsible Research Assessment (RRA)

    • What is Research Assessment?

      Research assessment systematically evaluates the quality, impact, rigor, and significance of scholarly work. It significantly influences crucial decisions related to funding, hiring, promotions, tenure, and institutional strategies. Traditional approaches primarily utilize quantitative metrics, such as Journal Impact Factors and citation counts, to gauge research success.

      Why Reform is Necessary

      Traditional research assessment practices tend to prioritize publication quantity and journal prestige, often neglecting qualitative measures and diverse scholarly outputs such as datasets, software, community engagement, mentoring, and interdisciplinary collaboration. Such practices can lead researchers to prioritize quantity over genuine quality and societal impact, thereby undermining innovation, fairness, inclusivity, and transparency in scholarly evaluation.

    • Multiple global initiatives have driven significant advancements in responsible research assessment practices:

      A. San Francisco Declaration on Research Assessment (DORA, 2013)

      • Advocates against using Journal Impact Factors for individual researcher evaluations.

      • Promotes evaluation of research based on the actual content and diversity of outputs (e.g., software, data, mentorship).

      • Emphasizes transparency, inclusivity, and a holistic evaluation approach.

      B. Leiden Manifesto (2015)

      • Presents ten guiding principles for the responsible use of research metrics.

      • Reinforces the importance of expert judgment alongside appropriate quantitative metrics.

      • Encourages transparency, contextual sensitivity, and regular assessment method evaluations.

      C. Coalition for Advancing Research Assessment (CoARA, 2022)

      • Establishes a global coalition committed to systemic reform of research assessment.

      • Balances qualitative insights with responsibly applied quantitative indicators.

      • Emphasizes accountability, transparency, inclusivity, and ethical oversight.

      CoARA has established 13 operational working groups tackling the reform of research assessment from different perspectives. These include working groups on open infrastructures, multilingualism, responsible use of metrics, artificial intelligence and ethics, narrative CVs, and more. One of these groups specifically focuses on developing principles, frameworks, and tools for Open Infrastructures.

      Open Infrastructures are not merely supportive tools; they are the backbone of a modern research assessment ecosystem. By offering transparent, community-governed, interoperable systems, they create the foundational layer that enables all other reforms to take root and scale. Without open infrastructures, efforts to adopt narrative CVs, multilingual evaluation practices, responsible metrics, and ethical AI would remain fragmented and difficult to operationalize. Their role is critical in ensuring that assessment practices are FAIR (Findable, Accessible, Interoperable, Reusable), trustworthy, and resilient over time.

      Open Infrastructures: A Foundation for Implementing RRA

      Open infrastructures (OIs) are essential for effectively implementing responsible research assessment, aligning with global reform initiatives like CoARA, DORA, and the Leiden Manifesto. OIs offer transparent, inclusive, interoperable, and ethically grounded environments to ensure a fairer and more holistic approach to research evaluation. The CoARA Working Group “Towards Open Infrastructures for Responsible Research Assessment” has mapped the frameworks and principles of OIs fit for RRA (Open Infrastructures for Responsible Research Assessment: Principles and Framework):

      Core Characteristics of Open Infrastructures

      • Technical Robustness: Ensures data integrity, traceability, reproducibility, interoperability, and scalability. Transparent integration of advanced technologies like AI, Explainable AI (XAI), and NLP enhances accuracy and fairness.

      • Operational Efficiency: Offers streamlined workflows to reduce administrative burden, while valuing a wide range of scholarly contributions. Continuous capacity-building maintains stakeholder engagement and proficiency.

      • Community-Centred Governance: Promotes inclusive governance with active participation from diverse stakeholders, ensuring transparency, adaptability, and sustainability.

      • Ethical and Inclusive Practices: Maintains fairness, ethical data management, transparent technology application, and robust ethical oversight to guarantee impartial assessments.

      Implementation: Strategy and Practical Steps

      To effectively implement responsible research assessment using open infrastructures:

      • Develop clear, institution-specific action plans guided by the CoARA framework.

      • Offer regular training and resources to stakeholders to ensure understanding and active participation.

      • Establish periodic evaluation cycles, incorporating community feedback to ensure ongoing improvement and adaptability.

      Benefits of Adopting Responsible Research Assessment

      Adopting responsible research assessment through open infrastructures:

      • Promotes genuine innovation, interdisciplinary collaboration, and societal impact.

      • Strengthens transparency, fairness, equity, and inclusivity in research evaluation processes.

      • Enhances institutional accountability and builds trust among researchers and stakeholders.

      Conclusion and Call to Action

      Implementing responsible research assessment practices through open infrastructures and aligning with international reform initiatives is essential for fostering inclusive, fair, and impactful scholarly environments. Stakeholders must actively engage and commit to these transformative reforms to ensure ongoing excellence, integrity, and societal relevance in research.

    • From Transition to Adoption — Practical Steps and Best Practices

    • Overview

      This module guides institutions and stakeholders through the transition process from traditional assessment to Responsible Research Assessment (RRA) using Open Infrastructures.

      Transition Roadmap

      • Assess Current Practices: Conduct an institutional audit of existing assessment procedures, metrics, and tools.

      • Define Objectives: Set clear goals for adopting RRA (e.g., improving transparency, recognizing diverse outputs).

      • Engage Stakeholders: Involve researchers, administrators, and funders early to build buy-in and co-create strategies.

      Adoption Steps

      1. Build on and Follow Related Policies: Draft or revise assessment policies to reflect RRA principles and frameworks.

      2. Select or Build Open Infrastructures: Identify systems meeting CoARA criteria or develop internal systems aligned with openness and inclusivity.

      3. Pilot Implementation: Run small-scale pilots to test workflows, data quality, and user experience.

      4. Training and Capacity Building: Provide training on new tools, open metrics, and narrative CVs.

      5. Evaluate and Scale: Gather feedback, measure impact, and refine processes before institution-wide rollout.

      Best Practices

      • Transparency: Communicate policies and criteria clearly.

      • Inclusivity: Recognize contributions beyond publications.

      • Flexibility: Adapt frameworks to discipline-specific contexts.

      • Continuous Improvement: Establish cycles of review, feedback, and refinement.

      Conclusion

      A deliberate, participatory approach ensures sustainable adoption of responsible research assessment, empowering institutions to drive excellence and fairness in research evaluation.

    • GraspOS sets out the ambitious goal to develop, assess and put into operation an open and trusted federated infrastructure for next generation research metrics and indicators by offering data, tools, services and guidance to support and enable policy reforms for research assessment at researcher (individual/group), institutional, organisational and country level. 

      The SCOPE+i Framework is designed to support the transition to Responsible Research Assessment (RRA), with a particular emphasis on contributions to OS. By integrating process resources and digital services into the SCOPE Framework by The International Network of Research Management Societies (INORMS), the SCOPE+i Framework combines the flexibility required to plan and conduct RRA in diverse assessment contexts with practical tools for managing the complexities of research assessment reform.

      The assessment infrastructure, the SCOPE+i Framework, has two main components:

      • assessment process resources, such as the SCOPE+i Resources, which offer guidance for implementing key elements of Responsible Research Assessment (RRA), and
      • digital services, which enable collaborative collection and sharing of assessment plans and documentation throughout the entire assessment process.

      Learn what the SCOPE+i Resources are: https://openplato.eu/mod/page/view.php?id=2664

      The SCOPE+i Services consist of two digital services:

      • Assessment Protocol Portfolio (APP) which brings together essential information for assessment planning – such as a readiness self-assessment, values statement, and purpose statement – along with the assessment protocol that outlines the assessment approach
      • Openness Profile – a portfolio for making visible one’s contributions to Open Science (OS).

      These services are part of the GraspOS Open Infrastructure and are technologically underpinned by the Research Activity Identifier, also known as RAiD.

      As part of the broader GraspOS infrastructure, the SCOPE+i services offer three key advantages:

      • Collaborative and Inclusive – Open to all assessment participants.
      • Comprehensive Support – Accommodates all aspects of assessment planning, documentation, and final protocol development.
      • Interoperable and Extensible – Built on a robust metadata schema and API, enabling direct data transfer to downstream analytic services. Additionally, individual instances of these Assessment Protocol Portfolios and Openness Profiles can be hierarchically linked to each other.

      Assessment Protocol Portfolio

      The SCOPE+i Assessment Protocol Portfolio (APP) supports the planning and documentation of research assessment by serving two key functions: it records the agreed-upon approach for a specific assessment event and provides a means to register the assessment protocol after the event concludes. It acts as a shared resource for conducting the assessment and documenting its outcomes.

      An APP is a collaborative, multi-actor digital object that brings together essential information for assessment planning—such as a readiness self-assessment, values statement, and purpose statement—along with the assessment protocol that outlines the assessment approach.

      During the assessment, the portfolio is accessible ("locally open") to all participants involved in the event. It can also be made publicly available afterward, balancing transparency with the need for privacy during the process. This approach ensures consistency and clarity for both evaluators and those being assessed, while protecting sensitive information during the event.

      After the assessment concludes, the APP can be archived for historical reference. Additionally, the assessment protocol itself—separate from any privacy-related content—can be published in the Assessment Protocols Registry. This allows the broader community to learn from the protocol's design in relation to its local context and stated purpose.

      The Assessment Protocols Registry serves as a shared knowledge base to inform and inspire the design of future assessment events.

      Openness Profile 

      The primary goal of the Openness Profile is to make Open Science (OS) activities visible as a distinct and independent information entity, thereby promoting a more diverse and comprehensive consideration of OS in research and related assessment processes.

      The Openness Profile supports the diversity of Open Science contributions by allowing flexible input of various types of content, including both quantitative and qualitative information. Qualitative information is captured through narratives, which enable structured, evidence-based input to support research assessment.

      A dedicated narrative CV template is being developed within the GraspOS project and will be available for use during the piloting phase of the Openness Profile. These narratives can be supported by the Openness Profile, where relevant evidence-based input is included. Quantitative information refers to data that can be measured or counted using numerical values.

      The Openness Profile can also be used in assessment contexts, either directly or as a general-purpose portfolio. In direct use cases, OS contributions recorded in an Openness Profile can be integrated into local assessment infrastructures. Alternatively, the Openness Profile can function as a portfolio of all relevant contributions—OS-related or otherwise—for a specific assessment event.

      In both cases, Assessment Protocol Portfolios and Openness Profiles can be hierarchically linked. For instance, linked information may share a common landing page and be accessible through an API.
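      As a purely illustrative sketch, the hierarchical linking described above could be modelled as follows. The class and field names are invented for illustration; the actual GraspOS services are built on RAiD identifiers and their own metadata schema and API, which are not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of the hierarchical linking described above;
# the real GraspOS services use RAiD identifiers and their own schema.

@dataclass
class OpennessProfile:
    owner: str
    contributions: list = field(default_factory=list)  # OS contributions, quantitative or narrative

@dataclass
class AssessmentProtocolPortfolio:
    title: str
    landing_page: str  # linked records can share a common landing page
    linked_profiles: list = field(default_factory=list)

    def link(self, profile: OpennessProfile) -> None:
        """Attach an Openness Profile below this portfolio in the hierarchy."""
        self.linked_profiles.append(profile)

app = AssessmentProtocolPortfolio("2025 hiring round", "https://example.org/app/42")
app.link(OpennessProfile("Researcher A", ["open dataset", "peer review"]))
print(len(app.linked_profiles))  # 1
```

      In a real deployment, such linked records would be exposed through the services' API rather than held in local objects.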


      Please note: The SCOPE+i Framework was earlier called "Open Science Assessment Framework (OSAF)".

    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

      In the GraspOS project, the Open Science Assessment Framework (OSAF) is focused on enabling Responsible Research Assessment (RRA), as it forms the basis of assessing Open Science. The OSAF has three elements: the SCOPE+i method (SCOPE plus infrastructure), digital Assessment Portfolios, and an Assessment Registry. In this webinar we will introduce the OSAF and provide a concrete use case of the Assessment Portfolio (also known as the Openness Profile). 

      In this webinar you will hear about the GraspOS project’s work on the OSAF. The OSAF will enable a system based on rewards and recognition, using a new generation of qualitative and quantitative metrics and indicators. You will hear about RRA and how it relates to Open Science, and about the various frameworks that guide our work, e.g. OS-CAM, Nor-CAM, OPUS RAF and SCOPE. The main goal of the OSAF is to translate principles into practice with the help of the CoARA and SCOPE frameworks, which provide practical guidelines on points to consider when planning to implement OS research assessment into an organisation’s workflow. Lastly, you will learn how to apply the SCOPE+i method, Assessment Portfolio and Assessment Registry in different assessment event phases. One part of the Assessment Portfolio consists of an Openness Profile, which documents relevant contributions to Open Science that are essential to research but still unrecognised by current research evaluation practice. To showcase the Openness Profile in concrete terms, we will introduce the work from the viewpoint of a pilot: the Finnish research.fi service.

      Presentation on the Open Science Assessment Framework (OSAF), covering its agenda, key concepts and the Openness Profile use case. Tatum, C., Nordling, J., & Anli, Z. (2024, March 7). GraspOS Webinar: The Open Science Assessment Framework. Zenodo. https://doi.org/10.5281/zenodo.10794055. All versions can be cited using the DOI https://doi.org/10.5281/zenodo.10794054

      Recording of the Open Science Assessment Framework (OSAF) webinar held on 7 March 2024.

      Objectives & preferred outcomes:

      • Disseminate concept

      • Concrete use case

      • Feedback on usability

      Please note: The term "OSAF" was later changed to "SCOPE+i Framework".

    • Explore

      1. the presentation slides: Introduction to SCOPE+i Resources.pdf
      2. the description of the 16 SCOPE+i Resources

      Get to know

      • What the SCOPE+i Resources are
      • Why to familiarize yourself with and adopt the SCOPE+i Resources
      • The basics of the SCOPE Framework
      • How the SCOPE+i Resources are aligned with the SCOPE phases
      • What the contents of the SCOPE+i Resources are
      • Where you can find the SCOPE+i Resources
      • Re-use permissions of the SCOPE+i Resources

       

      Learning objectives

      After completing this course section, the learner

      • understands the practical purpose of the five SCOPE phases 
      • applies and disseminates the SCOPE Framework in RRA processes 
      • gets an idea of the complexities of RRA reform 
      • is aware of the variety of OS contributions and activities 
      • adapts SCOPE+i Resources (e.g., templates) to fit their organization's needs 
      • utilizes, applies, and adapts SCOPE+i Resources (e.g., guides, checklists, and the toolbox) in all phases of RRA processes and in any training and/or orientation sessions they are involved in 

       

      Supplementary material 

       

    • Presentation slides introducing the SCOPE+i Resources

    • Short summary of the contents of the 16 SCOPE+i Resources, supplemented with links to the full texts in Zenodo

    • The EOSC Finnish Forum webinar on 24 March 2025 looked at how the solutions developed in the GraspOS project advance open-science-aware responsible research assessment. Learn how the pilot work related to the Research.fi service, as well as resources to support assessments and a new hybrid indicator, can benefit Finnish organisations, especially from the point of view of open science practices.

      In this webinar the Finnish partners of GraspOS, CSC - IT Center for Science, University of Eastern Finland and the Federation of Finnish Learned Societies are joining forces and presenting their work done in these contexts.

      Agenda:

      • Introduction to GraspOS / Laura Himanen, CSC - IT Center for Science (5 min)
      • Pilot work related to Research.fi (OpenCitations and Openness Profile) / Joonas Nikkanen, CSC - IT Center for Science (15 min)
      • Developing resources to support responsible research assessments as part of the Open Science Assessment Framework (OSAF) / Elina Koivisto, Tiina Sipola, Federation of Finnish Learned Societies (15 min)
      • Hybrid Indicator for the Evaluation of Societal Interactions of Open Science / Katri Rintamäki, Heikki Laitinen, Anni Tarkiainen, University of Eastern Finland (15 min)
      • Q&A (15 min)

      GraspOS Benefits to Finnish Organisations (national event): presentations (PDF) and webinar recording

    • Content pending from giulia b+ serafeim and thanasis 

    • The course materials are designed to address the needs of different audiences, including researchers, Research Performing Organisations (RPOs), and Research Funding Organisations (RFOs). Depending on the focus of each session, the content may specifically target one or more of these groups. Participants are advised to refer to the sections of interest.

    • In academia, the traditional emphasis on publications and their impact has often overshadowed other equally important activities of researchers. However, a paradigm shift in research culture and assessment is underway: Policymakers and the research community are reevaluating how we acknowledge researchers' contributions, aiming to recognize endeavors beyond conventional metrics.

      BIP! Scholar aims to facilitate this transition by offering a tailored platform for researchers to spotlight their work comprehensively and put it into the correct context. Researchers can create BIP! Scholar profiles from the contents of their ORCID records (e.g., publications, datasets) and enrich them with valuable additional information such as various types of indicators, contribution roles, related topics, and relevant narratives. Researchers can choose to make their profiles public or keep them private (and use them for self-monitoring purposes).  

      As regards indicators, a wide range of researcher metrics, calculated using data gathered from the OpenAIRE Graph, captures a variety of aspects ranging from researchers’ productivity to their impact and compliance with Open Science practices. Regarding roles and topics, the service supports annotating research works with classes from the CRediT taxonomy and Wikidata concepts, respectively. Finally, regarding narratives, BIP! Scholar supports the creation of textual descriptions of interesting lines of work, offering more context about the respective research outputs, such as the connections among them or their motivation and impact. Widely recognized templates for narrative CVs will be supported soon. 

      BIP! Scholar profiles have an interesting dynamic feature: the viewer can select to display tailored views of a researcher’s profile exploring particular perspectives of their career. For example, it is possible to examine only those works that are relevant to particular topics of interest or those for which the researcher contributed with specific types of roles. Each time, the respective indicators are updated so that their calculation involves only those works that meet the selected criteria. 
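      The dynamic-view behaviour described above can be illustrated with a small sketch. The field names and the indicator (total citations) are invented for illustration; BIP! Scholar's actual data model and indicator set differ.

```python
# Hypothetical sketch: filter a researcher's works by topic and/or role, then
# recompute an indicator (here, total citations) over the filtered set only.

works = [
    {"title": "W1", "topics": {"ml"},        "roles": {"conceptualization"}, "citations": 10},
    {"title": "W2", "topics": {"ml", "nlp"}, "roles": {"software"},          "citations": 5},
    {"title": "W3", "topics": {"bio"},       "roles": {"software"},          "citations": 7},
]

def tailored_view(works, topic=None, role=None):
    """Return the works matching the selected criteria and their recomputed citation total."""
    selected = [
        w for w in works
        if (topic is None or topic in w["topics"])
        and (role is None or role in w["roles"])
    ]
    return selected, sum(w["citations"] for w in selected)

# Examine only works on the "ml" topic; the indicator is recomputed accordingly.
view, citations = tailored_view(works, topic="ml")
print(citations)  # 15
```

      The key point is that the indicator is recalculated from the filtered works each time, rather than being a fixed profile-wide number.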

      Objectives and learning outcomes:

      • Learn the basics about scientific impact and its diverse aspects.
      • Learn about the most important impact indicators, their proper usage, and common mistakes and misconceptions.
      • Learn about the concept of narrative CVs and how it can be useful for research assessment.
      • Learn the main functionalities of BIP! Scholar and how they can assist researcher assessment.
    • Presentation slides from the BIP! Scholar training event, explaining how researchers can create academic profiles to highlight their work, contributions, and career narratives with customizable views.

      DOI: https://doi.org/10.5281/zenodo.10067214
    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

    • Recording of the BIP! Scholar webinar that took place on November 2nd, 2023, showcasing how researchers can create detailed academic profiles to highlight their work, roles, and career narratives.

    • OPERAS Metrics offers a database of usage and impact information for published Open Access books. Designed with the nuanced needs of the Social Sciences and Humanities in mind, this service consolidates diverse data sources into a single, transparent interface. Metrics are not only displayed on the publisher’s website but are also aggregated with those of other sites where a book is available.

      Usage metrics have been widely adopted for Open Access works as an indication of the popularity or acceptance of a particular publication. Inevitably, performance assessment and funding allocation are increasingly based on these statistics. While we do not agree with these practices, we acknowledge that metrics collection and reporting is nowadays a fundamental need for any organisation producing and/or hosting digital monographs.

      Key features

      Comprehensive Data Collection: usage metrics collected from various sources, providing a clear picture of open access book impact.

      Centrally-Managed Database: metrics on open access books stored in a central database that can be accessed by anyone.

      Open Source and Community-Driven: based on open-source principles, offering an alternative to proprietary usage metrics services and emphasising community involvement.

      Transparent Processing: each metric comes with an explanation of its data source and collection methods.

      Designed for SSH Disciplines: tailored to better serve the SSH domain, often underrepresented in metric systems.

      Widget: metrics can be visually displayed on any site via an embeddable widget, facilitating real-time interaction and analysis.

      Flexible Hosting Options: option for publishers to host the service themselves, with dedicated support.

      This training session provides:

      1. Context, service overview & discussion

      • Challenges & opportunities for OA books in Research Assessment
      • Main features & benefits of OPERAS Metrics
      • Accessing OPERAS Metrics database in practice
      • Framing OPERAS Metrics in the context of Research Assessment 

      2. Technical session

       Objectives

      • Demonstrate the position and challenges of OA books in current research assessment systems
      • Understand the value of OPERAS Metrics in measuring the usage and impact of OA books
      • Describe how the GraspOS Infrastructure can integrate OA books in Research Assessment
      • Learn how to use OPERAS Metrics


      Learning Outcomes

      • This session will offer the ability to assess the landscape of open access (OA) books within existing research assessment frameworks, identify the importance of metrics for measuring the usage and impact of OA books, and recognise the value of the GraspOS Infrastructure in integrating OA books into research assessment systems. It will also provide practical knowledge and skills for using OPERAS Metrics effectively.

       
    • Slides from GraspOS's 4th training session on using OPERAS Metrics to measure and showcase the impact of Open Access books.

      DOI: https://doi.org/10.5281/zenodo.14228176

    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

    • Recording of the GraspOS training session 'Leveraging OPERAS Metrics' (26 Nov 2024), demonstrating how to measure and showcase the impact of Open Access books.

    • A coalition of funders is partnering with the Marie Curie Alumni Association and the Young Academy of Europe to create a peer exchange platform to enhance researchers' ability to present and valorise a broad range of experiences and achievements in the format of narrative-style CVs. Mentees will be able to find mentors based on a range of traditional and non-traditional career experiences and achievements, to support how these outputs can be articulated in evaluation, hiring, and promotion processes.

      About PEP-CV
       
      PEP-CV is a platform for everyone active in the research and innovation sector to engage in simple peer mentoring exchanges to discuss how to best present diverse experiences, achievements, and career paths in narrative-style CVs. 
       
      This training aims to explore the growing importance of Narrative CVs in transforming research culture by promoting a more qualitative and holistic approach to academic assessment. It will examine how this format supports the recognition of a wider range of researcher contributions—such as open science practices, mentorship, and societal impact—and aligns with international initiatives like DORA and CoARA to foster a more inclusive and responsible evaluation framework.

      Gain insights into the opportunities and challenges of implementing Narrative CVs, and hear about future initiatives, including workshops, training and community engagement events. With a focus on making research assessments more inclusive and accessible beyond geographical and cultural barriers, this webinar is a step toward empowering a vibrant and diverse research community.

      Objectives

      • Raise awareness about Narrative CVs and holistic assessment. Highlight PEP-CV's role in fostering these practices.

      • Reflect on PEP-CV's annual progress, highlighting its benefits, challenges and future directions.

      • Collect community feedback to better support the transition to holistic assessment and emphasise the importance of peer mentoring.

      Learning Outcomes

      The session will provide a clear understanding of Narrative CVs, their applications, and how PEP-CV supports CV development and research culture reform through peer mentorship.

       
    • Presentation from GraspOS's 3rd webinar on PEP-CV, exploring peer-mentored Narrative CVs as a tool for academic recognition and research culture reform.

      DOI: https://doi.org/10.5281/zenodo.14850494

    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

    • Recording of the PEP-CV webinar (11 Feb 2025) on peer-mentored Narrative CVs, a transformative approach to academic assessment aligned with DORA and CoARA, promoting inclusivity and broader research contribution recognition.

    • TBC

    • TBD

    • The institutional dashboard of the OpenAIRE MONITOR service is a supplementary tool in enhancing research assessment and decision-making processes for institutions. By providing comprehensive Open Access, collaboration, and impact metrics and indicators, this dashboard offers a nuanced understanding of an institution's research landscape. The detailed insights into Open Access publications enable institutions to gauge their commitment to Open Science, while collaboration metrics shed light on collaborative efforts within the scholarly community. Impact indicators provide valuable information about the reach and influence of an institution's research outputs. Leveraging these dashboards empowers institutions to make informed decisions, fostering a data-driven approach to research management and strategic planning.
      The institutional dashboard helps institutions identify how they fare in Open Science, which organisations they collaborate with and receive funding from, and how much impact their research output is making. It is your go-to tool for understanding and managing your institution's research. With easy-to-understand metrics broken down across various domains, you can make smart decisions about your research strategy. It's like having a clear roadmap to navigate the world of scholarly communication, collaboration, and impact.
      OpenAIRE MONITOR delivers a rich set of indicators, empowering you to explore and analyze research activities across key themes. With flexible breakdowns and customizable insights, MONITOR helps you track progress, evaluate impact, and align with strategic goals:
       
       
      • Funding: Overview of funding over time, including European Commission projects, with the possibility to filter by programme.

      • Research Output: Publications by type, time, data source, publisher, along with indicators for peer-reviewed publications and Field of Science. 

      • Open Science: Composite indicators such as openness and findability scores, visualisations for publications by access rights, including over time, as well as insights into datasets and software.

      • Collaborations: Geographic distribution of collaborations and the top 20 collaborating organisations, based on project collaborations.

      • Impact: Indicators for reach and frequency, including total downloads and citations, broken down by access route and Field of Science.
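      To make the idea of a composite indicator concrete, here is a minimal sketch of an openness score computed as the share of open-access outputs. This formula is an assumption for illustration only; the exact definitions used by OpenAIRE MONITOR's openness and findability scores may differ.

```python
# Illustrative sketch: an "openness" indicator as the fraction of outputs
# with open access rights. The access-rights labels and the formula are
# assumptions; OpenAIRE MONITOR's actual composite indicators may differ.

publications = [
    {"access": "OPEN"},
    {"access": "OPEN"},
    {"access": "CLOSED"},
    {"access": "EMBARGO"},
]

def openness_score(pubs):
    """Fraction of publications with open access rights, in [0, 1]."""
    if not pubs:
        return 0.0
    return sum(1 for p in pubs if p["access"] == "OPEN") / len(pubs)

print(openness_score(publications))  # 0.5
```

      In a dashboard, such a score would be broken down further, e.g. over time or by Field of Science, by applying the same calculation to each subset of outputs.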

      OpenAIRE MONITOR dashboards are powered by the OpenAIRE Graph, a comprehensive collection of research information. Data is aggregated from a large pool of trusted global sources, including repositories, publishers, global registries (Crossref, Datacite, ORCID, ROR, re3data, Patstat, etc.) and research information systems. This data is then enriched and validated through a series of automated processes, including text and data mining and deduplication. This ensures that the OpenAIRE Graph remains a high-quality, reliable source of information for research monitoring and assessment.

      OpenAIRE MONITOR uses this data to create interactive dashboards that provide insights into research activities, Open Science uptake, collaborations, and impact. The diagram below illustrates the flow of data from these sources to the OpenAIRE Graph and ultimately to OpenAIRE MONITOR, which generates interactive dashboards.

      The OpenAIRE Graph is continuously updated, and user feedback is incorporated to improve its quality and relevance.

      Objectives
       
      The outcomes of the session are designed to equip participants with the skills and knowledge needed to make the most out of the institutional dashboards in OpenAIRE MONITOR.
      • Understand how the institutional dashboard provides insights about your institution's research landscape.
      • Navigate and interpret Open Access metrics for scholarly impact.
      • Identify collaboration patterns and networks of your institution.
      • Evaluate metrics to assess the influence of your institution's research.
      • Explore examples of using the institutional dashboard to make informed decisions on research strategy and collaborations.
       
      Learning Outcomes
       
      • Efficiently navigate to the institutional dashboard for relevant insights.
      • Understand and interpret Open Access metrics and access rights.
      • Identify and analyse collaboration patterns among institutions.
      • Evaluate impact indicators to gauge research influence.

       
    • Training slides on navigating the research landscape with OpenAIRE Monitor, emphasizing institutional tracking of outputs to unlock research excellence.

      DOI: https://doi.org/10.5281/zenodo.10406679

    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

    • Recording of the OpenAIRE Monitor webinar (19 Dec 2023) on using institutional indicators to map research excellence and navigate the scholarly landscape.

    • Article summarizing the OpenAIRE Monitor training webinar, focusing on institutional indicators and their role in assessing research impact.

    • Assessing Open Access Books: Shedding Light on Peer Review with PRISM

      OPERAS is the Research Infrastructure supporting open scholarly communication in the social sciences and humanities (SSH) in the European Research Area. Its mission is to coordinate and federate resources in Europe to efficiently address the scholarly communication needs of European researchers in the field of SSH.

      The Peer Review Information Service for Monographs (PRISM) gives publishers the opportunity to display information about their peer review procedures in a standardised way as part of the book’s metadata in the Directory of Open Access Books. PRISM contributes to building trust in Open Access book publishing by improving transparency around the quality assurance process.

      Key features

      Controlled Vocabulary: standardised set of terms to describe the peer review processes, promoting consistency, clarity, transparency and trust.

      Title-Level Detail: provision of specific information about the peer review practices applied to each Open Access book, directly on the DOAB platform.

      Catalogue-Level Overview: ability to apply across a publisher’s entire catalogue, showcasing the peer review processes that their titles undergo.

      API Integration: accessibility to PRISM data through the DOAB API, allowing for inclusion in library databases and search tools.

      Widget: possibility for publishers to use the widget on their webpages to show PRISM records about their catalogues or titles.
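
      As a rough illustration of the API integration feature, the sketch below builds a search request against the DOAB REST interface and extracts PRISM-style peer-review fields from a record. The endpoint path, query parameter, and JSON field names are assumptions for illustration only, not documented API details; a sample record stands in for a live response.

```python
from urllib.parse import urlencode

# Hypothetical DOAB REST search request; path and parameter names are assumptions.
BASE_URL = "https://directory.doabooks.org/rest/search"
query_url = f"{BASE_URL}?{urlencode({'query': 'open access monograph'})}"

# Illustrative record: field names mimic how PRISM peer-review information
# might appear in a book's metadata (not the actual DOAB schema).
sample_record = {
    "title": "An Example Open Access Monograph",
    "peer_review": {
        "type": "single-anonymised",   # term from a controlled vocabulary
        "stage": "pre-publication",
        "reviewed_content": "full text",
    },
}

def summarise_peer_review(record: dict) -> str:
    """Render the PRISM-style peer-review fields as one readable line."""
    pr = record.get("peer_review", {})
    return f"{record['title']}: {pr.get('type', 'unknown')} review, {pr.get('stage', 'unknown')}"

print(query_url)
print(summarise_peer_review(sample_record))
```

      The point of the sketch is the workflow: a library database or discovery tool queries DOAB, reads the standardised peer-review vocabulary from each record, and surfaces it to users alongside the book.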

      GraspOS webinar with OPERAS to learn about the Peer Review Information Service for Monographs (PRISM). PRISM aims to enhance trust in Open Access book publishing through increased transparency in the peer review process, offering publishers a standardised platform to display their peer-review procedures, integrate metadata, ultimately building trust in Open Access book publishing. We will also discuss how PRISM could be used in research assessment. 

      Objectives & preferred outcomes

      • Understand peer review in relation to Responsible Research Assessment
      • Learn how PRISM works and how it helps increase transparency in Peer Review for OA books
      • Reflect on how PRISM could be useful in the context of Responsible Research Assessment

       

       
    • Slides from the GraspOS webinar on assessing Open Access books, featuring PRISM (Peer Review Information Service for Monographs) and its role in transparent research evaluation.

      DOI: https://doi.org/10.5281/zenodo.11196020

    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

    • Recording of the GraspOS webinar (14 May 2024) on assessing Open Access books using PRISM (Peer Review Information Service for Monographs) to enhance transparency in peer review tracking.

    • The OpenAIRE Metadata Validator service is used by content providers who wish to register their content to OpenAIRE, allowing them to verify that it is compliant with the OpenAIRE guidelines. The service also checks the quality of implementation of the OAI-PMH protocol. Content providers can use the service after logging into OpenAIRE PROVIDE. If validation succeeds, the provider is eligible to register and join the OpenAIRE infrastructure, and their content will be regularly aggregated to contribute to the OpenAIRE Graph.

      OpenAIRE allows for registration of institutional and thematic repositories registered in OpenDOAR, research data repositories registered in re3data, individual e-Journals, CRIS systems, aggregators and publishers.

      The OpenAIRE Metadata Validator service is realised with configurable software that allows users with administrative rights to customise the validation rules to be applied. This makes it easier to adapt the service when the OpenAIRE guidelines are updated, and to offer similar services, possibly with different rules and configurations, to third parties.

      Some key features of the metadata validator are:

      • Agile data compatibility assessment: supports multiple guidelines, improving content quality and coverage
      • Configurations: supports the addition of new rules and the modification of existing ones
      • User friendly: accessed via a user-friendly online web interface within the OpenAIRE PROVIDE service
      • Automated: validates content and the quality of the OAI-PMH protocol implementation on a scheduled basis
      • Transparency & history of information: users can view the history of validations and the status of harvesting
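
      The validator's OAI-PMH check can be pictured with a minimal sketch: issuing an `Identify` request against a repository's base URL and reading its declared name and protocol version. The verb and XML namespace below come from the OAI-PMH 2.0 specification; the repository URL is a hypothetical placeholder, and a sample response stands in for a live one so the example is self-contained.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Hypothetical repository endpoint; any OAI-PMH base URL works the same way.
BASE_URL = "https://repository.example.org/oai"

# An OAI-PMH Identify request is just the base URL plus ?verb=Identify.
identify_url = f"{BASE_URL}?{urlencode({'verb': 'Identify'})}"

# Sample Identify response (abridged), per the OAI-PMH 2.0 specification.
SAMPLE_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <responseDate>2024-01-01T00:00:00Z</responseDate>
  <request verb="Identify">https://repository.example.org/oai</request>
  <Identify>
    <repositoryName>Example Institutional Repository</repositoryName>
    <baseURL>https://repository.example.org/oai</baseURL>
    <protocolVersion>2.0</protocolVersion>
  </Identify>
</OAI-PMH>"""

NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def check_identify(xml_text: str) -> dict:
    """Extract the fields a validator would inspect from an Identify response."""
    root = ET.fromstring(xml_text)
    identify = root.find("oai:Identify", NS)
    return {
        "repositoryName": identify.findtext("oai:repositoryName", namespaces=NS),
        "protocolVersion": identify.findtext("oai:protocolVersion", namespaces=NS),
    }

info = check_identify(SAMPLE_RESPONSE)
print(identify_url)
print(info)
```

      A real validation run would fetch `identify_url` over HTTP and apply many more rules (mandatory fields, resumption tokens, metadata formats); this sketch only shows the shape of one such check.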

      In this part of the training, discover how to assess and enhance the quality, openness, FAIRness, and interoperability of your metadata using the OpenAIRE Metadata Validator. This session will guide you through the practical use of the Validator, demonstrating how it supports compliance with the OpenAIRE Guidelines and alignment with the FAIR principles. Whether you're managing a repository, CRIS, aggregator, or journal platform, you will learn how to evaluate your metadata, identify areas for improvement, and increase the visibility and reusability of your content across the Open Science ecosystem.

      Objectives - Learning Outcomes
      • Understand the purpose and scope of the OpenAIRE Metadata Validator
      • Explore how the Validator incorporates FAIR principles into metadata assessment
      • Understand the key criteria and indicators used in the FAIR validator module
      • Learn how to interpret validation reports and use them to improve metadata quality
      • Discover how the Validator helps you strengthen interoperability and increase the visibility and reuse of your content, contextualised through Controlled Vocabularies, via the OpenAIRE Graph and the wider Open Science ecosystem
      • Learn best practices and where to find resources, documentation, and support for populating your metadata for compliance with the OpenAIRE Guidelines
       

       

       
    • Learn how FAIR your metadata is and how well it complies with the OpenAIRE Guidelines. Discover how the OpenAIRE Metadata Validator can help you improve the quality, openness, FAIRness and interoperability of your metadata. Whether you manage a repository, CRIS, aggregator or journal platform, this session will show how the Validator supports both compliance with OpenAIRE Guidelines and alignment with FAIR principles, helping you make your content more visible and reusable across the Open Science landscape.


    • OpenCitations has been established as a community-guided open infrastructure providing access to global scholarly bibliographic and citation data. Its mission is to harvest and openly publish accurate and comprehensive metadata describing the world's academic publications and the scholarly citations that link them, and to preserve ongoing access to this information through secure archiving. Open citations refer to the open availability of bibliographic citation data, a crucial requirement in bibliometrics and scientometrics for creating reproducible metrics in research assessment exercises.

      Bibliographic citation, i.e. referring from a citing entity to the cited one, is one of the most critical activities of an author in producing any bibliographic work. Indeed, acknowledging the sources we use to back our research stands at the very core of the scholarly enterprise. The network of citations created by combining citation information from many academic articles, books, proceedings, etc., is a source of rich information for scholars: a PhD student surveying the literature for her thesis exploits citations to find relevant articles; a senior researcher deepening his research exploits citations to continuously find new material; a reviewer reads citations to understand if the citing works are up-to-date and well-connected to others; a professor writing a project proposal uses citations to spot recent works and helpful links; and several other examples could be listed here. However, the reasons behind such acts of citing are manifold. 

      Usually, it is because the author has gained assistance of some sort, perhaps in the form of background information, ideas, methods or data, from the cited previously published works and wishes to acknowledge this. Sometimes, citations may be made because the citing works review, critique or refute previous works. In this seminar, I will introduce existing data models for classifying citation intents (or functions) that we are using as a starting point in the context of the GraspOS project and will briefly show the tool we are developing to extract such citation semantics from scholarly articles in PDF format.
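
      To make the idea of open citation data concrete, here is a minimal sketch of consuming records in the shape returned by OpenCitations' COCI REST API ("citations" operation). The DOI and the records below are illustrative placeholders, not real data, and the field names mirror COCI's documented JSON output as an assumption to be checked against the current API reference.

```python
# Hypothetical DOI; the URL follows the pattern of the COCI "citations" operation.
DOI = "10.1234/example.doi"
citations_url = f"https://opencitations.net/index/coci/api/v1/citations/{DOI}"

# Illustrative citation records mimicking COCI's JSON fields.
sample_citations = [
    {"citing": "10.1234/citing.1", "cited": DOI, "creation": "2021-05", "timespan": "P2Y"},
    {"citing": "10.1234/citing.2", "cited": DOI, "creation": "2023-01", "timespan": "P3Y8M"},
]

def citations_per_year(records):
    """Count incoming citations by the year of the citing work's creation date."""
    counts = {}
    for rec in records:
        year = rec["creation"][:4]
        counts[year] = counts.get(year, 0) + 1
    return counts

print(citations_url)
print(citations_per_year(sample_citations))
```

      A live client would issue an HTTP GET to `citations_url` and receive a JSON array of such records; because the data is open, the same counts can be reproduced by anyone, which is exactly the reproducibility requirement discussed above.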

      Objectives

      • Understanding the different dynamics behind citations
      • Exploring ontological models for citation characterisation
      • Getting the basics in using a tool for inferring such citation characterisation

      Learning Outcomes

      At the end of the session, the participant will understand the importance of citation semantics, learn about ontological models dedicated to describing such semantics, and finally be able to use a tool for extracting citation semantics.

       
    • Slides of the 3rd Training event of GraspOS entitled "Citation and their meaning - or why we cite".

      DOI: https://doi.org/10.5281/zenodo.12800674

    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

    • Recording of the GraspOS training (23 July 2024) on citation analysis, exploring the meaning and intent behind scholarly citations, and introducing tools to classify citation functions in research.

    • This module introduces the new and enhanced OpenAIRE Graph API, designed to support bibliometric analyses, research discovery, and Open Science monitoring.

      The API provides streamlined access to OpenAIRE’s rich and interconnected research data ecosystem. Learners will explore its key features, real-world applications, and how it enables institutions, librarians, researchers, and developers to gain deeper insights into the research landscape.
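
      A minimal sketch of what "streamlined access" looks like in practice: building a paged free-text query against the Graph API. The endpoint path and parameter names below are assumptions based on the public documentation at the time of writing; check the current API reference before relying on them.

```python
from urllib.parse import urlencode

# Assumed Graph API endpoint for research products; verify against the docs.
BASE_URL = "https://api.graph.openaire.eu/v1/researchProducts"

params = {
    "search": "open science monitoring",  # free-text search
    "pageSize": 10,                        # results per page
    "page": 1,
}
request_url = f"{BASE_URL}?{urlencode(params)}"
print(request_url)

# A client would then GET this URL and iterate over the JSON results, e.g.:
#   import json, urllib.request
#   with urllib.request.urlopen(request_url) as resp:
#       data = json.load(resp)
```

      The same pattern applies to the other collections the Graph exposes (organisations, projects, data sources), which is what makes the API usable for bibliometric analyses at scale.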

       

       
       

       
    • This is the presentation of an exclusive webinar held on Tuesday, 29 April 2025, to explore the enhanced OpenAIRE Graph API! Designed to support bibliometric analyses, research discovery, and Open Science monitoring, the new API offers streamlined access to OpenAIRE’s rich research data ecosystem. Learn about its key features, real-world applications, and how it can help institutions, librarians, researchers, and developers gain deeper insights.

    • This is an exclusive webinar to explore the enhanced OpenAIRE Graph API! Designed to support bibliometric analyses, research discovery, and Open Science monitoring, the new API offers streamlined access to OpenAIRE’s rich research data ecosystem. Learn about its key features, real-world applications, and how it can help institutions, librarians, researchers, and developers gain deeper insights. Plus, get your questions answered in our live Q&A session! 

    • GraspOS training website featuring events, resources, and guidance on Open Science-aware assessment tools, indicators, and services for stakeholders.

    • GraspOS, in collaboration with CRAFT-OA and DIAMAS, is exploring the evolving connection between scholarly publishing and research assessment. The session will focus on how Diamond Open Access (OA) publishing and open infrastructures interact with ongoing reform efforts in research evaluation, highlighting both the opportunities and challenges of aligning non-profit publishing with emerging assessment models.

      In particular, this part examines how infrastructures have been developed to support a more open approach to research assessment—enhancing the visibility of Diamond OA journals and aligning with broader efforts such as CoARA and the Barcelona Declaration. It also highlights how emerging standards and practices help ensure the quality and credibility of non-profit publishing.

      This part presents practical insights into how infrastructures and standards have made Diamond Open Access (OA) publishing a more visible, credible, and valuable component of research assessment. It clarifies common misconceptions about the quality of Diamond OA, explores the reforms introduced to support its recognition, and shares how open infrastructures contribute to better integrating Diamond journals into evolving evaluation frameworks.

      Objectives

      • Raise awareness on the issues of research assessment in relation to Diamond Publishing

      • Showcase the development and growth of infrastructures for journals in Diamond publishing.

      • Demonstrate the effectiveness of Open Infrastructures and how they support reform of research assessment

      • Showcase the quality of Diamond OA Publishing and discuss recommendations for institutions, funders and policy makers

      Learning Outcomes

      • Address misconceptions about the quality of Diamond OA publishing and articulate how technical and editorial standards ensure credibility.

      • Discuss policy and infrastructure reforms needed to enhance the visibility, discoverability, and credibility of Diamond OA journals.

    • TBC