Browse the glossary using this index

Special | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | ALL

D

Data backup

Data backup is the process of creating a copy of data in a digital format and storing it on another device so that the data can be restored and data loss prevented.
Backups can be full (all files are backed up whenever a backup is made) or partial (only a subset of the files, e.g. new or changed files, is backed up).

Source: https://www.openaire.eu/how-to-comply-with-horizon-europe-mandate-for-rdm (Glossary)
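The full/partial distinction above can be sketched in code. The following is a minimal, illustrative sketch rather than a production backup tool: a full backup copies every file, while a partial (incremental) backup copies only files modified since a given point in time. The function names are hypothetical.

```python
import shutil
from pathlib import Path


def full_backup(source: Path, dest: Path) -> list[Path]:
    """Copy every file under source into dest (a full backup)."""
    copied = []
    for f in source.rglob("*"):
        if f.is_file():
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 also preserves modification times
            copied.append(target)
    return copied


def partial_backup(source: Path, dest: Path, since: float) -> list[Path]:
    """Copy only files modified after 'since' (a partial/incremental backup)."""
    copied = []
    for f in source.rglob("*"):
        if f.is_file() and f.stat().st_mtime > since:
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(target)
    return copied
```

In practice the "since" timestamp would be recorded at the end of each backup run, so the next partial backup picks up only what changed in between.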


Data documentation

Data documentation includes various types of information that can help find, assess, understand/interpret, and (re)use research data – e.g. information about methods, protocols, datasets to be used and data files, preliminary findings, etc. Documentation helps understand the context in which data were created, as well as the structure and the content of data. Data should be documented through all stages of the research data lifecycle. Detailed and rich documentation ensures reproducibility and upholds research integrity. Documentation also includes metadata.

Various tools, such as electronic lab notebooks, are available to support you in creating documentation.


Source: https://www.openaire.eu/how-to-comply-with-horizon-europe-mandate-for-rdm (Glossary)



Data ethics

Definition: A branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them.

Related terms: computer ethics, information ethics, AI, responsible innovation

Found in: https://www.turing.ac.uk/research/publications/what-data-ethics

Reference: Floridi, L. and Taddeo, M. (2016). What is data ethics?. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), p.20160360. https://doi.org/10.1098/rsta.2016.0360 


Data Management Plan

Information regarding your data management needs to be easily found and understood, not least if you are working on a project that runs over several years and involves a large team of people. In order to simplify data management, a Data Management Plan (DMP) can be created early in the research process. A DMP is a formal document that provides a framework for how to handle the data material during and after the research project. The way a DMP will look once it is finished is not universal. It is a "living" document that changes together with the needs of a project and its participants. It is updated throughout the project to make sure that it tracks such changes over time and that it reflects the current state of your project.

CESSDA Training Team (2017 - 2022). CESSDA Data Management Expert Guide.
Bergen, Norway: CESSDA ERIC. Retrieved from https://dmeg.cessda.eu/


Data Management Plan (DMP)

Definition: Data Management Plans (DMPs) are a key element of good data management. A DMP describes the data management life cycle for the data to be collected, processed and/or generated by a project. As part of making research data findable, accessible, interoperable and re-usable (FAIR), a DMP should include information on:

  • the handling of research data during & after the end of the project
  • what data will be collected, processed and/or generated
  • which methodology & standards will be applied
  • whether data will be shared/made open access and
  • how data will be curated & preserved (including after the end of the project).
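The information listed above can also be captured in machine-actionable form, an idea promoted by the research data management community as the "machine-actionable DMP" (maDMP). The sketch below is a loose illustration of that idea; the field names are simplified and hypothetical, not the official RDA DMP Common Standard schema.

```python
import json

# A minimal, illustrative DMP record. Field names are simplified and
# hypothetical, not taken from the official maDMP schema.
dmp = {
    "title": "DMP for project X",
    "datasets": [
        {
            "title": "Survey responses",
            "methodology": "online questionnaire",
            "standards": ["CSV", "DDI metadata"],
            "shared_as_open_access": True,
            "preservation": "deposited in an institutional repository for 10 years",
        }
    ],
}

# Serialising the plan makes it easy to update and exchange as the
# "living document" evolves during the project.
print(json.dumps(dmp, indent=2))
```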

Horizon 2020: A DMP is required for all projects participating in the extended ORD pilot, unless they opt out of the ORD pilot. However, projects that opt out are still encouraged to submit a DMP on a voluntary basis.

Related terms: FAIR principles

Found in: https://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/data-management_en.htm


Data repository

A data repository is a digital archive collecting, preserving and displaying datasets, related documentation and metadata. Repositories and archives typically use terms like “preservation” and “curation” rather than “archiving” or “storage”: long-term accessibility implies expertise and services to convert data to new formats and to add value to the data, for instance by new functionality to query the data.

https://www.openaire.eu/briefpaper-rdm-infonoads/view-document 


Data storage

Data storage is a computing technology that enables saving data in a digital format on computer components and recording media, including cloud services.

In the context of Research Data Management, it is necessary to ensure that data are stored securely until the end of the project and throughout the minimum retention period. Storage options may include:

  • portable devices,
  • university network drives,
  • cloud services.

Storing data only on local devices such as hard drives or pen drives is discouraged, because these devices are vulnerable and data loss may occur. If portable devices are used, copies should also be kept on networked drives and in backup storage.

Using in-house or institutionally approved storage spaces is recommended, especially if regular backups are enabled.

Cloud services are suitable for collaboration with partners from other institutions. However, it should be checked whether the selected cloud service makes regular backups and whether it falls under the relevant jurisdiction.


Diamond Open Access

‘Diamond’ Open Access refers to a scholarly publication model in which journals and platforms do not charge fees to either authors or readers. 

Diamond Open Access journals represent community-driven, academic-led and -owned publishing initiatives. Serving a fine-grained variety of generally small-scale, multilingual, and multicultural scholarly communities, these journals and platforms embody the concept of bibliodiversity. For all these reasons, Diamond Open Access journals and platforms are equitable by nature and design. 

The landmark ‘Open Access Diamond Journals Study’ (OADJS) uncovered the vast size and scope of this publication ecosystem. An estimated 17,000 to 29,000 Diamond Open Access journals worldwide (as of 2021) are an essential component of scholarly communication, publishing 8 to 9% of the total article volume and 45% of Open Access publishing.

https://www.scienceeurope.org/media/t3jgyo3u/202203-diamond-oa-action-plan.pdf

***

Diamond open access texts are those that are published/distributed without charge to the reader or the author.

Diamond Open Access journals are normally community-driven, run by groups of academics on their own initiative. They tend to be small in scale, often cross cultural boundaries, and many have equity as a founding principle.

The Diamond model has been particularly successful in Latin America, with 25% of DOAJ OA journals appearing there.

In 2021 it was estimated that there were 17,000 to 29,000 Diamond OA journals worldwide.



Digital object identifier (DOI)

The digital object identifier (DOI®) system provides an infrastructure for persistent unique identification of objects of any type.
DOI is an acronym for “digital object identifier”, meaning a “digital identifier of an object” rather than an “identifier of a digital object”. In this International Standard, the term “digital object identifier” refers to the system defined in this International Standard unless otherwise stated. The DOI system was initiated by the International DOI Foundation in 1998, and initially developed with the collaboration of some participants in ISO/TC 46/SC 9. Due to its application in the fields of information and documentation and previous collaboration with some ISO/TC 46/SC 9 participants, it was introduced as a possible work item in 2004 and further developed from 2006 to 2010.
The DOI system is designed to work over the Internet. A DOI name is permanently assigned to an object to provide a resolvable persistent network link to current information about that object, including where the object, or information about it, can be found on the Internet. While information about an object can change over time, its DOI name will not change. A DOI name can be resolved within the DOI system to values of one or more types of data relating to the object identified by that DOI name, such as a URL, an e-mail address, other identifiers and descriptive metadata.
The DOI system enables the construction of automated services and transactions. Applications of the DOI system include but are not limited to managing information and documentation location and access; managing metadata; facilitating electronic transactions; persistent unique identification of any form of any data; and commercial and non-commercial transactions.
The content of an object associated with a DOI name is described unambiguously by DOI metadata, based on a structured extensible data model that enables the object to be associated with metadata of any desired degree of precision and granularity to support description and services. The data model supports interoperability between DOI applications.
The scope of the DOI system is not defined by reference to the type of content (format, etc.) of the referent, but by reference to the functionalities it provides and the context of use. The DOI system provides, within networks of DOI applications, for unique identification, persistence, resolution, metadata and semantic interoperability.
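As the text explains, a DOI name is resolved over the Internet to current information about the object, and the name itself never changes even when that information does. In practice, a common way to trigger resolution is to append the DOI name to the public proxy at https://doi.org/. The helper below only constructs such a resolver link (no network access is performed); it is an illustrative sketch, not part of the DOI specification, and the function name is hypothetical.

```python
from urllib.parse import quote


def doi_resolver_url(doi_name: str) -> str:
    """Build a resolvable link for a DOI name via the public doi.org proxy.

    The DOI name itself is persistent; the URL it ultimately resolves to
    may change over time, which is exactly what resolution hides from users.
    """
    doi_name = doi_name.strip()
    if not doi_name.startswith("10."):
        raise ValueError("DOI names begin with the '10.' directory indicator")
    # Percent-encode reserved characters, but keep the '/' that separates
    # the prefix from the suffix.
    return "https://doi.org/" + quote(doi_name, safe="/")
```

For example, the Floridi and Taddeo reference above resolves via https://doi.org/10.1098/rsta.2016.0360, whatever the article's current landing page happens to be.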



DOAB

Directory of Open Access Books

The primary aim of DOAB is to increase discoverability of open access books. Academic publishers are invited to provide metadata of their open access books to DOAB. DOAB is an open infrastructure committed to open science.

source: https://www.doabooks.org/en/doab/purpose-of-doab 


DOAJ

The Directory of Open Access Journals (DOAJ) was launched in 2003 at Lund University, Sweden, with 300 open access journals and today contains ca. 9000 open access journals covering all areas of science, technology, medicine, social science and humanities.

DOAJ is a community-curated list of open access journals and aims to be the starting point for all information searches for quality, peer-reviewed open access material. To assist libraries and indexers in keeping their lists up to date, we make public a list of journals that have been accepted into or removed from DOAJ, but we will not discuss specific details of an application with anyone apart from the applicant. Neither will we discuss individual publishers or applications with members of the public unless we believe that, by doing so, we will be making a positive contribution to the open access community.

The aim of the DOAJ is to increase the visibility and ease of use of open access scientific and scholarly journals, thereby promoting their increased usage and impact. The DOAJ aims to be comprehensive and cover all open access scientific and scholarly journals that use a quality control system to guarantee the content. In short, the DOAJ aims to be the one-stop shop for users of open access journals.

Source link: https://www.unccd.int/resources/knowledge-sharing-system/directory-open-access-journals-doaj 

DOAJ link:  https://doaj.org