Technologies used in open data

Open data is subject to a single constraint: it must be accessible to all types of machines so that it can be processed. This implies that open data must be interoperable. If data does not comply with the Web standards that ensure this interoperability, it is described as restricted data, because its potential for reuse is low or almost nil.

This interoperability requirement was rarely met in practice until 2008, when SPARQL became a W3C Recommendation. This query language lets developers test their applications directly from a web browser against open data published online, and build their own programs to analyze it. Data can thus be consumed remotely, without having to transform it or copy it locally. For example, the UK and US governments have begun moving their data onto the Web of Linked Open Data (LOD), following W3C standards and providing a SPARQL access point (endpoint) for developers.
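As an illustration of this kind of remote querying, the short SPARQL query below asks a public endpoint for the capital of France. DBpedia's endpoint (https://dbpedia.org/sparql) and its vocabulary are used here purely as a convenient example, since the text names no specific dataset; the query can be pasted directly into the endpoint's browser-based query form.

    # Minimal sketch: query a remote SPARQL endpoint without
    # downloading or transforming the underlying data.
    # DBpedia's vocabulary is an illustrative choice.
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>

    SELECT ?capital
    WHERE {
      dbr:France dbo:capital ?capital .
    }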

Example of genuinely open data: finding the schools closest to one's home on a map, using data from Data.gov.uk (the UK government portal), which publishes school-related data for the whole country.
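A query behind such a search might look like the following sketch. The school vocabulary (sch:) and its property names are assumptions made for illustration, not the actual Data.gov.uk schema, which should be checked before use; only the geo: prefix refers to the standard W3C WGS84 vocabulary.

    # Hedged sketch: list schools within a bounding box around a home
    # location. sch: and its property names are hypothetical;
    # geo: is the W3C WGS84 positioning vocabulary.
    PREFIX sch: <http://education.data.gov.uk/def/school/>
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>

    SELECT ?school ?name ?lat ?long
    WHERE {
      ?school a sch:School ;
              sch:establishmentName ?name ;
              geo:lat ?lat ;
              geo:long ?long .
      # Approximate bounding box around the home coordinates
      # (central London is used here as an arbitrary example).
      FILTER (?lat  > 51.45 && ?lat  < 51.55 &&
              ?long > -0.15 && ?long < -0.05)
    }
    LIMIT 20

The results (name plus latitude and longitude) can then be plotted directly on a map, which is exactly the kind of reuse that open, interoperable data makes possible.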

One consequence of this political choice to offer a genuine standard for public data, the Web of Data, is described in the book "Linking Government Data". The book recounts how the Web of Data grew from some 40 million RDF triples spread across four data stores in 2007 to 203 stores holding more than 25 billion triples and 395 million links by the end of 2010. The more optimistic speak of exponential data growth and announce Web 3.0, or even a potential singularity point in the future. In any case, this political choice has opened new avenues of scientific, economic, and social research.