Research utilizing GBIF data has grown from 52 studies in 2008 to over 230 in 2012. Key to this growth, and to continued use of data from the GBIF platform, is trust in the data made available; ultimately that responsibility lies with the original data collectors and authors. The platform works through a series of national nodes, which submit nationally collected data to GBIF. In that way GBIF acts as a kind of global records centre, with data rights remaining with the original organisation.
One recent (October 2012) national node addition is the Biodiversity Information System on Brazil (www.sibbr.org); an assessment of Brazil's national capability and infrastructure is being undertaken, reported to encompass over 200 organisations. Given Brazil's relative importance to global biodiversity, such a pragmatic approach is perhaps reassuring.
For such a system of global data to facilitate quality research, each author needs to deliver quality data in the first place. Given the vast range of organisations (from research institutions through to citizen scientists) needed to collect data and fill gaps in our current knowledge, this may seem daunting. Considerations should include accuracy of collection, the possibility of data loss during processing, taxonomic data labelling, raw data labelling, metadata collection, reuse and longevity. Specialist software can help organisations achieve quality in these areas, but only if the software is tailored to the users' needs.
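To make a few of those considerations concrete, here is a minimal sketch of what automated quality checks on occurrence records might look like. It assumes records use Darwin Core field names (scientificName, decimalLatitude, decimalLongitude, eventDate); the specific checks and rules are illustrative examples only, not GBIF's actual validation logic.

```python
# Illustrative sketch: basic quality checks on occurrence records,
# using Darwin Core field names. These checks are examples only,
# not GBIF's actual validation rules.

def check_record(record):
    """Return a list of quality issues found in one occurrence record."""
    issues = []

    # Taxonomic labelling: a record without a name is hard to reuse.
    if not record.get("scientificName"):
        issues.append("missing scientificName")

    # Accuracy of collection: coordinates must be present and plausible.
    lat = record.get("decimalLatitude")
    lon = record.get("decimalLongitude")
    if lat is None or lon is None:
        issues.append("missing coordinates")
    elif not (-90 <= lat <= 90 and -180 <= lon <= 180):
        issues.append("coordinates out of range")

    # Metadata collection: when was the observation made?
    if not record.get("eventDate"):
        issues.append("missing eventDate")

    return issues


# Two hypothetical records: one complete, one with several problems.
records = [
    {"scientificName": "Turdus merula", "decimalLatitude": 51.5,
     "decimalLongitude": -0.1, "eventDate": "2012-10-03"},
    {"decimalLatitude": 123.0, "decimalLongitude": -0.1},
]

for r in records:
    print(check_record(r))
```

Checks like these are cheap to run at the point of data entry, which is exactly where software can relieve collectors of repetitive verification work.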
We are constantly developing our software to meet our simple mission: to make quality data efficient. To do this we need feedback from you, the biodiversity community.
So which parts of your data processes do you find frustrating? Is there anything you think could be automated to make your life easier?
GBits March Newsletter - http://www.gbif.org/communications/resources/newsletters/
NASA Goddard Photo and Video - Flickr collection (http://www.flickr.com/photos/gsfc/6012329930/)