Our APIs are currently in internal use only and will be released once they have been stabilized. For now this page is little more than a placeholder.
We gather observation, collection specimen, literature, and other data from Finland and nearby regions (and, in the case of collection specimens, from all over the world) into a centralized Data warehouse for easy access from a single place.
The idea is to have at least the most basic information for each record (taxon, time, place (coordinates), person), but the warehouse can contain all the background variables of the record. For most variables the glossary has not yet been harmonized, but our aim is to provide reference glossaries for the most common variables. This is also called "master data management". See the master data section for more details.
In use but not yet public.
There are two ways to get your data into the Data warehouse: push and transaction feed. When pushing, whenever a record is stored, modified or deleted, the change is pushed to the Data warehouse (you call us). This is the preferred method when changes should be visible in the Data warehouse almost instantly. The other method, the transaction feed, is suited for services that are synchronized periodically, for example once a night. You provide a service that we read periodically (we call you).
This service must be implemented in two parts: 1) an ATOM transaction feed, 2) a DarwinCore recordset service.
From time to time the Data warehouse contacts the data source. The response is a list of transaction events (inserts, updates and deletions). A sequence ID or a timestamp is given as a parameter; in either case, the response should contain the changes that have occurred from that sequence ID or timestamp onward (inclusive).
The number of transactions included in the response should be limited to a maximum (for example 1000). This limit can also be given as a parameter, but to prevent denial-of-service attacks a hardcoded upper limit should always exist.
The transactions should be listed in ascending order (oldest first, newest last), so that if the limit is reached, the oldest transactions are returned.
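The paging rules above (changes from a given sequence ID onward, oldest first, capped by a hardcoded limit) can be sketched as follows. This is a minimal illustration, not the actual FinBIF implementation; the `Transaction` fields and function names are made up for the example.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical in-memory transaction log; field names are illustrative only.
@dataclass
class Transaction:
    seq: int        # transaction sequence ID (starts from 1)
    record_id: str  # record identifier, unique within this data source
    event: str      # "insert", "update" or "delete"

HARD_LIMIT = 1000  # hardcoded upper bound, regardless of any client-supplied limit

def transactions_since(log: List[Transaction], from_seq: int,
                       limit: int = HARD_LIMIT) -> List[Transaction]:
    """Return transactions from `from_seq` onward (inclusive), oldest first,
    truncated so that the hardcoded limit is never exceeded."""
    limit = min(limit, HARD_LIMIT)
    matching = sorted((t for t in log if t.seq >= from_seq),
                      key=lambda t: t.seq)
    return matching[:limit]  # when truncating, the oldest transactions are kept
```

The same logic applies when the parameter is a timestamp instead of a sequence ID: filter inclusively, sort ascending, then truncate from the newest end.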
None of the header fields (author, id, link, title, updated) are used for any real operational purposes. They are however required elements of the ATOM standard.
The entries are the important part:
The transaction sequence ID starts from 1 and increases by one for each transaction.
Records are identified by an ID that is unique within the data source. The ID can be numeric or in any textual format, such as a GUID or a complete URI for the record. The ID must be shorter than 200 characters.
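A feed with the required ATOM header elements and one entry per transaction could be generated roughly like this. The header elements (id, title, updated, author) are required by the ATOM standard; how the record ID and event type are encoded inside each entry is an assumption for illustration, not the FinBIF specification.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def build_feed(transactions):
    """Build a minimal ATOM feed for a non-empty list of transactions.
    Each transaction is a dict with illustrative keys:
    seq, record_id, event ("insert"/"update"/"delete"), updated."""
    ET.register_namespace("", ATOM_NS)
    feed = ET.Element(f"{{{ATOM_NS}}}feed")
    # Required header elements; not used operationally by the Data warehouse.
    ET.SubElement(feed, f"{{{ATOM_NS}}}id").text = "urn:example:transaction-feed"
    ET.SubElement(feed, f"{{{ATOM_NS}}}title").text = "Transaction feed"
    ET.SubElement(feed, f"{{{ATOM_NS}}}updated").text = transactions[-1]["updated"]
    author = ET.SubElement(feed, f"{{{ATOM_NS}}}author")
    ET.SubElement(author, f"{{{ATOM_NS}}}name").text = "Example data source"
    for t in transactions:
        entry = ET.SubElement(feed, f"{{{ATOM_NS}}}entry")
        # Assumption: entry id carries the transaction sequence ID.
        ET.SubElement(entry, f"{{{ATOM_NS}}}id").text = str(t["seq"])
        ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = f'{t["event"]} {t["record_id"]}'
        ET.SubElement(entry, f"{{{ATOM_NS}}}updated").text = t["updated"]
    return ET.tostring(feed, encoding="unicode")
```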
The DarwinCore recordset contains the relevant information about a single record.
In addition, you can include any Darwin Core terms you wish (http://rs.tdwg.org/dwc/terms/index.htm), but they will not be used by the Data warehouse.
If the record (identified by record ID) does not exist (e.g. it has been deleted), an empty SimpleDarwinRecordSet should be returned.
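A response following this rule might be produced as below. This is a hedged sketch: the namespace URIs are those commonly used for Simple Darwin Core XML and Darwin Core terms, and the `record_response` function and its dict-based input are invented for the example.

```python
import xml.etree.ElementTree as ET

# Commonly used namespaces for Simple Darwin Core XML and Darwin Core terms.
SDR_NS = "http://rs.tdwg.org/dwc/xsd/simpledarwincore/"
DWC_NS = "http://rs.tdwg.org/dwc/terms/"

def record_response(record):
    """Serialize one record as a SimpleDarwinRecordSet.
    `record` is a dict mapping Darwin Core term name -> value (illustrative),
    or None when the record does not exist / has been deleted."""
    ET.register_namespace("", SDR_NS)
    ET.register_namespace("dwc", DWC_NS)
    recordset = ET.Element(f"{{{SDR_NS}}}SimpleDarwinRecordSet")
    if record is not None:  # a deleted record yields an empty recordset
        rec = ET.SubElement(recordset, f"{{{SDR_NS}}}SimpleDarwinRecord")
        for term, value in record.items():
            ET.SubElement(rec, f"{{{DWC_NS}}}{term}").text = value
    return ET.tostring(recordset, encoding="unicode")
```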
In use on this site, for example, but not yet public. Improved versions will be developed.
Not yet. For now, see http://koivu.luomus.fi/wkartta/
Backbone for most of "master data management", including taxonomy services.
In use, will be published later.
Taxonomies, places, organizations, information sources and other "background data".
Welcome to the OLD testing environment of the Finnish Biodiversity Information Facility - FinBIF!
To access the latest version of FinBIF service portal, please proceed to the following address:
In order to use the old testing environment, close this window.