GenoEx-GDE User’s manual v.1.2

part 3 - API Manual (including the use of the program).

See also part 1 in GDE_user_manual and part 2 in GDE_gxprep_manual. This part assumes that those previous parts have already been read and understood.

The accompanying support program, maintained and distributed by the Interbull Centre, provides easy access to the API for upload and download of 706 and 711 files associated with the GenoEx-GDE database, and is an easy way to get started with using the API. For those who can read the Python code it is written in, it also serves as an additional source of documentation of the API.

This manual describes each of the calls of the API along with the usage of the gxapi program. These descriptions are organized into four sections to focus on the main aspects of the API: the first section provides an overview of, and some general information about, the API, the last section focuses on the program, and the remaining sections focus on the different uses of the API.

This is perhaps a suitable time to emphasize that the use of curl in this documentation is primarily meant to provide examples of the use of the API and to show the details of the argument syntax. It is definitely not meant as a suggestion that this is a good way to implement a scripted workflow; in fact, doing so is strongly discouraged. The program is meant for such uses.

Section 1, overview and general information

The API is provided as an alternative way to access the functionality available via the web browser interface and is exposed via POST calls on the same site. The operations have a basic structure where each call requires arguments in JSON format split into parameters and auth, both of which are key/value mappings.
The parameters part contains different keys depending on which call it is, but always, at least, information on the version.
The auth part always contains the keys username and pw, where the respective values should be your registered email address and associated password.

An example call via the curl program looks like (in one long command):
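The original example is easiest to understand as a sketch; here SITE and the endpoint path <call> are placeholders for the actual GenoEx-GDE server address and call name, and the email, password and version string must be substituted with real values:

```shell
# Build the two JSON arguments of the basic call structure.
# SITE and <call> are placeholders, not the real address.
PARAMS='parameters={"version": "220930"}'
AUTH='auth={"username": "you@mail.org", "pw": "test"}'
# The leading echo makes this safe to run without contacting the
# server; remove it to send the request for real.
echo curl --data "$PARAMS" --data "$AUTH" "https://SITE/<call>"
```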

The example email address and password strings would need to be substituted with your registered email and associated password. Note that passwords containing special characters may be problematic and need to be encoded according to JSON rules for this to work. This represents the common basic structure of every call to the API, although many calls need additional parameters.
This is a suitable command to use for verifying that you have access and that communication is set up correctly, so please use this (or something similar) as first command when trying out this functionality.

The above example uses the Linux command line syntax to embed strings inside a string. For Windows, this would probably need to be written something like:
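A sketch of the Windows-style quoting (again with placeholder server address and example credentials): the whole argument is wrapped in double quotes and the JSON's inner double quotes are backslash-escaped. The command is stored in a single-quoted string here so the escaping is visible when printed:

```shell
# Windows cmd.exe variant of the basic call (sketch only; SITE and
# <call> are placeholders). Inner double quotes are escaped with '\'.
WIN_CMD='curl --data "parameters={\"version\": \"220930\"}&auth={\"username\": \"you@mail.org\", \"pw\": \"test\"}" https://SITE/<call>'
echo "$WIN_CMD"
```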

The remaining examples will stick to the linux syntax as it is easier to read, so if you need to use this altered syntax for this example, then apply the corresponding modifications to each following example.

All calls may use the HTTP construct multipart/form-data (i.e. the -F flag instead of --data in curl), but only the gde_submission call requires multipart/form-data.
Note that if not using multipart/form-data, the password provided may need to be quoted for URL transmission depending on which characters it contains; for example, characters such as % and & need to be replaced by their corresponding three character sequences (%25 and %26 respectively).
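As a small illustration of that encoding, the two characters mentioned can be replaced with sed; note that % must be replaced first, so that the % introduced by %26 is not encoded a second time:

```shell
# Percent-encode '%' and '&' in a password for URL transmission.
# '%' must be handled before '&' to avoid double-encoding.
PW='my%pass&word'
ENC=$(printf '%s' "$PW" | sed -e 's/%/%25/g' -e 's/&/%26/g')
echo "$ENC"   # prints my%25pass%26word
```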

The data returned from each call is a JSON encoded data structure containing, at the minimum, keys "status" and "status_message". If status has value true then an additional key named return_values is provided (with some exceptions).

The details below are up-to-date with the 220930 version of the API.

The API is asynchronous in its central parts, i.e. an operation is first initiated and then the user needs to periodically poll the status of that operation until it terminates, either successfully or with a failure. This mode of operation is needed to avoid the timeouts inherent in common implementations of the HTTP protocol for long-running operations.

The return value of most calls is a JSON data structure looking, at the top level, like:
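Reconstructed from the description in the surrounding text, the top-level structure looks something like:

```json
{
    "status": true,
    "status_message": "",
    "return_values": { ... }
}
```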

Where the "..." would be a set of key/value pairs which will vary between different calls.
Whenever the value of "status" is false, then the value of "status_message" will report the error message. Furthermore, if the value of "status" is true, then the value of "return_values" should still be investigated for possible error messages before retrieving the real return values (keys "error" or "error_list" to be explicit). The value of "status_message" should be ignored when the value of "status" is true, but it may in some cases contain a comment.

The following two sections focus on the primary functionalities provided: upload and download of 706/711 files.

Section 2, upload of 706/711 files

This is a two-step operation: a submit call (once) and then intermittent (once per minute or so) polling of the status of that submission until a terminating state is reached.

Step 1: An example of a submit call via the curl program looks like this (note that this call requires multipart/form-data):
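A sketch of such a multipart call; SITE is a placeholder for the server address, the file paths are examples to be adapted, and the content type given after the semicolon is an assumption:

```shell
# Multipart (-F) submission of a 706 file and its associated 711 file.
# Substitute your credentials and file paths; SITE is a placeholder.
F706='file706=@/path/to/genotypes.706;type=text/plain'
F711='file711=@/path/to/genotypes.711;type=text/plain'
# The leading echo makes this safe to run without contacting the
# server; remove it to actually submit.
echo curl \
  -F 'parameters={"version": "220930"}' \
  -F 'auth={"username": "you@mail.org", "pw": "test"}' \
  -F "$F706" \
  -F "$F711" \
  "https://SITE/gde_submission"
```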

As in all these examples, the example email address and password strings would need to be substituted with your registered email address and associated password before running this.
In addition, the paths and filenames specified (i.e. the parts between @ and ; inside the JSON strings) need to be adapted to your own situation.
Note that the use of a single backslash at the end of the lines is just a way to visualize that the single command continues on the next line.

This example shows how to upload a 706 file and the associated 711 file in one go; if only one of these file types is to be uploaded, simply remove the -F switch, and the associated JSON string, related to the file you are not going to upload.

The above submission call will return a JSON data structure containing, if successful, the job_id assigned to this submission:
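A sketch of that reply, reconstructed from the surrounding text; the job_id shown is the example value used in step 2 below:

```json
{
    "status": true,
    "status_message": "",
    "return_values": {"job_id": "9be6c0bf-de9f-4951-b9e1-27217ec1e0c4"}
}
```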

Please remember that in all calls, if the key status has a false value, then the error message is found in status_message. Even if status is true, there may still be errors described inside the return_values data structure, but in the specific case of gde_submission there are currently no such cases.

Step 2: Polling for status is accomplished via a call like:

The 9be6c0bf-de9f-4951-b9e1-27217ec1e0c4 string needs to be replaced with the value of the job_id key provided in the return data structure of the submit call above.

This last call is then intermittently repeated, with no change, until a job_status of "FINISHED" or "FAILED" is reached and returned in a JSON data structure:
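A sketch of such a reply in the successful case, reconstructed from the description in the surrounding text:

```json
{
    "status": true,
    "status_message": "",
    "return_values": {"job_status": "FINISHED"}
}
```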

Errors may be returned under the key error_list inside the return_values value (but there is no error key there in this call).

Note that in the gde_job_status call, the return_values structure may in the "FINISHED" case include an additional key test_results. The value of that key is, if present, a (potentially rather long, multiple line) string that should be made known to the user. This additional key will only be present if the job_id is referring to an upload operation.

Section 3, download of 706/711 files

Download operations are a bit different from upload as 711 files are downloaded in synchronous mode but 706 files are downloaded in asynchronous mode, similar to upload.

In addition, there is an optional preliminary step to retrieve all the available values to choose from when selecting the parameter values to provide in the download operation
(you may want to redirect the output to a file (see params.json in the command line of the example) to have the results handy, and refresh this file from time to time by repeating this operation):
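A sketch of the gde_get_parameters call with its output redirected to params.json; SITE is a placeholder and the credentials must be substituted:

```shell
# Retrieve the currently available parameter values and save the reply
# in params.json for later reference. SITE is a placeholder.
PARAMS='parameters={"version": "220930"}'
AUTH='auth={"username": "you@mail.org", "pw": "test"}'
# Shown via echo (with the redirection quoted so it is only printed);
# drop the echo and the quotes around '>' to run it for real.
echo curl --data "$PARAMS" --data "$AUTH" "https://SITE/gde_get_parameters" '> params.json'
```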

This is a synchronous operation and hence a single step is sufficient.

The return_values data structure in the reply will include keys: breeds, countries, orgs, gender, extraction_type and arrays, but no error message. The value of each key is a list of strings to choose from when specifying the corresponding parameter in calls below. This data roughly corresponds to the data shown in the download dialog of the web browser interface.

Download 706 files

This is a three-step operation: an extraction call (once), followed by intermittent (every 30 seconds or so) polling of the status of that extraction until a terminating state is reached, and finally, if the status of the extraction is "FINISHED", downloading the resulting assembled zip file.

Step 1: The extraction call is where the specification for what data to download is provided.
The allowed values for different parts of the specification are:

Example call:
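A sketch of an extraction request for one breed, all countries and arrays, all orgs except IBC, and with quality checks ignored. The endpoint name is not given in this text (hence the <extraction-call> placeholder, alongside the SITE placeholder), further keys such as gender and extraction_type may apply, and the exact parameter set should be checked against the output of gde_get_parameters:

```shell
# Extraction request sketch; SITE and <extraction-call> are
# placeholders, and the parameter set shown is not exhaustive.
PARAMS='parameters={"version": "220930", "breeds": ["BSW"], "countries": [], "orgs": [], "arrays": [], "quality_criteria": null}'
AUTH='auth={"username": "you@mail.org", "pw": "test"}'
# Remove the leading echo to actually send the request.
echo curl --data "$PARAMS" --data "$AUTH" "https://SITE/<extraction-call>"
```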

Note that in this example, the values of keys "countries" and "arrays" are specified as empty lists. This means "all values included". The value for key "orgs" is also an empty list, but that means all orgs except IBC.
The value of "quality_criteria" is null, also meaning "anything goes" ignoring the results of the quality checks, i.e. all genotypes are considered for extraction.

This extraction call will return a JSON data structure containing, if successful, the job_id assigned to this submission:

If the extraction call fails, it may return an error message as a string value in the error key inside the return_values value.

Step 2: Intermittently poll for status, which is performed identically to how it is done for the upload, except that the job_id is extracted from the reply of the extraction call.
See section 2 step 2 "polling for status", for how this step is accomplished.

Step 3: If the extraction was successful (i.e. polling ended with status "FINISHED"), this step is simply a call to download the zip file associated with the prepared extraction. Example:
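A sketch of that download call; SITE is a placeholder, passing the job_id inside parameters is an assumption based on the surrounding text, and the local file name extraction.zip is just an example:

```shell
# Download the assembled zip file for a finished extraction into a
# local file. SITE is a placeholder; job_id comes from step 1.
PARAMS='parameters={"version": "220930", "job_id": "9be6c0bf-de9f-4951-b9e1-27217ec1e0c4"}'
AUTH='auth={"username": "you@mail.org", "pw": "test"}'
# Remove the leading echo to actually download.
echo curl --data "$PARAMS" --data "$AUTH" --output extraction.zip "https://SITE/gde_download"
```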

Errors are a bit tricky to handle as they are returned as a JSON structure instead of the expected zip file, so in the above curl call they would end up inside the downloaded file. See the program for how this may be handled.

Download 711 files

This is a single step operation which is fully specified in a single curl call:
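A sketch of that single call; SITE is a placeholder, the parameter values shown (all empty lists) are illustrative, and the local file name access.711 is just an example:

```shell
# Single synchronous call downloading a 711 file. SITE is a
# placeholder; the empty lists are illustrative selections.
PARAMS='parameters={"version": "220930", "breeds": [], "countries": [], "gender": [], "arrays": []}'
AUTH='auth={"username": "you@mail.org", "pw": "test"}'
# Remove the leading echo to actually download.
echo curl --data "$PARAMS" --data "$AUTH" --output access.711 "https://SITE/gde_download_711"
```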

The parameters "breeds", "countries", "gender" and "arrays" are used precisely as for the download of 706 files described above.

The same challenge in handling errors as for gde_download above applies also to gde_download_711 calls.

Section 4, using the program

The program is fetched from the web browser interface on the GDE -> UPLOAD page.
Note that the program requires a fairly recent version of Python (3.7 or newer) with the requests module installed.

In the examples below, the program is assumed to be located in the current directory, but that is not a requirement.
If it is located in another directory, just precede the program name in the examples with the path to where it is stored, i.e. use something like path-to-installdir/gxapi instead of gxapi in the examples.

To get a quick overview of how to execute it, run it with the -h switch:
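Assuming the program file is named gxapi and is executable in the current directory (adapt the name if your copy differs):

```shell
# Print the built-in help of the support program. The leading echo
# keeps this snippet inert when the program is not installed; remove
# it to actually run the program.
echo ./gxapi -h
```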

Upload of 706/711 files

To get a quick overview of how to execute upload, run it with the -h switch:

To upload a pair of 706/711 files in one go, simply run it like this:

To upload only one file, either a 706 or a 711 file, just omit the argument referencing the other file in the example above.

Download of 706/711 files

The optional preliminary step, calling gde_get_parameters, is performed via:

(complete with redirecting the stdout to a file, params.json, to save output for later).

To get an overview of how to execute download, run it with the -h switch:

Here, the switches are divided into groups "optional arguments" (used for both 706 and 711 files), "genotypes data" (used for 706 files only) and "access data" (used for 711 files only):

optional arguments:

genotypes data:

access data:

At the end of the help output, a couple of small explicit examples are shown, but here follow a couple more.

To, for example, download a 706 file containing the best genotypes of BSW bulls regardless of country, array, organization, date-of-upload or quality status execute:

Adding the switch --all removes the limitation of downloading only the best genotypes of each animal. To do the same download but with only genotypes that pass all quality checks, omit -q "" from the above command. To further limit the data downloaded, add switches for breeds, countries, arrays, organizations and/or dates, and either replace the empty string after -q with a suitable specification (e.g. "pedigree,call_rate") or omit -q and the associated string completely (which is the same as specifying -q "frequency,pedigree,call_rate").

Example data volume limiting switch:

Another example, downloading a 711 file for all animals available:

public/GDE_api_manual (last edited 2022-11-14 19:37:30 by KatarineHaugaard)