Overview
The File storage service is where clients store raw files in their original form. The files are still registered as datasets in the metadata service, like any other type of dataset. It is up to the client to use the files appropriately, since the service does not know their format or internal structure. Clients can interact with the files through the web API and through the transfer pipeline.
Endpoints
Internal endpoints for interaction with the transfer/conversion pipeline
- GET /api/raw/prepare-download/{datasetId}: Prepare download of data for a dataset.
- GET /api/raw/prepare-upload/{datasetName}: Prepare upload of data for a dataset.
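As a minimal sketch of how a client might address these two endpoints, the snippet below builds the request URLs with the standard library. The base URL is a placeholder; the real host is deployment-specific.

```python
from urllib.parse import quote, urljoin

# Hypothetical base URL; substitute the host of your deployment.
BASE_URL = "https://example.com"

def prepare_download_url(dataset_id: str) -> str:
    """URL for GET /api/raw/prepare-download/{datasetId}."""
    return urljoin(BASE_URL, f"/api/raw/prepare-download/{quote(dataset_id)}")

def prepare_upload_url(dataset_name: str) -> str:
    """URL for GET /api/raw/prepare-upload/{datasetName}."""
    return urljoin(BASE_URL, f"/api/raw/prepare-upload/{quote(dataset_name)}")
```

A real client would issue a GET against these URLs with its usual HTTP library and authentication.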
File synchronization, primarily for integration with desktop clients
- GET /api/filesync/{id}/blocks: Get file blocks with checksums.
- GET /api/filesync/checksums: Get checksums for selected or all files in a folder.
- GET /api/filesync/prepare-download: Get download info for selected or all files in a folder.
- PUT /api/filesync/upload: Upload file datasets from the staging area or another storage. Files in the staging area are moved; all other files are copied. Check the copy operations with POST /api/filesync/upload/status.
- POST /api/filesync/upload/status: Get the status of the copy operations started by PUT /api/filesync/upload.
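The upload-then-poll pattern above can be sketched as follows. The HTTP calls are stubbed out, and the response shape (an "operations" list with "done" flags) is an assumption for illustration, not the documented schema.

```python
import time
from typing import Callable

def wait_for_copies(post_status: Callable[[list], dict],
                    operation_ids: list,
                    poll_seconds: float = 1.0,
                    max_polls: int = 60) -> dict:
    """Poll POST /api/filesync/upload/status until all copy operations finish.

    `post_status` stands in for the real HTTP call; the field names used
    here ("operations", "done") are assumptions.
    """
    for _ in range(max_polls):
        status = post_status(operation_ids)
        if all(op.get("done") for op in status.get("operations", [])):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("copy operations did not complete in time")

# Fake status endpoint for illustration: reports completion on the second poll.
calls = {"n": 0}
def fake_status(ids):
    calls["n"] += 1
    return {"operations": [{"id": i, "done": calls["n"] >= 2} for i in ids]}

result = wait_for_copies(fake_status, ["op-1", "op-2"], poll_seconds=0.01)
```

Polling with a bounded number of attempts keeps the client from hanging if a copy operation stalls server-side.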
Main endpoint for accessing an individual file
- GET /api/raw/dataset/{id}: Prepare download of data for a dataset. When you call this endpoint directly, you must provide a SAS token for the given dataset with read privilege in order to download its data.
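A small sketch of building the direct download URL with a SAS token attached. It assumes the token is passed as the URL's query string (the usual convention for SAS credentials); whether this service instead expects a header is not stated in the text above.

```python
def dataset_download_url(base_url: str, dataset_id: str, sas_token: str) -> str:
    """Build GET /api/raw/dataset/{id} with a read-privilege SAS token.

    Assumption: the token travels as the query string of the request URL.
    """
    token = sas_token.lstrip("?")  # tolerate tokens given with a leading "?"
    return f"{base_url}/api/raw/dataset/{dataset_id}?{token}"

url = dataset_download_url("https://example.com", "abc-123", "?sv=2022&sig=XYZ")
```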
Endpoints for staging files during transfer or conversion
- POST /api/raw/move-staged-url: Move data from staging storage to a dataset.
- GET /api/raw/staging-url: Prepare a staging blob storage for data upload.
- POST /api/raw/staging-url: Check if a given URL is a staging URL.
- GET /api/raw/staging-urls: Prepare staging blob storages for data upload.
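The staging endpoints combine into a three-step flow: obtain a staging URL, upload the raw bytes to it, then commit the staged data to a dataset. The sketch below stubs the HTTP calls; the JSON field names ("url", "datasetId") are assumptions for illustration, not the documented request schema.

```python
def stage_and_move(get_staging_url, upload_bytes, move_staged,
                   data: bytes, dataset_id: str) -> dict:
    """Stage raw bytes, then commit them to a dataset.

    The three callables stand in for:
      GET  /api/raw/staging-url      -> get_staging_url()
      PUT  <staging blob URL>        -> upload_bytes(url, data)
      POST /api/raw/move-staged-url  -> move_staged(body)
    """
    staging = get_staging_url()
    upload_bytes(staging["url"], data)
    return move_staged({"url": staging["url"], "datasetId": dataset_id})

# In-memory stand-ins for illustration.
blobs = {}
result = stage_and_move(
    lambda: {"url": "https://staging.example.com/blob-1"},
    lambda url, data: blobs.__setitem__(url, data),
    lambda body: {"moved": True, **body},
    b"raw file bytes",
    "abc-123",
)
```

Separating "upload to staging" from "move to dataset" lets large transfers complete before the dataset is touched, so a failed upload never leaves a dataset half-written.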