Analytics Data Warehouse API v1
The Analytics Data Warehouse service is the enterprise data warehouse (EDW) component of HealtheEDW. It pulls together data from different sources and makes it available for reporting and analytics. Data from Oracle Health Data Intelligence is ingested, normalized, transformed, and loaded into the data warehouse. Once data is in the warehouse, it can be used to create data sets. Data sets are grouped with related data sets in schemas. A data set is composed primarily of fields and transformations: fields define the structure and data types of the data in the data warehouse, and transformations define how data is inserted into the data set and transformed during processing. Workflows control how data sets are processed; they can be used to create dependencies between data sets and to dictate the order in which data sets are processed.
URL: https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1
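The typical build-out follows the object model above: create a schema, create a data set in it, group data sets into a workflow, and run the workflow. The following is a minimal sketch of that flow using the endpoints documented below; the names, mnemonics, and header value are placeholders, only fields shown as required in the Schema Definitions section are set, and the assumption that a workflow can be created from just a name and a data set list mirrors the example bodies on this page.
require 'httparty' # Using HTTParty 0.16.2
require 'json'
base = 'https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
# 1. Create a schema to hold the data set.
schema = HTTParty.post("#{base}/schemas", headers: headers, body: {"name" => "Example Schema", "mnemonic" => "EXAMPLE_SCHEMA"}.to_json)
# 2. Create a data set in that schema; fields and transformations are optional.
data_set = HTTParty.post("#{base}/data-sets", headers: headers, body: {"name" => "Example Data Set", "mnemonic" => "EXAMPLE_DATA_SET", "schemaId" => schema['id']}.to_json)
# 3. Group the data set into a workflow, then kick off a workflow run.
workflow = HTTParty.post("#{base}/workflows", headers: headers, body: {"name" => "Example Workflow", "dataSets" => data_set['id']}.to_json)
HTTParty.post("#{base}/workflows/#{workflow['id']}/runs", headers: headers)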
Feeds
Operations about Feeds
Create a new Feed
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds', headers: headers, body: {"dataSets":["100001234","100001337","100005656"],"name":"Feed Name","mnemonic":"testmnemonic7","description":"This is the feed's description.","frequency":"WEEKLY","nextRun":"2019-04-25T20:41:18.181Z","scheduleTimeZone":"America/Chicago","compressionType":"TAR_GZ"}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"dataSets":["100001234","100001337","100005656"],"name":"Feed Name","mnemonic":"testmnemonic7","description":"This is the feed'\''s description.","frequency":"WEEKLY","nextRun":"2019-04-25T20:41:18.181Z","scheduleTimeZone":"America/Chicago","compressionType":"TAR_GZ"}'
POST /feeds
Creates a new feed from the provided body.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
body | body | postFeeds | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Created. | Feed |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Retrieve a List of Feeds
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"items": [
{
"id": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
"name": "Feed Name",
"mnemonic": "testmnemonic7",
"description": "This is the feed's description.",
"status": "QUEUED",
"frequency": "WEEKLY",
"nextRun": "2019-04-25T20:41:18.181Z",
"scheduleTimeZone": "America/Chicago",
"compressionType": "TAR_GZ",
"deliveryChannelId": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
"createdAt": "2019-04-25T20:41:18.181Z",
"dataSets": [
"100001234",
"100001337",
"100005656"
]
}
],
"totalResults": 1,
"firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
"prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}
GET /feeds
Retrieves a list of feeds that match the query.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
mnemonic | query | string | false | N/A | The feed mnemonic to filter by | - |
status | query | string | false | N/A | The feed status to filter by | - |
offset | query | integer(int32) | false | 0 | The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. | - |
limit | query | integer(int32) | false | 20 | The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. | - |
orderBy | query | string | false | name | A comma-separated list of fields by which to sort. | name, -name, mnemonic, -mnemonic, status, -status, createdAt, -createdAt |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a List of Feeds | Feeds |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
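Because limit is capped at 100, retrieving more than 100 feeds requires paging with offset and limit. The following is a minimal paging sketch using the parameters above; -createdAt sorts newest first per the accepted orderBy values.
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
offset = 0
limit = 100
loop do
  page = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds', headers: headers, query: { offset: offset, limit: limit, orderBy: '-createdAt' })
  page['items'].each { |feed| puts "#{feed['id']} #{feed['name']} (#{feed['status']})" }
  offset += limit
  break if offset >= page['totalResults']
end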
Delete a Feed
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.delete('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X DELETE https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
DELETE /feeds/{feedId}
Deletes a single feed.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
feedId | path | string | true | N/A | The ID of the feed. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Delete a Feed | None |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Update a Feed
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.put('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d', headers: headers, body: {"dataSets":["100001234","100001337","100005656"],"name":"Feed Name","mnemonic":"testmnemonic7","description":"This is the feed's description.","frequency":"WEEKLY","nextRun":"2019-04-25T20:41:18.181Z","scheduleTimeZone":"America/Chicago","compressionType":"TAR_GZ"}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X PUT https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"dataSets":["100001234","100001337","100005656"],"name":"Feed Name","mnemonic":"testmnemonic7","description":"This is the feed'\''s description.","frequency":"WEEKLY","nextRun":"2019-04-25T20:41:18.181Z","scheduleTimeZone":"America/Chicago","compressionType":"TAR_GZ"}'
PUT /feeds/{feedId}
Updates a single feed.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
feedId | path | string | true | N/A | The ID of the feed. | - |
body | body | putFeeds | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Update a feed | Feed |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Retrieve a Feed
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"id": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
"name": "Feed Name",
"mnemonic": "testmnemonic7",
"description": "This is the feed's description.",
"status": "QUEUED",
"frequency": "WEEKLY",
"nextRun": "2019-04-25T20:41:18.181Z",
"scheduleTimeZone": "America/Chicago",
"compressionType": "TAR_GZ",
"deliveryChannelId": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
"createdAt": "2019-04-25T20:41:18.181Z",
"dataSets": [
"100001234",
"100001337",
"100005656"
]
}
GET /feeds/{feedId}
Retrieves a single feed.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
feedId | path | string | true | N/A | The ID of the feed. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a Feed | Feed |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Feed Runs
Operations about Feed Runs
Create a Feed Run
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json'
POST /feeds/{feedId}/runs
Creates a new feed run from a feed ID and an offset index.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
feedId | path | string | true | N/A | The ID of the feed. | - |
body | body | postFeedsFeedidRuns | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Created. | FeedRun |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Retrieve a List of Feed Runs
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"items": [
{
"id": "1a2b3c",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"status": "FAILED",
"dataSetIds": "1a2b3c",
"feedId": "1a2b3c",
"errorMessage": "1a2b3c",
"createdAt": "2019-04-25T20:41:18.181Z"
}
],
"totalResults": 1,
"firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
"prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}
GET /feeds/{feedId}/runs
Retrieves a list of feed runs that match the query.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
feedId | path | string | true | N/A | The ID of the feed. | - |
status | query | string | false | N/A | The status of the feed run | - |
offset | query | integer(int32) | false | 0 | The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. | - |
limit | query | integer(int32) | false | 20 | The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. | - |
orderBy | query | string | false | -createdAt | A comma-separated list of fields by which to sort. | startTime, -startTime, endTime, -endTime, createdAt, -createdAt |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a List of Feed Runs | FeedRuns |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
Retrieve a Feed Run
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs/e996e2c6-85df-44e2-a320-af14691773a3', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs/e996e2c6-85df-44e2-a320-af14691773a3 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"id": "1a2b3c",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"status": "FAILED",
"dataSetIds": "1a2b3c",
"feedId": "1a2b3c",
"errorMessage": "1a2b3c",
"createdAt": "2019-04-25T20:41:18.181Z"
}
GET /feeds/{feedId}/runs/{feedRunId}
Retrieves a single feed run.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
feedId | path | string | true | N/A | The ID of the feed. | - |
feedRunId | path | string | true | N/A | The ID of the feed run. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a Feed Run | FeedRun |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Data Sets
A data set is a collection of data that is created in the data warehouse.
Create a Data Set
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets', headers: headers, body: {"name":"Data Set Name","mnemonic":"DATA_SET_MNEMONIC","schemaId":"143","description":"143","truncate":true,"cernerDefined":true,"fields":[{"name":"Field Name","mnemonic":"field_mnemonic","dataType":"VARCHAR","precision":500,"scale":5,"primaryKey":true,"principalColumn":true}],"transformations":[{"name":"Insert a File","index":1,"type":"INSERT_FILE","description":"This transformation inserts columns \\[a\\], \\[b\\], and \\[c\\] into the data set.","delimiter":"|","errorStrategy":"SKIP","loadStrategy":"ALL","insertOrUpdate":true,"loadStrategyVersion":"all","query":"","fromClause":"","whereClause":"","fieldValueMap":{"value":"121","fieldMnemonic":"header_one"},"fileFieldMap":{"fileHeader":"File_Header","fieldMnemonic":"field_mnemonic"},"files":"filename1,filename2,filename3"}]}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name":"Data Set Name","mnemonic":"DATA_SET_MNEMONIC","schemaId":"143","description":"143","truncate":true,"cernerDefined":true,"fields":[{"name":"Field Name","mnemonic":"field_mnemonic","dataType":"VARCHAR","precision":500,"scale":5,"primaryKey":true,"principalColumn":true}],"transformations":[{"name":"Insert a File","index":1,"type":"INSERT_FILE","description":"This transformation inserts columns \\[a\\], \\[b\\], and \\[c\\] into the data set.","delimiter":"|","errorStrategy":"SKIP","loadStrategy":"ALL","insertOrUpdate":true,"loadStrategyVersion":"all","query":"","fromClause":"","whereClause":"","fieldValueMap":{"value":"121","fieldMnemonic":"header_one"},"fileFieldMap":{"fileHeader":"File_Header","fieldMnemonic":"field_mnemonic"},"files":"filename1,filename2,filename3"}]}'
POST /data-sets
Creates a new data set from the provided body.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
body | body | postDataSets | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Created. | DataSet |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
Retrieve a List of Data Sets
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"items": [
{
"id": "125",
"name": "Data Set Name",
"mnemonic": "DATA_SET_MNEMONIC",
"schemaId": "143",
"description": "143",
"truncate": true,
"cernerDefined": true,
"createdAt": "2019-04-25T20:41:18.181Z",
"version": 2
}
],
"totalResults": 1,
"firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
"prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}
GET /data-sets
Retrieves a list of data sets that match the query.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
schemaId | query | array[string] | false | N/A | Filters to only the data sets that are included in the given schema. | - |
schemaMnemonic | query | string | false | N/A | Filters to only the data sets that are included in the given schema. | - |
workflowId | query | string | false | N/A | Filters to only the data sets that are included in the given workflow. | - |
feedId | query | string | false | N/A | Filters the results to only the data sets that are included in the specified feed. | - |
name | query | string | false | N/A | Filters the response by the name of the data set. The response includes all data sets that contain the name. Partial matches are included and matching is not case sensitive. | - |
mnemonic | query | string | false | N/A | Filters the response by the mnemonic of the data set. The response includes all data sets that contain the mnemonic. Partial matches are included and matching is not case sensitive. | - |
truncate | query | string | false | N/A | Filters the data sets based on whether they have truncation enabled. | - |
cernerDefined | query | string | false | N/A | Filters the data sets based on whether they are Cerner-defined. | - |
offset | query | integer(int32) | false | 0 | The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. | - |
limit | query | integer(int32) | false | 20 | The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. | - |
orderBy | query | string | false | name | A comma-separated list of fields by which to sort. | name, -name, mnemonic, -mnemonic, createdAt, -createdAt |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a List of Data Sets | DataSets |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
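The name and mnemonic filters match partially and case-insensitively, so they combine well with a schema filter to narrow a large warehouse. A sketch, assuming a schema mnemonic of SCHEMA_MNEMONIC and data set mnemonics containing SEPSIS (both placeholder values):
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets', headers: headers, query: { schemaMnemonic: 'SCHEMA_MNEMONIC', mnemonic: 'SEPSIS', orderBy: '-createdAt' })
result['items'].each { |ds| puts "#{ds['id']}: #{ds['mnemonic']}" }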
Delete a Data Set
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.delete('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X DELETE https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
DELETE /data-sets/{dataSetId}
Deletes a single data set.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
dataSetId | path | string | true | N/A | The ID of the data set. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Delete a Data Set | None |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Update a Data Set
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.put('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510', headers: headers, body: {"name":"Data Set Name","mnemonic":"DATA_SET_MNEMONIC","schemaId":"143","description":"143","truncate":true,"cernerDefined":true,"fields":[{"name":"Field Name","mnemonic":"field_mnemonic","dataType":"VARCHAR","precision":500,"scale":5,"primaryKey":true,"principalColumn":true,"oldMnemonic":"old_field_mnemonic"}],"transformations":[{"name":"Insert a File","index":1,"type":"INSERT_FILE","description":"This transformation inserts columns \\[a\\], \\[b\\], and \\[c\\] into the data set.","delimiter":"|","errorStrategy":"SKIP","loadStrategy":"ALL","insertOrUpdate":true,"loadStrategyVersion":"all","query":"","fromClause":"","whereClause":"","fieldValueMap":{"value":"121","fieldMnemonic":"header_one"},"fileFieldMap":{"fileHeader":"File_Header","fieldMnemonic":"field_mnemonic"},"files":"filename1,filename2,filename3"}]}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X PUT https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510 \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name":"Data Set Name","mnemonic":"DATA_SET_MNEMONIC","schemaId":"143","description":"143","truncate":true,"cernerDefined":true,"fields":[{"name":"Field Name","mnemonic":"field_mnemonic","dataType":"VARCHAR","precision":500,"scale":5,"primaryKey":true,"principalColumn":true,"oldMnemonic":"old_field_mnemonic"}],"transformations":[{"name":"Insert a File","index":1,"type":"INSERT_FILE","description":"This transformation inserts columns \\[a\\], \\[b\\], and \\[c\\] into the data set.","delimiter":"|","errorStrategy":"SKIP","loadStrategy":"ALL","insertOrUpdate":true,"loadStrategyVersion":"all","query":"","fromClause":"","whereClause":"","fieldValueMap":{"value":"121","fieldMnemonic":"header_one"},"fileFieldMap":{"fileHeader":"File_Header","fieldMnemonic":"field_mnemonic"},"files":"filename1,filename2,filename3"}]}'
PUT /data-sets/{dataSetId}
Updates a single data set.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
dataSetId | path | string | true | N/A | The ID of the data set. | - |
body | body | putDataSets | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Update a Data Set | DataSet |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Retrieve a Data Set
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"id": "125",
"name": "Data Set Name",
"mnemonic": "DATA_SET_MNEMONIC",
"schemaId": "143",
"description": "143",
"truncate": true,
"cernerDefined": true,
"createdAt": "2019-04-25T20:41:18.181Z",
"version": 2,
"fields": [{
"id": "125",
"name": "Field Name",
"mnemonic": "field_mnemonic",
"dataType": "VARCHAR",
"precision": 500,
"scale": 5,
"primaryKey": true,
"principalColumn": true,
"createdAt": "2019-04-25T20:41:18.181Z"
}],
"transformations": [
{
"id": "125",
"name": "Insert a File",
"index": 1,
"type": "INSERT_FILE",
"description": "This transformation inserts columns \\[a\\], \\[b\\], and \\[c\\] into the data set.",
"delimiter": "|",
"errorStrategy": "SKIP",
"loadStrategy": "ALL",
"insertOrUpdate": true,
"loadStrategyVersion": "all",
"createdAt": "2019-04-25T20:41:18.181Z",
"query": "",
"fromClause": "",
"whereClause": "",
"fieldValueMap": {
"value": "121",
"fieldMnemonic": "header_one"
},
"fileFieldMap": {
"fileHeader": "File_Header",
"fieldMnemonic": "field_mnemonic"
},
"files": "filename1,filename2,filename3"
}
],
"versions": [
{
"version": 2,
"createdAt": "2019-04-25T20:41:18.181Z"
}
]
}
GET /data-sets/{dataSetId}
Retrieves a single data set.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
dataSetId | path | string | true | N/A | The ID of the data set. | - |
version | query | string | false | N/A | The version of the data set. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a Data Set | DataSet |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Data Set Runs
A data set run is the processing of a data set.
Retrieve a List of Data Set Runs
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"items": [
{
"id": "1a2b3c",
"name": "Data Set Run name",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"status": "PROCESSING",
"substatus": "PREPARING:file_name",
"dataSetId": "1a2b3c",
"workflowId": "1a2b3c",
"workflowRunId": "1a2b3c",
"createdAt": "2019-04-25T20:41:18.181Z",
"tableMigrated": "false",
"workflowName": "1a2b3c",
"index": 123,
"numberOfRecordsInserted": 123,
"numberOfRecordsUpdated": 123,
"numberOfRecordsDeleted": 123
}
],
"totalResults": 1,
"firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
"prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}
GET /data-sets/{dataSetId}/runs
Retrieves a list of data set runs that match the query.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
dataSetId | path | string | true | N/A | The ID of the data set. | - |
workflowRunId | query | string | false | N/A | The ID of the workflow run. | - |
offset | query | integer(int32) | false | 0 | The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. | - |
limit | query | integer(int32) | false | 20 | The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. | - |
orderBy | query | string | false | -createdAt | A comma-separated list of fields by which to sort. | startTime, -startTime, endTime, -endTime, index, -index, name, -name, createdAt, -createdAt |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a List of Data Set Runs | DataSetRuns |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
Retrieve a Data Set Run
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs/118347178', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs/118347178 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"id": "1a2b3c",
"name": "Data Set Run name",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"status": "PROCESSING",
"substatus": "PREPARING:file_name",
"dataSetId": "1a2b3c",
"workflowId": "1a2b3c",
"workflowRunId": "1a2b3c",
"createdAt": "2019-04-25T20:41:18.181Z",
"tableMigrated": "false",
"workflowName": "1a2b3c",
"index": 123,
"numberOfRecordsInserted": 123,
"numberOfRecordsUpdated": 123,
"numberOfRecordsDeleted": 123,
"transformationRuns": [
{
"id": "1a2b3c",
"status": "FAILED",
"transformationId": "1a2b3c",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"query": "select * from mytable",
"createdAt": "2019-04-25T20:41:18.181Z",
"index": 123,
"name": "Insert from query",
"type": "insert_query",
"numberOfRecordsInserted": 123,
"numberOfRecordsUpdated": 123,
"numberOfRecordsDeleted": 123,
"transformationRunErrors": [
{
"id": "1a2b3c",
"errorKey": "File_Parsing",
"message": "Pipeline job failed.",
"createdAt": "2019-04-25T20:41:18.181Z"
}
]
}
],
"dataSetRunErrors": [
{
"id": "1a2b3c",
"errorKey": "File_Parsing",
"message": "Pipeline job failed.",
"createdAt": "2019-04-25T20:41:18.181Z"
}
]
}
GET /data-sets/{dataSetId}/runs/{dataSetRunId}
Retrieves a single data set run.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
dataSetId | path | string | true | N/A | The ID of the data set. | - |
dataSetRunId | path | string | true | N/A | The ID of the data set run. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a Data Set Run | DataSetRun |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
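When a run fails, errors can surface at two levels of the response above: dataSetRunErrors on the run itself and transformationRunErrors nested under each transformation run. A sketch that collects both, using the example IDs from this page:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
run = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs/118347178', headers: headers)
# Errors reported on the run itself.
(run['dataSetRunErrors'] || []).each do |err|
  puts "run error #{err['errorKey']}: #{err['message']}"
end
# Errors reported on individual transformation runs.
(run['transformationRuns'] || []).each do |tr|
  (tr['transformationRunErrors'] || []).each do |err|
    puts "transformation #{tr['transformationId']} error #{err['errorKey']}: #{err['message']}"
  end
end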
Workflows
Workflows are used to process one or many data sets and create processing dependencies between data sets.
Create a Workflow
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows', headers: headers, body: {"name":"Workflow Name","description":"This workflow is used to process data sets related to the Sepsis Dashboard.","dataSets":"555,556,557","trigger":{"triggerType":"TIME","frequency":"WEEKLY","startTime":"2021-01-02T03:45:00.000Z","nextScheduledRun":"2021-01-02 03:45:00","timeZone":"America/Chicago"}}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name":"Workflow Name","description":"This workflow is used to process data sets related to the Sepsis Dashboard.","dataSets":"555,556,557","trigger":{"triggerType":"TIME","frequency":"WEEKLY","startTime":"2021-01-02T03:45:00.000Z","nextScheduledRun":"2021-01-02 03:45:00","timeZone":"America/Chicago"}}'
POST /workflows
Creates a new workflow composed of a name, a description, and an ordered list of data set IDs.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
body | body | postWorkflows | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Created. | Workflow |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
Retrieve a List of Workflows
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"items": [
{
"id": "125",
"name": "Workflow Name",
"description": "This workflow is used to process data sets related to the Sepsis Dashboard.",
"createdAt": "2019-04-25T20:41:18.181Z",
"dataSets": "555,556,557"
}
],
"totalResults": 1,
"firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
"prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}
GET /workflows
Retrieves a list of workflows that match the query.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
name | query | string | false | N/A | Filters the response by the name of the workflow. The response includes all the workflows that contain the name. Partial matches are included and matching is not case sensitive. | - |
offset | query | integer(int32) | false | 0 | The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. | - |
limit | query | integer(int32) | false | 20 | The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. | - |
orderBy | query | string | false | name | A comma-separated list of fields by which to sort. | name, -name, createdAt, -createdAt |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a List of Workflows | Workflows |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
Delete a Workflow
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.delete('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X DELETE https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
DELETE /workflows/{workflowId}
Deletes a single workflow.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
workflowId | path | string | true | N/A | The ID of the workflow. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Delete a Workflow | None |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Update a Workflow
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.put('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014', headers: headers, body: {"name":"Workflow Name","description":"This workflow is used to process data sets related to the Sepsis Dashboard.","dataSets":"555,556,557","trigger":{"triggerType":"TIME","frequency":"WEEKLY","startTime":"2021-01-02T03:45:00.000Z","nextScheduledRun":"2021-01-02 03:45:00","timeZone":"America/Chicago"}}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X PUT https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014 \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name":"Workflow Name","description":"This workflow is used to process data sets related to the Sepsis Dashboard.","dataSets":"555,556,557","trigger":{"triggerType":"TIME","frequency":"WEEKLY","startTime":"2021-01-02T03:45:00.000Z","nextScheduledRun":"2021-01-02 03:45:00","timeZone":"America/Chicago"}}'
PUT /workflows/{workflowId}
Updates a workflow composed of a name, a description, and an ordered list of data set IDs.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
workflowId | path | string | true | N/A | The ID of the workflow. | - |
body | body | putWorkflows | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Updated. | Workflow |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Retrieve a Workflow
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"id": "125",
"name": "Workflow Name",
"description": "This workflow is used to process data sets related to the Sepsis Dashboard.",
"createdAt": "2019-04-25T20:41:18.181Z",
"dataSets": "555,556,557",
"trigger": {
"triggerType": "TIME",
"frequency": "WEEKLY",
"startTime": "2021-01-02T03:45:00.000Z",
"nextScheduledRun": "2021-01-02 03:45:00",
"timeZone": "America/Chicago",
"createdAt": "2021-01-02T03:45:00.000Z"
}
}
GET /workflows/{workflowId}
Retrieves a single workflow.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
workflowId | path | string | true | N/A | The ID of the workflow. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a Workflow | Workflow |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Workflow Runs
A workflow run is the processing of a workflow.
Create a Workflow Run
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json'
POST /workflows/{workflowId}/runs
Creates a new workflow run from a workflow ID and an offset index.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
workflowId | path | string | true | N/A | The ID of the workflow. | - |
body | body | postWorkflowsWorkflowidRuns | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Created. | WorkflowRun |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
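A newly created run starts in a status such as QUEUED and can then be tracked with the retrieval endpoints below. A minimal polling sketch follows; treating QUEUED and PROCESSING as the only non-terminal statuses is an assumption based on the example responses on this page.
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
base = 'https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1'
workflow_id = '45014' # example workflow ID from this page
run = HTTParty.post("#{base}/workflows/#{workflow_id}/runs", headers: headers)
run_id = run['id']
while ['QUEUED', 'PROCESSING'].include?(run['status'])
  sleep 30 # polling interval; tune to the expected run length
  run = HTTParty.get("#{base}/workflows/#{workflow_id}/runs/#{run_id}", headers: headers)
end
puts "Workflow run #{run_id} finished with status #{run['status']}"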
Retrieve a List of Workflow Runs
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"items": [
{
"id": "125",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"status": "QUEUED",
"workflowId": "143",
"createdAt": "2019-04-25T20:41:18.181Z",
"offsetIndex": 0
}
],
"totalResults": 1,
"firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
"prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}
GET /workflows/{workflowId}/runs
Retrieves a list of workflow runs that match the query.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
workflowId | path | string | true | N/A | The ID of the workflow. | - |
offset | query | integer(int32) | false | 0 | The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. | - |
limit | query | integer(int32) | false | 20 | The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. | - |
orderBy | query | string | false | -createdAt | A comma-separated list of fields by which to sort. | startTime, -startTime, endTime, -endTime, createdAt, -createdAt |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a List of Workflow Runs | WorkflowRuns |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
Retrieve a Workflow Run
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs/5393608', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs/5393608 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"id": "125",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"status": "QUEUED",
"workflowId": "143",
"createdAt": "2019-04-25T20:41:18.181Z",
"offsetIndex": 0,
"dataSetRuns": [
{
"id": "1a2b3c",
"name": "Data Set Run name",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"status": "PROCESSING",
"substatus": "PREPARING:file_name",
"dataSetId": "1a2b3c",
"workflowId": "1a2b3c",
"workflowRunId": "1a2b3c",
"createdAt": "2019-04-25T20:41:18.181Z",
"tableMigrated": "false",
"workflowName": "1a2b3c",
"index": 123,
"numberOfRecordsInserted": 123,
"numberOfRecordsUpdated": 123,
"numberOfRecordsDeleted": 123,
"transformationRuns": [
{
"id": "1a2b3c",
"status": "FAILED",
"transformationId": "1a2b3c",
"startTime": "2019-04-25T20:41:18.181Z",
"endTime": "2019-04-25T20:41:18.181Z",
"query": "select * from mytable",
"createdAt": "2019-04-25T20:41:18.181Z",
"index": 123,
"name": "Insert from query",
"type": "insert_query",
"numberOfRecordsInserted": 123,
"numberOfRecordsUpdated": 123,
"numberOfRecordsDeleted": 123,
"transformationRunErrors": [
{
"id": "1a2b3c",
"errorKey": "File_Parsing",
"message": "Pipeline job failed.",
"createdAt": "2019-04-25T20:41:18.181Z"
}
]
}
],
"dataSetRunErrors": [
{
"id": "1a2b3c",
"errorKey": "File_Parsing",
"message": "Pipeline job failed.",
"createdAt": "2019-04-25T20:41:18.181Z"
}
]
}
]
}
GET /workflows/{workflowId}/runs/{workflowRunId}
Retrieves a single workflow run.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
workflowId | path | string | true | N/A | The ID of the workflow. | - |
workflowRunId | path | string | true | N/A | The ID of the workflow run. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a Workflow Run | WorkflowRun |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Schemas
A schema is a collection of data sets in the data warehouse. It can contain one or many data sets.
Create a new Schema
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas', headers: headers, body: {"name":"Schema Name","mnemonic":"SCHEMA_MNEMONIC","description":"This schema houses all user-defined data sets for client A."}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name":"Schema Name","mnemonic":"SCHEMA_MNEMONIC","description":"This schema houses all user-defined data sets for client A."}'
POST /schemas
Creates a new schema from the provided body.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
body | body | postSchemas | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Created | Schema |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Retrieve a List of Schemas
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"items": [
{
"id": "125",
"name": "Schema Name",
"mnemonic": "SCHEMA_MNEMONIC",
"description": "This schema houses all user-defined data sets for client A.",
"type": "CUSTOM_EDW",
"createdAt": "2019-04-25T20:41:18.181Z"
}
],
"totalResults": 1,
"firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
"prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
"nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}
GET /schemas
Retrieves a list of schemas that match the query.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
type | query | array[string] | false | N/A | Filters to only the schemas of the given types. | - |
name | query | array[string] | false | N/A | Filters to only the schemas with the given names. | - |
mnemonic | query | array[string] | false | N/A | Filters to only the schemas with the given mnemonics. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a List of Schemas | Schemas |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
Update a Schema
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Content-Type' => 'application/json',
'Accept' => 'application/json'
}
result = HTTParty.put('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas/269', headers: headers, body: {"name":"Schema Name","mnemonic":"SCHEMA_MNEMONIC","description":"This schema houses all user-defined data sets for client A."}.to_json )
print JSON.pretty_generate(result)
# You can also use curl
curl -X PUT https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas/269 \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name":"Schema Name","mnemonic":"SCHEMA_MNEMONIC","description":"This schema houses all user-defined data sets for client A."}'
PUT /schemas/{schemaId}
Updates a single schema.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
schemaId | path | string | true | N/A | The ID of the schema. | - |
body | body | putSchemas | true | N/A | No description | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Updated | Schema |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Retrieve a Single Schema
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
'Authorization' => '<auth_header>',
'Accept' => 'application/json'
}
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas/269', headers: headers)
print JSON.pretty_generate(result)
# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas/269 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'
Example response
{
"id": "125",
"name": "Schema Name",
"mnemonic": "SCHEMA_MNEMONIC",
"description": "This schema houses all user-defined data sets for client A.",
"type": "CUSTOM_EDW",
"createdAt": "2019-04-25T20:41:18.181Z"
}
GET /schemas/{schemaId}
Retrieves a single schema.
Parameters
Parameter | In | Type | Required | Default | Description | Accepted Values |
---|---|---|---|---|---|---|
schemaId | path | string | true | N/A | The ID of the schema. | - |
Response Statuses
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a Single Schema | Schema |
400 | Bad Request | Bad Request | Error |
401 | Unauthorized | Unauthorized | Error |
403 | Forbidden | Forbidden | Error |
404 | Not Found | Not Found | Error |
Schema Definitions
postDataSets
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
name | string | true | The human-friendly name of the data set. This value cannot exceed 255 characters. | - |
mnemonic | string | true | A single-word ID of the data set to be created. You use this mnemonic when you query the data set in the data warehouse. This value cannot have spaces, start with a number, or exceed 32 characters. | - |
schemaId | string | true | The ID of the schema in which the data set resides. | - |
fields | [object] | false | A list of the fields included in the data set. The number of fields cannot exceed 1000. | - |
» name | string | true | The human-friendly name of the field. This value cannot exceed 255 characters. | - |
» mnemonic | string | true | A single-word ID for the field on the data set. You use this mnemonic when querying this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. | - |
» dataType | string | true | The data type of the column to be created in the data warehouse. | VARCHAR, INTEGER, DOUBLE, DECIMAL, TIMESTAMP, TIMESTAMPTZ, BOOLEAN, DATE, VARBINARY |
» precision | integer(int32) | false | The number of allowed characters or bytes for the data type in the data warehouse. This applies only to fields with a dataType value of VARCHAR or VARBINARY. | - |
» scale | integer(int32) | false | The number of allowed digits after the decimal point in a number. This applies only to fields with a dataType value of DECIMAL. | - |
» primaryKey | boolean | false | Indicates whether the field is used as a primary key for the data set. | - |
» principalColumn | boolean | false | Indicates whether the field is used as a principal column for the data set. This is used for database optimization. | - |
transformations | [object] | false | A list of the transformations included in the data set. The number of transformations cannot exceed 150. | - |
» name | string | true | The human-friendly name of the transformation. This value cannot exceed 255 characters. | - |
» index | integer(int32) | true | The index that represents the order in which the transformation is executed. | - |
» type | string | true | The type of transformation to be executed. | INSERT_FILE, INSERT_QUERY, UPDATE, DELETE |
» description | string | false | The human-friendly description of the transformation. This value cannot exceed 2000 characters. | - |
» delimiter | string | false | A character that represents the delimiter that is used to parse the file. | - |
» errorStrategy | string | false | The strategy used to report errors. | FAIL, SKIP |
» loadStrategy | string | false | The strategy used for loading files into the data set. This is required if the action type is INSERT_FILE. | LATEST, ALL, NEW |
» loadStrategyVersion | string | false | The release version with which to start when loading files for this transformation. | - |
» insertOrUpdate | boolean | false | Indicates whether to treat rows with duplicate primary keys as updates rather than inserts. Primary keys must be defined to use this option. | true, false |
» query | string | false | An ANSI SQL query that selects data to be inserted into the data set. | - |
» fromClause | string | false | An ANSI SQL clause that indicates from where the data is updated. | - |
» whereClause | string | false | An ANSI SQL clause that indicates the conditions that need to be satisfied for the data to be updated. | - |
» fieldValueMap | [object] | false | A list of the mappings of field mnemonics to the values they are set to during the update. | - |
»» value | string | true | The expression that indicates the value the field is set to during the update. | - |
»» fieldMnemonic | string | true | A single-word ID of the field on the data set. You use this mnemonic when you query this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. | - |
» fileFieldMap | [object] | false | A list of the mappings of column headers to field mnemonics. | - |
»» fileHeader | string | true | The name of the header in the delimited file that is processed. | - |
»» fieldMnemonic | string | true | A single-word ID of the field on the data set. You use this mnemonic when you query this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. | - |
» files | [string] | false | A list of the file names for which releases are processed for the transformation. | - |
truncate | boolean | false | Indicates whether to remove all data in the table before repopulating it when the data set is processed next. | - |
description | string | false | The description of the data set. This value cannot exceed 2000 characters. | - |
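The following is a minimal sketch of a request that creates a data set conforming to this schema, in the style of the other examples in this document. The POST /data-sets path is assumed from the schema name, and the schema ID, names, and mnemonics are illustrative placeholders.
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}
# Illustrative body: a VARCHAR primary key (precision applies to VARCHAR and VARBINARY),
# a DECIMAL field (scale applies only to DECIMAL), and a single INSERT_QUERY transformation.
body = {
  'name' => 'Example Data Set',
  'mnemonic' => 'example_data_set', # no spaces, must not start with a number, 32 characters max
  'schemaId' => '<schema_id>',
  'fields' => [
    { 'name' => 'Person ID', 'mnemonic' => 'person_id', 'dataType' => 'VARCHAR',
      'precision' => 64, 'primaryKey' => true },
    { 'name' => 'Total Cost', 'mnemonic' => 'total_cost', 'dataType' => 'DECIMAL', 'scale' => 2 }
  ],
  'transformations' => [
    { 'name' => 'Load from query', 'index' => 0, 'type' => 'INSERT_QUERY',
      'query' => 'SELECT person_id, total_cost FROM source_schema.source_table' }
  ]
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets',
                       headers: headers, body: body.to_json)
print JSON.pretty_generate(result)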
DataSet
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | true | The unique ID of a data set. | - |
name | string | true | The human-friendly name of the data set. This value cannot exceed 255 characters. | - |
mnemonic | string | true | A single-word ID of the data set to be created. You use this mnemonic when you query the data set in the data warehouse. This value cannot have spaces, start with a number, or exceed 32 characters. | - |
schemaId | string | true | The ID of the schema in which the data set resides. | - |
description | string | false | The description of the data set. This value cannot exceed 2000 characters. | - |
truncate | boolean | false | Indicates whether to remove all data in the table before repopulating it when the data set is processed next. | - |
cernerDefined | boolean | false | Indicates whether the data set was defined by Cerner. Data sets defined by Cerner cannot be modified by users and should be considered read-only. | - |
createdAt | string | false | The date and time when the data set was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set is first created; therefore, the field does not need to be set explicitly. | - |
version | integer(int32) | false | The version number associated with the data set retrieved. | - |
fields | [Field] | false | A list of the fields included in the data set. The number of fields cannot exceed 1000. | - |
transformations | [Transformation] | false | A list of the transformations included in the data set. The number of transformations cannot exceed 150. | - |
versions | [DataSetVersions] | false | A list of versions available for the data set. | - |
Field
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | false | The unique ID of a field. | - |
name | string | true | The human-friendly name of the field. This value cannot exceed 255 characters. | - |
mnemonic | string | true | A single-word ID for the field on the data set. You use this mnemonic when querying this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. | - |
oldMnemonic | string | false | Optional attribute used to preserve data when renaming an existing field mnemonic. This mnemonic should match a mnemonic in the latest version of the data set being updated. | - |
dataType | string | true | The data type of the column to be created in the data warehouse. | VARCHAR, INTEGER, DOUBLE, DECIMAL, TIMESTAMP, TIMESTAMPTZ, BOOLEAN, DATE, VARBINARY |
precision | integer(int32) | false | The number of allowed characters or bytes for the data type in the data warehouse. This applies only to fields with a dataType value of VARCHAR or VARBINARY. | - |
scale | integer(int32) | false | The number of allowed digits after the decimal point in a number. This applies only to fields with a dataType value of DECIMAL. | - |
primaryKey | boolean | false | Indicates whether the field is used as a primary key for the data set. | - |
principalColumn | boolean | false | Indicates whether the field is used as a principal column for the data set. This is used for database optimization. | - |
createdAt | string | false | The date and time when the field was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the field is first created; therefore, the field does not need to be set explicitly. | - |
Transformation
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | false | The unique ID of a transformation. | - |
name | string | true | The human-friendly name of the transformation. This value cannot exceed 255 characters. | - |
index | integer(int32) | true | The index that represents the order in which the step is executed within the data set's transformations. | - |
type | string | true | The type of transformation to be executed. | INSERT_FILE, INSERT_QUERY, UPDATE, DELETE |
description | string | false | The human-friendly description of the transformation. This value cannot exceed 2000 characters. | - |
delimiter | string | false | A character that represents the delimiter that is used to parse the file. | - |
errorStrategy | string | false | The strategy used to report errors. | FAIL, SKIP |
loadStrategy | string | false | The strategy used for loading files into the data set. This is required if the action type is INSERT_FILE. | LATEST, ALL, NEW |
insertOrUpdate | boolean | false | Indicates whether to treat rows with duplicate primary keys as updates rather than inserts. Primary keys must be defined to use this option. | true, false |
loadStrategyVersion | string | false | The release version with which to start when loading files for this transformation. | - |
createdAt | string | false | The date and time when the transformation was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation is first created; therefore, the field does not need to be set explicitly. | - |
query | string | false | An ANSI SQL query that selects data to be inserted into the data set. | - |
fromClause | string | false | An ANSI SQL FROM clause that indicates the source of the data for the update. | - |
whereClause | string | false | An ANSI SQL clause that indicates the conditions that need to be satisfied for the data to be updated. | - |
fieldValueMap | [FieldValueMapping] | false | A list of the mappings of field mnemonics to the values they are set to during the update. | - |
fileFieldMap | [FileFieldMapping] | false | A list of the mappings of column headers to field mnemonics. | - |
files | [string] | false | A list of the file names for which releases are processed for the transformation. | - |
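To make the type-specific properties concrete, the following sketch shows two hypothetical transformation objects: an INSERT_FILE step that parses a pipe-delimited file and maps its headers to fields through fileFieldMap, and an UPDATE step driven by fromClause, whereClause, and fieldValueMap. All names, files, and SQL fragments are placeholders.
# Hypothetical INSERT_FILE transformation; loadStrategy is required for this type.
insert_file_step = {
  'name' => 'Load allergy file', 'index' => 0, 'type' => 'INSERT_FILE',
  'delimiter' => '|',
  'errorStrategy' => 'SKIP',  # skip unparseable rows instead of failing the run
  'loadStrategy' => 'NEW',    # process only files not loaded before
  'files' => ['allergies.psv'],
  'fileFieldMap' => [
    { 'fileHeader' => 'PERSON_ID', 'fieldMnemonic' => 'person_id' },
    { 'fileHeader' => 'ALLERGEN', 'fieldMnemonic' => 'allergen' }
  ]
}
# Hypothetical UPDATE transformation; fieldValueMap sets field values for rows
# that satisfy whereClause.
update_step = {
  'name' => 'Flag inactive people', 'index' => 1, 'type' => 'UPDATE',
  'fromClause' => 'source_schema.status_table s',
  'whereClause' => 'target.person_id = s.person_id AND s.active = false',
  'fieldValueMap' => [
    { 'fieldMnemonic' => 'active_flag', 'value' => 'false' }
  ]
}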
FieldValueMapping
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
value | string | true | The expression that indicates the value the field is set to during the update. | - |
fieldMnemonic | string | true | A single-word ID of the field on the data set. You use this mnemonic when you query this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. | - |
FileFieldMapping
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
fileHeader | string | true | The name of the header in the delimited file that is processed. | - |
fieldMnemonic | string | true | A single-word ID of the field on the data set. You use this mnemonic when you query this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. | - |
DataSetVersions
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
version | integer(int32) | false | The version number of the data set. | - |
createdAt | string | false | The date and time when the data set version was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set version is first created; therefore, the field does not need to be set explicitly. | - |
Error
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
code | integer(int32) | true | The HTTP response status code that represents the error. | - |
message | string | true | A human-readable description of the error. | - |
errorDetails | [ErrorDetail] | false | A list of additional error details. | - |
ErrorDetail
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
domain | string | false | A subsystem or context where an error occurred. | - |
reason | string | false | A codified value that represents the specific error that caused the current error status. | - |
message | string | false | A human-readable description of an error. | - |
locationType | string | false | The location or type of the field that caused an error. | query, header, path, formData, body |
location | string | false | The name of the field that caused an error. | - |
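Because every non-2xx status in this API returns the Error schema, clients can report failures uniformly. A minimal sketch, using the schema listing as the example call (the GET /schemas path is assumed from the Schemas schema below):
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = { 'Authorization' => '<auth_header>', 'Accept' => 'application/json' }
result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas', headers: headers)
unless result.code.between?(200, 299)
  error = result.parsed_response # matches the Error schema above
  puts "#{error['code']}: #{error['message']}"
  (error['errorDetails'] || []).each do |detail|
    # locationType is one of: query, header, path, formData, body
    puts "  #{detail['locationType']} #{detail['location']}: #{detail['message']}"
  end
end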
DataSets
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
items | [DataSet] | true | An array containing the current page of results. | - |
totalResults | integer(int32) | false | The total number of results for the specified parameters. | - |
firstLink | string | true | The first page of results. | - |
lastLink | string | false | The last page of results. | - |
prevLink | string | false | The previous page of results. | - |
nextLink | string | false | The next page of results. | - |
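DataSets and the other list schemas in this section (Workflows, Schemas, Feeds, DataSetRuns, WorkflowRuns, FeedRuns) share the same paging links, so a client can walk any collection by following nextLink until it is absent. A sketch against the data set listing (the GET /data-sets path is assumed from the schema name):
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = { 'Authorization' => '<auth_header>', 'Accept' => 'application/json' }
url = 'https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets'
all_items = []
while url
  page = HTTParty.get(url, headers: headers)
  all_items.concat(page['items'] || [])
  url = page['nextLink'] # nil on the last page, which ends the loop
end
puts "Retrieved #{all_items.length} data sets"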
putDataSets
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
name | string | true | The human-friendly name of the data set. This value cannot exceed 255 characters. | - |
mnemonic | string | true | A single-word ID of the data set to be created. You use this mnemonic when you query the data set in the data warehouse. This value cannot have spaces, start with a number, or exceed 32 characters. | - |
schemaId | string | true | The ID of the schema in which the data set resides. | - |
fields | [Field] | true | A list of the fields included in the data set. The number of fields cannot exceed 1000. | - |
transformations | [Transformation] | true | A list of the transformations included in the data set. The number of transformations cannot exceed 150. | - |
truncate | boolean | false | Indicates whether to remove all data in the table before repopulating it when the data set is processed next. | - |
description | string | false | The description of the data set. This value cannot exceed 2000 characters. | - |
DataSetRuns
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
items | [DataSetRun] | true | An array containing the current page of results. | - |
totalResults | integer(int32) | false | The total number of results for the specified parameters. | - |
firstLink | string | true | The first page of results. | - |
lastLink | string | false | The last page of results. | - |
prevLink | string | false | The previous page of results. | - |
nextLink | string | false | The next page of results. | - |
DataSetRun
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | true | The unique ID of a data set run. | - |
name | string | false | The name of the data set run. | - |
startTime | string | false | The date and time when the data set run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set run is started; therefore, the field does not need to be set explicitly. | - |
endTime | string | false | The date and time when the data set run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set run is ended; therefore, the field does not need to be set explicitly. | - |
status | string | false | The current status of the data set run. | CANCELLED, FAILED, FINISHING, PROCESSING, SUCCEEDED, UNKNOWN, QUEUED |
substatus | string | false | The current, more specific status of the data set run. | QUEUED, LAUNCHING, PREPARING:<file-name>, MIGRATING, WAITING:<resource-name>, EXECUTING:<resource-name>, REVERTING, SUCCEEDED, FAILED, KILLED, CANCELLED, UNKNOWN |
dataSetId | string | true | The ID of the data set to which this data set run belongs. | - |
workflowId | string | false | The ID of the workflow whose workflow run contains this data set run. | - |
workflowRunId | string | false | The ID of the workflow run to which this data set run belongs. | - |
createdAt | string | false | The date and time when the data set run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set run is first created; therefore, the field does not need to be set explicitly. | - |
tableMigrated | string | false | Indicates whether changes to the data set forced the table to be migrated during this data set run. | - |
workflowName | string | false | The name of the workflow to which this data set run belongs. | - |
index | integer(int32) | false | Represents the placement of the data set run within the workflow run to which it belongs. | - |
numberOfRecordsInserted | integer(int32) | false | The number of records inserted by this data set run. | - |
numberOfRecordsUpdated | integer(int32) | false | The number of records updated by this data set run. | - |
numberOfRecordsDeleted | integer(int32) | false | The number of records deleted by this data set run. | - |
transformationRuns | [TransformationRun] | false | The transformation runs in the data set run. | - |
dataSetRunErrors | [DataSetRunError] | false | The errors for the data set run. | - |
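Of the status values above, CANCELLED, FAILED, and SUCCEEDED are terminal, while QUEUED, PROCESSING, and FINISHING indicate the run is still in flight, which makes the schema convenient to poll. A sketch; the path for reading back an individual data set run is an assumption here, so substitute the retrieval operation documented for your deployment:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
TERMINAL_STATUSES = %w[SUCCEEDED FAILED CANCELLED].freeze
headers = { 'Authorization' => '<auth_header>', 'Accept' => 'application/json' }
# Hypothetical path for reading back a single data set run.
url = 'https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/<data_set_id>/runs/<run_id>'
loop do
  run = HTTParty.get(url, headers: headers)
  puts "status=#{run['status']} substatus=#{run['substatus']}"
  break if TERMINAL_STATUSES.include?(run['status'])
  sleep 30 # poll interval; tune as needed
end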
TransformationRun
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | false | The unique ID of a transformation run. | - |
status | string | false | The current status of the transformation run. | FAILED, PREPARING, PROCESSING, SUCCEEDED |
transformationId | string | false | The ID of the transformation to which this transformation run belongs. | - |
startTime | string | false | The date and time when the transformation run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation run is started; therefore, the field does not need to be set explicitly. | - |
endTime | string | false | The date and time when the transformation run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation run is ended; therefore, the field does not need to be set explicitly. | - |
query | string | false | The text of the query for a query-based transformation. | - |
createdAt | string | false | The date and time when the transformation run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation run is first created; therefore, the field does not need to be set explicitly. | - |
index | integer(int32) | false | Represents the placement within the data set of the transformation that this transformation run is associated with. | - |
name | string | false | The name of the transformation that this run is associated with. | - |
type | string | false | The action type of the transformation that this run is associated with. | - |
numberOfRecordsInserted | integer(int32) | false | The number of records inserted by this transformation run. | - |
numberOfRecordsUpdated | integer(int32) | false | The number of records updated by this transformation run. | - |
numberOfRecordsDeleted | integer(int32) | false | The number of records deleted by this transformation run. | - |
transformationRunErrors | [TransformationRunError] | false | The errors for the transformation run. | - |
TransformationRunError
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | false | The unique ID of a transformation run error. | - |
errorKey | string | false | The error key for the transformation run error. | - |
message | string | false | The message for the transformation run error. | - |
createdAt | string | false | The date and time when the transformation run error was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. | - |
DataSetRunError
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | false | The unique ID of a data set run error. | - |
errorKey | string | false | The error key for the data set run error. | - |
message | string | false | The message for the data set run error. | - |
createdAt | string | false | The date and time when the data set run error was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. | - |
Trigger
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
triggerType | string | true | The type of the trigger. | TIME, DATASET |
dataSetId | string | false | The ID of the Cerner-defined data set that, on completion, triggers this workflow. | - |
workflowId | string | false | The ID of the workflow that, on completion, triggers this workflow. | - |
scheduleFrequency | string | false | The frequency of the trigger. | - |
startTime | string | false | The date and time when the workflow is set to start. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. This must be set to schedule the workflow trigger. | - |
nextScheduledRun | string | false | The date and time when the workflow will run next. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the trigger is first created and when the workflow runs; therefore, the field does not need to be set explicitly. | - |
timeZone | string | false | The time zone in which this trigger is scheduled. | - |
createdAt | string | false | The date and time when the trigger was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the trigger is first created; therefore, the field does not need to be set explicitly. | - |
postWorkflows
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
name | string | true | The human-friendly name of the workflow. This value cannot exceed 255 characters. | - |
description | string | false | The description of the workflow. This value cannot exceed 2000 characters. | - |
dataSets | [string] | false | A sorted list of data set IDs in the order in which they are processed in the workflow. The number of data sets cannot exceed 150. | - |
trigger | Trigger | false | Trigger for the workflow. | - |
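A sketch of a postWorkflows body that chains two data sets and schedules the workflow with a TIME trigger. The POST /workflows path is assumed from the schema name, and the data set IDs are placeholders:
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}
body = {
  'name' => 'Nightly Load',
  'description' => 'Processes the staging data set, then the reporting data set.',
  'dataSets' => ['100001234', '100001337'], # processed in this order
  'trigger' => {
    'triggerType' => 'TIME',
    'startTime' => '2019-04-25T20:41:18.181Z', # required to schedule the trigger
    'timeZone' => 'America/Chicago'
  }
}
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows',
                       headers: headers, body: body.to_json)
print JSON.pretty_generate(result)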
Workflow
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | true | The unique ID of a workflow. | - |
name | string | true | The human-friendly name of the workflow. This value cannot exceed 255 characters. | - |
description | string | false | The description of the workflow. This value cannot exceed 2000 characters. | - |
createdAt | string | false | The date and time when the workflow was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow is first created; therefore, the field does not need to be set explicitly. | - |
dataSets | [string] | false | A sorted list of data set IDs in the order in which they are processed in the workflow. The number of data sets cannot exceed 150. | - |
trigger | Trigger | false | Trigger for the workflow. | - |
Workflows
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
items | [Workflow] | true | An array containing the current page of results. | - |
totalResults | integer(int32) | false | The total number of results for the specified parameters. | - |
firstLink | string | true | The first page of results. | - |
lastLink | string | false | The last page of results. | - |
prevLink | string | false | The previous page of results. | - |
nextLink | string | false | The next page of results. | - |
putWorkflows
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
name | string | true | The human-friendly name of the workflow. This value cannot exceed 255 characters. | - |
description | string | false | The description of the workflow. This value cannot exceed 2000 characters. | - |
dataSets | [string] | false | A sorted list of data set IDs in the order in which they are processed in the workflow. The number of data sets cannot exceed 150. | - |
trigger | Trigger | false | Trigger for the workflow. | - |
postWorkflowsWorkflowidRuns
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
offsetIndex | integer(int32) | false | No description | - |
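Judging by the schema name, this body starts a run of an existing workflow, with offsetIndex choosing where in the workflow's ordered data set list the run begins (compare the offsetIndex field on WorkflowRun below). A sketch, assuming the path is POST /workflows/{workflowId}/runs:
Example Request:
require 'httparty' # Using HTTParty 0.16.2
require 'json'
headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}
# Start from the beginning of the workflow's data set list.
result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/<workflow_id>/runs',
                       headers: headers, body: { 'offsetIndex' => 0 }.to_json)
print JSON.pretty_generate(result)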
WorkflowRun
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | true | The unique ID of a workflow run. | - |
startTime | string | false | The date and time when the workflow run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow run is started; therefore, the field does not need to be set explicitly. | - |
endTime | string | false | The date and time when the workflow run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow run is ended; therefore, the field does not need to be set explicitly. | - |
status | string | false | The current status of the workflow run. | QUEUED, PROCESSING, SUCCEEDED, FAILED, CANCELLED |
workflowId | string | true | The ID of the workflow to use for this workflow run. | - |
createdAt | string | false | The date and time when the workflow run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow run is first created; therefore, the field does not need to be set explicitly. | - |
offsetIndex | integer(int32) | false | The position in the workflow's list of data sets at which the workflow run started. | - |
dataSetRuns | [DataSetRun] | false | The data set runs in the workflow run. | - |
WorkflowRuns
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
items | [WorkflowRun] | true | An array containing the current page of results. | - |
totalResults | integer(int32) | false | The total number of results for the specified parameters. | - |
firstLink | string | true | The first page of results. | - |
lastLink | string | false | The last page of results. | - |
prevLink | string | false | The previous page of results. | - |
nextLink | string | false | The next page of results. | - |
postSchemas
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
name | string | false | The human-friendly name of the schema. This value may not exceed 255 characters. | - |
mnemonic | string | false | The name of the schema that is created in the data warehouse. Must be unique for the given tenant. This value may not exceed 128 characters. | - |
description | string | false | The description of the schema. This value may not exceed 255 characters. | - |
Schema
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | true | The unique ID of the schema. | - |
name | string | false | The human-friendly name of the schema. This value may not exceed 255 characters. | - |
mnemonic | string | false | The name of the schema that is created in the data warehouse. Must be unique for the given tenant. This value may not exceed 128 characters. | - |
description | string | false | The description of the schema. This value may not exceed 255 characters. | - |
type | string | false | Indicates the type of the schema. | CUSTOM_EDW, ANALYST, MILLENNIUM, POPHEALTH |
createdAt | string | false | The date and time when the schema was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the schema is first created; therefore, the field does not need to be set explicitly. | - |
Schemas
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
items | [Schema] | true | An array containing the current page of results. | - |
totalResults | integer(int32) | false | The total number of results for the specified parameters. | - |
firstLink | string | true | The first page of results. | - |
lastLink | string | false | The last page of results. | - |
prevLink | string | false | The previous page of results. | - |
nextLink | string | false | The next page of results. | - |
putSchemas
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
name | string | false | The human-friendly name of the schema. This value may not exceed 255 characters. | - |
mnemonic | string | false | The name of the schema that is created in the data warehouse. Must be unique for the given tenant. This value may not exceed 128 characters. | - |
description | string | false | The description of the schema. This value may not exceed 255 characters. | - |
postFeeds
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
dataSets | [string] | true | The array of data set IDs for the data sets in the feed. | - |
name | string | true | The name of the feed. | - |
mnemonic | string | true | The unique identifier used to reference the feed. This value must be unique, must be between 3 and 20 characters long, must be lowercase alphanumeric, and must begin with a letter. This value cannot be modified after creation. | - |
description | string | true | The description of the feed. | - |
frequency | string | true | Determines how often the feed runs. This must be set to schedule the feed. | DAILY, WEEKLY, MONTHLY N, MONTHLY LAST |
nextRun | string | true | The date and time when the feed will run next. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. This must be set to schedule the feed. | - |
scheduleTimeZone | string | true | The time zone to use for the next run. This must be set to schedule the feed. The time zone should follow tz database/IANA format. | - |
compressionType | string | false | The type of compression used when zipping the files for download. | TAR_CONTAINING_LZ4, TAR_GZ |
Feed
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | true | The unique identifier for the feed. | - |
name | string | true | The name of the feed. | - |
mnemonic | string | true | The unique identifier used to reference the feed. This value must be unique, must be between 3 and 20 characters long, must be lowercase alphanumeric, and must begin with a letter. This value cannot be modified after creation. | - |
description | string | true | The description of the feed. | - |
status | string | false | The current status of the feed. | QUEUED, EXTRACTING, SUBMITTED, FAILED |
frequency | string | true | Determines how often the feed runs. This must be set to schedule the feed. | DAILY, WEEKLY, MONTHLY N, MONTHLY LAST |
nextRun | string | true | The date and time when the feed will run next. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. This must be set to schedule the feed. | - |
scheduleTimeZone | string | true | The time zone to use for the next run. This must be set to schedule the feed. The time zone should follow tz database/IANA format. | - |
compressionType | string | false | The type of compression used when zipping the files for download. | TAR_CONTAINING_LZ4, TAR_GZ |
deliveryChannelId | string | false | The unique identifier for the delivery channel in data syndication. | - |
createdAt | string | false | The date and time when the feed was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed is first created; therefore, the field does not need to be set explicitly. | - |
dataSets | [string] | true | The array of data set IDs for the data sets in the feed. | - |
Feeds
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
items | [Feed] | true | An array containing the current page of results. | - |
totalResults | integer(int32) | false | The total number of results for the specified parameters. | - |
firstLink | string | true | The first page of results. | - |
lastLink | string | false | The last page of results. | - |
prevLink | string | false | The previous page of results. | - |
nextLink | string | false | The next page of results. | - |
putFeeds
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
dataSets | [string] | true | The array of data set IDs for the data sets in the feed. | - |
name | string | true | The name of the feed. | - |
mnemonic | string | true | The unique identifier used to reference the feed. This value must be unique, must be between 3 and 20 characters long, must be lowercase alphanumeric, and must begin with a letter. This value cannot be modified after creation. | - |
description | string | true | The description of the feed. | - |
frequency | string | true | Determines how often the feed runs. This must be set to schedule the feed. | DAILY, WEEKLY, MONTHLY N, MONTHLY LAST |
nextRun | string | true | The date and time when the feed will run next. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. This must be set to schedule the feed. | - |
scheduleTimeZone | string | true | The time zone to use for the next run. This must be set to schedule the feed. The time zone should follow tz database/IANA format. | - |
compressionType | string | false | The type of compression used when zipping the files for download. | TAR_CONTAINING_LZ4, TAR_GZ |
postFeedsFeedidRuns
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
offsetIndex | integer(int32) | false | No description | - |
FeedRun
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
id | string | true | The unique ID of a feed run. | - |
startTime | string | false | The date and time when the feed run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed run is started; therefore, the field does not need to be set explicitly. | - |
endTime | string | false | The date and time when the feed run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed run is ended; therefore, the field does not need to be set explicitly. | - |
status | string | false | The current status of the feed run. | FAILED, EXTRACTING, SUCCEEDED, QUEUED |
dataSetIds | string | false | The comma-separated list of data set IDs that are syndicated in this feed. | - |
feedId | string | true | The ID of the feed that the feed run belongs to. | - |
errorMessage | string | false | The error message (if any) that was received during the extraction of the feed. | - |
createdAt | string | false | The date and time when the feed run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed run is first created; therefore, the field does not need to be set explicitly. | - |
FeedRuns
Name | Type | Required | Description | Accepted Values |
---|---|---|---|---|
items | [FeedRun] | true | An array containing the current page of results. | - |
totalResults | integer(int32) | false | The total number of results for the specified parameters. | - |
firstLink | string | true | The first page of results. | - |
lastLink | string | false | The last page of results. | - |
prevLink | string | false | The previous page of results. | - |
nextLink | string | false | The next page of results. | - |