
Analytics Data Warehouse API v1

The Analytics Data Warehouse service is the enterprise data warehouse (EDW) component of HealtheEDW. It pulls together data from different sources and makes it available for reporting and analytics. Data from HealtheIntent is ingested, normalized, transformed, and loaded into the data warehouse, where it can be used to create data sets. Related data sets are grouped into schemas. A data set is composed primarily of fields and transformations: fields define the structure and data types of the data in the data warehouse, and transformations define how data is inserted and transformed when the data set is processed. Workflows control how data sets are processed; they can create dependencies between data sets and dictate the order in which data sets are processed.

URL: https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1

Feeds

Operations about Feeds

Create a new Feed

Example Request:




require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}

# Example request body; see the postFeeds schema for the full list of fields.
body = {
  name: 'Feed Name',
  mnemonic: 'testmnemonic7',
  description: "This is the feed's description."
}

result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds',
                       headers: headers, body: body.to_json)

print JSON.pretty_generate(result)




# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name": "Feed Name", "mnemonic": "testmnemonic7", "description": "Example feed."}'

POST /feeds

Creates a new feed from the provided body.

Parameters

Parameter In Type Required Default Description Accepted Values
body body postFeeds true N/A No description -

Response Statuses

Status Meaning Description Schema
201 Created Created. Feed
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error
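
The 4xx statuses above all return the Error schema. Rather than checking `result.code` inline after every call, a small helper can raise on any non-2xx response. This is a sketch: the `ApiError` class is illustrative, and the assumption that the Error body carries a `message` field is not confirmed by this document.

```ruby
# Raised when the service returns a non-2xx status.
class ApiError < StandardError
  attr_reader :status, :body

  def initialize(status, body)
    @status = status
    @body = body
    # 'message' is an assumed Error-schema field; fall back to the raw body.
    detail = body.is_a?(Hash) ? body['message'] : body
    super("HTTP #{status}: #{detail}")
  end
end

# Returns the response unchanged on success, raises ApiError otherwise.
# `response` only needs #code and #parsed_response, as an
# HTTParty::Response provides.
def check!(response)
  return response if (200..299).cover?(response.code)

  raise ApiError.new(response.code, response.parsed_response)
end
```

With HTTParty, `check!(HTTParty.post(url, headers: headers, body: body.to_json))` either returns the response or raises with the status and server message.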

Retrieve a List of Feeds

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "items": [
    {
      "id": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
      "name": "Feed Name",
      "mnemonic": "testmnemonic7",
      "description": "This is the feed's description.",
      "status": "QUEUED",
      "frequency": "WEEKLY",
      "nextRun": "2019-04-25T20:41:18.181Z",
      "scheduleTimeZone": "America/Chicago",
      "compressionType": "TAR_GZ",
      "deliveryChannelId": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
      "createdAt": "2019-04-25T20:41:18.181Z",
      "dataSets": [
        "100001234",
        "100001337",
        "100005656"
      ]
    }
  ],
  "totalResults": 1,
  "firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
  "prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}

GET /feeds

Retrieves a list of feeds that match the query.

Parameters

Parameter In Type Required Default Description Accepted Values
mnemonic query string false N/A The feed mnemonic to filter by -
status query string false N/A The feed status to filter by -
offset query integer(int32) false 0 The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. -
limit query integer(int32) false 20 The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. -
orderBy query string false name A comma-separated list of fields by which to sort. name, -name, mnemonic, -mnemonic, status, -status, createdAt, -createdAt

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a List of Feeds Feeds
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
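
The `offset` and `limit` parameters above page through results. A transport-agnostic loop can collect every page; this is a sketch in which the block stands in for the HTTParty call shown earlier.

```ruby
# Pages through an offset/limit endpoint until all totalResults items
# are collected. The block receives (offset, limit) and must return a
# parsed page hash with 'items' and 'totalResults' keys.
def fetch_all(limit: 20)
  items = []
  offset = 0
  loop do
    page = yield(offset, limit)
    items.concat(page['items'])
    offset += limit
    break if items.size >= page['totalResults'] || page['items'].empty?
  end
  items
end
```

With HTTParty the block would be something like `->(o, l) { HTTParty.get("#{base}/feeds", headers: headers, query: { offset: o, limit: l }).parsed_response }`. Remember that the maximum `limit` is 100.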

Delete a Feed

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.delete('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X DELETE https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

DELETE /feeds/{feedId}

Deletes a single feed.

Parameters

Parameter In Type Required Default Description Accepted Values
feedId path string true N/A The ID of the feed. -

Response Statuses

Status Meaning Description Schema
204 No Content Delete a Feed None
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Update a Feed

Example Request:




require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}

# Example request body; see the putFeeds schema for the full list of fields.
body = {
  name: 'Feed Name',
  description: 'Updated description.'
}

result = HTTParty.put('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d',
                      headers: headers, body: body.to_json)

print JSON.pretty_generate(result)




# You can also use curl
curl -X PUT https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name": "Feed Name", "description": "Updated description."}'

PUT /feeds/{feedId}

Updates a single feed.

Parameters

Parameter In Type Required Default Description Accepted Values
feedId path string true N/A The ID of the feed. -
body body putFeeds true N/A No description -

Response Statuses

Status Meaning Description Schema
201 Created Update a Feed Feed
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Retrieve a Feed

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "id": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
  "name": "Feed Name",
  "mnemonic": "testmnemonic7",
  "description": "This is the feed's description.",
  "status": "QUEUED",
  "frequency": "WEEKLY",
  "nextRun": "2019-04-25T20:41:18.181Z",
  "scheduleTimeZone": "America/Chicago",
  "compressionType": "TAR_GZ",
  "deliveryChannelId": "a413813c-bb7a-46b8-88d5-d6fd8bc5eaf8",
  "createdAt": "2019-04-25T20:41:18.181Z",
  "dataSets": [
    "100001234",
    "100001337",
    "100005656"
  ]
}

GET /feeds/{feedId}

Retrieves a single feed.

Parameters

Parameter In Type Required Default Description Accepted Values
feedId path string true N/A The ID of the feed. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a Feed Feed
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Feed Runs

Operations about Feed Runs

Create a Feed Run

Example Request:




require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}

# Example request body; the field name below is hypothetical. See the
# postFeedsFeedidRuns schema for the actual fields.
body = { offset: 0 }

result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs',
                       headers: headers, body: body.to_json)

print JSON.pretty_generate(result)




# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json'

POST /feeds/{feedId}/runs

Creates a new feed run from a feed ID and an offset index.

Parameters

Parameter In Type Required Default Description Accepted Values
feedId path string true N/A The ID of the feed. -
body body postFeedsFeedidRuns true N/A No description -

Response Statuses

Status Meaning Description Schema
201 Created Created. FeedRun
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error
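
After creating a run, its status moves from QUEUED toward a terminal state (the example responses show only QUEUED and FAILED; the full set of status values isn't listed here, so the sketch below assumes only that QUEUED and RUNNING mean the run is still in progress). A simple polling loop:

```ruby
# Assumed non-terminal statuses; adjust to the service's actual values.
IN_PROGRESS = %w[QUEUED RUNNING].freeze

# Polls until the run leaves an in-progress status or attempts run out.
# The block performs one fetch, e.g. GET /feeds/{feedId}/runs/{feedRunId}.
def wait_for_run(poll_interval: 10, max_attempts: 60)
  max_attempts.times do
    run = yield
    return run unless IN_PROGRESS.include?(run['status'])

    sleep poll_interval
  end
  raise 'feed run did not reach a terminal status in time'
end
```

With HTTParty, the block would be something like `{ HTTParty.get("#{base}/feeds/#{feed_id}/runs/#{run_id}", headers: headers).parsed_response }`.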

Retrieve a List of Feed Runs

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "items": [
    {
      "id": "1a2b3c",
      "startTime": "2019-04-25T20:41:18.181Z",
      "endTime": "2019-04-25T20:41:18.181Z",
      "status": "FAILED",
      "dataSetIds": "1a2b3c",
      "feedId": "1a2b3c",
      "errorMessage": "1a2b3c",
      "createdAt": "2019-04-25T20:41:18.181Z"
    }
  ],
  "totalResults": 1,
  "firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
  "prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}

GET /feeds/{feedId}/runs

Retrieves a list of feed runs that match the query.

Parameters

Parameter In Type Required Default Description Accepted Values
feedId path string true N/A The ID of the feed. -
status query string false N/A The status of the feed run -
offset query integer(int32) false 0 The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. -
limit query integer(int32) false 20 The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. -
orderBy query string false -createdAt A comma-separated list of fields by which to sort. startTime, -startTime, endTime, -endTime, createdAt, -createdAt

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a List of Feed Runs FeedRuns
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error

Retrieve a Feed Run

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs/e996e2c6-85df-44e2-a320-af14691773a3', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds/3174fa83-4c61-4a05-8d03-e6880c81c83d/runs/e996e2c6-85df-44e2-a320-af14691773a3 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "id": "1a2b3c",
  "startTime": "2019-04-25T20:41:18.181Z",
  "endTime": "2019-04-25T20:41:18.181Z",
  "status": "FAILED",
  "dataSetIds": "1a2b3c",
  "feedId": "1a2b3c",
  "errorMessage": "1a2b3c",
  "createdAt": "2019-04-25T20:41:18.181Z"
}

GET /feeds/{feedId}/runs/{feedRunId}

Retrieves a single feed run.

Parameters

Parameter In Type Required Default Description Accepted Values
feedId path string true N/A The ID of the feed. -
feedRunId path string true N/A The ID of the feed run. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a Feed Run FeedRun
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Data Sets

A data set is a collection of data that is created in the data warehouse.

Create a Data Set

Example Request:




require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}

# Example request body; see the postDataSets schema for the full list of fields.
body = {
  name: 'Data Set Name',
  mnemonic: 'DATA_SET_MNEMONIC',
  schemaId: '143',
  description: 'Example data set.'
}

result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets',
                       headers: headers, body: body.to_json)

print JSON.pretty_generate(result)




# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name": "Data Set Name", "mnemonic": "DATA_SET_MNEMONIC", "schemaId": "143"}'

POST /data-sets

Creates a new data set from the provided body.

Parameters

Parameter In Type Required Default Description Accepted Values
body body postDataSets true N/A No description -

Response Statuses

Status Meaning Description Schema
201 Created Created. DataSet
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error

Retrieve a List of Data Sets

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "items": [
    {
      "id": "125",
      "name": "Data Set Name",
      "mnemonic": "DATA_SET_MNEMONIC",
      "schemaId": "143",
      "description": "143",
      "truncate": true,
      "cernerDefined": true,
      "createdAt": "2019-04-25T20:41:18.181Z",
      "version": 2
    }
  ],
  "totalResults": 1,
  "firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
  "prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}

GET /data-sets

Retrieves a list of data sets that match the query.

Parameters

Parameter In Type Required Default Description Accepted Values
schemaId query array[string] false N/A Filters to only the data sets that are included in the given schema. -
workflowId query string false N/A Filters to only the data sets that are included in the given workflow. -
name query string false N/A Filters the response by the name of the data set. The response includes all data sets that contain the name. Partial matches are included and matching is not case sensitive. -
mnemonic query string false N/A Filters the response by the mnemonic of the data set. The response includes all data sets that contain the mnemonic. Partial matches are included and matching is not case sensitive. -
cernerDefined query array[string] false N/A Filters the data sets based on whether they are Cerner-defined. -
offset query integer(int32) false 0 The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. -
limit query integer(int32) false 20 The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. -
orderBy query string false name A comma-separated list of fields by which to sort. name, -name, mnemonic, -mnemonic, createdAt, -createdAt

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a List of Data Sets DataSets
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
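
The filters above combine freely as query parameters. With HTTParty they map directly onto the `query:` option; the filter values below are hypothetical examples, not real data sets.

```ruby
require 'uri'

# Hypothetical filter values. mnemonic and name match partially and
# case-insensitively, per the parameter descriptions above.
query = {
  mnemonic: 'SEPSIS',
  cernerDefined: 'true',
  orderBy: '-createdAt',
  limit: 50
}

# The equivalent query string, as HTTParty would serialize it:
query_string = URI.encode_www_form(query)
# HTTParty.get("#{base_url}/data-sets", headers: headers, query: query)
```

Descending sorts use the `-` prefix, so `orderBy=-createdAt` returns the newest data sets first.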

Delete a Data Set

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.delete('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X DELETE https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

DELETE /data-sets/{dataSetId}

Deletes a single data set.

Parameters

Parameter In Type Required Default Description Accepted Values
dataSetId path string true N/A The ID of the data set. -

Response Statuses

Status Meaning Description Schema
204 No Content Delete a Data Set None
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Update a Data Set

Example Request:




require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}

# Example request body; see the putDataSets schema for the full list of fields.
body = {
  name: 'Data Set Name',
  description: 'Updated description.'
}

result = HTTParty.put('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510',
                      headers: headers, body: body.to_json)

print JSON.pretty_generate(result)




# You can also use curl
curl -X PUT https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510 \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name": "Data Set Name", "description": "Updated description."}'

PUT /data-sets/{dataSetId}

Updates a single data set.

Parameters

Parameter In Type Required Default Description Accepted Values
dataSetId path string true N/A The ID of the data set. -
body body putDataSets true N/A No description -

Response Statuses

Status Meaning Description Schema
200 OK Update a Data Set DataSet
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Retrieve a Data Set

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "id": "125",
  "name": "Data Set Name",
  "mnemonic": "DATA_SET_MNEMONIC",
  "schemaId": "143",
  "description": "143",
  "truncate": true,
  "cernerDefined": true,
  "createdAt": "2019-04-25T20:41:18.181Z",
  "version": 2,
  "fields": {
    "id": "125",
    "name": "Field Name",
    "mnemonic": "field_mnemonic",
    "dataType": "VARCHAR",
    "precision": 500,
    "scale": 5,
    "primaryKey": true,
    "principalColumn": true,
    "createdAt": "2019-04-25T20:41:18.181Z"
  },
  "transformations": [
    {
      "id": "125",
      "name": "Insert a File",
      "index": 1,
      "type": "INSERT_FILE",
      "description": "This transformation inserts columns [a], [b], and [c] into the data set.",
      "delimiter": "|",
      "errorStrategy": "SKIP",
      "loadStrategy": "ALL",
      "insertOrUpdate": true,
      "loadStrategyVersion": "all",
      "createdAt": "2019-04-25T20:41:18.181Z",
      "query": "",
      "fromClause": "",
      "whereClause": "",
      "fieldValueMap": {
        "value": "121",
        "fieldMnemonic": "header_one"
      },
      "fileFieldMap": {
        "fileHeader": "File_Header",
        "fieldMnemonic": "field_mnemonic"
      },
      "files": "filename1,filename2,filename3"
    }
  ],
  "versions": [
    {
      "version": 2,
      "createdAt": "2019-04-25T20:41:18.181Z"
    }
  ]
}

GET /data-sets/{dataSetId}

Retrieves a single data set.

Parameters

Parameter In Type Required Default Description Accepted Values
dataSetId path string true N/A The ID of the data set. -
version query string false N/A The version of the data set. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a Data Set DataSet
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Data Set Runs

A data set run is the processing of a data set.

Retrieve a List of Data Set Runs

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "items": [
    {
      "id": "1a2b3c",
      "name": "Data Set Run name",
      "startTime": "2019-04-25T20:41:18.181Z",
      "endTime": "2019-04-25T20:41:18.181Z",
      "status": "CANCELLED",
      "dataSetId": "1a2b3c",
      "workflowId": "1a2b3c",
      "workflowRunId": "1a2b3c",
      "createdAt": "2019-04-25T20:41:18.181Z",
      "tableMigrated": "false",
      "workflowName": "1a2b3c",
      "index": 123,
      "numberOfRecordsInserted": 123,
      "numberOfRecordsUpdated": 123,
      "numberOfRecordsDeleted": 123,
      "transformationRuns": [
        {
          "id": "1a2b3c",
          "status": "FAILED",
          "transformationId": "1a2b3c",
          "startTime": "2019-04-25T20:41:18.181Z",
          "endTime": "2019-04-25T20:41:18.181Z",
          "query": "select * from mytable",
          "createdAt": "2019-04-25T20:41:18.181Z",
          "index": 123,
          "name": "Insert from query",
          "type": "insert_query",
          "numberOfRecordsInserted": 123,
          "numberOfRecordsUpdated": 123,
          "numberOfRecordsDeleted": 123,
          "transformationRunErrors": [
            {
              "id": "1a2b3c",
              "errorKey": "File_Parsing",
              "message": "Pipeline job failed.",
              "createdAt": "2019-04-25T20:41:18.181Z"
            }
          ]
        }
      ],
      "dataSetRunErrors": [
        {
          "id": "1a2b3c",
          "errorKey": "File_Parsing",
          "message": "Pipeline job failed.",
          "createdAt": "2019-04-25T20:41:18.181Z"
        }
      ]
    }
  ],
  "totalResults": 1,
  "firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
  "prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}

GET /data-sets/{dataSetId}/runs

Retrieves a list of data set runs that match the query.

Parameters

Parameter In Type Required Default Description Accepted Values
dataSetId path string true N/A The ID of the data set. -
workflowRunId query string false N/A The ID of the workflow run. -
offset query integer(int32) false 0 The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. -
limit query integer(int32) false 20 The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. -
orderBy query string false -createdAt A comma-separated list of fields by which to sort. startTime, -startTime, endTime, -endTime, index, -index, name, -name, createdAt, -createdAt

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a List of Data Set Runs DataSetRuns
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
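
As the example response shows, errors surface at two levels: `dataSetRunErrors` on the run itself and `transformationRunErrors` nested under each transformation run. A small flattener (a sketch over the documented response shape) gathers both into one list for logging or alerting:

```ruby
# Collects every error message from a parsed data set run hash,
# combining run-level and transformation-level errors.
def run_errors(run)
  direct = run.fetch('dataSetRunErrors', [])
  nested = run.fetch('transformationRuns', [])
              .flat_map { |tr| tr.fetch('transformationRunErrors', []) }
  (direct + nested).map { |e| "#{e['errorKey']}: #{e['message']}" }
end
```

Passing the parsed response from GET /data-sets/{dataSetId}/runs/{dataSetRunId} yields strings like `"File_Parsing: Pipeline job failed."`.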

Retrieve a Data Set Run

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs/118347178', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/data-sets/404510/runs/118347178 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "id": "1a2b3c",
  "name": "Data Set Run name",
  "startTime": "2019-04-25T20:41:18.181Z",
  "endTime": "2019-04-25T20:41:18.181Z",
  "status": "CANCELLED",
  "dataSetId": "1a2b3c",
  "workflowId": "1a2b3c",
  "workflowRunId": "1a2b3c",
  "createdAt": "2019-04-25T20:41:18.181Z",
  "tableMigrated": "false",
  "workflowName": "1a2b3c",
  "index": 123,
  "numberOfRecordsInserted": 123,
  "numberOfRecordsUpdated": 123,
  "numberOfRecordsDeleted": 123,
  "transformationRuns": [
    {
      "id": "1a2b3c",
      "status": "FAILED",
      "transformationId": "1a2b3c",
      "startTime": "2019-04-25T20:41:18.181Z",
      "endTime": "2019-04-25T20:41:18.181Z",
      "query": "select * from mytable",
      "createdAt": "2019-04-25T20:41:18.181Z",
      "index": 123,
      "name": "Insert from query",
      "type": "insert_query",
      "numberOfRecordsInserted": 123,
      "numberOfRecordsUpdated": 123,
      "numberOfRecordsDeleted": 123,
      "transformationRunErrors": [
        {
          "id": "1a2b3c",
          "errorKey": "File_Parsing",
          "message": "Pipeline job failed.",
          "createdAt": "2019-04-25T20:41:18.181Z"
        }
      ]
    }
  ],
  "dataSetRunErrors": [
    {
      "id": "1a2b3c",
      "errorKey": "File_Parsing",
      "message": "Pipeline job failed.",
      "createdAt": "2019-04-25T20:41:18.181Z"
    }
  ]
}

GET /data-sets/{dataSetId}/runs/{dataSetRunId}

Retrieves a single data set run.

Parameters

Parameter In Type Required Default Description Accepted Values
dataSetId path string true N/A The ID of the data set. -
dataSetRunId path string true N/A The ID of the data set run. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a Data Set Run DataSetRun
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Workflows

Workflows are used to process one or many data sets and create processing dependencies between data sets.

Create a Workflow

Example Request:




require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}

# Example request body; see the postWorkflows schema for the full list of fields.
body = {
  name: 'Workflow Name',
  description: 'Processes the sepsis data sets.',
  dataSets: ['555', '556', '557']
}

result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows',
                       headers: headers, body: body.to_json)

print JSON.pretty_generate(result)




# You can also use curl
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"name": "Workflow Name", "description": "Example workflow.", "dataSets": ["555", "556", "557"]}'

POST /workflows

Creates a new workflow composed of a name, a description, and an ordered list of data set IDs.

Parameters

Parameter In Type Required Default Description Accepted Values
body body postWorkflows true N/A No description -

Response Statuses

Status Meaning Description Schema
201 Created Created. Workflow
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error

Retrieve a List of Workflows

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "items": [
    {
      "id": "125",
      "name": "Workflow Name",
      "description": "This workflow is used to process data sets related to the Sepsis Dashboard.",
      "createdAt": "2019-04-25T20:41:18.181Z",
      "dataSets": "555,556,557"
    }
  ],
  "totalResults": 1,
  "firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
  "prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}

GET /workflows

Retrieves a list of workflows that match the query.

Parameters

Parameter In Type Required Default Description Accepted Values
name query string false N/A Filters the response by the name of the workflow. The response includes all the workflows that contain the name. Partial matches are included and matching is not case sensitive. -
offset query integer(int32) false 0 The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. -
limit query integer(int32) false 20 The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. -
orderBy query string false name A comma-separated list of fields by which to sort. name, -name, createdAt, -createdAt

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a List of Workflows Workflows
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error

Delete a Workflow

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.delete('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl
curl -X DELETE https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

DELETE /workflows/{workflowId}

Deletes a single workflow.

Parameters

Parameter In Type Required Default Description Accepted Values
workflowId path string true N/A The ID of the workflow. -

Response Statuses

Status Meaning Description Schema
204 No Content Delete a Workflow None
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Retrieve a Workflow

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl from the command line
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "id": "125",
  "name": "Workflow Name",
  "description": "This workflow is used to process data sets related to the Sepsis Dashboard.",
  "createdAt": "2019-04-25T20:41:18.181Z",
  "dataSets": "555,556,557"
}

GET /workflows/{workflowId}

Retrieves a single workflow.

Parameters

Parameter In Type Required Default Description Accepted Values
workflowId path string true N/A The ID of the workflow. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a Workflow Workflow
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Workflow Runs

A workflow run is the processing of a workflow.

Create a Workflow Run

Example Request:




require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Content-Type' => 'application/json',
  'Accept' => 'application/json'
}

# The request body is required; offsetIndex is its only (optional) property.
body = { offsetIndex: 0 }.to_json

result = HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs', headers: headers, body: body)

print JSON.pretty_generate(result)


# You can also use curl from the command line
curl -X POST https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs \
-H 'Authorization: {auth_header}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d '{"offsetIndex": 0}'

POST /workflows/{workflowId}/runs

Creates a new workflow run from a workflow ID and an offset index.

Parameters

Parameter In Type Required Default Description Accepted Values
workflowId path string true N/A The ID of the workflow. -
body body postWorkflowsWorkflowidRuns true N/A No description -

Response Statuses

Status Meaning Description Schema
201 Created Created. WorkflowRun
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Retrieve a List of Workflow Runs

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl from the command line
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "items": [
    {
      "id": "125",
      "startTime": "2019-04-25T20:41:18.181Z",
      "endTime": "2019-04-25T20:41:18.181Z",
      "status": "QUEUED",
      "workflowId": "143",
      "createdAt": "2019-04-25T20:41:18.181Z",
      "offsetIndex": 0,
      "dataSetRuns": [
        {
          "id": "1a2b3c",
          "name": "Data Set Run name",
          "startTime": "2019-04-25T20:41:18.181Z",
          "endTime": "2019-04-25T20:41:18.181Z",
          "status": "CANCELLED",
          "dataSetId": "1a2b3c",
          "workflowId": "1a2b3c",
          "workflowRunId": "1a2b3c",
          "createdAt": "2019-04-25T20:41:18.181Z",
          "tableMigrated": "false",
          "workflowName": "1a2b3c",
          "index": 123,
          "numberOfRecordsInserted": 123,
          "numberOfRecordsUpdated": 123,
          "numberOfRecordsDeleted": 123,
          "transformationRuns": [
            {
              "id": "1a2b3c",
              "status": "FAILED",
              "transformationId": "1a2b3c",
              "startTime": "2019-04-25T20:41:18.181Z",
              "endTime": "2019-04-25T20:41:18.181Z",
              "query": "select * from mytable",
              "createdAt": "2019-04-25T20:41:18.181Z",
              "index": 123,
              "name": "Insert from query",
              "type": "insert_query",
              "numberOfRecordsInserted": 123,
              "numberOfRecordsUpdated": 123,
              "numberOfRecordsDeleted": 123,
              "transformationRunErrors": [
                {
                  "id": "1a2b3c",
                  "errorKey": "File_Parsing",
                  "message": "Pipeline job failed.",
                  "createdAt": "2019-04-25T20:41:18.181Z"
                }
              ]
            }
          ],
          "dataSetRunErrors": [
            {
              "id": "1a2b3c",
              "errorKey": "File_Parsing",
              "message": "Pipeline job failed.",
              "createdAt": "2019-04-25T20:41:18.181Z"
            }
          ]
        }
      ]
    }
  ],
  "totalResults": 1,
  "firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
  "prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}

GET /workflows/{workflowId}/runs

Retrieves a list of workflow runs that match the query.

Parameters

Parameter In Type Required Default Description Accepted Values
workflowId path string true N/A The ID of the workflow. -
offset query integer(int32) false 0 The number of results to skip from the beginning of the list of results (typically for the purpose of paging). The minimum offset is 0. There is no maximum offset. -
limit query integer(int32) false 20 The maximum number of results to display per page. The minimum limit is 1. The maximum limit is 100. -
orderBy query string false -createdAt A comma-separated list of fields by which to sort. startTime, -startTime, endTime, -endTime, createdAt, -createdAt

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a List of Workflow Runs WorkflowRuns
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
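The orderBy values pair each sortable field with an optional leading hyphen, which (as the startTime/-startTime and createdAt/-createdAt pairs suggest) selects descending order. A small illustrative parser for that convention, not part of the API:

```ruby
# Parse a comma-separated orderBy value into [field, direction] pairs.
# A leading "-" selects descending order; otherwise ascending is assumed.
def parse_order_by(order_by)
  order_by.split(',').map do |term|
    term = term.strip
    term.start_with?('-') ? [term[1..-1], :desc] : [term, :asc]
  end
end

parse_order_by('-createdAt,startTime')
# => [["createdAt", :desc], ["startTime", :asc]]
```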

Retrieve a Workflow Run

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs/5393608', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl from the command line
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows/45014/runs/5393608 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "id": "125",
  "startTime": "2019-04-25T20:41:18.181Z",
  "endTime": "2019-04-25T20:41:18.181Z",
  "status": "QUEUED",
  "workflowId": "143",
  "createdAt": "2019-04-25T20:41:18.181Z",
  "offsetIndex": 0,
  "dataSetRuns": [
    {
      "id": "1a2b3c",
      "name": "Data Set Run name",
      "startTime": "2019-04-25T20:41:18.181Z",
      "endTime": "2019-04-25T20:41:18.181Z",
      "status": "CANCELLED",
      "dataSetId": "1a2b3c",
      "workflowId": "1a2b3c",
      "workflowRunId": "1a2b3c",
      "createdAt": "2019-04-25T20:41:18.181Z",
      "tableMigrated": "false",
      "workflowName": "1a2b3c",
      "index": 123,
      "numberOfRecordsInserted": 123,
      "numberOfRecordsUpdated": 123,
      "numberOfRecordsDeleted": 123,
      "transformationRuns": [
        {
          "id": "1a2b3c",
          "status": "FAILED",
          "transformationId": "1a2b3c",
          "startTime": "2019-04-25T20:41:18.181Z",
          "endTime": "2019-04-25T20:41:18.181Z",
          "query": "select * from mytable",
          "createdAt": "2019-04-25T20:41:18.181Z",
          "index": 123,
          "name": "Insert from query",
          "type": "insert_query",
          "numberOfRecordsInserted": 123,
          "numberOfRecordsUpdated": 123,
          "numberOfRecordsDeleted": 123,
          "transformationRunErrors": [
            {
              "id": "1a2b3c",
              "errorKey": "File_Parsing",
              "message": "Pipeline job failed.",
              "createdAt": "2019-04-25T20:41:18.181Z"
            }
          ]
        }
      ],
      "dataSetRunErrors": [
        {
          "id": "1a2b3c",
          "errorKey": "File_Parsing",
          "message": "Pipeline job failed.",
          "createdAt": "2019-04-25T20:41:18.181Z"
        }
      ]
    }
  ]
}

GET /workflows/{workflowId}/runs/{workflowRunId}

Retrieves a single workflow run.

Parameters

Parameter In Type Required Default Description Accepted Values
workflowId path string true N/A The ID of the workflow. -
workflowRunId path string true N/A The ID of the workflow run. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a Workflow Run WorkflowRun
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error
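Because a workflow run moves through the QUEUED and PROCESSING statuses before reaching SUCCEEDED, FAILED, or CANCELLED, a client that creates a run typically polls this endpoint until the status is terminal. A minimal sketch of that check (the helper and the polling shape are illustrative, not part of the API):

```ruby
# Statuses after which a workflow run will not change again, per the
# WorkflowRun schema.
TERMINAL_STATUSES = %w[SUCCEEDED FAILED CANCELLED].freeze

def run_finished?(status)
  TERMINAL_STATUSES.include?(status)
end

# Polling shape (network call elided; fetch_workflow_run is a hypothetical
# helper wrapping GET /workflows/{workflowId}/runs/{workflowRunId}):
#   loop do
#     run = fetch_workflow_run(workflow_id, run_id)
#     break if run_finished?(run['status'])
#     sleep 30
#   end

run_finished?('QUEUED')    # => false
run_finished?('SUCCEEDED') # => true
```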

Schemas

A schema is a collection of data sets in the data warehouse. It can contain one or many data sets.

Retrieve a List of Schemas

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl from the command line
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "items": [
    {
      "id": "125",
      "name": "Schema Name",
      "mnemonic": "SCHEMA_MNEMONIC",
      "description": "This schema houses all user-defined data sets for client A.",
      "type": "CUSTOM_EDW",
      "createdAt": "2019-04-25T20:41:18.181Z"
    }
  ],
  "totalResults": 1,
  "firstLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "lastLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20",
  "prevLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=0&limit=20",
  "nextLink": "http://cernerdemo.api.us.healtheintent.com/example/v1/examples?offset=20&limit=20"
}

GET /schemas

Retrieves a list of schemas that match the query.

Parameters

Parameter In Type Required Default Description Accepted Values
type query string false N/A Filters the schemas by type. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a List of Schemas Schemas
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error

Retrieve a Single Schema

Example Request:


require 'httparty' # Using HTTParty 0.16.2
require 'json'

headers = {
  'Authorization' => '<auth_header>',
  'Accept' => 'application/json'
} 

result = HTTParty.get('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas/269', headers: headers)

print JSON.pretty_generate(result)


# You can also use curl from the command line
curl -X GET https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/schemas/269 \
-H 'Authorization: {auth_header}' \
-H 'Accept: application/json'

Example response

{
  "id": "125",
  "name": "Schema Name",
  "mnemonic": "SCHEMA_MNEMONIC",
  "description": "This schema houses all user-defined data sets for client A.",
  "type": "CUSTOM_EDW",
  "createdAt": "2019-04-25T20:41:18.181Z"
}

GET /schemas/{schemaId}

Retrieves a single schema.

Parameters

Parameter In Type Required Default Description Accepted Values
schemaId path string true N/A The ID of the schema. -

Response Statuses

Status Meaning Description Schema
200 OK Retrieve a Single Schema Schema
400 Bad Request Bad Request Error
401 Unauthorized Unauthorized Error
403 Forbidden Forbidden Error
404 Not Found Not Found Error

Schema Definitions

postDataSets

Name Type Required Description Accepted Values
name string true No description -
mnemonic string true No description -
schemaId string true No description -
fields [object] false No description -
» name string true No description -
» mnemonic string true No description -
» dataType string true No description VARCHAR, INTEGER, DOUBLE, DECIMAL, TIMESTAMP, TIMESTAMPTZ, BOOLEAN, DATE, VARBINARY, varchar, integer, double, decimal, timestamp, timestamptz, boolean, date, varbinary
» precision integer(int32) false No description -
» scale integer(int32) false No description -
» primaryKey boolean false No description -
» principalColumn boolean false No description -
transformations [object] false No description -
» name string true No description -
» index integer(int32) true No description -
» type string true No description INSERT_FILE, INSERT_QUERY, UPDATE, DELETE, insert_file, insert_query, update, delete
» description string false No description -
» delmiter string false No description -
» errorStrategy string false No description FAIL, SKIP, fail, skip
» loadStategy string false No description LATEST, ALL, NEW, latest, all, new
» loadStrategyVersion string false No description -
» insertOrUpdate boolean false No description -
» query string false No description -
» fromClause string false No description -
» whereClause string false No description -
truncate boolean false No description -
description string false No description -
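The DataSet schema below constrains the mnemonic (no spaces, not starting with a number, at most 32 characters), so a client can validate a postDataSets payload before sending it. A sketch with illustrative values; only the validation rule and the payload field names come from the schemas:

```ruby
require 'json'

# The DataSet schema states a data set mnemonic cannot contain spaces,
# start with a number, or exceed 32 characters.
def valid_data_set_mnemonic?(mnemonic)
  !mnemonic.include?(' ') && mnemonic !~ /\A\d/ && mnemonic.length <= 32
end

# Illustrative postDataSets body; names and IDs are examples only.
body = {
  name: 'Sepsis Encounters',
  mnemonic: 'SEPSIS_ENCOUNTERS',
  schemaId: '125',
  fields: [
    { name: 'Encounter ID', mnemonic: 'ENCOUNTER_ID',
      dataType: 'VARCHAR', precision: 100, primaryKey: true }
  ]
}

valid_data_set_mnemonic?(body[:mnemonic]) # => true
payload = JSON.generate(body)             # serialized request body
```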

DataSet

Name Type Required Description Accepted Values
id string true The unique ID of a data set. -
name string true The human-friendly name of the data set. This value cannot exceed 255 characters. -
mnemonic string true A single-word ID of the data set to be created. You use this mnemonic when you query the data set in the data warehouse. This value cannot have spaces, start with a number, or exceed 32 characters. -
schemaId string true The ID of the schema in which the data set resides. -
description string false The description of the data set. This value cannot exceed 2000 characters. -
truncate boolean false Indicates whether to remove all data in the table before repopulating it when the data set is processed next. -
cernerDefined boolean false Indicates whether the data set was defined by Cerner. Data sets defined by Cerner cannot be modified by users and should be considered read-only. -
createdAt string false The date and time when the data set was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set is first created; therefore, the field does not need to be set explicitly. -
version integer(int32) true The version number associated with the data set retrieved. -
fields Field false A list of the fields included in the data set. The number of fields cannot exceed 1000. -
transformations [Transformation] false A list of the transformations included in the data set. The number of transformations cannot exceed 150. -
versions [DataSetVersions] false A list of versions available for the data set. -

Field

Name Type Required Description Accepted Values
id string true The unique ID of a field. -
name string true The human-friendly name of the field. This value cannot exceed 255 characters. -
mnemonic string true A single-word ID for the field on the data set. You use this mnemonic when querying this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. -
dataType string true The data type of the column to be created in the data warehouse. The following values are possible: * VARCHAR: A text value limited to the number of characters defined in the precision. * INTEGER: A numeric value with no decimals. * DOUBLE: A more precise numeric value with decimals. * DECIMAL: A numeric value limited to the number of digits after the decimal point defined in the scale. * TIMESTAMP: An ISO 8601 formatted timestamp without time zone information. * TIMESTAMPTZ: An ISO 8601 formatted timestamp with time zone information included. * BOOLEAN: A binary true or false value. * DATE: An ISO 8601 date. * VARBINARY: A binary value limited to the number of bytes defined in the precision. VARCHAR, INTEGER, DOUBLE, DECIMAL, TIMESTAMP, TIMESTAMPTZ, BOOLEAN, DATE, VARBINARY
precision integer(int32) true The number of allowed characters or bytes for the data type in the data warehouse. This applies only to fields with a dataType value of VARCHAR or VARBINARY. -
scale integer(int32) true The number of allowed digits after the decimal point in a number. This applies only to fields with a dataType value of DECIMAL. -
primaryKey boolean true Indicates whether the field is used as a primary key for the data set. -
principalColumn boolean false Indicates whether the field is used as a principal column for the data set. This is used for database optimization. -
createdAt string false The date and time when the field was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the field is first created; therefore, the field does not need to be set explicitly. -
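Per the descriptions above, precision is meaningful only for VARCHAR and VARBINARY fields and scale only for DECIMAL fields. An illustrative client-side lookup capturing that rule (not part of the API):

```ruby
# Returns the size attributes that are meaningful for a given dataType,
# based on the Field schema: precision applies to VARCHAR and VARBINARY,
# scale applies to DECIMAL, and other types take neither.
def applicable_size_attributes(data_type)
  case data_type.upcase
  when 'VARCHAR', 'VARBINARY' then [:precision]
  when 'DECIMAL'              then [:scale]
  else []
  end
end

applicable_size_attributes('VARCHAR') # => [:precision]
applicable_size_attributes('DATE')    # => []
```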

Transformation

Name Type Required Description Accepted Values
id string true The unique ID of a transformation. -
name string true The human-friendly name of the transformation. This value cannot exceed 255 characters. -
index integer(int32) true The index that represents the order in which the transformation is executed within the data set's transformations. -
type string true The type of transformation to be executed. The following types are possible: * INSERT_FILE: Inserts data from a file into the data set. * INSERT_QUERY: Uses a query to select data and insert it into the data set. * UPDATE: Uses an update statement to update data that is already in the data set. * DELETE: Defines a qualifier to delete certain data from the data set. INSERT_FILE, INSERT_QUERY, UPDATE, DELETE
description string false The human-friendly description of the transformation. This value cannot exceed 2000 characters. -
delimiter string false A character that represents the delimiter that is used to parse the file. -
errorStrategy string false The strategy used to report errors. The following values are available: * FAIL: When bad data is encountered, data set processing is failed. * SKIP: When bad data is encountered, that data point is skipped and processing continues. FAIL, SKIP
loadStrategy string false The strategy used for loading files into the data set. This is required if the action type is INSERT_FILE. The following options are supported: * ALL: Loads all files that match the given file. * LATEST: Loads only the latest file. * NEW: Loads only files that were uploaded after the last successful run of the data set. LATEST, ALL, NEW
insertOrUpdate boolean false Indicates whether to treat rows with duplicate primary keys as updates rather than inserts. Primary keys must be defined to use this option. true, false
loadStrategyVersion string false The release version with which to start when loading files for this transformation. -
createdAt string false The date and time when the transformation was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation is first created; therefore, the field does not need to be set explicitly. -
query string false An ANSI SQL query that selects data to be inserted into the data set. -
fromClause string false An ANSI SQL clause that indicates from where the data is updated. -
whereClause string false An ANSI SQL clause that indicates the conditions that need to be satisfied for the data to be updated. -
fieldValueMap FieldValueMapping false A list of the mappings of update values to field mnemonics. -
fileFieldMap FileFieldMapping false A list of the mappings of column headers to field mnemonics. -
files string false A list of the file names for which releases are processed for the transformation. -
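The loadStrategy description above notes that it is required when the transformation type is INSERT_FILE. A small illustrative validator for that one rule (the helper is not part of the API):

```ruby
# loadStrategy is required for INSERT_FILE transformations, per the
# Transformation schema; other types do not need it.
def transformation_errors(transformation)
  errors = []
  if transformation[:type].to_s.upcase == 'INSERT_FILE' &&
     transformation[:loadStrategy].nil?
    errors << 'loadStrategy is required when type is INSERT_FILE'
  end
  errors
end

transformation_errors(type: 'INSERT_FILE')                         # => one error
transformation_errors(type: 'INSERT_FILE', loadStrategy: 'LATEST') # => []
```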

FieldValueMapping

Name Type Required Description Accepted Values
value string true The expression that indicates the value that the field is set at for the update. -
fieldMnemonic string true A single-word ID of the field on the data set. You use this mnemonic when you query this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. -

FileFieldMapping

Name Type Required Description Accepted Values
fileHeader string true The name of the header in the delimited file that is processed. -
fieldMnemonic string true A single-word ID of the field on the data set. You use this mnemonic when you query this field in the data warehouse. This value cannot have spaces, start with a number, or exceed 100 characters. -

DataSetVersions

Name Type Required Description Accepted Values
version integer(int32) true The version of a singular data set. -
createdAt string true The date and time when the data set version was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set version is first created; therefore, the field does not need to be set explicitly. -

Error

Name Type Required Description Accepted Values
code integer(int32) true The HTTP response status code that represents the error. -
message string true A human-readable description of the error. -
errorDetails [ErrorDetail] false A list of additional error details. -

ErrorDetail

Name Type Required Description Accepted Values
domain string false A subsystem or context where an error occurred. -
reason string false A codified value that represents the specific error that caused the current error status. -
message string false A human-readable description of an error. -
locationType string false The location or type of the field that caused an error. query, header, path, formData, body
location string false The name of the field that caused an error. -

DataSets

Name Type Required Description Accepted Values
items [DataSet] true An array containing the current page of results. -
totalResults integer(int32) false The total number of results for the specified parameters. -
firstLink string true The first page of results. -
lastLink string false The last page of results. -
prevLink string false The previous page of results. -
nextLink string false The next page of results. -

putDataSets

Name Type Required Description Accepted Values
name string true No description -
mnemonic string true No description -
schemaId string true No description -
fields [object] true No description -
» name string true No description -
» mnemonic string true No description -
» dataType string true No description VARCHAR, INTEGER, DOUBLE, DECIMAL, TIMESTAMP, TIMESTAMPTZ, BOOLEAN, DATE, VARBINARY, varchar, integer, double, decimal, timestamp, timestamptz, boolean, date, varbinary
» precision integer(int32) false No description -
» scale integer(int32) false No description -
» primaryKey boolean false No description -
» principalColumn boolean false No description -
» oldMnemonic string false No description -
transformations [object] true No description -
» name string true No description -
» index integer(int32) true No description -
» type string true No description INSERT_FILE, INSERT_QUERY, UPDATE, DELETE, insert_file, insert_query, update, delete
» description string false No description -
» delmiter string false No description -
» errorStrategy string false No description FAIL, SKIP, fail, skip
» loadStategy string false No description LATEST, ALL, NEW, latest, all, new
» loadStrategyVersion string false No description -
» insertOrUpdate boolean false No description -
» query string false No description -
» fromClause string false No description -
» whereClause string false No description -
truncate boolean false No description -
description string false No description -

DataSetRuns

Name Type Required Description Accepted Values
items [DataSetRun] true An array containing the current page of results. -
totalResults integer(int32) false The total number of results for the specified parameters. -
firstLink string true The first page of results. -
lastLink string false The last page of results. -
prevLink string false The previous page of results. -
nextLink string false The next page of results. -

DataSetRun

Name Type Required Description Accepted Values
id string true The unique ID of a data set run. -
name string false The name of the data set run. -
startTime string false The date and time when the data set run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set run is started; therefore, the field does not need to be set explicitly. -
endTime string false The date and time when the data set run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set run is ended; therefore, the field does not need to be set explicitly. -
status string false The current status of the data set run. The following values are possible: * CANCELLED: The data set run was cancelled. * FAILED: The data set run failed. * FINISHING: All the transformations have completed and the data set run is trying to commit. * PROCESSING: The data set run is processing. * QUEUED: The data set run is waiting to execute. * SUCCEEDED: The data set run succeeded. * UNKNOWN: The status of the data set run cannot be determined. CANCELLED, FAILED, FINISHING, PROCESSING, SUCCEEDED, UNKNOWN, QUEUED
dataSetId string true The ID of the data set to which this data set run belongs. -
workflowId string false The ID of the workflow that the workflow run for this data set run belongs to. -
workflowRunId string false The ID of the workflow run to which this data set run belongs. -
createdAt string false The date and time when the data set run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the data set run is first created; therefore, the field does not need to be set explicitly. -
tableMigrated string false Indicates whether changes to the data set forced the table to be migrated during this data set run. -
workflowName string false The name of the workflow to which this data set run belongs. -
index integer(int32) false Represents the placement of the data set run within the workflow run to which it belongs. -
numberOfRecordsInserted integer(int32) false The number of records inserted by this data set run. -
numberOfRecordsUpdated integer(int32) false The number of records updated by this data set run. -
numberOfRecordsDeleted integer(int32) false The number of records deleted by this data set run. -
transformationRuns [TransformationRun] false The transformation runs in the data set run. -
dataSetRunErrors [DataSetRunError] false The errors for the data set run. -

TransformationRun

Name Type Required Description Accepted Values
id string true The unique ID of a transformation run. -
status string false The current status of the transformation run. The following values are possible: * FAILED: The transformation run failed. * PREPARING: The transformation run is waiting while one or more files needed by the data set are being loaded. * PROCESSING: The transformation run is processing. * SUCCEEDED: The transformation run succeeded. FAILED, PREPARING, PROCESSING, SUCCEEDED
transformationId string true The ID of the transformation to which this transformation run belongs. -
startTime string false The date and time when the transformation run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation run is started; therefore, the field does not need to be set explicitly. -
endTime string false The date and time when the transformation run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation run is ended; therefore, the field does not need to be set explicitly. -
query string false The text of the query for a query-based transformation. -
createdAt string false The date and time when the transformation run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the transformation run is first created; therefore, the field does not need to be set explicitly. -
index integer(int32) false The placement within the data set of the transformation with which this transformation run is associated. -
name string false The name of the transformation with which this run is associated. -
type string false The action type of the transformation with which this run is associated. -
numberOfRecordsInserted integer(int32) false The number of records inserted by this transformation run. -
numberOfRecordsUpdated integer(int32) false The number of records updated by this transformation run. -
numberOfRecordsDeleted integer(int32) false The number of records deleted by this transformation run. -
transformationRunErrors [TransformationRunError] false The errors for the transformation run. -

TransformationRunError

Name Type Required Description Accepted Values
id string true The unique ID of a transformation run error. -
errorKey string false The error key for the transformation run error. -
message string false The message for the transformation run error. -
createdAt string false The date and time when the transformation run error was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. -

DataSetRunError

Name Type Required Description Accepted Values
id string true The unique ID of a data set run error. -
errorKey string false The error key for the data set run error. -
message string false The message for the data set run error. -
createdAt string false The date and time when the data set run error was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. -

postWorkflows

Name Type Required Description Accepted Values
name string true No description -
description string false No description -
dataSets [string] false No description -

Workflow

Name Type Required Description Accepted Values
id string true The unique ID of a workflow. -
name string true The human-friendly name of the workflow. This value cannot exceed 255 characters. -
description string false The description of the workflow. This value cannot exceed 2000 characters. -
createdAt string false The date and time when the workflow was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow is first created; therefore, the field does not need to be set explicitly. -
dataSets string false A sorted list of data set IDs in the order in which they are processed in the workflow. The number of data sets cannot exceed 150. -
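Creating a workflow presumably mirrors the feed-creation call shown at the top of this page. The sketch below builds a postWorkflows body and checks the documented limits (name no longer than 255 characters, no more than 150 data sets) before sending; the `/workflows` path and the HTTParty call in the comment are assumptions by analogy with the `/feeds` example, and the data set IDs are illustrative.

```ruby
require 'json'

# Build a postWorkflows request body from the schema above. Raises if
# the documented limits are violated: the name cannot exceed 255
# characters and the workflow cannot reference more than 150 data sets.
def build_workflow_body(name:, description: nil, data_sets: [])
  raise ArgumentError, 'name exceeds 255 characters' if name.length > 255
  raise ArgumentError, 'too many data sets (max 150)' if data_sets.length > 150

  body = { 'name' => name, 'dataSets' => data_sets }
  body['description'] = description if description
  body
end

body = build_workflow_body(name: 'Nightly Claims Load',
                           data_sets: ['ds-1', 'ds-2'])
puts JSON.generate(body)

# Sending it (assumed, by analogy with the /feeds example above):
# HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/workflows',
#               headers: headers, body: JSON.generate(body))
```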

Workflows

Name Type Required Description Accepted Values
items [Workflow] true An array containing the current page of results. -
totalResults integer(int32) false The total number of results for the specified parameters. -
firstLink string true The first page of results. -
lastLink string false The last page of results. -
prevLink string false The previous page of results. -
nextLink string false The next page of results. -
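All of the list responses in this API (Workflows, WorkflowRuns, Schemas, Feeds, FeedRuns) share the same envelope, so a single helper can follow nextLink until it is absent. The fetcher is injected here so the sketch stays transport-agnostic; in practice it would wrap an HTTParty.get call, and the canned two-page data is purely illustrative.

```ruby
# Collect every item across a paginated result set. `fetch` is any
# callable that takes a URL and returns the parsed response hash
# (with 'items' and 'nextLink' per the envelope documented above).
def all_items(first_url, fetch)
  items = []
  url = first_url
  while url
    page = fetch.call(url)
    items.concat(page['items'])
    url = page['nextLink'] # nil or absent on the last page
  end
  items
end

# Example with a canned two-page response standing in for the API:
pages = {
  'p1' => { 'items' => [{ 'id' => 'wf-1' }], 'nextLink' => 'p2' },
  'p2' => { 'items' => [{ 'id' => 'wf-2' }], 'nextLink' => nil }
}
fetch = ->(url) { pages[url] }
puts all_items('p1', fetch).map { |w| w['id'] }.join(',') # wf-1,wf-2
```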

postWorkflowsWorkflowidRuns

Name Type Required Description Accepted Values
offsetIndex integer(int32) false No description -

WorkflowRun

Name Type Required Description Accepted Values
id string true The unique ID of a workflow run. -
startTime string false The date and time when the workflow run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow run is started; therefore, the field does not need to be set explicitly. -
endTime string false The date and time when the workflow run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow run is ended; therefore, the field does not need to be set explicitly. -
status string false The current status of the workflow run. The following values are possible: * QUEUED: The workflow run is queued and waiting to run. * PROCESSING: The workflow run is processing. * SUCCEEDED: The workflow run succeeded. * FAILED: The workflow run failed. * CANCELLED: The workflow run was cancelled. QUEUED, PROCESSING, SUCCEEDED, FAILED, CANCELLED
workflowId string true The ID of the workflow to use for this workflow run. -
createdAt string false The date and time when the workflow run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the workflow run is first created; therefore, the field does not need to be set explicitly. -
offsetIndex integer(int32) false The number in the list of data sets that the workflow run started with. -
dataSetRuns [DataSetRun] false The data set runs in the workflow run. -
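When polling a workflow run, only three of the five documented statuses are terminal; QUEUED and PROCESSING mean the run is still in flight. A small helper makes that explicit (the status strings come from the table above; treating them this way in a polling loop is an assumption about typical client usage):

```ruby
# Statuses after which a workflow run will not change again.
WORKFLOW_RUN_TERMINAL = %w[SUCCEEDED FAILED CANCELLED].freeze

# True once the run has finished, successfully or not; QUEUED and
# PROCESSING mean the run is still in flight and worth polling again.
def workflow_run_finished?(run)
  WORKFLOW_RUN_TERMINAL.include?(run['status'])
end

puts workflow_run_finished?('status' => 'PROCESSING') # false
puts workflow_run_finished?('status' => 'SUCCEEDED')  # true
```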

WorkflowRuns

Name Type Required Description Accepted Values
items [WorkflowRun] true An array containing the current page of results. -
totalResults integer(int32) false The total number of results for the specified parameters. -
firstLink string true The first page of results. -
lastLink string false The last page of results. -
prevLink string false The previous page of results. -
nextLink string false The next page of results. -

Schemas

Name Type Required Description Accepted Values
items [Schema] true An array containing the current page of results. -
totalResults integer(int32) false The total number of results for the specified parameters. -
firstLink string true The first page of results. -
lastLink string false The last page of results. -
prevLink string false The previous page of results. -
nextLink string false The next page of results. -

Schema

Name Type Required Description Accepted Values
id string true The unique ID of the schema. -
name string true The human-friendly name of the schema. This value may not exceed 255 characters. -
mnemonic string true The name of the schema that is created in the data warehouse. Must be unique for the given tenant. This value may not exceed 100 characters. -
description string false The description of the schema. This value may not exceed 2000 characters. -
type string false Indicates the type of the schema. Type can be one of the following: * CUSTOM_EDW: A user-defined schema. Used to house user-defined data sets. * ANALYST: A Cerner-defined schema. Used to house user-defined data sets. * MILLENNIUM: A Cerner-defined schema. Used to house Cerner-defined Cerner Millennium-based data sets. * POPHEALTH: A Cerner-defined schema. Used to house Cerner-defined population health data sets. CUSTOM_EDW, ANALYST, MILLENNIUM, POPHEALTH
createdAt string false The date and time when the schema was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the schema is first created; therefore, the field does not need to be set explicitly. -
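Per the type table, only CUSTOM_EDW and ANALYST schemas house user-defined data sets, so a client choosing a target schema for a new data set might filter on those two values. A sketch (the sample schema hashes are illustrative):

```ruby
# Schema types that can hold user-defined data sets, per the table above.
USER_DEFINED_SCHEMA_TYPES = %w[CUSTOM_EDW ANALYST].freeze

# Return only the schemas that can hold user-defined data sets.
def user_defined_schemas(schemas)
  schemas.select { |s| USER_DEFINED_SCHEMA_TYPES.include?(s['type']) }
end

schemas = [
  { 'mnemonic' => 'claims', 'type' => 'CUSTOM_EDW' },
  { 'mnemonic' => 'mill',   'type' => 'MILLENNIUM' }
]
puts user_defined_schemas(schemas).map { |s| s['mnemonic'] }.join(',') # claims
```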

postFeeds

Name Type Required Description Accepted Values
dataSets [string] true No description -
name string true No description -
mnemonic string true No description -
description string true No description -
frequency string true No description -
nextRun string true No description -
scheduleTimeZone string true No description -
compressionType string false No description TAR_CONTAINING_LZ4, TAR_GZ
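The postFeeds table above lists which fields the create-feed call requires; the sketch below assembles such a body. All field values here (data set IDs, name, mnemonic, schedule) are illustrative, and the commented call repeats the HTTParty pattern from the example at the top of this page.

```ruby
require 'json'

# A postFeeds request body; every field except compressionType is
# required per the table above.
feed_body = {
  'dataSets'         => ['ds-1', 'ds-2'],          # illustrative data set IDs
  'name'             => 'Daily Claims Extract',
  'mnemonic'         => 'claimsdaily',
  'description'      => 'Daily syndication of claims data sets',
  'frequency'        => 'DAILY',                   # or WEEKLY, MONTHLY N, MONTHLY LAST
  'nextRun'          => '2019-04-26T06:00:00.000Z',
  'scheduleTimeZone' => 'America/Chicago',         # tz database/IANA name
  'compressionType'  => 'TAR_GZ'                   # optional: TAR_CONTAINING_LZ4 or TAR_GZ
}

puts JSON.pretty_generate(feed_body)

# The create call itself, as in the example at the top of this page:
# HTTParty.post('https://cernerdemo.api.us-1.healtheintent.com/analytics-data-warehouse/v1/feeds',
#               headers: headers, body: JSON.generate(feed_body))
```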

Feed

Name Type Required Description Accepted Values
id string true The unique identifier for the feed. -
name string false The name of the feed. -
mnemonic string true The unique identifier by which the feed is referenced. This value must be unique, must be between 3 and 20 characters long, must be lowercase alphanumeric, and must begin with a letter. This value cannot be modified after creation. -
description string false The description of the feed. -
status string false The current status of the feed. The following values are possible: * QUEUED * EXTRACTING * SUBMITTED * FAILED QUEUED, EXTRACTING, SUBMITTED, FAILED
frequency string false Determines how often the feed runs. This must be set to schedule the feed. The following values are possible: * DAILY: The feed runs daily. * WEEKLY: The feed runs weekly on the day of the week of the next run. * MONTHLY N: The feed runs on the Nth day of every month. N can be a number or 'LAST' to run on the last day of the month. DAILY, WEEKLY, MONTHLY N, MONTHLY LAST
nextRun string false The date and time when the feed will run next. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. This must be set to schedule the feed. -
scheduleTimeZone string false The time zone to use for the next run. This must be set to schedule the feed. The time zone should follow tz database/IANA format. -
compressionType string false The type of compression used when zipping the files for download. The following values are possible: * TAR_CONTAINING_LZ4 * TAR_GZ TAR_CONTAINING_LZ4, TAR_GZ
deliveryChannelId string false The unique identifier for the delivery channel in data syndication. -
createdAt string false The date and time when the feed was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed is first created; therefore, the field does not need to be set explicitly. -
dataSets [string] true The array of data set IDs for the data sets in the feed. -
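The Feed table pins down the mnemonic format precisely (3-20 characters, lowercase alphanumeric, beginning with a letter), so a client can validate it before the POST and avoid a round trip. A sketch of that check; the helper name and sample values are illustrative:

```ruby
# Mnemonic rules from the Feed schema above: 3-20 characters,
# lowercase alphanumeric only, first character must be a letter.
FEED_MNEMONIC = /\A[a-z][a-z0-9]{2,19}\z/

def valid_feed_mnemonic?(mnemonic)
  !!(mnemonic =~ FEED_MNEMONIC)
end

puts valid_feed_mnemonic?('claimsdaily')  # true
puts valid_feed_mnemonic?('1claims')      # false (must begin with a letter)
puts valid_feed_mnemonic?('ab')           # false (shorter than 3 characters)
```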

Feeds

Name Type Required Description Accepted Values
items [Feed] true An array containing the current page of results. -
totalResults integer(int32) false The total number of results for the specified parameters. -
firstLink string true The first page of results. -
lastLink string false The last page of results. -
prevLink string false The previous page of results. -
nextLink string false The next page of results. -

putFeeds

Name Type Required Description Accepted Values
dataSets [string] true No description -
name string true No description -
mnemonic string true No description -
description string true No description -
frequency string true No description -
nextRun string true No description -
scheduleTimeZone string true No description -
compressionType string false No description TAR_CONTAINING_LZ4, TAR_GZ

postFeedsFeedidRuns

Name Type Required Description Accepted Values
offsetIndex integer(int32) false No description -

FeedRun

Name Type Required Description Accepted Values
id string true The unique ID of a feed run. -
startTime string false The date and time when the feed run started. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed run is started; therefore, the field does not need to be set explicitly. -
endTime string false The date and time when the feed run ended. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed run is ended; therefore, the field does not need to be set explicitly. -
status string false The current status of the feed run. The following values are possible: * QUEUED: The feed run is waiting to extract. * EXTRACTING: The feed is currently extracting. * SUCCEEDED: The feed run succeeded. * FAILED: The feed run failed. QUEUED, EXTRACTING, SUCCEEDED, FAILED
dataSetIds string true The comma-separated list of data set IDs that are syndicated in this feed. -
feedId string false The ID of the feed that the feed run belongs to. -
errorMessage string false The error message (if any) that was received during the extraction of the feed. -
createdAt string false The date and time when the feed run was initially entered into the system. In ISO 8601 formatting with precision ranging up to the millisecond (YYYY-MM-DDTHH:mm:ss.sssZ), for example, 2019-04-25T20:41:18.181Z. The time in this field is set automatically when the feed run is first created; therefore, the field does not need to be set explicitly. -
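Unlike elsewhere in this API, dataSetIds on a FeedRun is a single comma-separated string rather than an array, so clients typically split it back out. A sketch (the helper name and sample run are illustrative):

```ruby
# dataSetIds on a FeedRun is a comma-separated string; split it into
# an array of IDs, tolerating stray whitespace around the commas.
def feed_run_data_set_ids(feed_run)
  feed_run['dataSetIds'].to_s.split(',').map(&:strip).reject(&:empty?)
end

run = { 'id' => 'fr-1', 'dataSetIds' => 'ds-1, ds-2,ds-3' }
puts feed_run_data_set_ids(run).inspect # ["ds-1", "ds-2", "ds-3"]
```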

FeedRuns

Name Type Required Description Accepted Values
items [FeedRun] true An array containing the current page of results. -
totalResults integer(int32) false The total number of results for the specified parameters. -
firstLink string true The first page of results. -
lastLink string false The last page of results. -
prevLink string false The previous page of results. -
nextLink string false The next page of results. -