The following sections specify how to implement a partner app API according to the Qmiix App Protocol.
HTTPS
The production version of your app API must be served over HTTPS.
API URL prefix
Designate an API URL prefix for all of your API endpoints in your app configuration.
Endpoint paths:
Endpoints are scoped to the current version of the Qmiix App Protocol by appending your API URL prefix with /qmiix/v1 for all requests.
For example:
{{api_url_prefix}}/qmiix/v1/triggers/new_file_in_folder
{{api_url_prefix}}/qmiix/v1/actions/download_file
Headers with authentication
Use UTF-8 as the response encoding and support HTTP-level compression. Requests from Qmiix to your app API have the following headers:
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
Headers without authentication
Use UTF-8 as the response encoding and support HTTP-level compression. Requests from Qmiix to your app API have the following headers:
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
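A partner endpoint must accept both header shapes above. As a sketch in Python (the header names follow this spec; EXPECTED_APP_KEY and the token lookup are hypothetical placeholders, not part of the protocol):

```python
EXPECTED_APP_KEY = "example-app-key"  # hypothetical; Qmiix issues one per app

def is_valid_token(token: str) -> bool:
    # Placeholder: a real app would look the token up in its OAuth store.
    return token == "known-good-token"

def authenticate(headers: dict) -> bool:
    """Accept either a Bearer access token or a Qmiix-App-Key header."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return is_valid_token(auth[len("Bearer "):])
    return headers.get("Qmiix-App-Key") == EXPECTED_APP_KEY
```

Endpoints that require authentication should answer 401 when this check fails (see the status code table below).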
HTTP status codes
Use the following set of HTTP response status codes. For actions, Qmiix will retry several times on 5xx errors unless skip is specified (see Skipping Actions for more information); for 4xx errors, Qmiix will not retry.
200: The request was a success.
400: There was something wrong with incoming data from Qmiix. Provide an error response body to clarify what went wrong.
401: Qmiix sent an invalid OAuth2 access token. As with all other endpoints that require authentication via access token, the partner app should return a 401 status to indicate that the access token is invalid or expired.
404: Qmiix is trying to reach a URL that doesn't exist.
500: There was an error in your application logic, i.e., an internal server error.
503: Your service is not available at the moment, but Qmiix should try again later.
Response body format
Provide response bodies as JSON objects. Success responses have a top-level wrapper object called data.
Raw body on success
{
  "data": {
    // The value of `data` varies, but is typically
    // either an object or array
    ...
  }
}
Error responses have a top-level errors array. Each element of errors is an object with a message property whose value is a user-friendly error message.
Raw body on error
{
  "errors": [
    {
      "message": "Something went wrong!"
    }
  ]
}
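Both wrapper shapes are easy to centralize. A minimal sketch in Python (the helper names are our own, not part of the protocol):

```python
import json

def success_body(data):
    """Wrap a payload in the top-level `data` object the protocol requires."""
    return json.dumps({"data": data})

def error_body(message, status=None):
    """Build the top-level `errors` array; `status` (e.g. "SKIP") is optional."""
    error = {"message": message}
    if status is not None:
        error["status"] = status
    return json.dumps({"errors": [error]})
```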
If the partner app requires user authentication, users must connect to the partner app's OAuth service before they can use its triggers or actions. The following section specifies the required OAuth endpoints. Note that for partner apps acting as a proxy for an actual service provider, there is no need to implement the OAuth2 endpoints in house, since the actual service provider has already implemented them.
App connection has three steps:
Authorization Flow. This step authorizes Qmiix to make requests to the partner app API on behalf of the user.
Token Exchange. In this step, Qmiix exchanges the authorization code for an access token.
Fetching and storing basic user information from the partner app API. Note that this endpoint is separate from the OAuth endpoints and must be implemented by the partner app in house.
Authentication Flow
The Qmiix app connection protocol requires OAuth2 authentication; refresh tokens are supported if so desired.
Qmiix client credentials:
When configuring your app, provide Qmiix with a client ID and client secret for authentication-related requests.
Qmiix authorization:
Request
To begin authentication, the Qmiix UI redirects the user to your OAuth2 Authorization URL, specified in the App Authentication settings, and makes the following request:
Method: GET
URL: Your OAuth2 Authorization URL
Parameters:
scope: qmiix
client_id: Qmiix's client ID for your app as set in your app configuration.
response_type: code
redirect_uri: The Qmiix UI URL that finishes the OAuth flow after the browser redirects the user back to the Qmiix UI from the app's OAuth page.
state: State information added by the Qmiix UI before redirecting to the OAuth2 Authorization URL. It must be carried back to the Qmiix UI unchanged so the flow can continue.
Example:
https://dev-account.dev-myqnapcloud.com/oauth/auth?scope=qmiix&client_id=94b26e58a3a88d5c&response_type=code&redirect_uri=https://qmiix.com/apps/qnap/authorize&state=sauiewyioqewbhl1239801
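The example URL above is just the documented parameters assembled onto the authorization base URL. A sketch in Python (values are taken from the example; nothing new is assumed):

```python
from urllib.parse import urlencode

def authorization_url(base_url, client_id, redirect_uri, state):
    """Build the OAuth2 authorization request URL as Qmiix issues it."""
    params = {
        "scope": "qmiix",          # always "qmiix" per this spec
        "client_id": client_id,
        "response_type": "code",   # authorization-code flow
        "redirect_uri": redirect_uri,
        "state": state,
    }
    return base_url + "?" + urlencode(params)
```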
Response
After the user is redirected to your authorization request endpoint, the partner OAuth service should authenticate the user and prompt them to grant Qmiix access to the user's resources on the app.
Authorization grant
Once a user authorizes Qmiix, the partner OAuth service should redirect the user to Qmiix's app authorization URL along with an authorization code, which the Qmiix UI can exchange for a bearer token in the next step.
Redirect URL
https://qmiix.com/apps/{app_id}/authorize
The app_id is a string used to represent your app in URLs. You can set it in your app configuration.
Parameters:
code: The authorization code generated from service oauth.
state: The anti-forgery token provided by Qmiix in the authorization request.
Example:
https://qmiix.com/apps/{app_id}/authorize?code=qweewiopiqqwe50&state=sauiewyioqewbhl1239801
User denies Qmiix:
If the user denies Qmiix access to the partner's OAuth service, the partner should redirect to the Qmiix access_denied endpoint (specified by the Qmiix UI).
Example:
https://qmiix.com/apps/qnap/authorize?error=access_denied
Token Exchange:
Request:
After the Qmiix UI has received an authorization code for the user, Qmiix will make a POST request to your OAuth2 Token URL, specified in the App Authentication settings, to exchange the code for an access token.
Body Parameters
grant_type: authorization_code
code: The authorization code generated from previous step.
client_id: Qmiix's client ID for your App as set in your app configuration.
client_secret: Qmiix's client secret for your app as set in your app configuration.
redirect_uri: https://qmiix.com/apps/{app_id}/authorize
The app_id is a string used to represent your app in URLs. You can set it in your app configuration.
Example:
POST /oauth2/token HTTP/1.1
Host: dev-account.dev-myqnapcloud.com
Content-Type: application/x-www-form-urlencoded
grant_type=authorization_code&code=ewrwe45243f&client_id=8346dffgdfgdf2&client_secret=c4f7dedfgdgdrtf9b23&redirect_uri=https://qmiix.com/apps/{app_id}/authorize
Response:
If the authorization code is not valid, the partner should respond with a 401 status code and an error response body. If the request is valid, the partner should provide the following response:
HTTP:
Status: 200 (valid request) or 401 (invalid authorization code)
Header: Content-Type: application/json; charset=utf-8
Body:
token_type: Bearer
access_token: A token Qmiix will use to make authenticated calls to your API.
refresh_token: (optional) If enabled, refresh token Qmiix will use to refresh access tokens.
expires_in: Expiry time of access token, in seconds.
Example Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "access_token": "324878904ehioawhlkqewopque0492038",
  "refresh_token": "b29a7ertertuoptertt578a6be6402d2",
  "expires_in": 86500
}
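On the receiving side this body is parsed and stored; partners may find the same check useful when testing their token endpoint. A sketch in Python (the absolute-expiry bookkeeping is our own convention, not part of the spec):

```python
import json
import time

def parse_token_response(raw):
    """Validate and normalize a token-exchange response body per this spec."""
    body = json.loads(raw)
    if body.get("token_type") != "Bearer" or "access_token" not in body:
        raise ValueError("malformed token response")
    return {
        "access_token": body["access_token"],
        "refresh_token": body.get("refresh_token"),  # optional per spec
        # Store an absolute expiry so a client can refresh proactively.
        "expires_at": time.time() + body["expires_in"],
    }
```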
Token Refresh:
Request:
After the access token expires, Qmiix will make a POST request to your OAuth2 Token Endpoint, specified in the App Authentication settings, using the refresh token to retrieve a new access token. This is also the default behavior when a user reconnects an app.
HTTP:
Method: POST
URL: OAuth2 Token Endpoint
Headers: Content-Type: application/x-www-form-urlencoded
grant_type: refresh_token
client_id: Qmiix's client ID for your app as set in your app configuration
client_secret: Qmiix's secret for your app as set in your app configuration
refresh_token: The refresh token retrieved in the authentication flow.
Example:
POST /oauth2/token HTTP/1.1
Host: dev-account.dev-myqnapcloud.com
Content-Type: application/x-www-form-urlencoded
grant_type=refresh_token&client_id=trryt57586768678&client_secret=6976996868769696796&refresh_token=6963434sdryyit3434534
Response:
If the refresh token is not valid, the partner should respond with a 401 status code. If the refresh token is valid, the partner should provide the following response:
HTTP:
Status: 200
Headers: Content-Type: application/json; charset=utf-8
Body:
token_type: Bearer
access_token: The updated access token
expires_in: Expiry time of the updated access token, in seconds.
Example:
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "access_token": "324878904ehioawhlkqewopque0492038",
  "refresh_token": "b29a7ertertuoptertt578a6be6402d2",
  "expires_in": 86500
}
Fetch User Information:
Request:
After acquiring an access token, the Qmiix UI will make a request to the partner's user information endpoint. This information is considered private and will only be displayed to the user who connected the partner's app.
The Qmiix UI will also make requests to this endpoint to verify that the user's access token, and therefore the app connection, is still valid.
HTTP
Method: GET
URL: {{api_url}}/qmiix/v1/user/info
Headers:
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}
Example:
GET /qmiix/v1/user/info HTTP/1.1
Host: dev-qmiix.api.dev-myqnapcloud.com
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}
Response:
Requests to the partner's user information endpoint should generate the following response:
HTTP:
Status: 200
Header: Content-Type: application/json; charset=utf-8
Body:
name: Full name, username, email, or other identification to display to the user.
id: Username, email, number, or other identification to uniquely identify the resource owner within your app.
url: URL to user’s dashboard or configuration page on partner's website.
Example:
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "data": {
    "name": "John Millman",
    "id": "johnmillman@qnap.com",
    "url": "http://www.myqnapcloud.com/users/johnmillman"
  }
}
As with all other endpoints which require authentication via access token, you should return a 401 status to indicate that the access token is invalid or expired.
Each trigger requires a unique API endpoint. Qmiix supports two types of triggers:
Real Time Trigger: Qmiix suggests using the Realtime API for triggers whose users would expect their Miixes to run in real time, to ensure a great Miix experience. For example, an app that watches a folder for modifications can call the real-time notification API whenever a change happens, so there is no delay.
Trigger Polling: For each miix using a given trigger, Qmiix will poll that trigger's endpoint for events about once every 10 minutes (the period may change). For each new item returned by the trigger, Qmiix will fire the miix's associated action.
A trigger endpoint should by default return up to the 50 most recent events (this default may change); duplicate and old events will be discarded by Qmiix. Qmiix can override the number of returned items by specifying a limit parameter in the request. Do not exceed the number of events specified by the limit parameter, as Qmiix will discard the excess events. Events should remain on the timeline for some time and should not expire within that period.
Note that if the Realtime API is used for a given trigger, it will be polled at a longer interval. However, Qmiix is guaranteed to poll at least once about every 20 minutes (the period may change).
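The limit, ordering, and deduplication rules above can be enforced in one place before responding. A sketch in Python (the function and parameter names are our own):

```python
def prepare_trigger_items(events, limit=50, seen_ids=()):
    """Return at most `limit` unique events, newest first, per this spec.

    `events` is a list of item dicts, each carrying meta["id"] and
    meta["timestamp"] (Unix seconds); `seen_ids` lets the caller drop
    events that were already delivered.
    """
    fresh = [e for e in events if e["meta"]["id"] not in set(seen_ids)]
    # Items in the stream must be in descending order by timestamp.
    fresh.sort(key=lambda e: e["meta"]["timestamp"], reverse=True)
    return fresh[:limit]
```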
Request
To fetch new items Qmiix will make the following request to your trigger endpoints:
HTTP:
Method: POST
URL: {{api_url}}/qmiix/v1/triggers/{trigger_slug}
Headers Authenticated Apps
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
Headers Non-Authenticated Apps
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
Body:
trigger_identity: A unique identifier for this set of trigger essentials for a given miix.
trigger_essentials: Map of trigger essential slugs to values.
limit: Maximum number of items to be returned; default 50 (may change).
user: Information about the Qmiix user related to this request.
qmiix_source: Information about the user miix on Qmiix that triggered this request. This will have an id uniquely identifying the miix and a url pointing to a web page describing it. Note that only the user will be able to see this page, since user miixes are private. In the future, these fields may point to an entity other than a personal miix.
Example: default limit

POST /qmiix/v1/triggers/new_file_in_a_folder HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{dropbox user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "trigger_identity": "92429d82a41e93048",
  "trigger_essentials": {
    "folder_path": "/qmiix",
    "file_type": "all"
  },
  "qmiix_source": {
    "id": "2",
    "url": "https://qmiix.com/mymiixes/personal/2"
  },
  "user": {
    "timezone": "Pacific Time (US & Canada)",
    "id": "123618726hdjkdahksa"
  }
}
This example provides the limit parameter
Example: explicit limit

POST /qmiix/v1/triggers/new_file_in_a_folder HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{dropbox user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "trigger_identity": "92429d82a41e93048",
  "trigger_essentials": {
    "folder_path": "/qmiix",
    "file_type": "all"
  },
  "limit": 10,
  "qmiix_source": {
    "id": "2",
    "url": "https://qmiix.com/mymiixes/personal/2"
  },
  "user": {
    "timezone": "Pacific Time (US & Canada)",
    "id": "123618726hdjkdahksa"
  }
}
Response:
Responses contain an array of item objects. Items are a stream of unique events on a timeline, and each item has:
One field for every element in the trigger.
meta[id]: A unique identifier for the event.
meta[timestamp]: A timestamp in Unix seconds. Items in the stream must be in descending order by timestamp.
The response should be structured as follows:
HTTP
Status: 200
Headers: Content-Type: application/json; charset=utf-8
The body is a JSON object containing a data array of item objects. Each item has one key-value pair for each trigger element slug and value, plus a meta object with two fields: id and timestamp.
Example:
Body:
{
  "data": [
    {
      "image_url": "http://example.com/images/128",
      "tags": "banksy, brooklyn",
      "posted_at": "2013-11-04T09:23:00-07:00",
      "meta": {
        "id": "14b9-1fd2-acaa-5df5",
        "timestamp": 1383597267
      }
    },
    {
      "image_url": "http://example.com/images/125",
      "tags": "banksy, nyc",
      "posted_at": "2013-11-04T03:23:00-07:00",
      "meta": {
        "id": "ffb27-a63e-18e0-18ad",
        "timestamp": 1383596355
      }
    }
  ]
}
Date and time elements
Elements that use the Date or Date with time types are timestamps in the W3C flavor of ISO 8601.
Example: Date only
2013-12-31
2014-01-01
Example: Date & Time
2013-11-04T09:23:00Z
2013-11-04T09:23:00-07:00
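For reference, Python's datetime produces these W3C/ISO 8601 forms directly (isoformat emits "+00:00" where the examples show the equivalent "Z" suffix):

```python
from datetime import datetime, timezone, timedelta

# Date only
assert datetime(2013, 12, 31).date().isoformat() == "2013-12-31"

# Date & time in UTC ("+00:00" is equivalent to the "Z" suffix)
utc = datetime(2013, 11, 4, 9, 23, 0, tzinfo=timezone.utc)
assert utc.isoformat() == "2013-11-04T09:23:00+00:00"

# Date & time with an explicit -07:00 offset
pacific = datetime(2013, 11, 4, 9, 23, 0, tzinfo=timezone(timedelta(hours=-7)))
assert pacific.isoformat() == "2013-11-04T09:23:00-07:00"
```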
Trigger Identity
When a user miix is created or enabled and a new trigger_identity is created or resumed, Qmiix will call the POST trigger identity API on the partner app endpoint to notify the partner app that it should start monitoring events for this trigger identity. Conversely, if there is no longer any active user miix using a trigger identity, the DELETE trigger identity API will be called to notify the partner app that it should stop monitoring events for that trigger identity and stop sending events to Qmiix.
Note that if the POST or DELETE call to the partner app does not succeed, Qmiix will not try again. Therefore, the partner app should not rely solely on these APIs to start or stop monitoring events.
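Since a missed POST or DELETE is never retried, a partner app typically keeps its own registry of active trigger identities, updated both by these calls and by ordinary polling traffic (see the guidelines later in this document). A sketch in Python (the class and method names are our own):

```python
class TriggerIdentityRegistry:
    """Track which trigger identities this app should monitor."""

    def __init__(self):
        self.active = {}  # trigger_identity -> trigger_essentials

    def on_post(self, trigger_identity, trigger_essentials):
        # POST trigger identity: start (or resume) monitoring.
        self.active[trigger_identity] = trigger_essentials

    def on_delete(self, trigger_identity):
        # DELETE trigger identity: stop monitoring, stop sending events.
        self.active.pop(trigger_identity, None)

    def on_poll(self, trigger_identity, trigger_essentials):
        # A valid poll for an unseen identity also starts monitoring,
        # since a missed POST will not be retried by Qmiix.
        self.active.setdefault(trigger_identity, trigger_essentials)
```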
POST Request
HTTP
Method: POST
URL: {{api_url}}/qmiix/v1/triggers/{trigger_slug}/trigger_identity/{trigger_identity}
Example:
POST /qmiix/v1/triggers/new_file_in_a_folder/trigger_identity/aaa438792g3201 HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{dropbox user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "trigger_essentials": {
    "folder_path": "/qmiix",
    "file_type": "all"
  },
  "qmiix_source": {
    "id": "2",
    "url": "https://qmiix.com/mymiixes/personal/2"
  },
  "user": {
    "timezone": "Pacific Time (US & Canada)",
    "id": "123618726hdjkdahksa"
  }
}
Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
DELETE Request
HTTP
Method: DELETE
URL: {{api_url}}/qmiix/v1/triggers/{trigger_slug}/trigger_identity/{trigger_identity}
Example
DELETE /qmiix/v1/triggers/new_file_in_a_folder/trigger_identity/aaa438792g3201 HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{dropbox user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}
Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Trigger essentials can have dynamic options and dynamic validation. Each dynamic option and validation requires a unique endpoint.
Dynamic options
Options have both a label, which the user sees, and a value, which is sent when the trigger is executed.
Request
For drop-down selector trigger essentials, you can dynamically provide user-specific options. Each time the drop-down is displayed, Qmiix will fetch a list of options from your trigger essential’s dynamic options endpoint.
HTTP
Method: POST
URL: {{api_url}}/qmiix/v1/triggers/{{trigger_slug}}/essentials/{{trigger_essential_slug}}/options
HEADERS: authenticated services
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
HEADERS: non-authenticated services
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
Example
POST /qmiix/v1/triggers/new_song_in_my_nas/essentials/singer/options HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{user_access_token}}
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "data": [
    {
      "dependency_sequence": 0,
      "key_name": "nas",
      "value": "howardNASTS-Pro"
    }
  ]
}
Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "data": [
    {
      "label": "Bruno Mars",
      "value": "Bruno Mars"
    },
    {
      "label": "Mariah Carey",
      "value": "Mariah Carey"
    }
  ]
}
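Serving dynamic options reduces to returning a data array of label/value objects. A sketch in Python (the helper name and singer list are illustrative only):

```python
import json

def options_response(pairs):
    """Build the dynamic-options body from (label, value) pairs."""
    return json.dumps(
        {"data": [{"label": label, "value": value} for label, value in pairs]}
    )

# Illustrative use, mirroring the example response above:
body = options_response([("Bruno Mars", "Bruno Mars"),
                         ("Mariah Carey", "Mariah Carey")])
```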
Dynamic validation
Request
For date, time, text, list of text and location input trigger essentials, you can dynamically validate user input. Qmiix will make the following request to your partner app API:
HTTP
Method: POST
URL: {{api_url}}/qmiix/v1/triggers/{trigger_slug}/essentials/{{trigger_essential_slug}}/validate
HEADERS: authenticated services
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
HEADERS: non-authenticated services
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
Example
POST /qmiix/v1/triggers/song_played_in_my_nas/essentials/song/validate HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{user_access_token}}
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "value": "uptown funking",
  "data": [
    {
      "dependency_sequence": 0,
      "key_name": "nas",
      "value": "howardNASTS-Pro"
    },
    {
      "dependency_sequence": 1,
      "key_name": "singer",
      "value": "Bruno Mars"
    }
  ]
}
Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "data": {
    "valid": false,
    "message": "Sorry, no song exists with the name \"uptown funking\" from Bruno Mars."
  }
}
When you create a new trigger and define the data the trigger makes available via elements, we'll ask you to provide examples of how the trigger would best be used in various categories of actions. This helps guide users to create powerful miixes more efficiently, starting from helpful defaults instead of empty essentials.
Below you'll find helpful tips for each of the action categories:
Mobile push notification tips:
Should be friendly and personal! Use 'you' instead of 'my'.
Don't make the content too dense. Inform the user of the most important information about the event.
Avoid elements that point to URLs.
Short message tips:
Provide the most important information using elements that keep the message contextual to the event.
Keep in mind that the message might be truncated based on character restrictions.
Include a URL element if one is available.
Long post tips:
Use the 'Post body' essential to craft a delightful message for users. HTML is accepted, so be sure to add formatting that might enhance the message.
Keep in mind that this content is used in email actions, which are widely used. 'Post title' is the subject and 'Post body' is the body of the email.
Plaintext file tips:
Plaintext files are great for record keeping. Be sure to use all relevant elements in the 'Plaintext body'.
If the 'Filename' is static (ex. Saved tracks on Spotify), one document will be created and then appended to for each subsequent event.
If the 'Filename' contains a dynamic element (ex. Track saved on {{SavedAt}}), a new file will be created with each event because it will have a unique filename.
The 'Folder path' specified will be created if it does not yet exist for the user.
Spreadsheet tips:
Spreadsheets are great for record keeping. Be sure to use all relevant elements.
Use ||| to separate cells in a row of a spreadsheet (ex. "{{Element1}}|||{{Element2}}|||{{Element3}}")
If you would like to use an image in one of the cells use =IMAGE("{{element}}";1).
Phone call tips:
The contents here will be read aloud when the phone call action runs. Keep that in mind when formatting the template.
A good starting place for this template is to reference the contents you wrote for the notification template.
Calendar event tips:
You need to have at least one timestamp element included in the 'Quick add text' so that the calendar action knows when to create the event.
If your trigger produces the start time and end time of the event, be sure to use both elements in the template (ex. "Some event occurred from {{StartTime}} to {{EndTime}}")
Each action requires a unique API endpoint. Qmiix supports two types of actions:
Real Time Action: Actions of channel apps that can execute in real time, so that the Miix can run in real time and ensure a great Miix experience for your users.
Action Polling: Qmiix provides an Action Polling API for apps to actively poll for unexecuted actions on a non-realtime basis. This API is helpful for apps that are inevitably unavailable from time to time. Take mobile devices as an example: they are often offline (turned off or disconnected from the network) while Qmiix is sending actions to them. Such a mobile app should poll from time to time for non-executed actions temporarily queued in Qmiix, to ensure no actions are missed, especially after the network resumes or the device restarts.
Request
For each new trigger item, Qmiix will push data to your action endpoint with the following request structure:
HTTP
Method: POST
URL: {{api_url}}/qmiix/v1/actions/{{action_slug}}
HEADERS: authenticated services
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
HEADERS: non-authenticated services
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
Example
POST /qmiix/v1/actions/download_file HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{dropbox user_access_token}}
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "action_essentials": {
    "folder_path": "/qmiix",
    "file_path": "http://image.com/123456"
  },
  "qmiix_source": {
    "id": "2",
    "url": "https://qmiix.com/mymiixes/personal/2",
    "execution_id": "oiea1268910acdr83212"
  },
  "user": {
    "timezone": "Pacific Time (US & Canada)",
    "id": "123618726hdjkdahksa"
  }
}
Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

Body:
{
  "data": [
    {
      "id": "234325",
      "url": "http://example.com/posts/234325",
      "asynchronous": true
    }
  ]
}
Skipping Actions
If an action fails and returns an error, Qmiix will retry several times. If the action continues to fail, the offending event will eventually be skipped. Alternatively, the partner app can specify status: "SKIP" in the error response to tell Qmiix to skip this action. In that case, Qmiix will not retry the action regardless of the actual execution result; it is the partner app's responsibility to handle any retry.
Example:
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8

{
  "errors": [
    {
      "status": "SKIP",
      "message": "Media file size too big"
    }
  ]
}
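From Qmiix's side, the retry rules combine the status code with the optional SKIP marker. A sketch of that decision in Python (the function name and return labels are our own):

```python
def action_outcome(status_code, body=None):
    """Classify an action response per this spec: 2xx succeeds, SKIP is
    honored without retry, other 5xx errors are retried, and other 4xx
    errors fail without retry."""
    if 200 <= status_code < 300:
        return "success"
    errors = (body or {}).get("errors", [])
    if any(e.get("status") == "SKIP" for e in errors):
        return "skip"
    if status_code >= 500:
        return "retry"
    return "fail"
```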
Action essentials can have dynamic options and dynamic validation. Each dynamic option and validation requires a unique endpoint.
Dynamic Options:
Options have both a label, which the user sees, and a value, which is sent when the action is executed.
Request
For drop-down selector action essentials, you can dynamically provide user-specific options. Each time the action essential is displayed, Qmiix will fetch a list of options from your action essential's dynamic options endpoint:
HTTP
Method: POST
URL: {{api_url}}/qmiix/v1/actions/{action_slug}/essentials/{{action_essential_slug}}/options
HEADERS: authenticated services
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
HEADERS: non-authenticated services
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-Request-ID: {{random_uuid}}
Example
POST /qmiix/v1/actions/download_file_to_nas/essentials/folder_name/options HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{user_access_token}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "data": [
    {
      "dependency_sequence": 0,
      "key_name": "nas",
      "value": "howardNASTS-Pro"
    }
  ]
}
Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "data": [
    {
      "label": "/Public",
      "value": "/public"
    },
    {
      "label": "/Share",
      "value": "/share"
    }
  ]
}
Dynamic Validations
For date, time, text and list of text action essentials, you can dynamically validate user input. Qmiix will make the following request to your partner app API. Note that if a trigger element is used in an action essential, the UI will not call the dynamic validation API, since validation is not feasible in that case.
Request
HTTP
Method: POST
URL: {{api_url}}/qmiix/v1/actions/{action_slug}/essentials/{{action_essential_slug}}/validate
Example
POST /qmiix/v1/actions/play_a_song_in_my_nas/essentials/song_name/validate HTTP/1.1
Host: dropbox.qmiix.service.api.com
Authorization: Bearer {{user_access_token}}
Qmiix-App-Key: {{qmiix_app_key}}
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: {{random_uuid}}

Body:
{
  "value": "uptown funking",
  "data": [
    {
      "dependency_sequence": 0,
      "key_name": "nas",
      "value": "howardNASTS-Pro"
    },
    {
      "dependency_sequence": 1,
      "key_name": "singer",
      "value": "Bruno Mars"
    }
  ]
}
Example Response
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "data": {
    "valid": false,
    "message": "Sorry, no song exists with the name \"uptown funking\" from Bruno Mars."
  }
}
Provide an API endpoint which Qmiix can periodically check for partner app's availability. This endpoint is not user-specific, and thus does not require an access token.
Request
Qmiix will make the following request to check your app's API status:
HTTP
Method: GET
URL: {{api_url}}/qmiix/v1/status
Example
GET /qmiix/v1/status HTTP/1.1
Host: api.example-service.com
Qmiix-App-Key: vFRqPGZBmZjB8JPp3mBFqOdt
Accept: application/json
Accept-Charset: utf-8
Accept-Encoding: gzip, deflate
X-Request-ID: 0715f98e65f749aba2fc243eac1e3c09
Response - Service OK
HTTP/1.1 200 OK
Response - Service Unavailable
HTTP/1.1 503 Service Unavailable
General
All channel app components and micro-services are recommended to be containerized, except for infrastructure components, e.g., Redis, MongoDB, PostgreSQL, Kafka, etc.
The channel app API endpoints implementation should fully support the requirements and limitations addressed in RequiredPartnerAppAPI
Channel apps can use the QmiixPublicAPI, and QNAP private vendors can use the QmiixPrivateAPI, for channel app development.
Channel apps should be independent of each other.
Qmiix Token Management
Each channel app will be given a unique Qmiix OAuth2 client id/secret pair and a Qmiix Service key.
The client id and secret pair are used to generate a Qmiix access token via the Qmiix OAuth2 flow.
The channel app should cache the Qmiix access token and only request a new one when the old one has expired, to avoid exchanging tokens too frequently.
No Auth Authentication
For channel apps using no-auth authentication, the channel app should verify the Qmiix-App-Key in the header.
Trigger Design
Collection of trigger event data from a channel can be implemented in the following modes:
On demand (Synchronous call)
This implementation fits scenarios that do not require timely event collection.
Whenever requested, e.g., by Qmiix trigger polling, the trigger implementation should be able to query the service provider for new event data.
Webhook callback API
If a webhook callback is available in the channel for new event data, it is suggested to register callbacks in the channel for timely event collection.
If timely event collection is not required, it is suggested to buffer the new event data and use the trigger event API to send batched events to Qmiix.
Periodically polling
If a webhook callback is not available in the channel for new event data, it is suggested to periodically poll the channel so new event data is obtained as soon as possible.
It is suggested to buffer the polled event data and use the trigger event API to send batched events to Qmiix.
It is suggested to make the polling interval configurable.
It is required to abide by the channel's rate limit policy to avoid being banned by the channel.
It is required to filter duplicate trigger event data from the channel. The definition of duplicate event data may differ based on the characteristics of the event data type; the rule of thumb is that events with different creation timestamps are considered different.
The trigger implementation should guarantee minimal event data loss from the channel:
Only discard data that cannot be mapped to proper trigger elements.
Only discard data when a schema error message for a specific event (identified by meta_id) is received in the response of the Qmiix trigger event API.
To periodically poll for new events from the channel, the channel app should use the channel OAuth access token from the Qmiix trigger polling request and retain the token.
On receipt of a POST Trigger Identity API call from Qmiix, the channel app should start monitoring trigger events immediately.
On receipt of an unseen but valid trigger polling request from Qmiix, the channel app should start monitoring trigger events immediately.
On receipt of a DELETE Trigger Identity API call from Qmiix, the channel app should stop monitoring trigger events immediately.
If no trigger polling has been received from Qmiix for a long time, the channel app should stop monitoring trigger events.
Trigger event data generated before event collection starts should not be collected.
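The periodic-polling guidelines above (configurable interval, duplicate filtering by creation timestamp, buffering, batch send) can be sketched as follows; `poll_channel` and `send_trigger_events` are hypothetical stand-ins for the channel query and the Qmiix trigger event API call:

```python
class TriggerPoller:
    """Periodic-polling sketch: poll the channel, drop duplicates,
    buffer events, and flush them in batches to the Qmiix trigger
    event API.
    """

    def __init__(self, poll_channel, send_trigger_events, interval_seconds=1):
        self.poll_channel = poll_channel
        self.send_trigger_events = send_trigger_events
        self.interval = interval_seconds      # configurable, per the guideline
        self._seen = set()                    # dedup keys already processed
        self._buffer = []

    def poll_once(self):
        for event in self.poll_channel():
            # Rule of thumb: events with different creation timestamps
            # are considered different events.
            key = (event["meta_id"], event["created_at"])
            if key in self._seen:
                continue
            self._seen.add(key)
            self._buffer.append(event)

    def flush(self):
        if self._buffer:
            self.send_trigger_events(self._buffer)   # batch send to Qmiix
            self._buffer = []
```

A scheduler would call `poll_once` every `interval` seconds and `flush` whenever the buffer should be drained.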
Dynamic API Design
Dynamic options API results may be cached for a configurable period of time in the channel app to speed up subsequent requests from Qmiix within a short time.
If an essential (either a trigger or action essential) uses dynamic options that depend on other essential values from the user's input, please create these essentials in order in the Qmiix partner portal. Qmiix fires the dynamic options API to the channel plugin with the required dependent essential values. The following is an example request body of a dynamic options API, showing that this essential depends on the value "howardNASTS-Pro" of another essential, "nas":
{
  "connected_account_id": "connected_account_id_chosen_for_the_action",
  "data": [
    { "dependency_sequence": 0, "key_name": "nas", "value": "howardNASTS-Pro" }
  ]
}
If an essential (either a trigger or action essential) uses dynamic validation that depends on other essential values from the user's input, please create these essentials in order in the Qmiix partner portal. Qmiix fires the dynamic validation API to the channel plugin with the required dependent essential values. The following is an example request body of a dynamic validation API, showing that this essential depends on two other essentials, "nas" and "singer":
{
  "connected_account_id": "connected_account_id_chosen_for_the_action",
  "value": "uptown funk",
  "data": [
    { "dependency_sequence": 0, "key_name": "nas", "value": "howardNASTS-Pro" },
    { "dependency_sequence": 1, "key_name": "singer", "value": "Bruno Mars" }
  ]
}
If an action essential uses dynamic validation and its value contains any trigger elements, Qmiix will still fire dynamic validation APIs to the channel app. Each trigger element is wrapped in double curly braces: {{trigger_element}}. The channel app is responsible for validating these values.
{
  "connected_account_id": "connected_account_id_chosen_for_the_action",
  "value": "{{sheet1}},{{sheet2}}"
}
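One way a channel app might validate such a value is to accept anything wrapped in double curly braces as a trigger element (resolvable only at run time) and check literal parts normally; the allowed sheet names here are purely illustrative:

```python
import re

# Trigger elements arrive wrapped in double curly braces, e.g. {{sheet1}}.
TRIGGER_ELEMENT = re.compile(r"\{\{[^{}]+\}\}")

def validate_sheet_list(value):
    """Sketch of dynamic validation for a comma-separated value that may
    mix literal names with trigger elements such as {{sheet1}}.

    Literal names are checked against a hypothetical allowed set; trigger
    elements cannot be resolved until run time, so they are accepted as-is.
    """
    allowed = {"sales", "inventory"}          # assumption: known sheet names
    for part in (p.strip() for p in value.split(",")):
        if TRIGGER_ELEMENT.fullmatch(part):
            continue                          # resolved later, accept for now
        if part not in allowed:
            return False
    return True
```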
Action Design
Execution of an action task can be performed in the following ways:
Synchronous execution
When Qmiix fires an Action API request to the channel app, the action task is executed synchronously and the execution result is returned in the response to the Action API request.
If the action is designed to be executed synchronously, Qmiix ensures that when trigger events arrive, the corresponding actions are executed in event order.
If an action for an event fails, Qmiix retries the action several times, and later events are queued until the current action succeeds or is confirmed to have failed.
The retry conditions are as follows:
If the action API responds with an error, Qmiix retries the action except under the following conditions:
The action API responds with a 4xx error other than 401; Qmiix will not retry the failed action.
The action API responds with a skip error; Qmiix will not retry the failed action.
Asynchronous execution
When Qmiix fires an Action API request to the channel app, the channel app may create an asynchronous action task and respond to the Action API request immediately with the asynchronous field set to true, to avoid waiting for the execution result.
The channel app should use the execution_id received in the Action API request from Qmiix to report the execution result back to Qmiix via the Action Status API.
An asynchronous action task should not last longer than 30 minutes. If the task takes longer, the channel should report an ongoing status to Qmiix within 30 minutes, or Qmiix will consider the task failed.
If the action is designed to be executed asynchronously, Qmiix ensures that when trigger events arrive, the corresponding Action API requests are fired in event order. However, the actual execution order is not guaranteed.
If an action for an event fails, Qmiix retries the action several times, and actions for later events are still fired to the channel app.
The action retry conditions are as follows:
If the action API responds with an error, Qmiix retries the action except under the following conditions:
The action API responds with a 4xx error other than 401; Qmiix will not retry the failed action.
The action API responds with a skip error; Qmiix will not retry the failed action.
E.g., the download URL does not exist, etc.
The channel app should handle duplicate action executions (identified by the same execution_id in the Action API request from Qmiix).
For actions that require events to be executed in order, the channel app should handle out-of-order execution errors gracefully.
The channel app should be able to handle duplicate action executions gracefully.
E.g., if "Power on the NAS" is executed several times within a very short period, the action handler should ignore the subsequent duplicate actions.
The channel app should detect and stop possible infinite action executions.
E.g., if a miix is configured as "If there is a new file in the "/qmiix" folder in my Dropbox, download the file to the "/qmiix" folder in my Dropbox", the source folder and the target folder are the same, so the miix executions could become infinite.
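The asynchronous-execution guidelines above (respond immediately with asynchronous=true, deduplicate by execution_id, report results via the Action Status API) can be sketched as follows; `run_task` and `post_action_status` are hypothetical callables:

```python
class AsyncActionHandler:
    """Sketch of asynchronous action handling: respond immediately,
    execute the task, deduplicate by execution_id, and report the
    result via the Action Status API.
    """

    def __init__(self, run_task, post_action_status):
        self.run_task = run_task
        self.post_action_status = post_action_status
        self._handled = set()   # execution_ids already accepted

    def handle(self, request_body):
        execution_id = request_body["execution_id"]
        # Qmiix may retry: ignore a duplicate execution_id instead of
        # running the same task twice.
        if execution_id in self._handled:
            return {"asynchronous": True}
        self._handled.add(execution_id)
        self._execute(execution_id, request_body)
        return {"asynchronous": True}

    def _execute(self, execution_id, request_body):
        # In a real app this would run on a background worker; here the
        # failure path simply becomes a failed status report.
        try:
            result = self.run_task(request_body)
            self.post_action_status(execution_id, "success", result)
        except Exception as exc:
            self.post_action_status(execution_id, "failure", str(exc))
```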
Performance Requirements
With regard to end-to-end performance benchmarking, a performance analysis of each related component should be done (documented and justified) in order to support the benchmarking figures.
All end-to-end performance benchmarks should be repeatable, and their results should be reproducible within a 5% level of significance.
While performance benchmarking is being conducted, and after it has finished, the channel apps should remain in a normal state, and no health metric should indicate that the system is unhealthy or unstable.
The performance goals of the QIF system include the following types of end-to-end performance:
Performance Benchmarking for QIF Trigger APIs response times
Performance Benchmarking for QIF Action APIs response times
Performance Benchmarking for QIF Dynamic APIs response times
Performance Benchmarking for QIF collection of trigger event data
Note that the target channel app infrastructure and performance figures are on a single channel app basis.
Performance Attributes and Assumptions
Each end-to-end performance benchmark assumes the following workload and attributes to simplify the estimation of the performance goals:

| Attributes | Estimate Value |
| --- | --- |
| Latency from service provider to channel app | EL sec |
| Number of unique trigger identities | 100K |
| Events generation rate per trigger identity | 1/s |
| Qmiix min polling period in seconds | 300 sec |
| Channel app min polling period in seconds | 1 sec |
Performance Benchmarking for QIF Trigger APIs (Synchronous Call) Response Times
| Desired Performance KPIs | Description | Desired Performance |
| --- | --- | --- |
| Response time | 1. Maximum response time for the Trigger API 2. Average response time for the Trigger API | 1. 1 + EL sec 2. 0.3 + EL sec |
| Minimum transactions per second | Minimum number of Trigger API calls executed within a second | 100/sec |
Performance Benchmarking for QIF Action APIs (Synchronous Call) Response Times
| Desired Performance KPIs | Description | Desired Performance |
| --- | --- | --- |
| Response time | 1. Maximum response time for the Action API 2. Average response time for the Action API | 1. 1 + EL sec 2. 0.3 + EL sec |
| Minimum transactions per second | Minimum number of Action API calls executed within a second | 100/sec |
Performance Benchmarking for QIF Action APIs (Asynchronous Call) Response Times
| Desired Performance KPIs | Description | Desired Performance |
| --- | --- | --- |
| Response time | 1. Maximum response time for the Action API 2. Average response time for the Action API | 1. 1 sec 2. 0.3 sec |
| Minimum transactions per second | Minimum number of Action API calls executed within a second | 300/sec |
| Task execution time | 1. Maximum action task execution time 2. Average action task execution time | 1. 2 + EL sec 2. 1 + EL sec |
| Concurrent tasks per second | Maximum concurrent action tasks executed within a second | 1000/sec |
Performance Benchmarking for QIF Dynamic API (Synchronous Call) Response Times
| Desired Performance KPIs | Description | Desired Performance |
| --- | --- | --- |
| Response time | 1. Maximum response time for the Dynamic API 2. Average response time for the Dynamic API | 1. 1 + EL sec 2. 0.3 + EL sec |
| Minimum transactions per second | Minimum number of Dynamic API calls executed within a second | 100/sec |
Performance Benchmarking for Collection of Trigger Data (prepare for calling Realtime Event API) - Periodically Polling
| Desired Performance KPIs | Description | Desired Performance |
| --- | --- | --- |
| Task execution time | 1. Maximum polling task execution time 2. Average polling task execution time | 1. 5 + EL sec 2. 2 + EL sec |
| Concurrent polling tasks per second | Maximum concurrent polling tasks executed within a second | 100K |
Performance Benchmarking for Collection of Trigger Data (prepare for calling Realtime Event API) - Rest Callback with event data
| Desired Performance KPIs | Description | Desired Performance |
| --- | --- | --- |
| Event Available Time | 1. Maximum time to start/finish receiving callback data from the actual service provider for a new event 2. Average time to start/finish receiving callback data from the actual service provider for a new event | 1. 2 + EL sec 2. 1 + EL sec |
Performance Benchmarking for Collection of Trigger Data (prepare for calling Realtime Event API) - Rest Callback with notification only (event data fetched separately)
| Desired Performance KPIs | Description | Desired Performance |
| --- | --- | --- |
| Event Available Time | 1. Maximum time between the callback notification being received from the actual service provider for a new event and the actual event data being fetched 2. Average time between the callback notification being received for a new event and the actual event data being fetched | 1. 3 + EL sec 2. 1.5 + EL sec |
Target Channel App Infrastructure (TBD) for a Channel App
| Resource | Node 1-3 (VM) | Node 4-6 (VM) | Node 7-9 (VM) | Node 10-11 (VM) | Node 12-14 (VM) |
| --- | --- | --- | --- | --- | --- |
| Container/Node | Kubernetes Worker Node (channel app containers) | Kafka | Zookeeper | Mongo (router, config) | Mongo (Shard Node) |
| CPU (cores) | 2 | 2 | 2 | 2 | 2 |
| RAM (GB) | 8 (each container is limited to 200 MB) | 4 | 4 | 4 | 8 |
| DISK (GB) | 100 | 100 | 100 | 100 | 100 |
Note: The above configuration may change at any time based on changes in the architecture during execution. It is a basis for an initial understanding when starting the performance tests.
Scalability Requirements
The system can expand or contract its resources to accommodate heavier or lighter load. All components should be containerized and support the following requirements:
Elasticity in scale:
The system can deliver better performance (lower response time and higher throughput) by adding resources to eliminate performance bottlenecks. If the system is under-utilized, resources can be removed to reduce operation cost.
Independent scalability at component level:
If a component in the system is identified as the bottleneck or under-utilized, its resources can be added to improve the performance of the whole system or removed to reduce the cost, while the other components in the system remain unchanged. For example, we could scale in/out containers of Gmail channel apps without affecting Facebook channel apps.
Flexibility for horizontal and vertical scalability:
When changing the resources of a component in the system, the system administrator can choose to upgrade or downgrade existing instances, such as using more or fewer CPU cores and memory for containers (scale-up/scale-down), or to add or remove containers from worker nodes (scale-out/scale-in).
High availability while changing the scale:
When changing the scale of a channel app component, the system remains operational without downtime, and there is no data loss under any circumstances. System performance may be slightly degraded, but once the scaling operation ends, performance returns to normal.
Security Requirements
The system is required to protect itself from intrusion, data theft, data tampering, and denial-of-service attacks. It must be compliant with the OWASP Secure Coding Practices Checklist as well as the following requirements. The security goals based on the current solution are summarized below:
All data access to infrastructure components should follow the security protocols QNAP applies. For example, SSL will be enabled for PostgreSQL access, and channel apps should use SSL to access PostgreSQL.
All data in transit across the Internet must be encrypted where available.
All data access must be authenticated, and access control must be enforced.
All data access must be logged.
All data logs must be backed up and retained for a period of time.
The software must detect abusive behavior and stop the abuse for specific users or applications.
All data input must be validated before executing the logic.
Sensitive data must be encrypted at the application level, in addition to the database level, and masked in logs.
The software must be free from known vulnerabilities, malware, viruses, backdoors, and intellectual property infringement. QNAP will provide third-party vulnerability scanning tools, and all vulnerabilities reported by the tools should be fixed.
The software must be free from application-level denial-of-service attacks.
The software must be free from command injection.
The software must be free from cross-site scripting and cross-site request forgery.
The software must prevent user identity theft (assuming another user's identity).
Deployment & Updates Requirements
All channel app components will be containerized; the infrastructure components (message broker, database, store) will be VM-based. Channel apps can be deployed by scripts in a CD system managed by the Qmiix operator. The CD system has the following capabilities:
Deployment to a controlled environment like UAT can also be triggered by an integrated CI system when predefined events happen, like code modification (git push) or release promotion (merge request) on a specific branch (master/develop, etc.).
The CD system is integrated (or equipped) with scripted tests, such that it can verify a deployment automatically, including basic sanity tests like checking status/ping URLs to verify that services are up. Therefore, channel apps should provide such sanity test scripts for the CD system.
Channel apps can be deployed to multiple sites with different system configurations by a single CD system, in order to support development, testing, UAT, and production environments. The architecture of each site is the same, except for differences in scale, such as the number of instances and instance types.
The software architecture should allow the Qmiix operator to update specific channel apps as required without affecting other channel apps.
The software architecture should allow rolling updates or rollbacks of channel app components without introducing downtime or data inconsistency.
The deployment of channel app components can be done within 30 minutes based on the assumption that the cloud infrastructure (like VM instances and networks) is available.
Data Consistency Requirements
All data received or derived from users or channel apps is replicated and backed up by the QNAP operator to ensure high data availability and a point of recovery.
The backup and restore process is executed by scripts, and the restored data can be verified by scripts from channel app developers.
Data consistency for channel apps should be guaranteed.
For example, if a cache is introduced in some components, cache misses and refreshes of cached data should be handled by the application components.
For example, assume an operation in a channel app requires two MongoDB insertions. If the operation fails at the second insertion, the channel app should be able to recover from the data inconsistency.
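A minimal sketch of that recovery pattern, using a compensating delete when the second insertion fails; the three callables are hypothetical data-access helpers, not a specific driver API:

```python
def insert_pair(insert_a, insert_b, delete_a):
    """Recover from a partial write: if the second insertion fails,
    compensate by removing the first so the two collections stay
    consistent.

    insert_a() performs the first insertion and returns its id;
    insert_b(a_id) performs the second; delete_a(a_id) undoes the first.
    """
    a_id = insert_a()
    try:
        insert_b(a_id)
    except Exception:
        delete_a(a_id)   # compensating action restores consistency
        raise
    return a_id
```

Where the datastore supports them, multi-document transactions are an alternative to hand-written compensation.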
Monitoring Requirements
Channel app components are monitored by a QNAP monitoring system, which helps system administrators discover short-term issues and long-term risks to the health of channel apps. It also helps system administrators understand the workload and usage patterns of all channel app components.
The channel app should provide a status API or status monitoring scripts so the monitoring system can periodically collect metrics and check the status of all channel app components.
The monitoring system monitors channel app components, including CPU usage, memory usage, and other metrics that illustrate the resources consumed by the application and its workload characteristics, such as the number of sessions, number of jobs, number of threads, job processing time, and API response time. The channel app should not show abnormalities in these metrics.
E.g., a memory leak causing memory not to be released, or a possible deadlock causing high CPU usage even without tasks or requests.
Logging (INFO, WARN, and ERROR levels) should be enabled while still achieving the performance goals for channel apps.
Analytics Requirements
Channel app components should provide logs for the QNAP ETL system to collect and transfer to cloud storage, where data analytics are performed and results are displayed via a web UI for QNAP administrators to track issues or for market planners to derive insights. Channel app component logs generally include, but are not limited to, the following types:
Server (API, portal, console, etc.) access logs
Headers & protocols: method, x_scheme, HTTP version, request id (x_request_id), accept_charset, x_forwarded_for
Endpoint info: rest url, rest version, host
Client info: user agent, client ip, geo ip info, lonlat, location
Request: request_length, content length, bytes sent, content type, request json_body (json object), query strings, HTTP version
Response: http status code, http response body (json object), response time
Authorization: user_id, app_id, access_token
File source
Access log files should be in line-delimited JSON format. Each record should be valid JSON and include, but not be limited to, the fields listed above.
The unique request id should be the same and passed across server components for effective tracing. All logs related to the same request in related requests/processes/tasks should share the same request id.
The maximum field size for string-type fields should be configurable, and strings exceeding it will be truncated.
The log file name should contain the log source and time information, and new log files should be generated daily.
Logs should constantly append to existing log files, or new log files should be generated, as long as the service component is writing logs. Therefore, removing or editing existing log files should not affect the appending of new logs.
The logs should not contain sensitive information such as secrets or passwords.
The ETL solution (e.g., ELK) should be able to parse string values of keys into multilayer JSON objects where applicable, and the scripts (e.g., Logstash filters) for parsing the objects should be provided.
Logs stored in server local storage are rotated every day.
Logs are transferred to cloud storage (e.g., S3) for analytics.
This log is mainly input for analytics/visualization tools such as ELK (Elasticsearch, Logstash, Kibana).
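A sketch of emitting one such line-delimited JSON record per request; the exact field names are assumptions modeled on the categories above, not a mandated schema:

```python
import datetime
import json
import uuid

def access_log_record(request, response, request_id=None):
    """Build one access-log record as a single JSON line (NDJSON).

    `request` and `response` are plain dicts here; in a real server they
    would come from the web framework.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # The same x_request_id must be propagated across components.
        "x_request_id": request_id or str(uuid.uuid4()),
        "method": request["method"],
        "rest_url": request["url"],
        "user_agent": request.get("user_agent", ""),
        "http_status_code": response["status"],
        "response_time_ms": response["elapsed_ms"],
    }
    return json.dumps(record)  # one JSON object per line
```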
API debug logs
Debug log files should include a unique request id (which may be obtained from the server request) for tracing flows across different server components. The unique request id should be the same and passed across server components for effective tracing. All logs related to the same request in related requests/processes/tasks should share the same request id.
Debug logs should support at least the ERROR, WARN, INFO, and DEBUG log levels. INFO is enabled by default for all server components.
The logs at all log levels should not contain sensitive information such as secrets or passwords.
The log file name should contain the log source and time information, and new log files should be generated daily.
Logs should constantly append to existing log files, or new log files should be generated, as long as the service component is writing logs. Therefore, removing or editing existing log files should not affect the appending of new logs.
The ETL solution (e.g., ELK) should be able to parse string values of keys into multilayer JSON objects where applicable, and the scripts (e.g., Logstash filters) for parsing the objects should be provided.
Logs stored in server local storage are rotated every day.
Logs are transferred to cloud storage (e.g., S3) for debugging.
This log is mainly input for debug trace tools such as ELK (Elasticsearch, Logstash, Kibana).
Worker/runner task logs
Worker/runner task logs should be in line-delimited JSON format. Each record should be valid JSON and include, but not be limited to, the following fields:
Task id and request id
Task input objects
Task state
Task result objects
Task generated, received, and execution time
The maximum field size for string-type fields should be configurable, and strings exceeding it will be truncated.
Worker/runner task log files should include a unique request id (which may be obtained from the server request) for tracing flows across different server components. The unique request id should be the same and passed across server components for effective tracing. All logs related to the same request in related requests/processes/tasks should share the same request id. Note that a task may generate its own task id, since a task may be aborted and retried.
The logs should not contain sensitive information such as secrets or passwords.
The log file name should contain the log source and time information, and new log files should be generated daily.
Logs should constantly append to existing log files, or new log files should be generated, as long as the service component is writing logs. Therefore, removing or editing existing log files should not affect the appending of new logs.
The ETL solution (e.g., ELK) should be able to parse string values of keys into multilayer JSON objects where applicable, and the scripts (e.g., Logstash filters) for parsing the objects should be provided.
Logs stored in server local storage are rotated every day.
Logs are transferred to cloud storage (e.g., S3) for analytics.
This log is mainly input for analytics/visualization tools such as ELK (Elasticsearch, Logstash, Kibana).
Worker/runner debug logs
Worker/runner debug log files should include a unique request id (which may be obtained from the server request) for tracing flows across different server components. The unique request id should be the same and passed across server components for effective tracing. All logs related to the same request in related requests/processes/tasks should share the same request id. Note that a task may generate its own task id, since a task may be aborted and retried.
Debug logs should support at least the ERROR, WARN, INFO, and DEBUG log levels. INFO is enabled by default for all server components.
The logs at all log levels should not contain sensitive information such as secrets or passwords.
The log file name should contain the log source and time information, and new log files should be generated daily.
Logs should constantly append to existing log files, or new log files should be generated, as long as the service component is writing logs. Therefore, removing or editing existing log files should not affect the appending of new logs.
The ETL solution (e.g., ELK) should be able to parse string values of keys into multilayer JSON objects where applicable, and the scripts (e.g., Logstash filters) for parsing the objects should be provided.
Logs stored in server local storage are rotated every day.
Logs are transferred to cloud storage (e.g., S3) for debugging.
This log is mainly input for debug trace tools such as ELK (Elasticsearch, Logstash, Kibana).
The log level should be configurable if the infrastructure code foundation provides such configuration.
The performance goals of the QIF server should be achieved with at least INFO, WARN, and ERROR logs enabled in the channel app components. Based on debugging requirements, DEBUG-level logging might also be turned on. The information at the different log levels is as follows:
ERROR: This log level includes very severe error events that will presumably lead the application to abort, as well as error events that might still allow the application to continue running.
WARN: This log level includes potentially harmful situations.
INFO: This log level includes informational messages that highlight the progress of the application at a coarse-grained level. The following information is required at this level:
Connections and access to other network components, e.g., DB queries, DB access, producing topics to Kafka, receiving topics from Kafka, etc.
DEBUG: This log level includes fine-grained informational events that are most useful to debug an application.