Merge pull request #183 from quixio/v2-branch
Documentation updates for Quix major changes
tbedford authored Sep 26, 2023
2 parents 23d8e49 + 7cff257 commit 4a72781
Showing 305 changed files with 3,583 additions and 2,472 deletions.
3 changes: 2 additions & 1 deletion WRITING-STYLE.md
@@ -36,7 +36,7 @@ Use the following guidelines regarding the company's name:
* Quix Portal is our product, which includes:
* Managed Kafka
* Serverless compute
* Data catalogue
* Quix data store
* APIs
* Quix Streams is our client library:
* A client library is a collection of code specific to one programming language.
@@ -74,6 +74,7 @@ Use the following guidelines for industry-standard terms, and Quix terms:
* Event-stream processing, not event stream processing
* DevOps, never devops
* Startup and scale-up
* Dropdown (as in dropdown menu) is one word and [not hyphenated](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/dropdown).

## Use topic-based writing

59 changes: 0 additions & 59 deletions docs/apis/data-catalogue-api/aggregate-time.md

This file was deleted.

17 changes: 0 additions & 17 deletions docs/apis/data-catalogue-api/get-swagger.md

This file was deleted.

78 changes: 0 additions & 78 deletions docs/apis/data-catalogue-api/request.md

This file was deleted.

6 changes: 3 additions & 3 deletions docs/apis/index.md
@@ -2,9 +2,9 @@

The Quix Platform provides the following APIs:

## Data Catalogue
## Query

The [Data Catalogue HTTP API](data-catalogue-api/intro.md) allows you to fetch data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities.
The [Query API](query-api/intro.md) allows you to fetch data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities.

## Streaming Writer

@@ -16,4 +16,4 @@ As an alternative to the client library, the Quix platform supports real-time da

## Portal API

The [Portal API](portal-api.md) gives access to the Portal interface allowing you to automate access to data including Users, Workspaces, and Projects.
The [Portal API](portal-api.md) gives access to the Portal interface enabling you to programmatically control projects, environments, applications, and deployments.
2 changes: 1 addition & 1 deletion docs/apis/portal-api.md
@@ -1,5 +1,5 @@
# Portal API

The Quix Portal API gives access to the Portal interface allowing you to automate access to data including Users, Workspaces, and Projects.
The Portal API gives access to the Portal interface enabling you to programmatically control projects, environments, applications, and deployments.

Refer to [Portal API Swagger](https://portal-api.platform.quix.ai/swagger){target=_blank} for more information.
@@ -1,23 +1,18 @@
# Aggregate data by tags

If you need to compare data across different values for a given tag,
you’ll want to group results by that tag. You can do so via the
`/parameters/data` endpoint.
If you need to compare data across different values for a given tag, you’ll want to group results by that tag. You can do so using the `/parameters/data` endpoint.

## Before you begin

- If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up.
If you don’t already have any stream data in your environment, you can use a Source from the [Code Samples](../../platform/samples/samples.md) to provide suitable data.

- [Get a Personal Access Token](authenticate.md)
to authenticate each request.
You'll need to obtain a [Personal Access Token](authenticate.md) to authenticate each request.

## Using the groupBy property

You can supply a list of Tags in the `groupBy` array to aggregate
results by. For example, you could group a set of Speed readings by the
LapNumber they occurred on using something like:
You can supply a list of Tags in the `groupBy` array to aggregate results by. For example, you could group a set of Speed readings by the LapNumber they occurred on using something like:

``` json
```json
{
"from": 1612191286000000000,
"to": 1612191386000000000,
@@ -28,11 +23,9 @@ LapNumber they occurred on using something like:
}
```
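
As a sketch of how such a payload might be submitted: the host shown below is a placeholder rather than an actual Quix endpoint, and `<token>` is a Personal Access Token obtained as described in [Authenticate](authenticate.md).

```shell
# Hypothetical request: POST the groupBy payload to the /parameters/data endpoint.
# Replace https://<query-api-base-url> with your environment's Query API address,
# and <token> with your Personal Access Token.
curl "https://<query-api-base-url>/parameters/data" \
     -H "Authorization: bearer <token>" \
     -H "Content-Type: application/json" \
     -d '{
           "from": 1612191286000000000,
           "to": 1612191386000000000,
           "numericParameters": [{ "parameterName": "Speed" }],
           "groupBy": ["LapNumber"]
         }'
```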

With these settings alone, we’ll get the `LapNumber` tag included in our
results, alongside the existing timestamps and requested parameters,
e.g.
With these settings alone, we’ll get the `LapNumber` tag included in our results, alongside the existing timestamps and requested parameters, for example:

``` json
```json
{
"timestamps": [
1612191286000000000,
@@ -58,24 +51,18 @@

## Using aggregationType

For meaningful aggregations, you should specify a type of aggregation
function for each parameter. When specifying the parameters to receive,
include the `aggregationType` in each parameter object like so:
For meaningful aggregations, you should specify a type of aggregation function for each parameter. When specifying the parameters to receive, include the `aggregationType` in each parameter object like so:

``` json
```json
"numericParameters": [{
"parameterName": "Speed",
"aggregationType": "mean"
}]
```

Ten standard aggregation functions are provided including `max`,
`count`, and `spread`. When you group by a tag and specify how to
aggregate parameter values, the result will represent that aggregation.
For example, the following results demonstrate the average speed that
was recorded against each lap:
Standard aggregation functions are provided including `max`, `count`, and `spread`. When you group by a tag and specify how to aggregate parameter values, the result will represent that aggregation. For example, the following results demonstrate the average speed that was recorded against each lap:

``` json
```json
{
"timestamps": [
1612191286000000000,
48 changes: 48 additions & 0 deletions docs/apis/query-api/aggregate-time.md
@@ -0,0 +1,48 @@
# Aggregate data by time

You can downsample and upsample persisted data using the `/parameters/data` endpoint.

## Before you begin

If you don’t already have any Stream data in your environment, you can use a Source from the [Code Samples](../../platform/samples/samples.md) to provide suitable data.

You'll need to obtain a [Personal Access Token](authenticate.md) to authenticate each request.

## Aggregating and interpolating

The JSON payload can include a `groupByTime` property, an object with the following members:

* `timeBucketDuration` - The duration, in nanoseconds, for one aggregated value.
* `interpolationType` - Specify how additional values should be generated when interpolating.

For example, imagine you have a set of speed data, with values recorded at 1-second intervals. You can group such data into 2-second intervals, aggregated by mean average, with the following:

```json
{
"groupByTime": {
"timeBucketDuration": 2000000000
},
"numericParameters": [{
"parameterName": "Speed",
"aggregationType": "Mean"
}]
}
```

You can specify an `interpolationType` to define how any missing values are generated. `Linear` will provide a value in linear proportion, while `Previous` will repeat the value before the one that was missing:

```json
{
"from": 1612191286000000000,
"to": 1612191295000000000,
"numericParameters": [{
"parameterName": "Speed",
"aggregationType": "First"
}],
"groupByTime": {
"timeBucketDuration": 2000000000,
"interpolationType": "None"
},
"streamIds": [ "302b1de3-2338-43cb-8148-3f0d6e8c0b8a" ]
}
```
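
A minimal sketch of issuing this request, assuming a placeholder Query API base URL and a Personal Access Token obtained as described in [Authenticate](authenticate.md):

```shell
# Hypothetical request: POST the groupByTime payload to the /parameters/data endpoint.
# Replace https://<query-api-base-url> with your environment's Query API address,
# and <token> with your Personal Access Token.
curl "https://<query-api-base-url>/parameters/data" \
     -H "Authorization: bearer <token>" \
     -H "Content-Type: application/json" \
     -d '{
           "from": 1612191286000000000,
           "to": 1612191295000000000,
           "numericParameters": [{ "parameterName": "Speed", "aggregationType": "First" }],
           "groupByTime": { "timeBucketDuration": 2000000000, "interpolationType": "None" },
           "streamIds": [ "302b1de3-2338-43cb-8148-3f0d6e8c0b8a" ]
         }'
```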
@@ -1,24 +1,22 @@
# Authenticate

You need a Personal Access Token (PAT) to authenticate requests made with the Query API.

## Before you begin

- Sign up on the Quix Portal
- [Sign up for Quix](https://portal.platform.quix.ai/self-sign-up){target=_blank}

## Get a Personal Access Token

You should authenticate requests to the Catalogue API using a Personal
Access Token (PAT). This is a time-limited token which you can revoke if
necessary.
You should authenticate requests to the Query API using a Personal Access Token (PAT). This is a time-limited token which you can revoke if necessary.

Follow these steps to generate a PAT:

1. Click the user icon in the top-right of the Portal and select the
Tokens menu.
1. Click the user icon in the top-right of the Portal and select the Tokens menu.

2. Click **GENERATE TOKEN**.

3. Choose a name to describe the token’s purpose, and an expiration
date, then click **CREATE**.
3. Choose a name to describe the token’s purpose, and an expiration date, then click **CREATE**.

4. Copy the token and store it in a secure place.

@@ -32,16 +30,13 @@ Follow these steps to generate a PAT:

## Sign all your requests using this token

Make sure you accompany each request to the API with an `Authorization`
header using your PAT as a bearer token, as follows:
Make sure you accompany each request to the API with an `Authorization` header using your PAT as a bearer token, as follows:

``` http
Authorization: bearer <token>
```

Replace `<token>` with your Personal Access Token. For example, if
you’re using curl on the command line, you can set the header using
the `-H` flag:
Replace `<token>` with your Personal Access Token. For example, if you’re using curl on the command line, you can set the header using the `-H` flag:

``` shell
curl -H "Authorization: bearer <token>" ...