diff --git a/WRITING-STYLE.md b/WRITING-STYLE.md index b6406abc..8a167659 100644 --- a/WRITING-STYLE.md +++ b/WRITING-STYLE.md @@ -36,7 +36,7 @@ Use the following guidelines regarding the company's name: * Quix Portal is our product, which includes: * Managed Kafka * Serverless compute - * Data catalogue + * Quix data store * APIs * Quix Streams is our client library: * A client library is a collection of code specific to one programming language. @@ -74,6 +74,7 @@ Use the following guidelines for industry-standard terms, and Quix terms: * Event-stream processing, not event stream processing * DevOps, never devops * Startup and scale-up +* Dropdown (as in dropdown menu) is one word and [not hyphenated](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/dropdown). ## Use topic-based writing diff --git a/docs/apis/data-catalogue-api/aggregate-time.md b/docs/apis/data-catalogue-api/aggregate-time.md deleted file mode 100644 index 738871b4..00000000 --- a/docs/apis/data-catalogue-api/aggregate-time.md +++ /dev/null @@ -1,59 +0,0 @@ -# Aggregate data by time - -You can downsample and upsample data from the catalogue using the -`/parameters/data` endpoint. - -## Before you begin - - - If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. - - - [Get a Personal Access Token](authenticate.md) - to authenticate each request. - -## Aggregating and interpolating - -The JSON payload can include a `groupByTime` property, an object with -the following members: - - - `timeBucketDuration` - The duration, in nanoseconds, for one aggregated value. - - - `interpolationType` - Specify how additional values should be generated when - interpolating. - -For example, imagine you have a set of speed data, with values recorded -at 1-second intervals. 
You can group such data into 2-second intervals, -aggregated by mean average, with the following: - -``` json -{ - "groupByTime": { - "timeBucketDuration": 2000000000, - }, - "numericParameters": [{ - "parameterName": "Speed", - "aggregationType": "Mean" - }] -} -``` - -You can specify an `interpolationType` to define how any missing values -are generated. `Linear` will provide a value in linear proportion, -whilst `Previous` will repeat the value before the one that was missing. - -``` json -{ - "from": 1612191286000000000, - "to": 1612191295000000000, - "numericParameters": [{ - "parameterName": "Speed", - "aggregationType": "First" - }], - "groupByTime": { - "timeBucketDuration": 2000000000, - "interpolationType": "None" - }, - "streamIds": [ "302b1de3-2338-43cb-8148-3f0d6e8c0b8a" ] -} -``` diff --git a/docs/apis/data-catalogue-api/get-swagger.md b/docs/apis/data-catalogue-api/get-swagger.md deleted file mode 100644 index 2a66370a..00000000 --- a/docs/apis/data-catalogue-api/get-swagger.md +++ /dev/null @@ -1,17 +0,0 @@ -# Getting the Swagger documentation URL - -You can access [Swagger documentation](https://swagger.io/){target=_blank} and then use it to try out the [Data Catalogue API](intro.md). - -The URL is workspace-specific, and follows this pattern: - - https://telemetry-query-${organization}-${workspace}.platform.quix.ai/swagger - -The workspace ID is a combination based on your organization and workspace names. For example, for an `acme` organization with a `weather` workspace, the URL would have the following format: - - https://telemetry-query-acme-weather.platform.quix.ai/swagger - -To help determine the URL, you can [find out how to get your workspace id](../../platform/how-to/get-workspace-id.md). - -!!! tip - - Once you access the Swagger documentation, you can select the version of the API you require from the `Select a definition` dropdown list. 
diff --git a/docs/apis/data-catalogue-api/request.md b/docs/apis/data-catalogue-api/request.md deleted file mode 100644 index fb65a13f..00000000 --- a/docs/apis/data-catalogue-api/request.md +++ /dev/null @@ -1,78 +0,0 @@ -# Forming a request - -How you send requests to the Data Catalogue API will vary depending on -the client or language you’re using. But the API still has behavior and -expectations that is common across all clients. - -!!! tip - - The examples in this section show how to use the popular [`curl`](https://curl.se/){target=_blank} command line tool. - -## Before you begin - - - Sign up on the Quix Portal - - - Read about [Authenticating with the Data Catalogue - API](authenticate.md) - -## Endpoint URLs - -The Data Catalogue API is available on a per-workspace basis, so the -subdomain is based on a combination of your organization and workspace -names. See [How to get a workspace -ID](../../platform/how-to/get-workspace-id.md) to find out how to get the -exact hostname required. It will be in this format: - - https://telemetry-query-${organization}-${workspace}.platform.quix.ai/ - -So your final endpoint URL will look something like: - - https://telemetry-query-acme-weather.platform.quix.ai/ - -## Method - -Most endpoints use the `POST` method, even those that just fetch data. -Ensure your HTTP client sends `POST` requests as appropriate. - -Using `curl`, the `-X POST` flag specifies a POST request. Note that -this is optional if you’re using the `-d` flag to send a payload (see -below). - -``` bash -curl -X POST ... -``` - -## Payload - -For most methods, you’ll need to send a JSON object containing supported -parameters. You’ll also need to set the appropriate content type for the -payload you’re sending: - -``` bash -curl -H "Content-Type: application/json" ... -``` - -!!! warning - - You **must** specify the content type of your payload. Failing to include this header will result in a `415 UNSUPPORTED MEDIA TYPE` status code. 
- -You can send data via a POST request using the `curl` flag `-d`. This -should be followed by either a string of JSON data, or a string starting -with the *@* symbol, followed by a filename containing the JSON data. - -``` bash -curl -d '{"key": "value"}' ... -curl -d "@data.json" ... -``` - -## Complete curl example - -You should structure most of your requests to the API around this -pattern: - -``` bash -curl -H "Authorization: ${token}" \ - -H "Content-Type: application/json" \ - -d "@data.json" \ - https://${domain}.platform.quix.ai/${endpoint} -``` diff --git a/docs/apis/index.md b/docs/apis/index.md index 84960fbe..163b58d1 100644 --- a/docs/apis/index.md +++ b/docs/apis/index.md @@ -2,9 +2,9 @@ The Quix Platform provides the following APIs: -## Data Catalogue +## Query -The [Data Catalogue HTTP API](data-catalogue-api/intro.md) allows you to fetch data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities. +The [Query API](query-api/intro.md) allows you to fetch data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities. ## Streaming Writer @@ -16,4 +16,4 @@ As an alternative to the client library, the Quix platform supports real-time da ## Portal API -The [Portal API](portal-api.md) gives access to the Portal interface allowing you to automate access to data including Users, Workspaces, and Projects. +The [Portal API](portal-api.md) gives access to the Portal interface enabling you to programmatically control projects, environments, applications, and deployments. 
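Reviewer note: the per-workspace URL pattern that these API pages rely on can be sanity-checked with a short Python sketch. The `query_api_base_url` helper below is hypothetical (for illustration only); the URL template itself is the one documented above.

```python
def query_api_base_url(organization: str, workspace: str) -> str:
    """Build the per-workspace Query API base URL from the documented pattern:
    https://telemetry-query-${organization}-${workspace}.platform.quix.ai/
    """
    return f"https://telemetry-query-{organization}-{workspace}.platform.quix.ai/"

# The docs' example: the `acme` organization with a `weather` workspace.
base_url = query_api_base_url("acme", "weather")
print(base_url)  # https://telemetry-query-acme-weather.platform.quix.ai/
```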
diff --git a/docs/apis/portal-api.md b/docs/apis/portal-api.md index 370007bb..80803cac 100644 --- a/docs/apis/portal-api.md +++ b/docs/apis/portal-api.md @@ -1,5 +1,5 @@ # Portal API -The Quix Portal API gives access to the Portal interface allowing you to automate access to data including Users, Workspaces, and Projects. +The Portal API gives access to the Portal interface enabling you to programmatically control projects, environments, applications, and deployments. Refer to [Portal API Swagger](https://portal-api.platform.quix.ai/swagger){target=_blank} for more information. diff --git a/docs/apis/data-catalogue-api/aggregate-tags.md b/docs/apis/query-api/aggregate-tags.md similarity index 54% rename from docs/apis/data-catalogue-api/aggregate-tags.md rename to docs/apis/query-api/aggregate-tags.md index 3dec5996..370f13ee 100644 --- a/docs/apis/data-catalogue-api/aggregate-tags.md +++ b/docs/apis/query-api/aggregate-tags.md @@ -1,23 +1,18 @@ # Aggregate data by tags -If you need to compare data across different values for a given tag, -you’ll want to group results by that tag. You can do so via the -`/parameters/data` endpoint. +If you need to compare data across different values for a given tag, you’ll want to group results by that tag. You can do so using the `/parameters/data` endpoint. ## Before you begin - - If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. +If you don’t already have any stream data in your environment, you can use a Source from the [Code Samples](../../platform/samples/samples.md) to provide suitable data. - - [Get a Personal Access Token](authenticate.md) - to authenticate each request. +You'll need to obtain a [Personal Access Token](authenticate.md) to authenticate each request. ## Using the groupBy property -You can supply a list of Tags in the `groupBy` array to aggregate -results by. 
For example, you could group a set of Speed readings by the -LapNumber they occurred on using something like: +You can supply a list of Tags in the `groupBy` array to aggregate results by. For example, you could group a set of Speed readings by the LapNumber they occurred on using something like: -``` json +```json { "from": 1612191286000000000, "to": 1612191386000000000, @@ -28,11 +23,9 @@ LapNumber they occurred on using something like: } ``` -With these settings alone, we’ll get the `LapNumber` tag included in our -results, alongside the existing timestamps and requested parameters, -e.g. +With these settings alone, we’ll get the `LapNumber` tag included in our results, alongside the existing timestamps and requested parameters, for example: -``` json +```json { "timestamps": [ 1612191286000000000, @@ -58,24 +51,18 @@ e.g. ## Using aggregationType -For meaningful aggregations, you should specify a type of aggregation -function for each parameter. When specifying the parameters to receive, -include the `aggregationType` in each parameter object like so: +For meaningful aggregations, you should specify a type of aggregation function for each parameter. When specifying the parameters to receive, include the `aggregationType` in each parameter object like so: -``` json +```json "numericParameters": [{ "parameterName": "Speed", "aggregationType": "mean" }] ``` -Ten standard aggregation functions are provided including `max`, -`count`, and `spread`. When you group by a tag and specify how to -aggregate parameter values, the result will represent that aggregation. -For example, the following results demonstrate the average speed that -was recorded against each lap: +Standard aggregation functions are provided including `max`, `count`, and `spread`. When you group by a tag and specify how to aggregate parameter values, the result will represent that aggregation. 
For example, the following results demonstrate the average speed that was recorded against each lap: -``` json +```json { "timestamps": [ 1612191286000000000, diff --git a/docs/apis/query-api/aggregate-time.md b/docs/apis/query-api/aggregate-time.md new file mode 100644 index 00000000..d36e6658 --- /dev/null +++ b/docs/apis/query-api/aggregate-time.md @@ -0,0 +1,48 @@ +# Aggregate data by time + +You can downsample and upsample persisted data using the `/parameters/data` endpoint. + +## Before you begin + +If you don’t already have any Stream data in your environment, you can use a Source from the [Code Samples](../../platform/samples/samples.md) to provide suitable data. + +You'll need to obtain a [Personal Access Token](authenticate.md) to authenticate each request. + +## Aggregating and interpolating + +The JSON payload can include a `groupByTime` property, an object with the following members: + +* `timeBucketDuration` - The duration, in nanoseconds, for one aggregated value. +* `interpolationType` - Specify how additional values should be generated when interpolating. + +For example, imagine you have a set of speed data, with values recorded at 1-second intervals. You can group such data into 2-second intervals, aggregated by mean average, with the following: + +```json +{ + "groupByTime": { + "timeBucketDuration": 2000000000 + }, + "numericParameters": [{ + "parameterName": "Speed", + "aggregationType": "Mean" + }] +} +``` + +You can specify an `interpolationType` to define how any missing values are generated. 
`Linear` will provide a value in linear proportion, while `Previous` will repeat the value before the one that was missing: + +```json +{ + "from": 1612191286000000000, + "to": 1612191295000000000, + "numericParameters": [{ + "parameterName": "Speed", + "aggregationType": "First" + }], + "groupByTime": { + "timeBucketDuration": 2000000000, + "interpolationType": "None" + }, + "streamIds": [ "302b1de3-2338-43cb-8148-3f0d6e8c0b8a" ] +} +``` diff --git a/docs/apis/data-catalogue-api/authenticate.md b/docs/apis/query-api/authenticate.md similarity index 66% rename from docs/apis/data-catalogue-api/authenticate.md rename to docs/apis/query-api/authenticate.md index bcc6712c..902d7b40 100644 --- a/docs/apis/data-catalogue-api/authenticate.md +++ b/docs/apis/query-api/authenticate.md @@ -1,24 +1,22 @@ # Authenticate +You need a Personal Access Token (PAT) to authenticate requests made with the Query API. + ## Before you begin - - Sign up on the Quix Portal + - [Sign up for Quix](https://portal.platform.quix.ai/self-sign-up){target=_blank} ## Get a Personal Access Token -You should authenticate requests to the Catalogue API using a Personal -Access Token (PAT). This is a time-limited token which you can revoke if -necessary. +You should authenticate requests to the Query API using a Personal Access Token (PAT). This is a time-limited token which you can revoke if necessary. Follow these steps to generate a PAT: -1. Click the user icon in the top-right of the Portal and select the - Tokens menu. +1. Click the user icon in the top-right of the Portal and select the Tokens menu. 2. Click **GENERATE TOKEN**. -3. Choose a name to describe the token’s purpose, and an expiration - date, then click **CREATE**. +3. Choose a name to describe the token’s purpose, and an expiration date, then click **CREATE**. 4. Copy the token and store it in a secure place. 
@@ -32,16 +30,13 @@ Follow these steps to generate a PAT: ## Sign all your requests using this token -Make sure you accompany each request to the API with an `Authorization` -header using your PAT as a bearer token, as follows: +Make sure you accompany each request to the API with an `Authorization` header using your PAT as a bearer token, as follows: ``` http Authorization: bearer <token> ``` -Replace `<token>` with your Personal Access Token. For example, if -you’re using curl on the command line, you can set the header using -the `-H` flag: +Replace `<token>` with your Personal Access Token. For example, if you’re using curl on the command line, you can set the header using the `-H` flag: ``` shell curl -H "Authorization: bearer <token>" ... diff --git a/docs/apis/data-catalogue-api/filter-tags.md b/docs/apis/query-api/filter-tags.md similarity index 52% rename from docs/apis/data-catalogue-api/filter-tags.md rename to docs/apis/query-api/filter-tags.md index fbca6f75..e44e1caf 100644 --- a/docs/apis/data-catalogue-api/filter-tags.md +++ b/docs/apis/query-api/filter-tags.md @@ -1,34 +1,24 @@ # Tag filtering -If you supply Tags with your parameter data, they will act as indexes, -so they can be used to efficiently filter data. +If you supply Tags with your parameter data, they will act as indexes, so they can be used to efficiently filter data. ## Before you begin - - If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. +If you don’t already have any Stream data in your environment, you can use a Source from the [Code Samples](../../platform/samples/samples.md) to generate some. - - [Get a Personal Access Token](authenticate.md) - to authenticate each request. +You'll need to obtain a [Personal Access Token](authenticate.md) to authenticate each request. ## Using tag filters -When calling the `/parameters/data` endpoint, you can include a -`tagFilters` property in your payload. 
This property references an array -of objects, each with the following structure: +When calling the `/parameters/data` endpoint, you can include a `tagFilters` property in your payload. This property references an array of objects, each with the following structure: - - tag - The name of the tag to filter by +* tag - The name of the tag to filter by +* operator - A comparison operator +* value - The value to compare against - - operator - A comparison operator +For example, to fetch only the data recorded on the second lap, you can filter on a `LapNumber` tag as follows: - - value - The value to compare against - -For example, to fetch only the data recorded on the second lap, we can -filter on a `LapNumber` tag as follows: - -``` json +```json { "tagFilters": [{ "tag": "LapNumber", @@ -38,10 +28,9 @@ filter on a `LapNumber` tag as follows: } ``` -Note that the value can also be an array, in which case data that -matches the chosen operator for any value is returned: +Note that the value can also be an array, in which case data that matches the chosen operator for any value is returned: -``` json +```json { "tagFilters": [{ "tag": "LapNumber", @@ -51,10 +40,9 @@ matches the chosen operator for any value is returned: } ``` -But also note that multiple filters for the same tag apply in -combination, so: +But also note that multiple filters for the same tag apply in combination, so: -``` json +```json { "tagFilters": [{ "tag": "LapNumber", @@ -68,29 +56,22 @@ combination, so: } ``` -Is useless because a LapNumber cannot be both "2.0" and "4.0". +Is incorrect because a LapNumber cannot be both "2.0" and "4.0". ### Supported operators -Each object in the `tagFilters` array can support the following -`operator` values: - - - Equal - - - NotEqual - - - Like +Each object in the `tagFilters` array can support the following `operator` values: - - NotLike +* Equal +* NotEqual +* Like +* NotLike `Equal` and `NotEqual` test for true/false exact string matches. 
-`Like` and `NotLike` will perform a regular expression match, so you can -search by pattern. For example, to get the Speed parameter values tagged -with a LapNumber which is either 2 or 4, you can use the expression -"^\[24\]\\." to match values 2.0 and 4.0: +`Like` and `NotLike` perform a regular expression match, so you can search by pattern. For example, to get the Speed parameter values tagged with a LapNumber which is either 2 or 4, you can use the expression `^[24]\.` to match values 2.0 and 4.0: -``` bash +```bash curl "https://telemetry-query-testing-quickstart.platform.quix.ai/parameters/data" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ diff --git a/docs/apis/query-api/get-swagger.md b/docs/apis/query-api/get-swagger.md new file mode 100644 index 00000000..a4724970 --- /dev/null +++ b/docs/apis/query-api/get-swagger.md @@ -0,0 +1,17 @@ +# Getting the Swagger documentation URL + +You can access [Swagger documentation](https://swagger.io/){target=_blank} and then use it to try out the [Query API](intro.md). + +The URL is environment-specific, and follows this pattern: + + https://telemetry-query-${organization}-${environment}.platform.quix.ai/swagger + +The environment ID is a combination of your organization and environment names. For example, for an `acme` organization with a `weather` environment, the URL would have the following format: + + https://telemetry-query-acme-weather.platform.quix.ai/swagger + +To help determine the URL, you can [find out how to get your environment ID](../../platform/how-to/get-environment-id.md). + +!!! tip + + Once you access the Swagger documentation, you can select the version of the API you require from the `Select a definition` dropdown list. 
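Reviewer note: the `Like` pattern in the tag-filtering page above can be sanity-checked locally. The sketch below illustrates the regular expression with Python's `re` module; the server-side regex dialect may differ slightly, so treat this as an approximation.

```python
import re

# The documented `Like` example: match LapNumber values 2.0 or 4.0.
# `^` anchors at the start, `[24]` matches a single 2 or 4, `\.` a literal dot.
pattern = re.compile(r"^[24]\.")

lap_numbers = ["1.0", "2.0", "3.0", "4.0", "24.5"]
matched = [value for value in lap_numbers if pattern.search(value)]
print(matched)  # ['2.0', '4.0'] -- "24.5" is excluded by the literal dot
```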
diff --git a/docs/apis/data-catalogue-api/intro.md b/docs/apis/query-api/intro.md similarity index 80% rename from docs/apis/data-catalogue-api/intro.md rename to docs/apis/query-api/intro.md index 89fc9d42..3d7726a0 100644 --- a/docs/apis/data-catalogue-api/intro.md +++ b/docs/apis/query-api/intro.md @@ -1,6 +1,10 @@ # Introduction -The Data Catalogue HTTP API allows you to fetch data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities. +The Query API allows you to fetch persisted data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities. + +!!! note + + The Query API is designed primarily for **testing purposes**. For production storage of data, Quix recommends using one of the numerous [connectors](../../platform/connectors/index.md) to persist data in the database technology of your choice. The API is fully described in our [Swagger documentation](get-swagger.md). Read on for a guide to using the API, including real-world examples you can invoke from your language of choice, or using the command line using `curl`. diff --git a/docs/apis/data-catalogue-api/raw-data.md b/docs/apis/query-api/raw-data.md similarity index 54% rename from docs/apis/data-catalogue-api/raw-data.md rename to docs/apis/query-api/raw-data.md index d2390a52..24276ed1 100644 --- a/docs/apis/data-catalogue-api/raw-data.md +++ b/docs/apis/query-api/raw-data.md @@ -1,15 +1,12 @@ # Raw data -Access persisted raw data by specifyng the parameters you’re interested -in. Add restrictions based on Stream or timings for finer-grained -results. +Access persisted raw data by specifying the parameters you’re interested in. Add restrictions based on Stream or timings for finer-grained results. 
## Before you begin - - If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. +If you don’t already have any Stream data in your environment, you can use a Source from our [Code Samples](../../platform/samples/samples.md) to generate some data. - - [Get a Personal Access Token](authenticate.md) - to authenticate each request. +[Get a Personal Access Token](authenticate.md) to authenticate each request. ## Using the /parameters/data endpoint Raw telemetry data is available via the `/parameters/data` endpoint. ### Request -You can filter by a number of different factors but, at minimum, you’ll -need to supply one or more parameters to fetch: +You can filter by a number of different factors but, at minimum, you’ll need to supply one or more parameters to fetch: -``` json +```json { "numericParameters": [{ "parameterName": "Speed" @@ -30,15 +26,11 @@ need to supply one or more parameters to fetch: } ``` -In this example, we’re requesting a single numeric parameter, `Speed`. -Each array of parameters is indexed based on parameter type, which can -be `numericParameters`, `stringParameters` or `binaryParameters`. -Parameters are returned in a union, so if you request several, you’ll -get back all parameters that match. +In this example, we’re requesting a single numeric parameter, `Speed`. Each array of parameters is indexed based on parameter type, which can be `numericParameters`, `stringParameters` or `binaryParameters`. Parameters are returned in a union, so if you request several, you’ll get back all parameters that match. 
### Example -``` bash +```bash curl "https://${domain}.platform.quix.ai/parameters/data" \ -H "accept: text/plain" \ -H "Authorization: bearer <token>" \ @@ -46,10 +38,9 @@ curl "https://${domain}.platform.quix.ai/parameters/data" \ -d '{"numericParameters":[{"parameterName":"Speed"}]}' ``` -If you just had a single parameter value in the catalogue, the response -from the above call might look something like this: +If you just had a single parameter value in the stored data, the response from the above call might look something like this: -``` json +```json { "timestamps": [ 1612191100000000000 @@ -67,9 +58,7 @@ from the above call might look something like this: ### Restricting by Stream or time -In reality, you’ll have far more data in the catalogue, so you’ll want -to filter it. Three remaining properties of the request object allow you -to do so: +In reality, you’ll have far more data stored, so you’ll want to filter it. Three remaining properties of the request object allow you to do so: - `streamIds` - `from` - `to` -Each stream you create has a unique ID. You can view the ID of a -persisted via the Data section of the Quix Portal. Supply a list of -stream IDs to restrict fetched data to just those streams: +Each stream you create has a unique ID. You can view the ID of a persisted stream via the Data section of the Quix Portal. Supply a list of stream IDs to restrict fetched data to just those streams: -``` json +```json { "streamIds": [ "302b1de3-2338-43cb-8148-3f0d6e8c0b8a", @@ -90,11 +77,9 @@ stream IDs to restrict fetched data to just those streams: ] } ``` -You can also restrict data to a certain time span using the `from` and -`to` properties. 
These each expect a timestamp in nanoseconds, for example: -``` json +```json { "from": 1612191286000000000, "to": 1612191386000000000 } diff --git a/docs/apis/query-api/request.md b/docs/apis/query-api/request.md new file mode 100644 index 00000000..deaf0280 --- /dev/null +++ b/docs/apis/query-api/request.md @@ -0,0 +1,63 @@ +# Forming a request + +How you send requests to the Query API will vary depending on the client or language you’re using. But the API still has behavior and expectations that are common across all clients. + +!!! tip + + The examples in this section show how to use the popular [`curl`](https://curl.se/){target=_blank} command line tool. + +## Before you begin + +Sign up for a [free Quix account](https://portal.platform.quix.ai/self-sign-up). + +Read about [authenticating](authenticate.md) with the Query API. + +## Endpoint URLs + +The Query API is available on a per-environment basis, so the subdomain is based on a combination of your organization and environment names. See [How to get an environment ID](../../platform/how-to/get-environment-id.md) to find out how to get the exact hostname required. It will be in this format: + + https://telemetry-query-${organization}-${environment}.platform.quix.ai/ + +So your final endpoint URL will look something like: + + https://telemetry-query-acme-weather.platform.quix.ai/ + +## Method + +Most endpoints use the `POST` method, even those that just fetch data. Ensure your HTTP client sends `POST` requests as appropriate. + +Using `curl`, the `-X POST` flag specifies a POST request. Note that this is optional if you’re using the `-d` flag to send a payload (see below). + +```bash +curl -X POST ... +``` + +## Payload + +For most methods, you’ll need to send a JSON object containing supported parameters. You’ll also need to set the appropriate content type for the payload you’re sending: + +```bash +curl -H "Content-Type: application/json" ... +``` + +!!! 
warning + + You **must** specify the content type of your payload. Failing to include this header will result in a `415 UNSUPPORTED MEDIA TYPE` status code. + +You can send data via a POST request using the `curl` flag `-d`. This should be followed by either a string of JSON data, or a string starting with the *@* symbol, followed by a filename containing the JSON data. + +```bash +curl -d '{"key": "value"}' ... +curl -d "@data.json" ... +``` + +## Complete curl example + +You should structure most of your requests to the API around this pattern: + +```bash +curl -H "Authorization: ${token}" \ + -H "Content-Type: application/json" \ + -d "@data.json" \ + https://${domain}.platform.quix.ai/${endpoint} +``` diff --git a/docs/apis/data-catalogue-api/streams-filtered.md b/docs/apis/query-api/streams-filtered.md similarity index 66% rename from docs/apis/data-catalogue-api/streams-filtered.md rename to docs/apis/query-api/streams-filtered.md index 2ddfc9e8..5cda2303 100644 --- a/docs/apis/data-catalogue-api/streams-filtered.md +++ b/docs/apis/query-api/streams-filtered.md @@ -1,39 +1,31 @@ # Filtered streams -To fetch specific streams, you can include various filters with your -request to the `/streams` endpoint. +To fetch specific streams, you can include various filters with your request to the `/streams` endpoint. ## Before you begin - - If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. +If you don’t already have any Stream data in your environment, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. - - [Get a Personal Access Token](authenticate.md) - to authenticate each request. +[Get a Personal Access Token](authenticate.md) to authenticate each request. ## Fetch a single stream via ID -The most basic filter matches against a stream’s ID. 
+The most basic filter matches against a stream’s ID: -``` bash +```bash curl "https://${domain}.platform.quix.ai/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ -d '{"streamIds": ["302b1de3-2338-43cb-8148-3f0d6e8c0b8a"]}' ``` -Note that you can supply multiple IDs in the `streamIds` array to match -multiple streams. +Note that you can supply multiple IDs in the `streamIds` array to match multiple streams. ## Filtering streams on basic properties -The **location** of a stream defines its position in a hierarchy. A -stream location looks just like a filesystem path. You can filter -streams based on the start of this path, so you can easily find streams -contained within any point in the hierarchy. For example, this query -will find streams with a location of `/one` but it will also find -streams with a `/one/two` location: +The **location** of a stream defines its position in a hierarchy. A stream location looks just like a filesystem path. You can filter streams based on the start of this path, so you can easily find streams contained within any point in the hierarchy. For example, this query will find streams with a location of `/one` but it will also find streams with a `/one/two` location: -``` bash +```bash curl "https://${domain}.platform.quix.ai/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ @@ -48,24 +40,18 @@ curl "https://${domain}.platform.quix.ai/streams" \ Filtering on topic uses a case insensitive *Equals* match. Filtering on a topic named "MyTopic" will match "mytopic" but will not match "MyTopic123" -You can filter streams based on their use of a given **parameter** with -the `parameterIds` property. For example, to find all streams that -contain at least one single occurence of `Gear` data: +You can filter streams based on their use of a given **parameter** with the `parameterIds` property. 
For example, to find all streams that contain at least one occurrence of `Gear` data: -``` bash +```bash curl "https://${domain}.platform.quix.ai/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ -d '{"parameterIds": [ "Gear"] }' ``` -You can filter based on the presence or absence of a certain stream -**status**, for example, if the stream is `Open` or was `Interrupted`. -The `includeStatuses` and `excludeStatuses` properties each take an -array of values to act on. So to get all streams that aren’t Interrupted -or Closed, use this query: +You can filter based on the presence or absence of a certain stream **status**, for example, if the stream is `Open` or was `Interrupted`. The `includeStatuses` and `excludeStatuses` properties each take an array of values to act on. So to get all streams that aren’t Interrupted or Closed, use this query: -``` bash +```bash curl "https://${domain}.platform.quix.ai/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ -d '{"excludeStatuses": ["Interrupted", "Closed"]}' ``` ## Filtering streams on metadata -You can associate metadata with your streams. This can be used, for -example, to store the circuit a car has travelled around, or the player -of a particular run of a game. +You can associate metadata with your streams. This can be used, for example, to store the circuit a car has travelled around, or the player of a particular run of a game. -To filter on metadata, include the `metadata` property in the JSON -object in your request body. This property’s value is an array of -objects, each of which has two properties, `key` and `value`: +To filter on metadata, include the `metadata` property in the JSON object in your request body. This property’s value is an array of objects, each of which has two properties, `key` and `value`: - `key` The exact, case-sensitive key of the metadata you’re interested in. 
@@ -88,10 +70,9 @@ objects, each of which has two properties, `key` and `value`: - `value` The exact, case-sensitive value of the metadata to match on -If you have a metadata entry keyed as "circuit", you can match against -it for an example value with this payload: +If you have a metadata entry keyed as "circuit", you can match a value against it with this payload: -``` json +```json "metadata": [{ "key": "circuit", "value": "Sakhir Short" @@ -100,7 +81,7 @@ it for an example value with this payload: As before, the response is an array of Stream objects: -``` json +```json [{ "streamId":"e6545c18-d20d-47bd-8997-f3f825c1a45c", "name":"cardata", @@ -122,19 +103,17 @@ As before, the response is an array of Stream objects: ## Ordering results -Calls to the `/streams` endpoint can include an `ordering` property in -the payload. This references an array of properties to sort on, each one -an object with the following properties: +Calls to the `/streams` endpoint can include an `ordering` property in the payload. This references an array of properties to sort on, each one an object with the following properties: - - by + - `by` A string representing the property to order by. - - direction + - `direction` A string, either "Asc" or "Desc", to define the sort direction. For example, to sort all streams in ascending order by topic: -``` bash +```bash curl "https://${domain}.platform.quix.ai/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ diff --git a/docs/apis/data-catalogue-api/streams-models.md b/docs/apis/query-api/streams-models.md similarity index 82% rename from docs/apis/data-catalogue-api/streams-models.md rename to docs/apis/query-api/streams-models.md index 39c0f64a..4fe3f49c 100644 --- a/docs/apis/data-catalogue-api/streams-models.md +++ b/docs/apis/query-api/streams-models.md @@ -1,30 +1,22 @@ # Streams with models -One stream can derive from another, for example, acting as a model in a -pipeline.
This relationship can be inspected using the `/streams/models` -endpoint. +One stream can derive from another, for example, acting as a model in a pipeline. This relationship can be inspected using the `/streams/models` endpoint. ## Before you begin - - If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. +If you don’t already have any Stream data in your environment, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. - - [Get a Personal Access Token](authenticate.md) - to authenticate each request. +[Get a Personal Access Token](authenticate.md) to authenticate each request. ## Fetching model data -The hierarchy is represented as a parent/child structure where a stream -can have an optional parent and any number of children. +The hierarchy is represented as a parent/child structure where a stream can have an optional parent and any number of children. -The `/streams/models` endpoint will return data in the same structure as -[the `/streams` endpoint](streams-paged.md), with -an additional property for each stream: `children`. This is an array of -stream objects which may have their own children. +The `/streams/models` endpoint will return data in the same structure as [the `/streams` endpoint](streams-paged.md), with an additional property for each stream: `children`. This is an array of stream objects which may have their own children. -The payload requirements are the same as those for `/streams`. You can -fetch model information across all streams with an empty payload: +The payload requirements are the same as those for `/streams`. 
You can fetch model information across all streams with an empty payload: -``` shell +```shell curl "https://${domain}.platform.quix.ai/streams/models" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ @@ -33,7 +25,7 @@ curl "https://${domain}.platform.quix.ai/streams/models" \ Here’s an example result for a stream with two children: -``` json +```json [{ "children": [{ "children": [], @@ -80,7 +72,7 @@ Here’s an example result for a stream with two children: And here’s an example with a child and a grandchild: -``` json +```json [{ "children": [{ "children": [{ diff --git a/docs/apis/data-catalogue-api/streams-paged.md b/docs/apis/query-api/streams-paged.md similarity index 61% rename from docs/apis/data-catalogue-api/streams-paged.md rename to docs/apis/query-api/streams-paged.md index 474ae0e7..44fe9944 100644 --- a/docs/apis/data-catalogue-api/streams-paged.md +++ b/docs/apis/query-api/streams-paged.md @@ -1,23 +1,16 @@ # Paged streams -You can fetch all streams within a -[workspace](../../platform/glossary.md#workspace), across -[topics](../../platform/glossary.md#topics) and locations, with a -single call. If you’re working with a large number of streams, you can -use pagination parameters to group the results into smaller pages. +You can fetch all streams within an [environment](../../platform/glossary.md#environment), across [topics](../../platform/glossary.md#topics) and locations, with a single call. If you’re working with a large number of streams, you can use pagination parameters to group the results into smaller pages. ## Before you begin - - If you don’t already have any Stream data in your workspace, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up. +If you don’t already have any Stream data in your environment, you can use any Source from our [Code Samples](../../platform/samples/samples.md) to set some up.
- - [Get a Personal Access Token](authenticate.md) - to authenticate each request. +[Get a Personal Access Token](authenticate.md) to authenticate each request. ## Fetching all streams -The `/streams` endpoint provides read access to all streams within -the workspace. Sending an empty JSON object in your request body will -return all streams. +The `/streams` endpoint provides read access to all streams within the environment. Sending an empty JSON object in your request body will return all streams. !!! warning @@ -25,7 +18,7 @@ return all streams. ### Example request -``` shell +```shell curl "https://${domain}.platform.quix.ai/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ @@ -36,7 +29,7 @@ curl "https://${domain}.platform.quix.ai/streams" \ The JSON returned consists of an array of Stream objects: -``` json +```json [{ "streamId":"e6545c18-d20d-47bd-8997-f3f825c1a45c", "name":"cardata", @@ -54,10 +47,7 @@ The JSON returned consists of an array of Stream objects: ## Fetching streams page by page -To reduce the size of the response, you should page these results with -the `paging` property. Include this in the JSON object you send in -the body of your request. The value of this property is an object with -two members, `index` and `length`: +To reduce the size of the response, you should page these results with the `paging` property. Include this in the JSON object you send in the body of your request. The value of this property is an object with two members, `index` and `length`: - `index` The index of the page you want returned. @@ -65,10 +55,9 @@ two members, `index` and `length`: - `length` The number of items (i.e. streams) per page. 
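When paging through a large result set, the `paging` object can be built programmatically rather than written by hand. Here is a minimal sketch in JavaScript, assuming the zero-based `index` and `length` semantics described above; the helper name is illustrative:

```javascript
// Build a /streams request body for one page of results.
// `index` is zero-based, so index 1 requests the second page.
function pagedStreamsBody(pageIndex, pageSize, filters = {}) {
  if (pageIndex < 0 || pageSize < 1) {
    throw new RangeError("pageIndex must be >= 0 and pageSize >= 1");
  }
  return {
    ...filters,
    paging: { index: pageIndex, length: pageSize }
  };
}

// Second page of 10 streams, restricted to one known stream ID.
const body = pagedStreamsBody(1, 10, {
  streamIds: ["302b1de3-2338-43cb-8148-3f0d6e8c0b8a"]
});
console.log(JSON.stringify(body));
// {"streamIds":["302b1de3-2338-43cb-8148-3f0d6e8c0b8a"],"paging":{"index":1,"length":10}}
```

The resulting JSON can then be sent as the request body of the curl examples in this section.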
-For example, to group all streams in pages of 10 and receive the 2nd -page, use this value: +For example, to group all streams in pages of 10 and receive the second page (index `1`, as pages are zero-based), use this value: -``` json +```json "paging": { "index": 1, "length": 10 @@ -77,7 +66,7 @@ page, use this value: ### Example request -``` shell +```shell curl "https://${domain}.platform.quix.ai/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ diff --git a/docs/apis/streaming-reader-api/signalr.md b/docs/apis/streaming-reader-api/signalr.md index ac292fda..54a190d1 100644 --- a/docs/apis/streaming-reader-api/signalr.md +++ b/docs/apis/streaming-reader-api/signalr.md @@ -2,48 +2,34 @@ ## Before you begin - - Get a PAT for - [Authentication](authenticate.md) +Get a PAT for [Authentication](authenticate.md). - - Ensure you know your workspace ID +Ensure you know your environment ID. ## Installation -If you are using a package manager like [npm](https://www.npmjs.com/){target=_blank}, -you can install SignalR using `npm install @microsoft/signalr`. For -other installation options that don’t depend on a platform like Node.js, -such as consuming SignalR from a CDN, please refer to [SignalR documentation](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1){target=_blank}. +If you are using a package manager like [npm](https://www.npmjs.com/){target=_blank}, you can install SignalR using `npm install @microsoft/signalr`. For other installation options that don’t depend on a platform like Node.js, such as consuming SignalR from a CDN, please refer to [SignalR documentation](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1){target=_blank}. ## Testing the connection -Once you’ve installed the SignalR library, you can test it’s set up -correctly with the following code snippet. This opens a connection to -the hub running on your custom subdomain, and checks authentication.
+Once you’ve installed the SignalR library, you can test that it’s set up correctly with the following code snippet. This opens a connection to the hub running on your custom subdomain, and checks authentication. -You should replace the text `YOUR_ACCESS_TOKEN` with the PAT obtained -from [Authenticating with the Streaming Reader API](authenticate.md). +You should replace the text `YOUR_ACCESS_TOKEN` with the PAT obtained from [Authenticating with the Streaming Reader API](authenticate.md). + +You should also replace `YOUR_ENVIRONMENT_ID` with the appropriate identifier, a combination of your organization and environment names. -You should also replace `YOUR_WORKSPACE_ID` with the appropriate -identifier, a combination of your organization and workspace names. This can be located in one of the following ways: -- Portal URL - Look in the browsers URL when you are logged into the Portal and - inside the Workspace you want to work with. The URL contains the - workspace id. e.g everything after "workspace=" till the next *&* +- **Portal URL** - Look in the browser's URL when you are logged into the Portal and inside the environment you want to work with. The URL contains the environment ID. For example, everything after "workspace=" through to the next *&* + + !!! note -- Topics Page - In the Portal, inside the Workspace you want to work with, click the - Topics menu - ![Topics icon](../images/icons/topics.png) and then - click the expand icon - ![Expand icon](../images/icons/expand.jpg) on any - topic. Here you will see a *Username* under the Broker Settings. - This Username is also the Workspace Id. + `workspace=` is legacy. This is in fact your environment ID. +- **Settings** - Click on `Settings` and then the environment. Click on `General settings`. The environment name and environment ID are displayed.
-``` javascript +```javascript var signalR = require("@microsoft/signalr"); const options = { @@ -51,11 +37,10 @@ const options = { }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://reader-YOUR_WORKSPACE_ID.platform.quix.ai/hub", options) + .withUrl("https://reader-YOUR_ENVIRONMENT_ID.platform.quix.ai/hub", options) .build(); connection.start().then(() => console.log("SignalR connected.")); ``` -If the connection is successful, you should see the console log “SignalR -connected”. +If the connection is successful, you should see the console log "SignalR connected". diff --git a/docs/apis/streaming-writer-api/create-stream.md b/docs/apis/streaming-writer-api/create-stream.md index 4e07ffd8..8ec8d390 100644 --- a/docs/apis/streaming-writer-api/create-stream.md +++ b/docs/apis/streaming-writer-api/create-stream.md @@ -1,21 +1,16 @@ # Create a new Stream -You can create a new stream by specifying a topic to create it in, and -supplying any other additional properties required. +You can create a new stream by specifying a topic to create it in, and supplying any additional properties required. !!! tip - This method is optional. You can also create a stream implicitly by - sending data to a stream that doesn’t already exist. But creating a - stream using the method on this page avoids having to determine a - unique stream id yourself. + This method is optional. You can also create a stream implicitly by sending data to a stream that doesn’t already exist. But creating a stream using the method on this page avoids having to determine a unique stream id yourself. ## Before you begin - - You should have a [Workspace set up](../../platform/glossary.md#workspace) with at least one [Topic](../../platform/glossary.md#topics). + - You should have an [environment set up](../../platform/glossary.md#environment) with at least one [Topic](../../platform/glossary.md#topics).
- - [Get a Personal Access Token](authenticate.md) to authenticate each - request. + - [Get a Personal Access Token](authenticate.md) to authenticate each request. ## Using the /streams endpoint @@ -23,21 +18,15 @@ To create a new stream, send a `POST` request to: /topics/${topicName}/streams -You should replace `$\{topicName}` in the endpoint URL with the name of -the [Topic](../../platform/glossary.md#topics) you wish to create the -stream in. For example, if your topic is named “cars”, your endpoint url -will be `/topics/cars/streams`. +You should replace `${topicName}` in the endpoint URL with the name of the [Topic](../../platform/glossary.md#topics) you wish to create the stream in. For example, if your topic is named “cars”, your endpoint URL will be `/topics/cars/streams`. ### Example request -You can create a new Stream with an absolute minimum of effort by -passing an empty JSON object in the payload: - - +You can create a new Stream with an absolute minimum of effort by passing an empty JSON object in the payload: - curl - ``` shell + ```shell curl "https://${domain}.platform.quix.ai/topics/${topicName}/streams" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ @@ -46,7 +35,7 @@ passing an empty JSON object in the payload: - Node.js - ``` javascript + ```javascript const https = require('https'); const data = "{}"; @@ -72,10 +61,7 @@ passing an empty JSON object in the payload: req.end(); ``` - - -For most real-world cases, you’ll also want to provide some or all of -the following: +For most real-world cases, you’ll also want to provide some or all of the following: - `name` @@ -89,7 +75,7 @@ the following: For example, here’s a more useful payload: -``` json +```json { "name": "cardata", "location": "simulations/trials", @@ -101,11 +87,7 @@ For example, here’s a more useful payload: ### Example response -The JSON returned is an object with a single property, `streamId`.
This -contains the unique identifier of your newly created stream, and will -look something like this: +The JSON returned is an object with a single property, `streamId`. This contains the unique identifier of your newly created stream, and will look something like this: -``` json +```json { "streamId": "66fb0a2f-eb70-494e-9df7-c06d275aeb7c" } @@ -113,20 +97,18 @@ look something like this: !!! tip - If you’re following these guides in order, you’ll want to take note of - that stream id. For curl examples, it’s convenient to keep it in an - environment variable, e.g. + If you’re following these guides in order, you’ll want to take note of that stream id. For curl examples, it’s convenient to keep it in an environment variable, for example: - ``` bash + ```bash $ streamId=66fb0a2f-eb70-494e-9df7-c06d275aeb7c ``` ## Using SignalR -``` javascript +```javascript var signalR = require("@microsoft/signalr"); const token = "YOUR_TOKEN" -const workspaceId = "YOUR_WORKSPACE_ID" +const environmentId = "YOUR_ENVIRONMENT_ID" const topic = "YOUR_TOPIC_NAME" const options = { @@ -134,7 +116,7 @@ const options = { }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://writer-" + workspaceId + ".platform.quix.ai/hub", options) + .withUrl("https://writer-" + environmentId + ".platform.quix.ai/hub", options) .build(); // Establish connection @@ -158,5 +140,6 @@ connection.start().then(async () => { }); ``` -!!! tip +!!! tip + Also available as JsFiddle at [https://jsfiddle.net/QuixAI/cLno68fs/](https://jsfiddle.net/QuixAI/cLno68fs/){target=_blank} diff --git a/docs/apis/streaming-writer-api/get-swagger.md b/docs/apis/streaming-writer-api/get-swagger.md index 18d85596..f514d0a1 100644 --- a/docs/apis/streaming-writer-api/get-swagger.md +++ b/docs/apis/streaming-writer-api/get-swagger.md @@ -2,15 +2,15 @@ You can access [Swagger documentation](https://swagger.io/){target=_blank} and then use it to try out the [Streaming Writer API](intro.md). 
-The URL is workspace-specific, and follows this pattern: +The URL is environment-specific, and follows this pattern: - https://writer-${organization}-${workspace}.platform.quix.ai/swagger + https://writer-${organization}-${environment}.platform.quix.ai/swagger -The workspace ID is a combination based on your organization and workspace names. For example, for an `acme` organization with a `weather` workspace, the URL would have the following format: +The environment ID is a combination based on your organization and environment names. For example, for an `acme` organization with a `weather` environment, the URL would have the following format: https://writer-acme-weather.platform.quix.ai/swagger -To help determine the URL, you can [find out how to get your workspace id](../../platform/how-to/get-workspace-id.md). +To help determine the URL, you can [find out how to get your environment ID](../../platform/how-to/get-environment-id.md). !!! tip diff --git a/docs/apis/streaming-writer-api/request.md b/docs/apis/streaming-writer-api/request.md index 46e19f68..d4d45941 100644 --- a/docs/apis/streaming-writer-api/request.md +++ b/docs/apis/streaming-writer-api/request.md @@ -1,29 +1,22 @@ # Forming a request -How you send requests to the Streaming Writer API will vary depending on -the client or language you’re using. But the API still has behavior and -expectations that is common across all clients. +How you send requests to the Streaming Writer API will vary depending on the client or language you’re using. But the API still has behavior and expectations that are common across all clients. !!! tip - The examples in this section show how to use the popular [`curl`](https://curl.se/){target=_blank} command line tool. + The examples in this section show how to use the popular [`curl`](https://curl.se/){target=_blank} command line tool.
## Before you begin - Sign up on the Quix Portal - - Read about [Authenticating with the Streaming Writer - API](authenticate.md) + - Read about [Authenticating with the Streaming Writer API](authenticate.md) ## Endpoint URLs -The Streaming Writer API is available on a per-workspace basis, so the -subdomain is based on a combination of your organization and workspace -names. See the [Swagger -documentation](get-swagger.md) to find out how -to get the exact hostname required. It will be in this format: +The Streaming Writer API is available on a per-environment basis, so the subdomain is based on a combination of your organization and environment names. See the [Swagger documentation](get-swagger.md) to find out how to get the exact hostname required. It will be in this format: -https://writer-${organization}-${workspace}.platform.quix.ai +https://writer-${organization}-${environment}.platform.quix.ai So your final endpoint URL will look something like: @@ -31,37 +24,29 @@ https://writer-acme-weather.platform.quix.ai/ ## Method -Endpoints in this API use the `POST` and `PUT` methods. Ensure your HTTP -client sends the correct request method. +Endpoints in this API use the `POST` and `PUT` methods. Ensure your HTTP client sends the correct request method. -Using `curl`, you can specify the request method with the `-X -` flag, for example: +Using `curl`, you can specify the request method with the `-X` flag, for example: -``` bash +```bash curl -X PUT ... ``` ## Payload -For most methods, you’ll need to send a JSON object containing supported -parameters. You’ll also need to set the appropriate content type for the -payload you’re sending: +For most methods, you’ll need to send a JSON object containing supported parameters. You’ll also need to set the appropriate content type for the payload you’re sending: -``` bash +```bash curl -H "Content-Type: application/json" ... ``` !!! warning - You **must** specify the content type of your payload.
Failing to - include this header will result in a `415 UNSUPPORTED MEDIA TYPE` - status code. + You **must** specify the content type of your payload. Failing to include this header will result in a `415 UNSUPPORTED MEDIA TYPE` status code. -You can send data using the `curl` flag `-d`. This should be followed by -either a string of JSON data, or a string starting with the *@* symbol, -followed by a filename containing the JSON data. +You can send data using the `curl` flag `-d`. This should be followed by either a string of JSON data, or a string starting with the *@* symbol, followed by a filename containing the JSON data. -``` bash +```bash curl -d '{"key": "value"}' ... curl -d "@data.json" ... ``` @@ -72,10 +57,9 @@ curl -d "@data.json" ... ## Complete curl example -You should structure most of your requests to the API around this -pattern: +You should structure most of your requests to the API around this pattern: -``` bash +```bash curl -H "Authorization: ${token}" \ -H "Content-Type: application/json" \ -d "@data.json" \ diff --git a/docs/apis/streaming-writer-api/send-data.md b/docs/apis/streaming-writer-api/send-data.md index 2ae68407..528ab807 100644 --- a/docs/apis/streaming-writer-api/send-data.md +++ b/docs/apis/streaming-writer-api/send-data.md @@ -1,13 +1,10 @@ # Send Parameter data -You can send telemetry data using the Streaming Writer API. Select a -topic and a stream to send the data to. In your payload, you can include -numeric, string, or binary parameter data, with nanosecond-level -timestamps. +You can send telemetry data using the Streaming Writer API. Select a topic and a stream to send the data to. In your payload, you can include numeric, string, or binary parameter data, with nanosecond-level timestamps. ## Before you begin - - You should have a [Workspace set up](../../platform/glossary.md#workspace) with at least one [Topic](../../platform/glossary.md#topics). 
+ - You should have an [environment set up](../../platform/glossary.md#environment) with at least one [Topic](../../platform/glossary.md#topics). - [Get a Personal Access Token](authenticate.md) to authenticate each @@ -15,16 +12,13 @@ timestamps. ## Sending structured data to the endpoint -Send a POST request together with a JSON payload representing the data -you’re sending to: +Send a POST request together with a JSON payload representing the data you’re sending to: ``` /topics/${topicName}/streams/${streamId}/parameters/data ``` -You should replace `$\{topicName}` with the name of the topic your -stream belongs to, and `$\{streamId}` with the id of the stream you wish -to send data to. For example: +You should replace `${topicName}` with the name of the topic your stream belongs to, and `${streamId}` with the ID of the stream you wish to send data to. For example: ``` /topics/cars/streams/66fb0a2f-eb70-494e-9df7-c06d275aeb7c/parameters/data @@ -36,13 +30,9 @@ to send data to. For example: ### Example request -Your payload should include an array of `timestamps` with one timestamp -for each item of data you’re sending. Actual data values should be keyed -on their name, in the object that corresponds to their type, one of -`numericValues`, `stringValues`, or `binaryValues`. The payload is in -this structure: +Your payload should include an array of `timestamps` with one timestamp for each item of data you’re sending. Actual data values should be keyed on their name, in the object that corresponds to their type, one of `numericValues`, `stringValues`, or `binaryValues`. The payload is in this structure: -``` json +```json { "timestamps": [...], "numericValues": {...}, @@ -52,10 +42,7 @@ this structure: } ``` -Any data types that are unused can be omitted. +Any data types that are unused can be omitted.
So a final request using curl might look something like this: === "curl" @@ -101,20 +88,16 @@ curl might look something like this: req.end(); ``` - - ### Response -No payload is returned from this call. A 200 HTTP response code -indicates success. If the call fails, you should see either a 4xx or 5xx -response code indicating what went wrong. +No payload is returned from this call. A 200 HTTP response code indicates success. If the call fails, you should see either a 4xx or 5xx response code indicating what went wrong. ## Using SignalR -``` javascript +```javascript var signalR = require("@microsoft/signalr"); const token = "YOUR_TOKEN" -const workspaceId = "YOUR_WORKSPACE_ID" +const environmentId = "YOUR_ENVIRONMENT_ID" const topic = "YOUR_TOPIC_NAME" const streamId = "ID_OF_STREAM_TO_WRITE_TO" @@ -123,7 +106,7 @@ const options = { }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://writer-" + workspaceId + ".platform.quix.ai/hub", options) + .withUrl("https://writer-" + environmentId + ".platform.quix.ai/hub", options) .build(); // Establish connection @@ -174,5 +157,7 @@ connection.start().then(async () => { console.log("Sent parameter data"); }); ``` + !!! tip - Also available as JsFiddle at [https://jsfiddle.net/QuixAI/a41b8x0t/](https://jsfiddle.net/QuixAI/a41b8x0t/){target=_blank} \ No newline at end of file + + Also available as JsFiddle at [https://jsfiddle.net/QuixAI/a41b8x0t/](https://jsfiddle.net/QuixAI/a41b8x0t/){target=_blank} \ No newline at end of file diff --git a/docs/apis/streaming-writer-api/send-event.md b/docs/apis/streaming-writer-api/send-event.md index 20cf9071..a0e4f406 100644 --- a/docs/apis/streaming-writer-api/send-event.md +++ b/docs/apis/streaming-writer-api/send-event.md @@ -1,28 +1,23 @@ # Send an Event -You can add Events to your stream data to record discrete actions for -future reference. +You can add Events to your stream data to record discrete actions for future reference.
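Both parameter and event payloads in these guides use nanosecond-precision timestamps. In JavaScript, such values exceed `Number`'s 2^53 safe-integer range, so converting via `BigInt` avoids silent rounding. A small illustrative sketch (the helper name and the example millisecond value are assumptions):

```javascript
// Convert a millisecond epoch time (e.g. from Date.now()) to the
// nanosecond precision used for timestamps in this API.
// Nanosecond values exceed Number.MAX_SAFE_INTEGER (2^53 - 1),
// so BigInt avoids losing the low-order digits.
function toNanoseconds(epochMs) {
  return BigInt(epochMs) * 1000000n;
}

console.log(toNanoseconds(1612191286000).toString());
// 1612191286000000000
```

Note that `JSON.stringify` cannot serialize a `BigInt` directly, so the value has to be rendered into the payload string yourself, for example via `toString()`.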
## Before you begin - [Get a Personal Access Token](authenticate.md) to authenticate each request. - - If you don’t already have a Stream in your workspace, [add one using - the API](create-stream.md). + - If you don’t already have a Stream in your environment, [add one using the API](create-stream.md). ## Sending event data -To send event data to a stream, use the `POST` method with this -endpoint: +To send event data to a stream, use the `POST` method with this endpoint: ``` /topics/${topicName}/streams/${streamId}/events/data ``` -You should replace `${topicName}` with the name of the topic your -stream belongs to, and `${streamId}` with the id of the stream you wish -to send data to. For example: +You should replace `${topicName}` with the name of the topic your stream belongs to, and `${streamId}` with the ID of the stream you wish to send data to. For example: ``` /topics/cars/streams/66fb0a2f-eb70-494e-9df7-c06d275aeb7c/events/data @@ -32,32 +27,23 @@ to send data to. For example: You can create a new stream by supplying a `$\{streamId}` that doesn’t already exist. This avoids the need to call the [create stream endpoint](create-stream.md) separately. -Your payload should be an array of events. Each event is an object -containing the following properties: +Your payload should be an array of events. Each event is an object containing the following properties: - - id - a unique identifier for the event + - `id` - a unique identifier for the event - - timestamp - the nanosecond-precise timestamp at which the event occurred + - `timestamp` - the nanosecond-precise timestamp at which the event occurred - - tags - a object containing key-value string pairs representing tag values + - `tags` - an object containing key-value string pairs representing tag values - - value - a string value associated with the event + - `value` - a string value associated with the event ### Example request -This example call adds a single event to a stream.
The event has an -example value and demonstrates use of a tag to include additional -information. - - +This example call adds a single event to a stream. The event has an example value and demonstrates use of a tag to include additional information. === "curl" - ``` shell + ```shell curl -i "https://${domain}.platform.quix.ai/topics/${topicName}/streams/${streamId}/events/data" \ -H "Authorization: bearer ${token}" \ -H "Content-Type: application/json" \ @@ -73,7 +59,7 @@ information. === "Node.js" - ``` javascript + ```javascript const https = require('https'); const data = JSON.stringify({ @@ -101,20 +87,16 @@ information. req.end(); ``` - - ### Response -No payload is returned from this call. A 200 HTTP response code -indicates success. If the call fails, you should see either a 4xx or 5xx -response code indicating what went wrong. +No payload is returned from this call. A 200 HTTP response code indicates success. If the call fails, you should see either a 4xx or 5xx response code indicating what went wrong. ## Using SignalR -``` javascript +```javascript var signalR = require("@microsoft/signalr"); const token = "YOUR_TOKEN" -const workspaceId = "YOUR_WORKSPACE_ID" +const environmentId = "YOUR_ENVIRONMENT_ID" const topic = "YOUR_TOPIC_NAME" const streamId = "ID_OF_STREAM_TO_WRITE_TO" @@ -123,7 +105,7 @@ const options = { }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://writer-" + workspaceId + ".platform.quix.ai/hub", options) + .withUrl("https://writer-" + environmentId + ".platform.quix.ai/hub", options) .build(); // Establish connection @@ -148,5 +130,7 @@ connection.start().then(async () => { console.log("Sent event data"); }); ``` -!!! tip + +!!! 
tip + Also available as JsFiddle at [https://jsfiddle.net/QuixAI/h4fztrns/](https://jsfiddle.net/QuixAI/h4fztrns/){target=_blank} \ No newline at end of file diff --git a/docs/apis/streaming-writer-api/signalr.md b/docs/apis/streaming-writer-api/signalr.md index 7b65fce0..82662600 100644 --- a/docs/apis/streaming-writer-api/signalr.md +++ b/docs/apis/streaming-writer-api/signalr.md @@ -5,66 +5,41 @@ - Get a PAT for [Authentication](authenticate.md) - - Ensure you know your workspace ID + - Ensure you know your environment ID ## Installation -If you are using a package manager like [npm](https://www.npmjs.com/){target=_blank}, -you can install SignalR using `npm install @microsoft/signalr`. For -other installation options that don’t depend on a platform like Node.js, -such as consuming SignalR from a CDN, please refer to [SignalR -documentation](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1){target=_blank}. +If you are using a package manager like [npm](https://www.npmjs.com/){target=_blank}, you can install SignalR using `npm install @microsoft/signalr`. For other installation options that don’t depend on a platform like Node.js, such as consuming SignalR from a CDN, please refer to [SignalR documentation](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1){target=_blank}. ## Testing the connection -Once you’ve installed the SignalR library, you can test it’s set up -correctly with the following code snippet. This opens a connection to -the hub running on your custom subdomain, and checks authentication. +Once you’ve installed the SignalR library, you can test that it’s set up correctly with the following code snippet. This opens a connection to the hub running on your custom subdomain, and checks authentication. -You should replace the text `YOUR_ACCESS_TOKEN` with the PAT obtained -from [Authenticating with the Streaming Writer -API](authenticate.md).
+You should replace the text `YOUR_ACCESS_TOKEN` with the PAT obtained from [Authenticating with the Streaming Writer API](authenticate.md).

-You should also replace `YOUR_WORKSPACE_ID` with the appropriate identifier, a combination of your organization and workspace names. This can be located in one of the following ways:
+You should also replace `YOUR_ENVIRONMENT_ID` with the appropriate identifier, a combination of your organization and environment names. This can be located in one of the following ways:

+ - **Portal URL** - Look in the browser's URL when you are logged into the Portal and inside the environment you want to work with. The URL contains the environment ID. For example, it is everything after `workspace=` until the next `&`. Note that the use of `workspace` here is a legacy term.
+ - **Settings** - Click on `Settings` and then the environment. Click on `General settings`. The environment name and environment ID are displayed.

- - Portal URL
-   Look in the browsers URL when you are logged into the Portal and
-   inside the Workspace you want to work with. The URL contains the
-   workspace id. e.g everything after "workspace=" till the next *&*
-
- - Topics Page
-   In the Portal, inside the Workspace you want to work with, click the
-   Topics menu
-   ![Topic icon](../images/icons/topics.png) and then
-   click the expand icon
-   ![Expand icon](../images/icons/expand.jpg) on any
-   topic. Here you will see a *Username* under the Broker Settings.
-   This Username is also the Workspace Id.
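The Portal URL rule above can be sketched in code. This is an illustrative snippet only: the URL is made up, and a real Portal URL may contain additional parameters.

```javascript
// Illustrative only: extract the environment ID from a (made-up) Portal URL.
// The query parameter is still named "workspace" for legacy reasons.
const portalUrl = "https://portal.platform.quix.ai/pipeline?workspace=acme-weather&other=1";
const environmentId = new URL(portalUrl).searchParams.get("workspace");
console.log(environmentId); // "acme-weather"
```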
- - - -``` javascript +```javascript var signalR = require("@microsoft/signalr"); const token = "YOUR_TOKEN" -const workspaceId = "YOUR_WORKSPACE_ID" +const environmentId = "YOUR_ENVIRONMENT_ID" const options = { accessTokenFactory: () => token }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://writer-" + workspaceId + ".platform.quix.ai/hub", options) + .withUrl("https://writer-" + environmentId + ".platform.quix.ai/hub", options) .build(); connection.start().then(() => console.log("SignalR connected.")); ``` -If the connection is successful, you should see the console log “SignalR -connected”. +If the connection is successful, you should see the console log "SignalR connected". !!! tip diff --git a/docs/apis/streaming-writer-api/stream-metadata.md b/docs/apis/streaming-writer-api/stream-metadata.md index 2693695b..34f60257 100644 --- a/docs/apis/streaming-writer-api/stream-metadata.md +++ b/docs/apis/streaming-writer-api/stream-metadata.md @@ -1,27 +1,20 @@ # Add Stream metadata -You can add arbitrary string metadata to any stream. You can also create -a new stream by sending metadata using a stream id that does not already -exist. +You can add arbitrary string metadata to any stream. You can also create a new stream by sending metadata using a stream id that does not already exist. ## Before you begin - - You should have a [Workspace set up](../../platform/glossary.md#workspace) with at least one [Topic](../../platform/glossary.md#topics). + - You should have an [environment set up](../../platform/glossary.md#environment) with at least one [Topic](../../platform/glossary.md#topics). - - [Get a Personal Access - Token](authenticate.md) to authenticate each - request. + - [Get a Personal Access Token](authenticate.md) to authenticate each request. 
## How to add metadata to a stream

-Send a `PUT` request to the following endpoint to update a stream with
-the given properties:
+Send a `PUT` request to the following endpoint to update a stream with the given properties:

    /topics/${topicName}/streams/${streamId}

-You should replace `$\{topicName}` with the name of the topic your
-stream belongs to, and `$\{streamId}` with the id of the stream you wish
-to update. For example:
+You should replace `${topicName}` with the name of the topic your stream belongs to, and `${streamId}` with the id of the stream you wish to update. For example:

    /topics/cars/streams/66fb0a2f-eb70-494e-9df7-c06d275aeb7c

@@ -29,20 +22,15 @@ to update. For example:

You can create a new stream by supplying a `${streamId}` that doesn’t already exist. It will be initialized with the data you provide in the payload, and the id you use in the endpoint. This avoids the need to call the [create stream endpoint](create-stream.md) separately.

-Your request should contain a payload consisting of JSON data containing
-the desired metadata.
+Your request should contain a payload consisting of JSON data containing the desired metadata.

### Example request

-Below is an example payload demonstrating how to set a single item of
-metadata. Note that the `metadata` property references an object which
-contains key/value string-based metadata.
-
-
+Below is an example payload demonstrating how to set a single item of metadata. Note that the `metadata` property references an object which contains key/value string-based metadata.

- curl

-    ``` shell
+    ```shell
    curl "https://${domain}.platform.quix.ai/topics/${topicName}/streams/${streamId}" \
         -X PUT \
         -H "Authorization: bearer ${token}" \
@@ -52,7 +40,7 @@ contains key/value string-based metadata.

- Node.js

-    ``` javascript
+    ```javascript
    const https = require('https');

    const data = JSON.stringify({ metadata: { fruit: "apple" }});
@@ -73,13 +61,9 @@ contains key/value string-based metadata.
req.end(); ``` +Since this is a PUT request, it will replace all the stream data with the payload contents. To maintain existing data, you should include it in the payload alongside your metadata, for example: - -Since this is a PUT request, it will replace all the stream data with -the payload contents. To maintain existing data, you should include it -in the payload alongside your metadata, e.g. - -``` json +```json { "name": "Example stream", "location": "/sub/dir", @@ -91,17 +75,14 @@ in the payload alongside your metadata, e.g. ### Response -No payload is returned from this call. A 200 HTTP response code -indicates success. If the call fails, you should see either a 4xx or 5xx -response code indicating what went wrong. For example, you’ll see a 405 -code if you forget to specify the correct `PUT` method. +No payload is returned from this call. A 200 HTTP response code indicates success. If the call fails, you should see either a 4xx or 5xx response code indicating what went wrong. For example, you’ll see a 405 code if you forget to specify the correct `PUT` method. ## Using SignalR -``` javascript +```javascript var signalR = require("@microsoft/signalr"); const token = "YOUR_TOKEN" -const workspaceId = "YOUR_WORKSPACE_ID" +const environmentId = "YOUR_ENVIRONMENT_ID" const topic = "YOUR_TOPIC_NAME" const streamId = "ID_OF_STREAM_TO_UPDATE" @@ -110,7 +91,7 @@ const options = { }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://writer-" + workspaceId + ".platform.quix.ai/hub", options) + .withUrl("https://writer-" + environmentId + ".platform.quix.ai/hub", options) .build(); // Establish connection @@ -132,5 +113,6 @@ connection.start().then(async () => { console.log("Updated stream"); }); ``` -!!! tip +!!! 
tip
+
+    Also available as JsFiddle at [https://jsfiddle.net/QuixAI/ruywnz28/](https://jsfiddle.net/QuixAI/ruywnz28/){target=_blank}
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index e04632f5..d4e04af4 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -2,10 +2,20 @@

Welcome to the Quix developer documentation!

+!!! note "Beta"
+
+    **See [here](./platform/changes.md) for significant recent changes.**
+
!!! tip

    Our docs support hotkeys. Press ++slash++, ++s++, or ++f++ to activate search, ++p++ or ++comma++ to go to the previous page, ++n++ or ++period++ to go to the next page.

+## Documentation changelog
+
+The documentation changelog can be found in the [documentation repository Wiki](https://github.com/quixio/quix-docs/wiki/Docs-Releases).
+
+This is in addition to general product changes, which are summarized in the [changes documentation](./platform/changes.md).
+
## Get started

If you're new to Quix, here are some resources to help get you started quickly.

@@ -66,7 +76,7 @@ By following these tutorials, you can learn how to build data-driven apps, and i

    ---

-    Deploy a real-time **data science** project into a scalable self-maintained solution.
+    Deploy a real-time **data science** application into a scalable self-maintained solution.

    [:octicons-arrow-right-24: Data Science](./platform/tutorials/data-science/index.md)

@@ -126,13 +136,13 @@ Read more about the Quix Streams Client Library and APIs.

    [:octicons-arrow-right-24: Learn more](./apis/streaming-reader-api/intro.md)

-- __Data Catalogue API__
+- __Query API__

    ---

-    Query historical time-series data in Quix using HTTP interface.
+    Query historical time series data in Quix using an HTTP interface.
- [:octicons-arrow-right-24: Learn more](./apis/data-catalogue-api/intro.md)
+ [:octicons-arrow-right-24: Learn more](./apis/query-api/intro.md)

diff --git a/docs/platform/MLOps.md b/docs/platform/MLOps.md
index 0ef55a1c..a0a5d1e4 100644
--- a/docs/platform/MLOps.md
+++ b/docs/platform/MLOps.md
@@ -8,24 +8,14 @@ Solving these challenges is a new field of expertise called [MLOps](https://en.w

Quix have incorporated MLOps into the Quix Platform so that your data team has a seamless journey from concept to production. The key steps in the MLOps process are:

-1. Discover and access data
-2. Develop features in historical data
-3. Build and train models on historical data
-4. Test models on live data
-5. Build a production pipeline
-6. Deploy production models
-7. Monitor production models
+1. Build and train models on historical data
+2. Test models on live data
+3. Build a production pipeline
+4. Deploy production models
+5. Monitor production models

Each of these is described in the following sections.

-## Discover and access data
-
-Using the Quix Data Catalogue, team members can quickly access data without support from software or regulatory teams.
-
-## Develop features in historical data
-
-Use the Quix Data Explorer to discover, segment, label, and store significant features in the catalogue.
-
## Build and train models on historical data

In the Quix Portal you can:

@@ -49,7 +39,7 @@ In the Quix Portal you can:

## Deploy production models

-in the Quix Portal, with one click of the project's `Deploy` button, and tweaking of the default configuration if required, data engineers can deploy their Python models to production, without support from software engineering or DevOps teams.
+In the Quix Portal, with one click of the application's `Deploy` button, and tweaking of the default configuration if required, data engineers can deploy their Python models to production, without support from software engineering or DevOps teams.
## Monitor production models

diff --git a/docs/platform/changes.md b/docs/platform/changes.md
new file mode 100644
index 00000000..57ae6b33
--- /dev/null
+++ b/docs/platform/changes.md
@@ -0,0 +1,176 @@
+# Recent changes to Quix
+
+The Quix Platform has recently undergone some substantial changes. These changes introduce some new terminology, and may also impact the way you currently work. This page provides an overview of the main changes.
+
+## Main features of the update
+
+Quix now supports the ability to:
+
+* Host a complete project in a Git repository
+* Build a complete project from a single YAML file
+* Automatically synchronize changes in the Git repository with the pipeline view
+* Easily manage multiple environments such as production, staging and development
+* Host projects with Quix-hosted Git or with a third-party provider
+* Enable environments to leverage Quix-hosted Kafka, self-hosted Kafka, or Confluent Cloud
+* Customize resource variables, so you can allocate different resources to different environments - for example, automatically allocate more CPU cores to production
+* Use secret (encrypted) variables
+
+These new features introduced significant changes to the Quix UI and workflow. The rest of this documentation describes the new terminology, features, and workflow.
+
+## Watch a video
+
+Watch a video demonstrating the new features, UI and workflow:
+
+1. [Creating a project and the first environment](https://www.loom.com/share/b4488be244834333aec56e1a35faf4db?sid=a9aa124a-a2b0-45f1-a756-11b4395d0efc){target=_blank}
+2. [Creating an additional environment](https://www.loom.com/share/877ae703f0cf458f8827341549adce6c?sid=5cacebef-659f-45cd-b4eb-c2e3f7104ccb){target=_blank}
+3. [Creating an application](https://www.loom.com/share/dee01c5f7d0d4d338504c3c09dcd3181?sid=b902acbd-ef72-4450-80f3-6201764f48b9){target=_blank}
+4.
[Merging environments](https://www.loom.com/share/b2f2115fba014473aac072bb61609160?sid=22ddf07f-fa40-4ed8-a5ae-1a6eb0420465){target=_blank} **Note:** this video is continued in the next video.
+5. [Using YAML variables for deployment configuration](https://www.loom.com/share/c66029f67b8747bbb28c0605f5ea3fad?sid=4f30404e-f2dc-4564-b758-5935c405be3e){target=_blank}
+6. [Third-party Git - Part 1](https://www.loom.com/share/b48b2b3aede5487e8591aa6aee6e1e9a?sid=79ca7ea8-4491-44f6-a675-f7d1075fa5ed){target=_blank}
+7. [Third-party Git - Part 2](https://www.loom.com/share/71590a35421f4626a46a753d8a691cad?sid=0e1e269d-5e60-4be4-b861-a68557961749){target=_blank}
+8. [Confluent Cloud](https://www.loom.com/share/fb5d9cd4ab6e4e0caf01e97f5000885c?sid=0512bb6c-2f1c-4e10-b085-05bd49aa15f5){target=_blank}
+
+## Projects
+
+In previous versions of Quix, each service was developed independently and had its code in its own Git repository. The problem with this approach was that it was not possible to manage a complete pipeline under revision control. For example, the pipeline itself could not have separate branches for development and production.
+
+Quix now introduces the concept of a "monorepo" known as a project, where all the code and configuration for the pipeline is stored in a single repository.
+
+The monorepo (project) contains all the branches for the project, and all revision history for all services that make up the pipeline. This enables you to manage the complete pipeline development with full revision history, and use branches to develop features that are then merged after testing, as is usual for development processes based around Git.
+
+With these changes, you now start your pipeline development by creating a project. A project is an entity that corresponds to a Git repository. That Git repository can be hosted for you on Quix, or you can use another provider such as GitHub or Bitbucket.
+
+A project contains one or more environments (each of which maps to a Git branch), so typically you create an environment as part of the project creation workflow, and then create additional environments as required.
+
+You can read more about the structure of a project in Git in the [project structure documentation](../platform/how-to/project-structure.md).
+
+## Environments
+
+An environment can be thought of as an entity that encapsulates a branch in your project that contains the code for your applications. For example, you could have an environment called "production" that references a `main` branch. You could also have an environment called "develop" that references a `dev` branch.
+
+While environments share a repository, they are logically isolated. Each environment can use Kafka hosted by Quix, self-hosted Kafka, or Confluent Cloud, independently of the others. For example, your "develop" environment could use Quix-hosted Kafka, and your "production" environment could use Confluent Cloud.
+
+Let’s look at an example, Project Alpha, which has a Git repository hosted on Bitbucket:
+
+| Environment | Branch | Kafka Hosting |
+| ---|---|---|
+| Production | main | Confluent Cloud |
+| Staging | staging | Quix |
+| Develop | dev | Quix |
+
+You can see that while the project is hosted in Bitbucket, each environment can use a different Kafka hosting option as required for the use case.
+
+!!! note
+
+    In previous versions of Quix, the main entity most closely corresponding to an environment was the workspace. You may still see the term workspace used in some places, such as URLs. Simply bear in mind that workspaces are now environments, and very much enhanced in their capabilities.
+
+## Protected environments
+
+When you create a branch, it is possible to make it protected. This means that you can’t change the branch directly: you can’t commit changes directly into a protected branch.
To modify a protected branch you would need to create a pull request, which would need to be reviewed, approved, and then merged in the usual way for the Git workflow.
+
+!!! tip
+
+    Note that as all code and configuration for an environment is stored in its corresponding branch, you will not be able to directly change an environment that has a protected branch.
+
+Consider a simple example where you have a protected `main` branch, and a `dev` branch. You would carry out normal development work in the `dev` branch, and then when satisfied that the changes are fully correct and tested, you could create a pull request to merge `dev` into `main`. The pull request would appear in your Git provider (Gitea if using the Quix-hosted Git solution), where it could be reviewed by other developers, approved, and then merged into `main`.
+
+If you then view the pipeline in the production environment, it is now marked as “out of sync”. This is because the view of the pipeline in the Quix environment is now different to what is in the `main` branch of the repository. If you then “sync” the environment, you can see the changes you merged from dev to main are reflected in the production pipeline.
+
+If you make changes to an unprotected environment in the Quix "view", then the environment differs from the configuration and code in the corresponding repository branch. Quix will detect this and you will again be notified that the environment is now out of sync. You can simply click `Sync environment` to have the changes in the Quix view reflected in the corresponding branch.
+
+[Watch a video on merging changes](https://www.loom.com/share/b2f2115fba014473aac072bb61609160?sid=22ddf07f-fa40-4ed8-a5ae-1a6eb0420465){target=_blank}
+
+## Syncing an environment
+
+You can think of your environment as consisting of two parts: the Quix view, and what's in the Git repository. Sometimes these become out of sync.
For example, if you make changes in the Quix view, such as adding an application to your pipeline, those changes need to be synchronized to the repository. Mostly this is done automatically for you.
+
+Sometimes you will need to sync your environments manually. For example, if you have merged a pull request from dev to production, then the contents of the repository branch corresponding to the production environment will now differ from the Quix view of the environment. This will be detected, and you are offered the option to synchronize between the corresponding branch in the Git repository, and the Quix view of the environment.
+
+You can always review the changes that will be made to your `quix.yaml` file before you perform the synchronization.
+
+The rules around manual and automatic synchronization are:
+
+1. Operations performed in the Quix portal should not cause "out of sync", as those operations are automatically saved to the Git repository. This is the case for both Quix-managed and third-party hosted Git.
+2. The exception to this is the case of YAML variables. If you create variables that are included in the `quix.yaml`, you will need to perform manual synchronization. You are prompted if this is required.
+3. If you change the `quix.yaml` in the Git repository, then you may get "out of sync". The `quix.yaml` file currently only includes topics and deployments.
+
+!!! important
+
+    Quix always notifies you if manual synchronization is required. You can then click `Sync environment` and you are shown the changes that will be made on synchronization. If you are happy with the changes that are to be made, then you can proceed with the synchronization.
+
+## Applications
+
+Within a branch you develop your applications, typically in Python. Each application represents the implementation of a source, transform, or destination.
+
+For example, you might create a new source component to retrieve data from an external service, a transform to process this data, and then perhaps a destination, which could store data in a Postgres database. You might have another destination to display the data on a Streamlit dashboard.
+
+These applications are connected together to form your pipeline. It is important to note that the pipeline is contained within a branch.
+
+For example, a pipeline on the develop branch of Project Alpha might be:
+
+| Application name | Type | Notes |
+|---|---|---|
+| Inbound data | Source | Fetch data from REST API |
+| IP to geo | Transform | Converts IP address to Geolocation |
+| Archive data | Destination | Write to Postgres relational database |
+| Dashboard | Destination | Streamlit dashboard |
+
+### Application name and path
+
+A new feature in Quix is the ability to change an application name and path. For example, when creating a new application from a Code Sample, you are prompted for the application name and path.
+
+The application name can be any suitable name; the application path is the folder in which the application is stored in the Git repository. By default the path is set to be the same as the application name, but you can choose any path name - for example, you might not want to use spaces in folder names, so you use underscores instead of spaces in the path.
+
+See documentation on how to [create an application](../platform/how-to/create-application.md).
+
+## Pipeline
+
+An entire Quix pipeline can be described by a `quix.yaml` file. This file is also used to configure resources used by the deployment.
+
+This allows Quix to quickly replicate an entire pipeline and configuration. For example, a pipeline created and tested in one branch can be quickly duplicated in another branch.
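To make the `quix.yaml` idea concrete, here is a minimal hypothetical sketch. The exact schema is not documented on this page, so the field names and values below are assumptions for illustration only, not the definitive format:

```yaml
# Hypothetical sketch of a quix.yaml pipeline definition.
# Field names and values are illustrative assumptions, not the exact schema.
metadata:
  version: 1.0

deployments:
  - name: Inbound data
    application: inbound_data      # application path (folder) in the repository
    deploymentType: Service
    resources:
      cpu: 200                     # illustrative resource values
      memory: 500
      replicas: 1

topics:
  - name: raw-data
```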
+
+### YAML variables
+
+It is also possible to use [YAML variables](../platform/how-to/yaml-variables.md) in the YAML file to configure resources differently depending on the environment.
+
+## Secret variables
+
+It is now possible to make the value assigned to an environment variable completely secret. This means it cannot be read in code (either in the Application view or in the Git repository), the UI, or in the YAML code for a pipeline. This is essential in order to keep credentials for other services, such as Amazon Web Services (AWS), secure.
+
+To create a secret variable, read the [secrets management](../platform/how-to/environment-variables.md#secrets-management) section in the environment variables documentation.
+
+## New and legacy terminology comparison
+
+The key terminology changes are shown in the following table:
+
+| New | Legacy | New meaning |
+|---|---|---|
+| Project | N/A | Git repository |
+| Environment | Workspace | Git branch plus Kafka hosting and storage options |
+| Branch | N/A | Represents a Git branch, such as main or dev |
+| Application | Project | The files for the implementation of a source, transform, or destination |
+
+## Legacy workspaces
+
+To protect your investment in Quix, legacy workspaces will be supported for some time to come.
+
+!!! important
+
+    Quix **strongly** advises that your event streaming solutions use the new project-based approach, with environments aligned with your development processes. Simply click `+ New project` to build your first project and environment.
+
+If you already have a legacy workspace, or you wish to create one for some reason, such as to follow some old content, then this is still supported. You can create a legacy workspace by clicking `+ New workspace`, as shown in the following screenshot:
+
+![Legacy Workspace](../platform/images/legacy-workspaces.png)
+
+!!! tip
+
+    Legacy workspaces now map to [environments](#environments).
In the dashboard you can see that your legacy workspaces are simply counted as environments. + +## Next steps + +* [What is Quix?](../platform/what-is-quix.md) +* [Check the Quix Glossary](../platform/glossary.md) +* [Dive into the Quickstart](../platform/quickstart.md) +* [Create a project](../platform/how-to/create-project.md) +* [Project structure](../platform/how-to/project-structure.md) +* [YAML variables](../platform/how-to/yaml-variables.md) diff --git a/docs/platform/glossary.md b/docs/platform/glossary.md index 541f0e36..0a306bef 100644 --- a/docs/platform/glossary.md +++ b/docs/platform/glossary.md @@ -6,29 +6,49 @@ The following is a list of terms useful when working with Quix and streaming dat In addition to the Quix Streams client library, there are several APIs that you can use with Quix. See the [API landing page](../apis/index.md). +## Application + +A set of code in Quix Platform that can be edited, compiled, run, and deployed as one Docker image (configured using a `dockerfile`). + +Applications in Quix Platform exist inside the Git branch associated with an [environment](#environment), and are therefore fully version controlled. You can also tag your code as an easy way to manage deployments. + +Read more about [applications](../platform/changes.md#applications). + ## Binary data Quix also supports any binary blob data. -With this data you can stream, process and store any type of audio, image, video or lidar data, or anything that isn’t supported with time-series, event, or metadata types. +With this data you can stream, process and store any type of audio, image, video or lidar data, or anything that isn’t supported with time series, event, or metadata types. ## Code Samples Quix Platform contains a large number of [open source](https://github.com/quixio/quix-samples) Code Samples. You can use these to quickly build out your stream processing pipeline. 
Generally the code samples are divided into three main categories: source, transform, destination. You can access the Code Samples from within the Quix Portal by using the navigation menu as shown here:

-![Code Samples](./images/code-samples.png){height=50%}
+![Code Samples](./images/code-samples.png){height=30%}

## Connectors

There are [many ways](../platform/ingest-data.md) to get data into Quix Platform. One option is to use the many connectors already provided by Quix. These can be viewed in Quix Platform by clicking Code Samples and then selecting Source and Destination filters. Alternatively, you can see a useful page in our documentation that lists the [available connectors](../platform/connectors/index.md).

-## Data Catalogue API
+## Consumer
+
+Any project, container or application that [subscribes](https://quix.io/docs/client-library/subscribe.html) to data in a topic.
+
+## Consumer group
+
+Consumer [replicas](#replicas) can be grouped into a consumer group. When consumers are grouped into a consumer group, processing of a topic's messages is distributed over all consumers, providing horizontal scaling.
+
+If the consumers (replicas) are not in a consumer group, then all messages are processed by all replicas.
+
+## Data ingestion
+
+Data ingestion is the means by which you get your data into Quix.
-An [HTTP API](../apis/data-catalogue-api/intro.md) used to query historical data in the Data Catalogue. Most commonly used for dashboards, analytics and training ML models. Also useful to call historical data when running an ML model, or to call historical data from an external application.
+Read more about [data ingestion](../platform/ingest-data.md).
-## Data Types
+## Data types

-Quix supports time-series data, events, metadata, and blobs with the following data types:
+Quix supports time series data, events, metadata, and blobs with the following data types:

* Numeric (double precision)
* String (UTF-8)
@@ -38,7 +58,35 @@ Read more about [data types](../client-library-intro.md#multiple-data-types).

## Deployment

-An instance of a project running in the serverless environment. When you deploy your project, you can specify a number of parameters such as allocated RAM, CPU count, number of replicas, and public URL. You can also specify whether you want it to run as a [job](#job) or a [service](#service), depending on your use case. A job runs only once, and service runs continuously.
+An instance of an application running in the serverless environment. When you deploy your application, you can specify a number of parameters such as allocated [RAM](#ram), [CPU count (cores)](#cpu-cores), number of [replicas](#replicas), and public URL. You can also specify whether you want it to run as a [job](#job) or a [service](#service), depending on your use case. A job runs only once, and a service runs continuously.
+
+The following screenshot shows the `New Deployment` dialog:
+
+![New Deployment](../platform/images/deployment.png)
+
+### CPU (cores)
+
+The number of CPU cores allocated to a deployment. The range is 0.1 to 16 cores.
+
+### RAM
+
+Random-Access Memory (RAM) allocated to a deployment. The RAM is specified in GB. The range is 0.1 to 32 GB.
+
+### Replicas
+
+The number of instances of the deployment (service). If the replicas are part of a [consumer group](#consumer-group), then each message in the topic is processed once by only one replica. If the replicas are not part of a consumer group, then all messages are processed by all replicas.
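The replica behaviour described above can be illustrated with a small sketch in plain JavaScript (this is not the Quix or Kafka API, just the distribution logic): in a consumer group each message is handled by exactly one replica, while independent replicas each handle every message.

```javascript
// Illustrative only (not the Quix API): how messages are shared out.
const messages = ["m1", "m2", "m3", "m4"];
const replicas = ["replica-0", "replica-1"];

// In a consumer group: each message is processed by exactly one replica
// (simple round-robin here, purely for illustration).
const grouped = { "replica-0": [], "replica-1": [] };
messages.forEach((msg, i) => grouped[replicas[i % replicas.length]].push(msg));
console.log(grouped); // { 'replica-0': [ 'm1', 'm3' ], 'replica-1': [ 'm2', 'm4' ] }

// Without a consumer group: every replica processes every message.
const ungrouped = Object.fromEntries(replicas.map(r => [r, [...messages]]));
console.log(ungrouped); // both replicas see all four messages
```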
+
+## Destination
+
+A type of [connector](../platform/connectors/index.md) where data is consumed from a Quix topic by an output (destination) such as a database or dashboard.
+
+## Environment
+
+An environment is an entity that encapsulates a branch in your [project](#project) that contains the code for your [applications](#application).
+
+Each environment can use Kafka hosted by Quix, self-hosted Kafka, or Confluent Cloud.
+
+Read more about [environments](../platform/changes.md#environments).

## Events

@@ -50,7 +98,7 @@ For example:

* Game started, match made, kill made, player won the race, lap completed, track limits exceeded, task completed.
* Takeoff, landing, missile launched, fuel low, autopilot engaged, pilot ejected.

-Events are typically things that occur less frequently. They are streamed into the same topics as their related time-series data, and act to provide some context to what is happening. For example, start and stop events typically mark the beginning and end of a data stream.
+Events are typically things that occur less frequently. They are streamed into the same topics as their related time series data, and act to provide some context to what is happening. For example, start and stop events typically mark the beginning and end of a data stream.

Read more about [event data](../client-library/publish.md#eventdata-format).

@@ -74,19 +122,65 @@ Metadata is key to data governance and becomes very useful in down-stream data p

Read more about [Metadata](../client-library/publish.md#parameter-definitions).

-## Online IDE (Quix Platform)
+## Microservice

-Quix provides an online Integrated Development Environment (IDE) for Python and C# projects. When you open any project, you will see the **Run** button, and a console during runtime, in addition to the IntelliSense.
+An instance of a Quix Application that has been deployed and is running.
When applications are deployed in the processing pipeline, they run as microservices, and are often referred to simply as services.

-Sign up for a [free account](https://portal.platform.quix.ai/self-sign-up).
+## Model
+
+A machine learning model "program" that comprises both data and a procedure for using the data to make a prediction. In the case of neural networks, a model has typically been trained on a dataset. A standard program or script is not a model - if deployed, this is an application. For example, the sorted list output of a sorting algorithm is not really a model. Neither is an alert script that sends a message when the incoming data goes over a certain threshold. Note: during development a model can be deployed as a job. For example, when training the model.
+
+## Monorepo
+
+A monorepo is a single repository that contains all code and configuration for a complete pipeline. The monorepo contains all revision history and all branches for all services and their associated code and configurations.
+
+In Quix, the monorepo is known as a [project](#project).
+
+## Online IDE (Quix Portal)
+
+See [Quix Portal](#quix-portal).
+
+## Partitions
+
+When creating a new topic in Quix, you can specify the number of topic partitions. The default is two partitions. You can add more partitions later, but you can’t remove them. Each partition is an independent queue that preserves the order of messages.
+
+[Quix Streams](#quix-streams) restricts all messages inside a stream to the same partition. This means that inside one stream, a consumer can rely on the order of messages.
+
+With the Quix Kafka broker, partitions are spread across the Kafka cluster, and over different Kafka nodes, for improved scalability, performance and fault tolerance.
+
+The benefits of partitions in topics can be summarized as:
+
+* *Scalability*: Kafka can handle large volumes of data by distributing it across multiple partitions, which can be hosted on different Kafka brokers or servers.
+* *Parallelism*: Partitions allow multiple consumers to work in parallel, reading different partitions simultaneously, which improves the overall throughput and processing speed.
+* *Durability*: Kafka ensures data durability by replicating each partition to multiple brokers, ensuring that data is not lost in case of broker failures.
+
+## Pipeline
+
+Applications implementing a source, transform, or destination are connected together using [topics](#topic) into a pipeline. A pipeline provides the complete stream processing solution for your use case. The pipeline is visually represented in an environment.

## Portal API

-An [HTTP API](../apis/portal-api.md) used to interact with most portal-related features such as creation of [workspaces](#workspace), [users](#workspace), and [deployments](#deployment).
+An [HTTP API](../apis/portal-api.md) used to interact with most portal-related features such as creation of [environments](#environment), users, and [deployments](#deployment).
+
+## Producer
+
+Any project, container or application that [publishes](https://quix.io/docs/client-library/publish.html) data to a topic.

## Project

-A set of code in Quix Platform that can be edited, compiled, run, and deployed as one Docker image. Projects in Quix Platform are fully version controlled. You can also tag your code as an easy way to manage releases of your project.
+A project is an entity that corresponds to a Git repository. That Git repository can be hosted for you on Quix, or you can use another provider such as GitHub or Bitbucket to host the repository. 
+
+A project contains one or more [environments](#environment), so typically you create an environment as part of the project creation workflow, and then create additional environments as required.
+
+## Query API
+
+The [Query API](../apis/query-api/intro.md) is used to query persisted data. Most commonly used for dashboards, analytics and training ML models. Also useful to call historical data when running an ML model, or to call historical data from an external application. This API is primarily used for testing and debugging purposes.
+
+## Quix Portal
+
+Quix provides an online Integrated Development Environment (IDE) for Python and C# projects. When you open any project, you will see the **Run** button, and a console during runtime, in addition to the IntelliSense.
+
+Sign up for a [free account](https://portal.platform.quix.ai/self-sign-up).

## Quix Streams

@@ -96,9 +190,13 @@ A set of code in Quix Platform that can be edited, compiled, run, and deployed a

Any application code that runs continuously in the serverless environment. For example, a connector, a function, a backend operation, or an integration to a third-party service like Twilio.

+## Source
+
+A type of [connector](../platform/connectors/index.md) where data is published to a Quix topic from an input (source), such as a web service or command line program.
+
## Stream

-A stream is a collection of data (time-series data, events, binary blobs and metadata) that belong to a single session of a single source. For example:
+A stream is a collection of data (time series data, events, binary blobs and metadata) that belong to a single session of a single source. For example:

* One journey for one car
* One game session for one player

@@ -106,19 +204,27 @@ A stream is a collection of data (time-series data, events, binary blobs and met

Read more about [streams](../client-library/features/streaming-context.md). 
-## Time-series data
+## Streaming Reader API
+
+A [WebSockets API](../apis/streaming-reader-api/intro.md) used to stream any data directly from a topic to an external application. Most commonly used to read the results of a model or service to a real-time web application. Your application **reads** data from Quix Platform.
+
+## Streaming Writer API
+
+An [HTTP API](../apis/streaming-writer-api/intro.md) used to send telemetry data from any source to a topic in the Quix platform. It should be used when it is not possible to use [Quix Streams](../client-library-intro.md). Your application **writes** data into Quix Platform.
+
+## Time series data

Time series data consists of values that change over time. Quix Streams supports numeric and string values.

For example:

-* Crank revolution and oil temperature are two engine time-series variables that define the engine system.
-* Player position in X, Y and Z are three time-series variables that define the player location in a game.
-* Altitude, GPS LAT, GPS LONG and Speed are four time-series variables that define the location and velocity of a plane in the sky.
+* Crank revolution and oil temperature are two engine time series variables that define the engine system.
+* Player position in X, Y and Z are three time series variables that define the player location in a game.
+* Altitude, GPS LAT, GPS LONG and Speed are four time series variables that define the location and velocity of a plane in the sky.

Referring back to topics as a grouping context: Quix recommends that each of these examples would be grouped into a single topic to maintain context.

-Read more about [time-series data](../client-library/publish.md#timeseriesdata-format).
+Read more about [time series data](../client-library/publish.md#timeseriesdata-format).

## Timestamp

@@ -148,25 +254,3 @@ Topics are key for scalability and good data governance. 
Use them to organize yo * Maintaining separate topics for raw, clean, or processed data Read more about [topics](../client-library/publish.md#create-a-topic-producer). - -## Streaming Reader API - -A [WebSockets API](../apis/streaming-reader-api/intro.md) used to stream any data directly from a topic to an external application. Most commonly used to read the results of a model or service to a real-time web application. Your application **reads** data from Quix Platform. - -## Streaming Writer API - -An [HTTP API](../apis/streaming-writer-api/intro.md) used to send telemetry data from any source to a topic in the Quix platform. It should be used when it is not possible to use [Quix Streams](../client-library-intro.md). Your application **writes** data into Quix Platform. - -## Workspace - -In Quix Platform, a workspace is an instance of a complete streaming infrastructure isolated from the rest of your Organization in terms of performance and security. It contains its own dedicated API instances and Quix internal services. - -You can imagine a workspace as the streaming infrastructure of your company or your team. As each workspace has its own allocated infrastructure, development work in one workspace will not affect the performance and reliability of another workspace. - -You can also have different workspaces to separate different stages of your development process like Development, Staging, and Production. - -Part of a typical workspace is shown here: - -![Workspace](./images/workspace.png) - -Workspaces are collaborative. Multiple users, including developers, data scientitsts, and machine learning engineers, can all work together in the same workspace. You can invite other users into a workspace you created. 
diff --git a/docs/platform/how-to/create-application.md b/docs/platform/how-to/create-application.md
new file mode 100644
index 00000000..5b24e5e8
--- /dev/null
+++ b/docs/platform/how-to/create-application.md
@@ -0,0 +1,25 @@
+# Create an application
+
+There are various ways to create an application:
+
+* From the `Pipeline` view, select `+ Add new`, and then select the appropriate type - for example, one of: source, external source, transformation, external destination, destination.
+* From the `Applications` view, click `+ New application`.
+* From the `Code Samples` view, click a suitable code sample as a starting point, for example `Starter transformation`.
+
+When you create your application, you are prompted for an application name and path:
+
+![Application name and path](../images/how-to/application/save-code-sample.png)
+
+The application name can be any suitable name; the application path is the folder in which the application is stored in the Git repository.
+
+By default the path is set to be the same as the application name, but you can choose any path name - for example, you might not want to use spaces in folder names, so you use underscores instead of spaces in the path.
+
+By way of example, if you created an application with the name `My Application Name`, and left the path as the default value, your Git repository would look similar to the following:
+
+![Application path](../images/how-to/application/application-path.png)
+
+Note that the path is the same as the application name.
+
+!!! tip
+
+    When you create an application from a code sample, the comment added to the repository displays the code sample that was used as the starting point. In the previous example, the repository comment is "Created from Starter transformation".
\ No newline at end of file
diff --git a/docs/platform/how-to/create-project.md b/docs/platform/how-to/create-project.md
new file mode 100644
index 00000000..f97df245
--- /dev/null
+++ b/docs/platform/how-to/create-project.md
@@ -0,0 +1,109 @@
+# How to create a project
+
+This documentation describes how to create a new project, and populate it with two environments: `Production` and `Develop`. The `Production` environment is protected. Development work is done in the `Develop` environment, and then reflected in the `Production` environment through a merge request.
+
+!!! note
+
+    You can create as many environments in a project as you need. You can mark them as protected, and name them as needed, to align with your own development processes. This how-to simply shows one example project.
+
+## Creating a project
+
+To do anything useful with Quix, you'll need at least one project, and one environment. You can think of a project as corresponding to a Git repository, and an environment as corresponding to a Git branch within that repository.
+
+1. [Sign up](https://portal.platform.quix.ai/self-sign-up){target=_blank} and log into Quix.
+
+2. Click on `+ New project`.
+
+    ![+ New project](../images/how-to/create-project/create-project.png)
+
+3. Give your project a name, such as `My project`.
+
+4. Select either Quix-hosted Git, or a third-party Git provider. The third-party provider must support SSH keys.
+
+5. Click `Create project`.
+
+You are taken automatically into the `Environment settings` wizard to create your first environment.
+
+## Creating the `Production` environment
+
+The environment corresponds with a branch in your project. Typically you'll have multiple environments. As well as corresponding to a branch, an environment contains your selected Kafka hosting options, and also the storage requirements.
+
+1. Enter the environment name, `Production`. Note: it can be named anything that suits your own development processes.
+
+2. 
Select the repository branch. By default this is `main`. Leave this as the default value.
+
+3. Check the `This branch is protected` checkbox. This prevents direct modifications to your production environment.
+
+4. Click `Continue`.
+
+5. You can now select your broker settings for the environment. The options are Quix-hosted Kafka, Self-hosted Kafka, and Confluent Cloud. Select `Quix Broker` and then click `Continue`.
+
+6. Select the `Standard` storage option, and then click `Create environment`.
+
+## Creating the `Develop` environment
+
+You'll now create an environment in which you can do your development work (remember, the production environment is protected in this example, so you can't change it directly).
+
+1. There are various ways to add an environment. One way is to click the kebab menu next to the panel that displays your environments:
+
+    ![+ New project](../images/how-to/create-project/add-environment.png)
+
+2. Now click `+ New environment`.
+
+3. Enter the environment name, `Develop`. Note: it can be named anything that suits your own development processes.
+
+4. Select the repository branch. Activate the repository branch dropdown menu, and click `+ New branch`.
+
+5. In the `New branch` dialog, enter `dev` as the branch name. In this case you want to branch from `main`. Note: again, values entered here can be anything that suits your development process, for example, you may create branches from branches if required.
+
+6. Click `Create branch`.
+
+7. As you are going to do development work here, leave the `This branch is protected` checkbox clear.
+
+8. Click `Continue`.
+
+9. You can now select your broker settings for the environment. The options are Quix-hosted Kafka, Self-hosted Kafka, and Confluent Cloud. Select `Quix Broker` and then click `Continue`.
+
+10. Select the `Standard` storage option, and then click `Create environment`.
+
+You have now created your `Develop` environment. 
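If you host the repository yourself, the branch creation performed by the wizard corresponds roughly to the following Git commands. This is an illustrative sketch only, run against a throwaway repository; the wizard does the equivalent for you:

``` shell
# Illustrative only: create a dev branch from main, as the environment
# wizard does. Uses a temporary, throwaway repository.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git checkout -q -b main
git -c user.email=docs@example.com -c user.name=docs \
    commit -q --allow-empty -m "initial commit"
git branch dev main          # create the new branch from main
git checkout -q dev          # switch to it for development work
git rev-parse --abbrev-ref HEAD   # prints: dev
```

Development commits then land on `dev`, leaving `main` (the protected `Production` environment's branch) untouched.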
+
+## Performing a merge request
+
+Once you have carried out development work, you will want to have those changes reflected in production. As your `Production` environment is protected, you have to do this by creating a merge request.
+
+1. Click the kebab menu next to the panel that displays your environments.
+
+2. Select `Merge request`.
+
+3. Select a source and target environment. In this example the source is `Develop` and the target is `Production`.
+
+4. Click `Create pull request`.
+
+    At this point, you will be taken into your Git provider where you can review the merge commit. Use your usual development processes to review and approve the merge.
+
+    !!! tip
+
+        If using the Quix-hosted Git provider and you are asked to log into Gitea, you need to obtain your Git credentials. To do this click on your profile image in Quix, and then select `Manage Git credentials`. Generate a password, and use the email and generated password to log into Gitea.
+
+## Syncing your environment
+
+When you select your `Production` environment, you will see that it is now flagged as `out of sync` with the Git repository. You now need to synchronize the environment, so that the changes merged from `Develop` are reflected in the Quix view of the environment. To do this:
+
+1. In the top right corner click the blue `Sync environment` button. The `Sync environment` dialog is displayed.
+
+2. Review the changes that will be made to the `quix.yaml` file. Note: the `quix.yaml` file is an important file that defines the entire pipeline in your environment. Your pipeline view in Quix is built from this file.
+
+3. Click `Sync environment`. You also have the option of editing the YAML or exiting the sync process.
+
+4. Once synchronized, click the `Go to pipeline` button.
+
+The pipeline in `Production` now reflects the work that was done in the `Develop` environment. 
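At the Git level, approving the merge request is roughly equivalent to merging `dev` into `main`. The following sketch is illustrative only, using a throwaway repository; in practice the merge happens in your Git provider:

``` shell
# Illustrative only: the merge request brings dev's work into main.
# Uses a temporary, throwaway repository.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git checkout -q -b main
gitc() { git -c user.email=docs@example.com -c user.name=docs "$@"; }
gitc commit -q --allow-empty -m "initial commit"
git checkout -q -b dev main
echo "transformation service" > service.txt   # stand-in for development work
git add service.txt
gitc commit -q -m "add service"
git checkout -q main
gitc merge -q --no-edit dev   # approve the merge request
ls                            # service.txt is now on main
```

After the merge, `main` contains the development work, which is why the `Production` environment is then flagged as out of sync until you synchronize it.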
+ +## Next steps + +Here are some suggested next steps to help you continue your Quix learning journey: + +* Read about Quix projects, environments, and other terminology in the [Quix glossary](../glossary.md). +* Read an overview of the most [recent significant changes](../changes.md). +* Go on the [Quix Tour](../quixtour/overview.md). diff --git a/docs/platform/how-to/environment-variables.md b/docs/platform/how-to/environment-variables.md index 744ed63d..3e659cca 100644 --- a/docs/platform/how-to/environment-variables.md +++ b/docs/platform/how-to/environment-variables.md @@ -22,3 +22,39 @@ Once the variable has been created, you can then access the variable in your cod api_secret = os.environ["API_SECRET"] print(api_secret) ``` + +## Secrets management + +Sometimes you connect the [Quix Connectors](../connectors/index.md), or services you have created, to other services, such as AWS, Vonage, Twilio, Azure and so on. You usually need to provide credentials to access these third-party APIs and services, using environment variables. + +You do not want to expose these credentials through the use of environment variables in your YAML code, service code, Git repository, or even the UI, which may have shared access. Quix provides a feature to enable your credentials to be stored securely - secrets management. + +Using secrets management, you can create secret variables whose value is hidden in code (Git repository and Application code view), UI, and YAML code. The value is encrypted. This is ideal for environment variables used to provide credentials to third-party services and APIs in a secure manner. + +## Create a secret variable + +To create a secret variable: + +1. In the code view for your application, select `Secrets management`: + + ![secrets management](../images/how-to/env-variables/secrets-management.png) + +2. Click `+ New variable`. + +3. 
Give your secret variable a name, a default value, and a value for the current environment, or environments if you have more than one for this project.
+
+4. Click `Save changes` to save the secret variable.
+
+## To use a secret variable
+
+You can't use the secret variable directly; you need to create an environment variable that uses it.
+
+To create an environment variable that uses a secret variable:
+
+1. In the `Add Variable` dialog, select the `secret` icon.
+
+2. Give the environment variable a name.
+
+3. Select the corresponding secret variable from the dropdown list.
+
+Now, when the environment variable appears in code or the UI, its value is the secret variable you assigned, but you cannot see the value of the secret variable, only its name.
diff --git a/docs/platform/how-to/get-environment-id.md b/docs/platform/how-to/get-environment-id.md
new file mode 100644
index 00000000..18678a01
--- /dev/null
+++ b/docs/platform/how-to/get-environment-id.md
@@ -0,0 +1,23 @@
+# Get environment ID
+
+Occasionally, you’ll need to obtain an ID based on a specific environment. For example, endpoints for the [Query API](../../apis/query-api/intro.md) use a domain with the following pattern:
+
+    https://telemetry-query-${environment-id}.platform.quix.ai/
+
+The environment ID is a combination of your organization and environment names, converted to URL-friendly values.
+
+To obtain your environment ID:
+
+1. Go to the [Portal home](https://portal.platform.quix.ai/){target=_blank}.
+
+2. Locate the environment you’re interested in and open it.
+
+3. At this point, take note of the URL. It will be in the form:
+
+    https://portal.platform.quix.ai/home?workspace={environment-id}
+
+Copy the value for `environment-id` and use it wherever you need an environment ID.
+
+!!! note
+
+    The `workspace` parameter in the URL `https://portal.platform.quix.ai/home?workspace={environment-id}` is there for legacy reasons, and does in fact indicate an environment. 
\ No newline at end of file diff --git a/docs/platform/how-to/get-workspace-id.md b/docs/platform/how-to/get-workspace-id.md deleted file mode 100644 index 2e4678a9..00000000 --- a/docs/platform/how-to/get-workspace-id.md +++ /dev/null @@ -1,23 +0,0 @@ -# Get Workspace ID - -Occasionally, you’ll need to obtain an ID based on a specific workspace. -For example, endpoints for the [Data Catalogue API](../../apis/data-catalogue-api/intro.md) use a domain with the -following pattern: - - https://telemetry-query-${workspace-id}.platform.quix.ai/ - -The workspace ID is a combination of your organization and workspace -names, converted to URL friendly values. The easiest way to get hold of -it is as follows: - -1. Go to the [Portal home](https://portal.platform.quix.ai/){target=_blank}. - -2. Locate the workspace you’re interested in and click **OPEN**. - -3. At this point, take note of the URL. It should be in the form: - - - - https://portal.platform.quix.ai/home?workspace=**{workspace-id}** - -Copy that value and use it wherever you need a workspace ID. diff --git a/docs/platform/how-to/ingest-csv.md b/docs/platform/how-to/ingest-csv.md index 3b9b8582..fcbe4c3f 100644 --- a/docs/platform/how-to/ingest-csv.md +++ b/docs/platform/how-to/ingest-csv.md @@ -1,6 +1,6 @@ # How to ingest data from a CSV file -You may need to load data from a CSV file into a service, as CSV is a very common file format, especially in data science. One possibility is to upload the CSV to be processed into your Quix project, and read the data from there. Another option is to read the CSV file on some other system (perhaps your laptop) and push that data into Quix using the Quix Streams client library. +You may need to load data from a CSV file into a service, as CSV is a very common file format, especially in data science. One possibility is to upload the CSV to be processed into your Quix application, and read the data from there. 
Another option is to read the CSV file on some other system (perhaps your laptop) and push that data into Quix using the Quix Streams client library.

## Using pandas

diff --git a/docs/platform/how-to/jupyter-nb.md b/docs/platform/how-to/jupyter-nb.md
index 8317aee8..1eb71aae 100644
--- a/docs/platform/how-to/jupyter-nb.md
+++ b/docs/platform/how-to/jupyter-nb.md
@@ -1,70 +1,61 @@
-# Use Jupyter notebooks
+# Use Jupyter Notebook

-In this article, you will learn how to use Jupyter Notebook to analyse
-data persisted in the Quix platform
+In this documentation, you learn how to use Jupyter Notebook to analyze data persisted in the Quix platform.

## Why this is important

-Although Quix is a realtime platform, to build realtime in-memory models
-and data processing pipelines, we need to understand data first. To do
-that, Quix offers a Data catalogue that makes data discovery and
-analysis so much easier.
+Although Quix is a real-time platform, to build real-time in-memory models and data processing pipelines, you need to understand data first. To help with that, Quix offers the option to persist data in topics. This data can be accessed using the [Query API](../../apis/query-api/intro.md). This helps make data discovery and analysis easier.

-## Preparation
+## Prerequisites

-You`ll need some data stored in the Quix platform. You can use any of
-our Data Sources available in the Code Samples, or just follow the
-onboarding process when you [sign-up to
-Quix](https://portal.platform.quix.ai/self-sign-up?xlink=docs){target=_blank}.
+You'll need some data stored in the Quix platform. You can use any of the Quix [data sources](../connectors/index.md) available in the Quix Code Samples.

-You will also need Python 3 environment set up in your local
-environment.
+You can also follow the onboarding process when you [sign up to Quix](https://portal.platform.quix.ai/self-sign-up?xlink=docs){target=_blank}. This process helps you create a source. 
-Install Jupyter notebooks as directed [here](https://docs.jupyter.org/en/latest/install/notebook-classic.html){target=_blank}.
+You also need a Python 3 environment set up on your local machine.
+
+Install Jupyter Notebook as directed [here](https://docs.jupyter.org/en/latest/install/notebook-classic.html){target=_blank}.

### Create a new notebook file

-You can now run jupyter from the Windows start menu or with the following command in an Anaconda Powershell Prompt, or the equivalent for your operating system.
+You can now run Jupyter from the Windows start menu, or with the following command in an Anaconda Powershell Prompt, or the equivalent for your operating system:

``` shell
jupyter notebook
```

-Then create a new Python3 notebook
+Then create a new Python 3 notebook:

![how-to/jupyter-wb/new-file.png](../../platform/images/how-to/jupyter-wb/new-file.png)

-## Connecting Jupyter notebook to Data Catalogue
+## Connecting Jupyter Notebook to persisted data

-The Quix web application has a python code generator to help you connect your Jupyter notebook with Quix.
+The Quix web application has a Python code generator to help you connect your Jupyter notebook with Quix.

-You need to be logged into the platform for this:
+You need to be logged into the platform for this. To import persisted data:

-1. Select workspace (you likely only have one)
+1. Select an environment.

-2. Go to the Data Explorer
+2. In the main left-hand navigation, click `Data explorer`.

-3. Add a query to visualize some data. Select parameters, events, aggregation and time range
+3. Add a query to visualize some data. Select parameters, events, aggregation and time range.

-4. Select the **Code** tab
+4. Select the **Code** tab.

-5. Ensure **Python** is the selected language
+5. 
Ensure **Python** is the selected language:

-![how-to/jupyter-wb/connect-python.png](../../platform/images/how-to/jupyter-wb/connect-python.png)
+    ![how-to/jupyter-wb/connect-python.png](../../platform/images/how-to/jupyter-wb/connect-python.png)

-Copy the Python code to your Jupyter notebook and run.
+6. Copy the Python code to your Jupyter notebook and click `Run`:

-![how-to/jupyter-wb/jupyter-results.png](../../platform/images/how-to/jupyter-wb/jupyter-results.png)
+    ![how-to/jupyter-wb/jupyter-results.png](../../platform/images/how-to/jupyter-wb/jupyter-results.png)

!!! tip

-    If you want to use this generated code for a long time, replace the temporary token with **PAT token**. See [authenticate your requests](../../apis/data-catalogue-api/authenticate.md) how to do that.
+    If you want to use this generated code for a long time, replace the temporary token with a **PAT token**. See [authenticate your requests](../../apis/query-api/authenticate.md) for details on how to do that.

## Too much data

-If you find that the query results in more data than can be handled by
-Jupyter Notebooks try using the aggregation feature to reduce the amount
-of data returned.
+If you find that the query results in more data than can be handled by Jupyter Notebook, then try using the aggregation feature to reduce the amount of data returned.

-For more info on aggregation check out this [short
-video](https://youtu.be/fnEPnIunyxA).
+For more info on aggregation, you can watch this [short video](https://youtu.be/fnEPnIunyxA).
diff --git a/docs/platform/how-to/project-structure.md b/docs/platform/how-to/project-structure.md
new file mode 100644
index 00000000..74a25eef
--- /dev/null
+++ b/docs/platform/how-to/project-structure.md
@@ -0,0 +1,104 @@
+# Explore project structure
+
+This documentation looks at the file structure of a typical project in Quix, as hosted in its Git repository.
+
+A project in Quix maps to a Git repository. 
Within a project you can create multiple environments, and these correspond to branches in the Git repository. Within a branch (environment) there are some root files. One example of this is `quix.yaml`, which defines the pipeline, and then each application in the pipeline has its own folder, containing code and configuration, such as the `main.py` and `app.yaml` files.
+
+## Pipeline
+
+This section shows an example pipeline consisting of one application, `Demo Data`, as illustrated by the following screenshot:
+
+![Pipeline](../images/how-to/project-structure/pipeline.png)
+
+Looking at the project stored in Git, it would have the following structure:
+
+![Project structure](../images/how-to/project-structure/project-structure.png)
+
+Note the `quix.yaml` file that defines the entire pipeline. There is also a folder for the application, `Demo Data`.
+
+The complete `quix.yaml` file is shown here:
+
+``` yaml
+# Quix Project Descriptor
+# This file describes the data pipeline and configuration of resources of a Quix Project.
+
+metadata:
+  version: 1.0
+
+# This section describes the Deployments of the data pipeline
+deployments:
+  - name: Demo Data
+    application: Demo Data
+    deploymentType: Job
+    version: ada522b5199fc9667505b4dd19980995804ca764
+    resources:
+      cpu: 200
+      memory: 200
+      replicas: 1
+    libraryItemId: 7abe02e1-1e75-4783-864c-46b930b42afe
+    variables:
+      - name: Topic
+        inputType: OutputTopic
+        description: Name of the output topic to write into
+        required: true
+        value: f1-data
+
+# This section describes the Topics of the data pipeline
+topics:
+  - name: f1-data
+    persisted: false
+    configuration:
+      partitions: 2
+      replicationFactor: 2
+      retentionInMinutes: -1
+      retentionInBytes: 262144000
+```
+
+This defines one or more deployments, and their allocated resources, as well as other information such as the code commit version to use, in this case `ada522b`. The topics in the pipeline are also defined here. 
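To make the descriptor's structure concrete, here is a small stdlib-only sketch that pulls the deployment and topic names out of a `quix.yaml` like the one above. A real tool would use a proper YAML parser such as PyYAML; the line scan here is purely illustrative:

``` python
# Illustrative only: extract deployment and topic names from a quix.yaml
# descriptor. A real tool would use a YAML parser (e.g. PyYAML).
QUIX_YAML = """\
metadata:
  version: 1.0
deployments:
  - name: Demo Data
    application: Demo Data
topics:
  - name: f1-data
    persisted: false
"""

def names_in_section(text: str, section: str) -> list[str]:
    names, in_section = [], False
    for line in text.splitlines():
        if not line.startswith(" "):          # a top-level key ends the section
            in_section = line.rstrip(":") == section
        elif in_section and line.strip().startswith("- name:"):
            names.append(line.split("- name:", 1)[1].strip())
    return names

print(names_in_section(QUIX_YAML, "deployments"))  # prints: ['Demo Data']
print(names_in_section(QUIX_YAML, "topics"))       # prints: ['f1-data']
```

Because the pipeline view is built from this one file, tooling that reads or validates it only needs to understand the `deployments` and `topics` sections shown here.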
+
+## Application
+
+Opening the `Demo Data` folder in the Git repository, you see the structure of the application (one service in the pipeline) itself:
+
+![Application structure](../images/how-to/project-structure/app-structure.png)
+
+The notable file here is the `app.yaml` file that defines important aspects of the application. The full `app.yaml` for this application is shown here:
+
+``` yaml
+name: Demo Data
+language: python
+variables:
+  - name: Topic
+    inputType: OutputTopic
+    description: Name of the output topic to write into
+    defaultValue: f1-data
+    required: true
+dockerfile: build/dockerfile
+runEntryPoint: main.py
+defaultFile: main.py
+```
+
+This provides a reference to the Dockerfile that is to be used to build the application before it is deployed. This is located in the `build` directory, and the full Dockerfile for this application is shown here:
+
+``` dockerfile
+FROM python:3.11.1-slim-buster
+
+ENV DEBIAN_FRONTEND="noninteractive"
+ENV PYTHONUNBUFFERED=1
+ENV PYTHONIOENCODING=UTF-8
+
+WORKDIR /app
+COPY --from=git /project .
+RUN find | grep requirements.txt | xargs -I '{}' python3 -m pip install -i http://pip-cache.pip-cache.svc.cluster.local/simple --trusted-host pip-cache.pip-cache.svc.cluster.local -r '{}' --extra-index-url https://pypi.org/simple --extra-index-url https://pkgs.dev.azure.com/quix-analytics/53f7fe95-59fe-4307-b479-2473b96de6d1/_packaging/public/pypi/simple/
+ENTRYPOINT ["python3", "main.py"]
+```
+
+This defines the build environment used to create the container image that will run in Kubernetes.
+
+As well as the `app.yaml`, the application folder also contains the actual code for the service, in this case in `main.py` - the complete Python code for the application.
+
+There is also a `requirements.txt` file - this is the standard Python file that lists modules to be installed. 
In this case there is only one requirement that is "pip installed" as part of the build process, `quixstreams==0.5.4`, the [Quix Streams client library](../../client-library-intro.md). + +Any data files required by the application can also be located in the application's folder. In this example there is a `demo-data.csv` file that is loaded by the application code. + +While this documentation has explored a simple project consisting of a pipeline with one application (service), pipelines with multiple applications have a similar structure, with a `quix.yaml` defining the pipeline, and with each application having its own folder, containing its application-specific files and an `app.yaml` file. diff --git a/docs/platform/how-to/replay.md b/docs/platform/how-to/replay.md index 89c2d865..d2937491 100644 --- a/docs/platform/how-to/replay.md +++ b/docs/platform/how-to/replay.md @@ -18,11 +18,11 @@ Once created, a replay service looks like any other service in your pipeline, an ## To replay persisted data into a topic -You can only replay persisted data, so you need to persist a topic first. In your Quix workspace, select `Topics` in the left-hand sidebar to list all your topics. Ensure the topic you want to persist has persistence switched on, as shown in the following screenshot: +You can only replay persisted data, so you need to persist a topic first. In your Quix environment, select `Topics` in the left-hand sidebar to list all your topics. 
Ensure the topic you want to persist has persistence switched on, as shown in the following screenshot:

![Enable persistence](../images/how-to/replay/replay-add-persist-topic.png)

-Click `Pipeline` in your Quix workspace, and then to create a new replay, click `Add new` in the top right corner:
+Click `Pipeline` in your Quix environment, and then to create a new replay, click `Add new` in the top right corner:

![Add replay](../images/how-to/replay/replay-add-new.png)

diff --git a/docs/platform/how-to/streaming-token.md b/docs/platform/how-to/streaming-token.md
index b853b184..834ac38b 100644
--- a/docs/platform/how-to/streaming-token.md
+++ b/docs/platform/how-to/streaming-token.md
@@ -1,13 +1,12 @@
# Using a streaming token

-A streaming token is a type of bearer token that can be used to authenticate your client to access functionality necessary for streaming actions. Think of streaming tokens as a token you use to access the Quix Portal but
-with limited scope.
+A streaming token is a type of bearer token that can be used to authenticate your client to access functionality necessary for streaming actions. Think of a streaming token as a token you use to access the Quix Portal, but with limited scope.

-Each workspace comes with one of these tokens, limited in use for that specific workspace.
+Each environment comes with one of these tokens, limited in use for that specific environment.

## How to find

-You can access these tokens by logging into the Quix Portal and clicking on `Settings` in the main left-hand navigation. Then click on `APIs and tokens` and then click on `Streaming Tokens`.
+You can access these tokens by logging into the Quix Portal and clicking on `Settings` in the main left-hand navigation. Select your environment, click `APIs and tokens`, and then click `Streaming Tokens`.

If you are looking for a bearer token to access the Quix APIs, such as the Portal API, you can select `Personal Access Tokens`. These are custom JWTs. 
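Whichever token type you use, it is presented to an API as a standard bearer token in the `Authorization` header. The following sketch shows the header construction only; the URL and token value are placeholders, not a real endpoint or credential:

``` python
# Illustrative only: a bearer token (streaming token or PAT) travels in
# the standard HTTP Authorization header. URL and token are placeholders.
import urllib.request

def build_authorized_request(url: str, token: str) -> urllib.request.Request:
    # No network call is made here; we only construct the request object.
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = build_authorized_request("https://portal-api.platform.quix.ai/", "MY_STREAMING_TOKEN")
print(req.get_header("Authorization"))  # prints: Bearer MY_STREAMING_TOKEN
```

The same header shape applies whether the value is a streaming token or a Personal Access Token; only the scope of what the server permits differs.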
diff --git a/docs/platform/how-to/testing-data-store.md b/docs/platform/how-to/testing-data-store.md new file mode 100644 index 00000000..48c550e5 --- /dev/null +++ b/docs/platform/how-to/testing-data-store.md @@ -0,0 +1,19 @@ +# Data store for testing + +Quix provides a data store for testing and debugging purposes. + +While [topics](../glossary.md#topic) do provide a configurable retention time, persisting data into a database provides advantages: for example, you can perform powerful queries to retrieve historical data. This data can be displayed using the Data Explorer, or retrieved using the [Query API](../../apis/query-api/intro.md). + +Quix provides a very simple way to persist data in a topic. Simply locate the topic in your topic list, and click the `Persistence` button. + +!!! important + + You don't have to use the Quix data store. Quix provides numerous [connectors](../connectors/index.md) for common database technologies, so you can always store your data in the database of your choice. + +## Replay service + +When data has been persisted, you have the option to not only query and display it, but replay it into your pipeline. This can be very useful for testing and debugging pipelines using historical data. + +See how to [use the Quix replay service](../how-to/replay.md). + +See also an in-depth blog post on [stream reprocessing](https://quix.io/blog/intro-stream-reprocessing-python/){target=_blank}. diff --git a/docs/platform/how-to/webapps/read.md b/docs/platform/how-to/webapps/read.md index 0f74cd9d..a897a4e7 100644 --- a/docs/platform/how-to/webapps/read.md +++ b/docs/platform/how-to/webapps/read.md @@ -1,23 +1,14 @@ # Read from Quix with Node.js -Quix supports real-time data streaming over WebSockets. JavaScript -clients can receive updates on parameter and event definition updates, -parameter data and event data as they happen.
Following examples use -[SignalR](https://docs.microsoft.com/en-us/aspnet/core/signalr/introduction?view=aspnetcore-3.1){target=_blank} -client library to connect to Quix over WebSockets. +Quix supports real-time data streaming over WebSockets. JavaScript clients can receive updates on parameter and event definition updates, parameter data and event data as they happen. The following examples use the [SignalR](https://docs.microsoft.com/en-us/aspnet/core/signalr/introduction?view=aspnetcore-3.1){target=_blank} client library to connect to Quix over WebSockets. ## Setting up SignalR -If you are using a package manager like npm, you can install SignalR -using `npm install @microsoft/signalr`. For other installation options -that don’t depend on a platform like Node.js such as consuming SignalR -from a CDN please refer to [SignalR -documentation](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1){target=_blank}. +If you are using a package manager like npm, you can install SignalR using `npm install @microsoft/signalr`. For other installation options that don’t depend on a platform like Node.js, such as consuming SignalR from a CDN, refer to the [SignalR documentation](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1){target=_blank}. -Following code snippet shows how you can connect to Quix after SignalR -has been setup. +The following code snippet shows how you can connect to Quix after SignalR has been set up.
-``` javascript +```javascript var signalR = require("@microsoft/signalr"); const options = { @@ -25,26 +16,21 @@ const options = { }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://reader-your-workspace-id.portal.quix.ai/hub", options) + .withUrl("https://reader-your-environment-id.portal.quix.ai/hub", options) .build(); connection.start().then(() => console.log("SignalR connected.")); ``` -If the connection is successful, you should see the console log "SignalR -connected". +If the connection is successful, you should see the console log "SignalR connected". ## Reading data from a stream -Before you can read data from a stream, you need to subscribe to an -event like parameter definition, event definition, parameter data or -event data. +Before you can read data from a stream, you need to subscribe to an event such as parameter definitions, event definitions, parameter data, or event data. -Following is an example of establishing a connection to Quix, -subscribing to a parameter data stream, reading data from that stream, -and unsubscribing from the event using a SignalR client. +The following is an example of establishing a connection to Quix, subscribing to a parameter data stream, reading data from that stream, and unsubscribing from the event using a SignalR client. -``` javascript +```javascript var signalR = require("@microsoft/signalr"); const options = { @@ -52,7 +38,7 @@ const options = { }; const connection = new signalR.HubConnectionBuilder() - .withUrl("https://reader-your-workspace-id.portal.quix.ai/hub", options) + .withUrl("https://reader-your-environment-id.portal.quix.ai/hub", options) .build(); // Establish connection @@ -92,16 +78,13 @@ Following is a list of subscriptions available for SignalR clients. - `UnsubscribeFromStream(topicName, streamId)`: Unsubscribes from all subscriptions of the specified stream. -Following is a list of SignalR events supported by Quix and their -payloads.
+The following is a list of SignalR events supported by Quix and their payloads. - `ParameterDataReceived`: Add a listener to this event to receive parameter data from a stream. Following is a sample payload for this event. - - -``` javascript +```javascript { topicName: 'topic-1', streamId: 'b45969d2-4624-4ab7-9779-c8f90ce79420', @@ -116,9 +99,7 @@ payloads. receive data from `SubscribeToParameterDefinitions` subscription. Following is a sample payload of this event. - - -``` javascript +```javascript { topicName: 'topic-1', streamId: 'b45969d2-4624-4ab7-9779-c8f90ce79420', @@ -137,13 +118,9 @@ payloads. } ``` - - `EventDataReceived`: Add a listener to this event to receive data - from `SubscribeToEvent` subscription. Following is a sample payload - of this event. - - + - `EventDataReceived`: Add a listener to this event to receive data from the `SubscribeToEvent` subscription. The following is a sample payload of this event: -``` javascript +```javascript { topicName: 'topic-1', streamId: 'b45969d2-4624-4ab7-9779-c8f90ce79420' diff --git a/docs/platform/how-to/webapps/write.md b/docs/platform/how-to/webapps/write.md index c3b2ca0e..922f8b34 100644 --- a/docs/platform/how-to/webapps/write.md +++ b/docs/platform/how-to/webapps/write.md @@ -1,10 +1,8 @@ # Write to Quix with Node.js -Clients write data to Quix using streams opened on existing -[topics](../../../platform/glossary.md#topics). Therefore, you need to -first create a topic in the Portal to hold your data streams. +Clients write data to Quix using streams opened on existing [topics](../../../platform/glossary.md#topics). Therefore, you need to first create a topic in the Portal to hold your data streams.
-Once you have a topic, your clients can start writing data to Quix by +Once you have a topic, your clients can start writing data to Quix by: - creating a stream in your topic @@ -14,11 +12,9 @@ Once you have a topic, your clients can start writing data to Quix by ## Creating a stream -To write data to Quix, you need to open a stream to your topic. -Following is an example of creating a stream using JavaScript and -Node.js. +To write data to Quix, you need to open a stream to your topic. The following is an example of creating a stream using JavaScript and Node.js. -``` javascript +```javascript const https = require('https'); const data = JSON.stringify({ @@ -33,7 +29,7 @@ const data = JSON.stringify({ }); const options = { - hostname: 'your-workspace-id.portal.quix.ai', + hostname: 'your-environment-id.portal.quix.ai', path: '/topics/your-topic-name/streams', method: 'POST', headers: { @@ -53,33 +49,21 @@ req.write(data); req.end(); ``` -Upon completing the request successfully, you will receive the stream id -in the response body. You are going to need this stream id when you are -writing data to the stream. +Upon completing the request successfully, you will receive the stream ID in the response body. You will need this stream ID when writing data to the stream. -In the request data, `Location` is also an optional, but an important -property. Location allows you to organize your streams under directories -in the Data Catalogue. +In the request data, `Location` is an optional but important property. It allows you to organize your streams under directories in the Quix data store. -When you are creating the stream, you can add optional metadata about -the stream to the stream definition like `Property1` and `Property2` in -the preceding example. +When you are creating the stream, you can add optional metadata about the stream to the stream definition, like `Property1` and `Property2` in the preceding example.
-Field `Parents` is also optional. If the current stream is derived from -one or more streams (e.g. by transforming data from one stream using an -analytics model), you can reference the original streams using this -field. +The `Parents` field is also optional. If the current stream is derived from one or more streams (e.g. by transforming data from one stream using an analytics model), you can reference the original streams using this field. -`TimeOfRecording` is an optional field that allows you to specify the -actual time the data was recorded. This field is useful if you are -streaming data that was recorded in the past. +`TimeOfRecording` is an optional field that allows you to specify the actual time the data was recorded. This field is useful if you are streaming data that was recorded in the past. ## Writing parameter data to a stream -After you have created the stream, you can start writing data to that -stream using the following HTTP request. +After you have created the stream, you can start writing data to that stream using the following HTTP request. -``` javascript +```javascript const https = require('https'); const data = JSON.stringify({ @@ -104,7 +88,7 @@ const data = JSON.stringify({ }); const options = { - hostname: 'your-workspace-id.portal.quix.ai', + hostname: 'your-environment-id.portal.quix.ai', path: '/topics/your-topic-name/streams/your-stream-id/parameters/data', method: 'POST', headers: { @@ -121,25 +105,15 @@ req.write(data); req.end(); ``` -In the preceding example, `data` has two different parameter types, -numeric and strings. If your data only contains numeric data, you do not -need to include the `StringValues` property. +In the preceding example, `data` has two different parameter types, numeric and string. If your data only contains numeric data, you do not need to include the `StringValues` property.
In the case of binary values, the items in the array must be base64-encoded strings. -`TagValues` is another optional field in the data request that allows -you to add context to data points by means of tagging them. Index of the -`Timestamps` array is used when matching the parameter data values as -well as tag values. Therefore, the order of the arrays is important. +`TagValues` is another optional field in the data request that allows you to add context to data points by tagging them. The index of the `Timestamps` array is used when matching the parameter data values as well as the tag values. Therefore, the order of the arrays is important. ## Defining parameters -In the above examples, parameters are created in Quix as you write data -to the stream. However, what if you would like to add more information -like acceptable value ranges, measurement units, etc. to your -parameters? You can use the following HTTP request to update your -parameter definitions. +In the above examples, parameters are created in Quix as you write data to the stream. However, what if you would like to add more information, such as acceptable value ranges and measurement units, to your parameters? You can use the following HTTP request to update your parameter definitions. -``` javascript +```javascript const https = require('https'); const data = JSON.stringify([ @@ -156,7 +130,7 @@ ]); const options = { - hostname: 'your-workspace-id.portal.quix.ai', + hostname: 'your-environment-id.portal.quix.ai', path: '/topics/your-topic-name/streams/your-stream-id/parameters', method: 'PUT', headers: { @@ -173,22 +147,13 @@ req.write(data); req.end(); ``` -In the preceding request, the `Id` must match the parameter id you set -when writing data to the stream. `Name` allows you to set a more -readable name for the parameter. You can also add a description, minimum -and maximum values, unit of measurement to your parameter.
`Location` -allows you to organize/group your parameters in a hierarchical manner -like with the streams. If you have a custom parameter definition that is -not covered by the primary fields of the request, you can use -`CustomProperties` field to add your custom definition as a string. +In the preceding request, the `Id` must match the parameter ID you set when writing data to the stream. `Name` allows you to set a more readable name for the parameter. You can also add a description, minimum and maximum values, and a unit of measurement to your parameter. `Location` allows you to organize or group your parameters hierarchically, as with streams. If you have a custom parameter definition that is not covered by the primary fields of the request, you can use the `CustomProperties` field to add your custom definition as a string. ## Writing event data to a stream -Writing event data to a stream is similar to writing parameter data -using the web api. The main difference in the two requests is in the -request body. +Writing event data to a stream is similar to writing parameter data using the web API. The main difference between the two requests is in the request body. -``` javascript +```javascript const data = JSON.stringify([ { Id: "EventA", @@ -219,7 +184,7 @@ const data = JSON.stringify([ ]); const options = { - hostname: 'your-workspace-id.portal.quix.ai', + hostname: 'your-environment-id.portal.quix.ai', path: '/topics/your-topic-name/streams/your-stream-id/events/data', method: 'POST', headers: { @@ -236,19 +201,13 @@ req.write(data); req.end(); ``` -In the preceding example, tags in the event data request are optional. -Tags add context to your data points and help you to run efficient -queries over them on your data like using indexes in traditional -databases. +In the preceding example, tags in the event data request are optional.
Tags add context to your data points and help you run efficient queries over them, much like indexes in traditional databases. ## Defining events -In the above examples, events are created in Quix as you write data to -the stream. If you want to add more descriptions to your events, you can -use event definitions api similar to parameter definitions to update -your events. +In the above examples, events are created in Quix as you write data to the stream. If you want to add more descriptions to your events, you can use the event definitions API, similar to the parameter definitions API, to update your events. -``` javascript +```javascript const https = require('https'); const data = JSON.stringify([ @@ -263,7 +222,7 @@ ]); const options = { - hostname: 'your-workspace-id.portal.quix.ai', + hostname: 'your-environment-id.portal.quix.ai', path: '/topics/your-topic-name/streams/your-stream-id/events', method: 'PUT', headers: { @@ -280,26 +239,17 @@ req.write(data); req.end(); ``` -In the preceding request, the `Id` must match the event id you set when -writing events to the stream. `Name` allows you to set a more readable -name for the event. `Location` allows you to organize/group your events -in a hierarchy like with the parameters. If you have a custom event -definition that is not covered by the primary fields of the request, you -can use `CustomProperties` field to add your custom definition as a -string. You can also set an optional event `Level`. Accepted event -levels are Trace, Debug, Information, Warning, Error and Critical. Event -level defaults to Information if not specified. +In the preceding request, the `Id` must match the event ID you set when writing events to the stream. `Name` allows you to set a more readable name for the event. `Location` allows you to organize or group your events hierarchically, as with parameters.
If you have a custom event definition that is not covered by the primary fields of the request, you can use the `CustomProperties` field to add your custom definition as a string. You can also set an optional event `Level`. Accepted event levels are Trace, Debug, Information, Warning, Error and Critical. The event level defaults to Information if not specified. ## Closing a stream -After finishing sending data, you can proceed to close the stream using -the request below. +After you have finished sending data, you can close the stream using the following request. -``` javascript +```javascript const https = require('https'); const options = { - hostname: 'your-workspace-id.portal.quix.ai', + hostname: 'your-environment-id.portal.quix.ai', path: '/topics/your-topic-name/streams/your-stream-id/close', method: 'POST', headers: { diff --git a/docs/platform/how-to/yaml-variables.md b/docs/platform/how-to/yaml-variables.md new file mode 100644 index 00000000..b3ff82f1 --- /dev/null +++ b/docs/platform/how-to/yaml-variables.md @@ -0,0 +1,78 @@ +# Configure deployments using YAML variables + +YAML variables enable you to create variables that can have different values across different environments. For example, if you want to allocate more memory for a deployment in your production environment, you could have a variable `MEMORY` that has a value of 500 in the development environment and 1000 in production. + +## Watch a video + +You can watch a video on YAML variables here: +
+ +## Example + +In the pipeline view, click on the service you want to configure, and then click the YAML button in the top right of the view. You see the `quix.yaml` code, such as the following: + +``` yaml +# Quix Project Descriptor +# This file describes the data pipeline and configuration of resources of a Quix Project. + +metadata: + version: 1.0 + +# This section describes the Deployments of the data pipeline +deployments: + - name: CPU Threshold + application: Starter transformation + deploymentType: Service + version: transform-v2 + resources: + cpu: 200 + memory: 500 + replicas: 1 + desiredStatus: Stopped + variables: + - name: input + inputType: InputTopic + description: Name of the input topic to listen to. + required: false + value: cpu-load + - name: output + inputType: OutputTopic + description: Name of the output topic to write to. + required: false + value: transform + - name: CPU Alert SMS + ... +``` + +In this case you want to configure the memory for the service. + +Click the `Variables` tab and then click `+ New variable`. Click `+ New variable` on the dialog and create a variable called `MEMORY`. Set the values for memory for each of the environments, such as develop and production. For example, you might set `MEMORY` to 1000 for production, and 500 for develop. + +Now create any other variables you would like to have, such as `CPU`. This might be set to 1000 for production and 200 for develop. + +Now edit the `quix.yaml`: + +``` yaml + resources: + cpu: 200 + memory: 500 + replicas: 1 +``` + +Change this to: + +``` yaml + resources: + cpu: {{CPU}} + memory: {{MEMORY}} + replicas: 1 +``` + +This specifies that the variable values should be used, rather than the hard-coded values. + +!!! note + + Curly braces are required to denote YAML variables. + +Now sync up your environment. If you've made your changes to your develop environment, you will now need to merge those into your production environment, and then sync production. 
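The substitution step described above can be illustrated with a small sketch. This is not the actual Quix implementation; it simply shows how `{{VARIABLE}}` placeholders in `quix.yaml` resolve to per-environment values during sync:

```javascript
// Illustration only: how {{VARIABLE}} placeholders in quix.yaml resolve to
// per-environment values. Quix performs this substitution itself during sync;
// resolveVariables is a hypothetical helper, not part of any Quix API.
function resolveVariables(yamlText, values) {
  return yamlText.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in values ? String(values[name]) : match // leave unknown variables as-is
  );
}

const template = [
  '  resources:',
  '    cpu: {{CPU}}',
  '    memory: {{MEMORY}}',
  '    replicas: 1'
].join('\n');

// Values as they might be defined for the production environment
console.log(resolveVariables(template, { CPU: 1000, MEMORY: 1000 }));
```

With the develop values (`CPU: 200`, `MEMORY: 500`) the same template resolves to the smaller resource allocation, which is exactly the per-environment behavior the variables provide.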
diff --git a/docs/platform/images/about/BrokerTopicStreams.png b/docs/platform/images/about/BrokerTopicStreams.png deleted file mode 100644 index 74728943..00000000 Binary files a/docs/platform/images/about/BrokerTopicStreams.png and /dev/null differ diff --git a/docs/platform/images/about/DeploymentIcon.png b/docs/platform/images/about/DeploymentIcon.png deleted file mode 100644 index ea6738a4..00000000 Binary files a/docs/platform/images/about/DeploymentIcon.png and /dev/null differ diff --git a/docs/platform/images/about/Message-broker-at-the-core.png b/docs/platform/images/about/Message-broker-at-the-core.png deleted file mode 100644 index 991ecf0d..00000000 Binary files a/docs/platform/images/about/Message-broker-at-the-core.png and /dev/null differ diff --git a/docs/platform/images/about/Presentation1.png b/docs/platform/images/about/Presentation1.png deleted file mode 100644 index e96a298a..00000000 Binary files a/docs/platform/images/about/Presentation1.png and /dev/null differ diff --git a/docs/platform/images/about/Product.png b/docs/platform/images/about/Product.png deleted file mode 100644 index 9a585d24..00000000 Binary files a/docs/platform/images/about/Product.png and /dev/null differ diff --git a/docs/platform/images/about/Python_stream_processing.png b/docs/platform/images/about/Python_stream_processing.png deleted file mode 100644 index 371fa541..00000000 Binary files a/docs/platform/images/about/Python_stream_processing.png and /dev/null differ diff --git a/docs/platform/images/about/TopicsIcon.png b/docs/platform/images/about/TopicsIcon.png deleted file mode 100644 index 004f417a..00000000 Binary files a/docs/platform/images/about/TopicsIcon.png and /dev/null differ diff --git a/docs/platform/images/about/TopicsandDeployments.png b/docs/platform/images/about/TopicsandDeployments.png deleted file mode 100644 index 4f5d92dd..00000000 Binary files a/docs/platform/images/about/TopicsandDeployments.png and /dev/null differ diff --git 
a/docs/platform/images/about/pubsub.png b/docs/platform/images/about/pubsub.png deleted file mode 100644 index c264f4ce..00000000 Binary files a/docs/platform/images/about/pubsub.png and /dev/null differ diff --git a/docs/platform/images/about/serverless-environment.png b/docs/platform/images/about/serverless-environment.png deleted file mode 100644 index c942d111..00000000 Binary files a/docs/platform/images/about/serverless-environment.png and /dev/null differ diff --git a/docs/platform/images/code-samples.png b/docs/platform/images/code-samples.png index fab18465..e7bcb789 100644 Binary files a/docs/platform/images/code-samples.png and b/docs/platform/images/code-samples.png differ diff --git a/docs/platform/images/create-new-github-user.png b/docs/platform/images/create-new-github-user.png new file mode 100644 index 00000000..38fb9a45 Binary files /dev/null and b/docs/platform/images/create-new-github-user.png differ diff --git a/docs/platform/images/deployment.png b/docs/platform/images/deployment.png new file mode 100644 index 00000000..bd5b4d37 Binary files /dev/null and b/docs/platform/images/deployment.png differ diff --git a/docs/platform/images/git-setup-guide.png b/docs/platform/images/git-setup-guide.png new file mode 100644 index 00000000..14c9fef6 Binary files /dev/null and b/docs/platform/images/git-setup-guide.png differ diff --git a/docs/platform/images/how-to/account/email-invite.png b/docs/platform/images/how-to/account/email-invite.png deleted file mode 100644 index 31aff276..00000000 Binary files a/docs/platform/images/how-to/account/email-invite.png and /dev/null differ diff --git a/docs/platform/images/how-to/account/user-menu.png b/docs/platform/images/how-to/account/user-menu.png deleted file mode 100644 index f95ce090..00000000 Binary files a/docs/platform/images/how-to/account/user-menu.png and /dev/null differ diff --git a/docs/platform/images/how-to/account/users-menu.png b/docs/platform/images/how-to/account/users-menu.png deleted 
file mode 100644 index dccbdace..00000000 Binary files a/docs/platform/images/how-to/account/users-menu.png and /dev/null differ diff --git a/docs/platform/images/how-to/application/application-path.png b/docs/platform/images/how-to/application/application-path.png new file mode 100644 index 00000000..a974a69b Binary files /dev/null and b/docs/platform/images/how-to/application/application-path.png differ diff --git a/docs/platform/images/how-to/application/save-code-sample.png b/docs/platform/images/how-to/application/save-code-sample.png new file mode 100644 index 00000000..dbacf4fb Binary files /dev/null and b/docs/platform/images/how-to/application/save-code-sample.png differ diff --git a/docs/platform/images/how-to/create-deployment/develop-deploy.png b/docs/platform/images/how-to/create-deployment/develop-deploy.png deleted file mode 100644 index 9f6c5fcd..00000000 Binary files a/docs/platform/images/how-to/create-deployment/develop-deploy.png and /dev/null differ diff --git a/docs/platform/images/how-to/create-project/add-environment.png b/docs/platform/images/how-to/create-project/add-environment.png new file mode 100644 index 00000000..b407265e Binary files /dev/null and b/docs/platform/images/how-to/create-project/add-environment.png differ diff --git a/docs/platform/images/how-to/create-project/create-project.png b/docs/platform/images/how-to/create-project/create-project.png new file mode 100644 index 00000000..7336f144 Binary files /dev/null and b/docs/platform/images/how-to/create-project/create-project.png differ diff --git a/docs/platform/images/how-to/data/metadata.png b/docs/platform/images/how-to/data/metadata.png deleted file mode 100644 index 3efa6b84..00000000 Binary files a/docs/platform/images/how-to/data/metadata.png and /dev/null differ diff --git a/docs/platform/images/how-to/data/search.png b/docs/platform/images/how-to/data/search.png deleted file mode 100644 index 739882a2..00000000 Binary files 
a/docs/platform/images/how-to/data/search.png and /dev/null differ diff --git a/docs/platform/images/how-to/data/sort-data.png b/docs/platform/images/how-to/data/sort-data.png deleted file mode 100644 index ce51b837..00000000 Binary files a/docs/platform/images/how-to/data/sort-data.png and /dev/null differ diff --git a/docs/platform/images/how-to/data/streams-by-location.png b/docs/platform/images/how-to/data/streams-by-location.png deleted file mode 100644 index 77076416..00000000 Binary files a/docs/platform/images/how-to/data/streams-by-location.png and /dev/null differ diff --git a/docs/platform/images/how-to/data/streams-by-topic.png b/docs/platform/images/how-to/data/streams-by-topic.png deleted file mode 100644 index 777889f7..00000000 Binary files a/docs/platform/images/how-to/data/streams-by-topic.png and /dev/null differ diff --git a/docs/platform/images/how-to/env-variables/add-env-var-dialog.png b/docs/platform/images/how-to/env-variables/add-env-var-dialog.png index ad2dade5..c4ddb83b 100644 Binary files a/docs/platform/images/how-to/env-variables/add-env-var-dialog.png and b/docs/platform/images/how-to/env-variables/add-env-var-dialog.png differ diff --git a/docs/platform/images/how-to/env-variables/secrets-management.png b/docs/platform/images/how-to/env-variables/secrets-management.png new file mode 100644 index 00000000..803c646a Binary files /dev/null and b/docs/platform/images/how-to/env-variables/secrets-management.png differ diff --git a/docs/platform/images/how-to/jupyter-wb/connect-python.png b/docs/platform/images/how-to/jupyter-wb/connect-python.png index 193eca09..11baad82 100644 Binary files a/docs/platform/images/how-to/jupyter-wb/connect-python.png and b/docs/platform/images/how-to/jupyter-wb/connect-python.png differ diff --git a/docs/platform/images/how-to/jupyter-wb/jupyter-results.png b/docs/platform/images/how-to/jupyter-wb/jupyter-results.png index f7d95dad..b9bb030f 100644 Binary files 
a/docs/platform/images/how-to/jupyter-wb/jupyter-results.png and b/docs/platform/images/how-to/jupyter-wb/jupyter-results.png differ diff --git a/docs/platform/images/how-to/jupyter-wb/new-file.png b/docs/platform/images/how-to/jupyter-wb/new-file.png index d9925622..2c7745db 100644 Binary files a/docs/platform/images/how-to/jupyter-wb/new-file.png and b/docs/platform/images/how-to/jupyter-wb/new-file.png differ diff --git a/docs/platform/images/how-to/library/customise.png b/docs/platform/images/how-to/library/customise.png deleted file mode 100644 index c3a2aa92..00000000 Binary files a/docs/platform/images/how-to/library/customise.png and /dev/null differ diff --git a/docs/platform/images/how-to/library/sample-project-browse.png b/docs/platform/images/how-to/library/sample-project-browse.png deleted file mode 100644 index 48dfafe3..00000000 Binary files a/docs/platform/images/how-to/library/sample-project-browse.png and /dev/null differ diff --git a/docs/platform/images/how-to/project-structure/app-structure.png b/docs/platform/images/how-to/project-structure/app-structure.png new file mode 100644 index 00000000..aeacd6e1 Binary files /dev/null and b/docs/platform/images/how-to/project-structure/app-structure.png differ diff --git a/docs/platform/images/how-to/project-structure/pipeline.png b/docs/platform/images/how-to/project-structure/pipeline.png new file mode 100644 index 00000000..105d35f6 Binary files /dev/null and b/docs/platform/images/how-to/project-structure/pipeline.png differ diff --git a/docs/platform/images/how-to/project-structure/project-structure.png b/docs/platform/images/how-to/project-structure/project-structure.png new file mode 100644 index 00000000..a65eef1b Binary files /dev/null and b/docs/platform/images/how-to/project-structure/project-structure.png differ diff --git a/docs/platform/images/how-to/projects/branch-menu.png b/docs/platform/images/how-to/projects/branch-menu.png deleted file mode 100644 index 82975064..00000000 Binary 
files a/docs/platform/images/how-to/projects/branch-menu.png and /dev/null differ diff --git a/docs/platform/images/how-to/projects/delete-file.png b/docs/platform/images/how-to/projects/delete-file.png deleted file mode 100644 index 77092cb6..00000000 Binary files a/docs/platform/images/how-to/projects/delete-file.png and /dev/null differ diff --git a/docs/platform/images/how-to/projects/delete-tag.png b/docs/platform/images/how-to/projects/delete-tag.png deleted file mode 100644 index cfdc7703..00000000 Binary files a/docs/platform/images/how-to/projects/delete-tag.png and /dev/null differ diff --git a/docs/platform/images/how-to/projects/duplicate.png b/docs/platform/images/how-to/projects/duplicate.png deleted file mode 100644 index e4a218e9..00000000 Binary files a/docs/platform/images/how-to/projects/duplicate.png and /dev/null differ diff --git a/docs/platform/images/how-to/run-deployment/start-deployment.png b/docs/platform/images/how-to/run-deployment/start-deployment.png deleted file mode 100644 index e67f0134..00000000 Binary files a/docs/platform/images/how-to/run-deployment/start-deployment.png and /dev/null differ diff --git a/docs/platform/images/how-to/tag-deployment/commit-actions-menu.png b/docs/platform/images/how-to/tag-deployment/commit-actions-menu.png deleted file mode 100644 index 633af5bf..00000000 Binary files a/docs/platform/images/how-to/tag-deployment/commit-actions-menu.png and /dev/null differ diff --git a/docs/platform/images/how-to/tag-deployment/commit-list-tag.png b/docs/platform/images/how-to/tag-deployment/commit-list-tag.png deleted file mode 100644 index 01641753..00000000 Binary files a/docs/platform/images/how-to/tag-deployment/commit-list-tag.png and /dev/null differ diff --git a/docs/platform/images/how-to/tag-deployment/new-tag-dialog.png b/docs/platform/images/how-to/tag-deployment/new-tag-dialog.png deleted file mode 100644 index ab5df61c..00000000 Binary files 
a/docs/platform/images/how-to/tag-deployment/new-tag-dialog.png and /dev/null differ diff --git a/docs/platform/images/how-to/topics/create-dialog.png b/docs/platform/images/how-to/topics/create-dialog.png deleted file mode 100644 index c1428eca..00000000 Binary files a/docs/platform/images/how-to/topics/create-dialog.png and /dev/null differ diff --git a/docs/platform/images/how-to/topics/download-certificate.png b/docs/platform/images/how-to/topics/download-certificate.png deleted file mode 100644 index 598ed1a2..00000000 Binary files a/docs/platform/images/how-to/topics/download-certificate.png and /dev/null differ diff --git a/docs/platform/images/how-to/topics/persist-toggle.png b/docs/platform/images/how-to/topics/persist-toggle.png deleted file mode 100644 index 13266871..00000000 Binary files a/docs/platform/images/how-to/topics/persist-toggle.png and /dev/null differ diff --git a/docs/platform/images/how-to/visualize/aggregation-panel-grouped.png b/docs/platform/images/how-to/visualize/aggregation-panel-grouped.png deleted file mode 100644 index 941e3ee2..00000000 Binary files a/docs/platform/images/how-to/visualize/aggregation-panel-grouped.png and /dev/null differ diff --git a/docs/platform/images/how-to/visualize/aggregation-panel.png b/docs/platform/images/how-to/visualize/aggregation-panel.png deleted file mode 100644 index ea375585..00000000 Binary files a/docs/platform/images/how-to/visualize/aggregation-panel.png and /dev/null differ diff --git a/docs/platform/images/how-to/visualize/aggregation-table-grouped.png b/docs/platform/images/how-to/visualize/aggregation-table-grouped.png deleted file mode 100644 index 5bee2e02..00000000 Binary files a/docs/platform/images/how-to/visualize/aggregation-table-grouped.png and /dev/null differ diff --git a/docs/platform/images/how-to/visualize/grouped.png b/docs/platform/images/how-to/visualize/grouped.png deleted file mode 100644 index e7b90e75..00000000 Binary files 
a/docs/platform/images/how-to/visualize/grouped.png and /dev/null differ diff --git a/docs/platform/images/how-to/visualize/nav.png b/docs/platform/images/how-to/visualize/nav.png deleted file mode 100644 index a46a6f1b..00000000 Binary files a/docs/platform/images/how-to/visualize/nav.png and /dev/null differ diff --git a/docs/platform/images/how-to/visualize/pagination.png b/docs/platform/images/how-to/visualize/pagination.png deleted file mode 100644 index 90536f1f..00000000 Binary files a/docs/platform/images/how-to/visualize/pagination.png and /dev/null differ diff --git a/docs/platform/images/how-to/visualize/table.png b/docs/platform/images/how-to/visualize/table.png deleted file mode 100644 index 97005ce7..00000000 Binary files a/docs/platform/images/how-to/visualize/table.png and /dev/null differ diff --git a/docs/platform/images/how-to/workspaces/create-workspace.png b/docs/platform/images/how-to/workspaces/create-workspace.png deleted file mode 100644 index c5bab403..00000000 Binary files a/docs/platform/images/how-to/workspaces/create-workspace.png and /dev/null differ diff --git a/docs/platform/images/how-to/workspaces/delete-notify.png b/docs/platform/images/how-to/workspaces/delete-notify.png deleted file mode 100644 index b47d973b..00000000 Binary files a/docs/platform/images/how-to/workspaces/delete-notify.png and /dev/null differ diff --git a/docs/platform/images/legacy-workspaces.png b/docs/platform/images/legacy-workspaces.png new file mode 100644 index 00000000..8006a112 Binary files /dev/null and b/docs/platform/images/legacy-workspaces.png differ diff --git a/docs/platform/images/quick-start/brake-model-waveform.png b/docs/platform/images/quick-start/brake-model-waveform.png deleted file mode 100644 index 508d3438..00000000 Binary files a/docs/platform/images/quick-start/brake-model-waveform.png and /dev/null differ diff --git a/docs/platform/images/quick-start/create-a-workspace.png b/docs/platform/images/quick-start/create-a-workspace.png 
deleted file mode 100644 index 97f41b22..00000000 Binary files a/docs/platform/images/quick-start/create-a-workspace.png and /dev/null differ diff --git a/docs/platform/images/quick-start/data-catalogue-curl.png b/docs/platform/images/quick-start/data-catalogue-curl.png deleted file mode 100644 index 964abc0f..00000000 Binary files a/docs/platform/images/quick-start/data-catalogue-curl.png and /dev/null differ diff --git a/docs/platform/images/quick-start/data-catalogue-sample.png b/docs/platform/images/quick-start/data-catalogue-sample.png deleted file mode 100644 index dd1cf067..00000000 Binary files a/docs/platform/images/quick-start/data-catalogue-sample.png and /dev/null differ diff --git a/docs/platform/images/quick-start/deploy-completed.png b/docs/platform/images/quick-start/deploy-completed.png deleted file mode 100644 index 58fd9727..00000000 Binary files a/docs/platform/images/quick-start/deploy-completed.png and /dev/null differ diff --git a/docs/platform/images/quick-start/flask_dashboard_debug.png b/docs/platform/images/quick-start/flask_dashboard_debug.png deleted file mode 100644 index 547cb2e0..00000000 Binary files a/docs/platform/images/quick-start/flask_dashboard_debug.png and /dev/null differ diff --git a/docs/platform/images/quick-start/project-checkout-dialog.png b/docs/platform/images/quick-start/project-checkout-dialog.png deleted file mode 100644 index 0b1fbe7e..00000000 Binary files a/docs/platform/images/quick-start/project-checkout-dialog.png and /dev/null differ diff --git a/docs/platform/images/quick-start/pycharm-code-completion.png b/docs/platform/images/quick-start/pycharm-code-completion.png deleted file mode 100644 index 286f196a..00000000 Binary files a/docs/platform/images/quick-start/pycharm-code-completion.png and /dev/null differ diff --git a/docs/platform/images/quick-start/pycharm-quixstreaming-package.png b/docs/platform/images/quick-start/pycharm-quixstreaming-package.png deleted file mode 100644 index 0e1b840f..00000000 
Binary files a/docs/platform/images/quick-start/pycharm-quixstreaming-package.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_clone.png b/docs/platform/images/quick-start/quix_clone.png deleted file mode 100644 index 37aabacd..00000000 Binary files a/docs/platform/images/quick-start/quix_clone.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_copy_token_dialog.png b/docs/platform/images/quick-start/quix_copy_token_dialog.png deleted file mode 100644 index 304bdce4..00000000 Binary files a/docs/platform/images/quick-start/quix_copy_token_dialog.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_deployment_dialog.png b/docs/platform/images/quick-start/quix_deployment_dialog.png deleted file mode 100644 index 0ceebcb4..00000000 Binary files a/docs/platform/images/quick-start/quix_deployment_dialog.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_deployment_network.png b/docs/platform/images/quick-start/quix_deployment_network.png deleted file mode 100644 index 5c276a55..00000000 Binary files a/docs/platform/images/quick-start/quix_deployment_network.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_deployments.png b/docs/platform/images/quick-start/quix_deployments.png deleted file mode 100644 index 87f0fd26..00000000 Binary files a/docs/platform/images/quick-start/quix_deployments.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_develop.png b/docs/platform/images/quick-start/quix_develop.png deleted file mode 100644 index dc598dc5..00000000 Binary files a/docs/platform/images/quick-start/quix_develop.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_generate_token_btn.png b/docs/platform/images/quick-start/quix_generate_token_btn.png deleted file mode 100644 index 1a8a70e0..00000000 Binary files a/docs/platform/images/quick-start/quix_generate_token_btn.png and /dev/null differ diff 
--git a/docs/platform/images/quick-start/quix_git_pwd_dialog.png b/docs/platform/images/quick-start/quix_git_pwd_dialog.png deleted file mode 100644 index b19fc965..00000000 Binary files a/docs/platform/images/quick-start/quix_git_pwd_dialog.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_live_stream.png b/docs/platform/images/quick-start/quix_live_stream.png deleted file mode 100644 index c772c33b..00000000 Binary files a/docs/platform/images/quick-start/quix_live_stream.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_new_pat_dialog.png b/docs/platform/images/quick-start/quix_new_pat_dialog.png deleted file mode 100644 index ab4f0ea1..00000000 Binary files a/docs/platform/images/quick-start/quix_new_pat_dialog.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_open_deploy_dialog.png b/docs/platform/images/quick-start/quix_open_deploy_dialog.png deleted file mode 100644 index a1be6ac0..00000000 Binary files a/docs/platform/images/quick-start/quix_open_deploy_dialog.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_project_dialog.png b/docs/platform/images/quick-start/quix_project_dialog.png deleted file mode 100644 index df2c5779..00000000 Binary files a/docs/platform/images/quick-start/quix_project_dialog.png and /dev/null differ diff --git a/docs/platform/images/quick-start/quix_token_menu.png b/docs/platform/images/quick-start/quix_token_menu.png deleted file mode 100644 index 9e1dea55..00000000 Binary files a/docs/platform/images/quick-start/quix_token_menu.png and /dev/null differ diff --git a/docs/platform/images/quick-start/sample-model-project.png b/docs/platform/images/quick-start/sample-model-project.png deleted file mode 100644 index b4bf30af..00000000 Binary files a/docs/platform/images/quick-start/sample-model-project.png and /dev/null differ diff --git a/docs/platform/images/quick-start/stream-id.png 
b/docs/platform/images/quick-start/stream-id.png deleted file mode 100644 index 8594166c..00000000 Binary files a/docs/platform/images/quick-start/stream-id.png and /dev/null differ diff --git a/docs/platform/images/quick-start/stream-pipeline.png b/docs/platform/images/quick-start/stream-pipeline.png deleted file mode 100644 index 1c3bed9e..00000000 Binary files a/docs/platform/images/quick-start/stream-pipeline.png and /dev/null differ diff --git a/docs/platform/images/quick-start/topic-persist.png b/docs/platform/images/quick-start/topic-persist.png deleted file mode 100644 index 4cfd39a0..00000000 Binary files a/docs/platform/images/quick-start/topic-persist.png and /dev/null differ diff --git a/docs/platform/images/quick-start/visualize.png b/docs/platform/images/quick-start/visualize.png deleted file mode 100644 index 370bc4b0..00000000 Binary files a/docs/platform/images/quick-start/visualize.png and /dev/null differ diff --git a/docs/platform/images/quick-start/vscode-code-completion.png b/docs/platform/images/quick-start/vscode-code-completion.png deleted file mode 100644 index 3a171ae3..00000000 Binary files a/docs/platform/images/quick-start/vscode-code-completion.png and /dev/null differ diff --git a/docs/platform/images/quick-start/vscode-python-extension.png b/docs/platform/images/quick-start/vscode-python-extension.png deleted file mode 100644 index d1798356..00000000 Binary files a/docs/platform/images/quick-start/vscode-python-extension.png and /dev/null differ diff --git a/docs/platform/images/quick-start/vscode-python-list.png b/docs/platform/images/quick-start/vscode-python-list.png deleted file mode 100644 index 8bcbee4d..00000000 Binary files a/docs/platform/images/quick-start/vscode-python-list.png and /dev/null differ diff --git a/docs/platform/images/quick-start/vscode-python-version.png b/docs/platform/images/quick-start/vscode-python-version.png deleted file mode 100644 index b1f77941..00000000 Binary files 
a/docs/platform/images/quick-start/vscode-python-version.png and /dev/null differ diff --git a/docs/platform/images/quick-start/vscode-welcome.png b/docs/platform/images/quick-start/vscode-welcome.png deleted file mode 100644 index 1d97ddf7..00000000 Binary files a/docs/platform/images/quick-start/vscode-welcome.png and /dev/null differ diff --git a/docs/platform/images/quick-start/workspace-id.png b/docs/platform/images/quick-start/workspace-id.png deleted file mode 100644 index 1babc885..00000000 Binary files a/docs/platform/images/quick-start/workspace-id.png and /dev/null differ diff --git a/docs/platform/images/quick-start/workspace-in-url.png b/docs/platform/images/quick-start/workspace-in-url.png deleted file mode 100644 index c3cc7085..00000000 Binary files a/docs/platform/images/quick-start/workspace-in-url.png and /dev/null differ diff --git a/docs/platform/images/quix-technical-architecture.png b/docs/platform/images/quix-technical-architecture.png new file mode 100644 index 00000000..8ed5a2c9 Binary files /dev/null and b/docs/platform/images/quix-technical-architecture.png differ diff --git a/docs/platform/images/stream-processing-architecture.png b/docs/platform/images/stream-processing-architecture.png new file mode 100644 index 00000000..1f36b203 Binary files /dev/null and b/docs/platform/images/stream-processing-architecture.png differ diff --git a/docs/platform/images/workspace.png b/docs/platform/images/workspace.png deleted file mode 100644 index 80d034dc..00000000 Binary files a/docs/platform/images/workspace.png and /dev/null differ diff --git a/docs/platform/ingest-data.md b/docs/platform/ingest-data.md index 24f70f77..3a73e78b 100644 --- a/docs/platform/ingest-data.md +++ b/docs/platform/ingest-data.md @@ -7,8 +7,9 @@ There are various ways to ingest data into Quix, as well as write data out from 3. Polling 4. Inbound webhooks 5. HTTP API -6. Websockets +6. WebSockets 7. Push data into Quix Platform using Quix Streams +8. 
Post data into Quix using a web app The particular method you use depends on the nature of the service you're trying to interface with Quix. Each of these methods is described briefly in the following sections. @@ -142,7 +143,7 @@ Quix provides two APIs with an HTTP API interface: 1. [Writer API](../apis/streaming-writer-api/intro.md) 2. [Reader API](../apis/streaming-reader-api/intro.md) -The Writer API is used to write data into the Quix Platform, that is, it is used by publishers. The Reader API is used to read data from the Quix Platform, and is therefore used by consumers. These are used typically by external services such as web browser client code, or perhaps IoT devices. The Reader and Writer APIs also provide a websockets interface, which is described in the [next section](#websockets). +The Writer API is used to write data into the Quix Platform, that is, it is used by publishers. The Reader API is used to read data from the Quix Platform, and is therefore used by consumers. These are used typically by external services such as web browser client code, or perhaps IoT devices. The Reader and Writer APIs also provide a WebSockets interface, which is described in the [next section](#websockets). The easiest way to try out these HTTP APIs is to use the prebuilt connectors called `External source` and `External destination`. This section looks at using the `External source` connector, but the process is similar for the `External destination` connector. To use the `External source` connector, step through the following procedure: @@ -176,17 +177,17 @@ As you can see there are other options such as generating Curl code that can be Further information can be found in the [Writer API](../apis/streaming-writer-api/intro.md) and [Reader API](../apis/streaming-reader-api/intro.md) documentation. -## Websockets +## WebSockets -The Writer and Reader APIs offer a websockets interface in addition to the HTTP interface described in the [previous section](#http-api). 
The websockets interface provides a continuous end-to-end connection suitable for higher speed, real-time data transfer. This is a higher performance alternative to the request-response mode of operation of the HTTP interface. The Writer and Reader APIs both use the [Microsoft SignalR](https://learn.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-5.0&tabs=visual-studio) technology to implement the websockets interface. +The Writer and Reader APIs offer a WebSockets interface in addition to the HTTP interface described in the [previous section](#http-api). The WebSockets interface provides a continuous end-to-end connection suitable for higher speed, real-time data transfer. This is a higher performance alternative to the request-response mode of operation of the HTTP interface. The Writer and Reader APIs both use the [Microsoft SignalR](https://learn.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-5.0&tabs=visual-studio) technology to implement the WebSockets interface. -Some example code that shows how to connect to Quix and write data into a Quix stream using the websockets interface is shown here: +Some example code that shows how to connect to Quix and write data into a Quix stream using the WebSockets interface is shown here: ``` html - Hello Websockets + Hello WebSockets @@ -210,7 +211,7 @@ Some example code that shows how to connect to Quix and write data into a Quix s -

<title>Quix JavaScript Hello Websockets</title>
+<title>Quix JavaScript Hello WebSockets</title>
const token = ""; // Obtain your PAT token from the Quix portal - // Set the Workspace and Topic - const workspaceId = ""; + // Set the environment and Topic + const environmentId = ""; const topicName = "transform"; const streamId = "mouse-pos"; const canvas = document.getElementById("myCanvas"); @@ -314,7 +315,7 @@ Code that could read mouse cursor position from a Quix stream is as follows: }; const connection = new signalR.HubConnectionBuilder() - .withUrl(`https://reader-${workspaceId}.platform.quix.ai/hub`, options) + .withUrl(`https://reader-${environmentId}.platform.quix.ai/hub`, options) .build(); connection.start().then(() => { @@ -347,7 +348,7 @@ Code that could read mouse cursor position from a Quix stream is as follows: This code uses the Reader API to read data from a Quix stream. -The Quix documentation explains how to obtain your [Quix workspace ID](../platform/how-to/get-workspace-id.md), [PAT token](../apis/streaming-reader-api/authenticate.md) for authentication, and also how to [set up SignalR](../apis/streaming-reader-api/signalr.md). +The Quix documentation explains how to obtain your [Quix environment ID](../platform/how-to/get-environment-id.md), [PAT token](../apis/streaming-reader-api/authenticate.md) for authentication, and also how to [set up SignalR](../apis/streaming-reader-api/signalr.md). ## Push data using Quix Streams @@ -398,6 +399,79 @@ if __name__ == '__main__': You need to obtain a [streaming token](../platform/how-to/streaming-token.md) from within the platform. +## Post data into Quix using a web app + +You may have a web app, either hosted in Quix, or elsewhere (say on Glitch), that receives data that you want to process in Quix. In either case, data can be posted using HTTP `POST` methods (or other HTTP methods) to the web app, and then this data published to a Quix topic using Quix Streams. + +The following example shows a Python Flask web app hosted in Quix. Data is received on an endpoint, in this case `/data`. 
The data is then published to an output topic. Of course you may have multiple endpoints receiving data, which you can publish to different streams, depending on your use case. + +!!! note + + When deploying this service in Quix, it's important to enable public access in the deployment dialog, and make a note of the [service public URL](../platform/how-to/deploy-public-page.md). + +The following shows the code for a simple web app that enables you to post data using HTTP, and then publish this to a Quix topic using Quix Streams: + +```python +import quixstreams as qx +from flask import Flask, request +from datetime import datetime +from waitress import serve +import os +import json + +# Quix injects credentials automatically to the client. +# Alternatively, you can always pass an SDK token manually as an argument. +client = qx.QuixStreamingClient() + +# Open the output topic to write data to +producer_topic = client.get_topic_producer(os.environ["output"]) + +stream = producer_topic.create_stream() +stream.properties.name = "Post Data" + +app = Flask("Post Data") + +# This endpoint is unauthenticated: anyone could post anything to you! +@app.route("/data", methods=['POST']) +def webhook(): + print('dumps: ', json.dumps(request.json)) + + # post event data + stream.events.add_timestamp(datetime.now())\ + .add_value("sensor", json.dumps(request.json))\ + .publish() + + return "OK", 200 + + +print("CONNECTED!") + +# use waitress for production +serve(app, host='0.0.0.0', port=80) +``` + +There may be various devices or apps posting data to your web app. + +A simple test of your web app can be performed with Curl, as shown in the following example: + +```shell +curl -X POST -H "Content-Type: application/json" https://app-workspace-project-branch.deployments.quix.ai/data -d @data.json +``` + +In this example, `data.json` contains your JSON data, such as: + +```json +{ + "id": "device-012-ABC", + "temp": 123, + "press": 456 +} +``` + +!!! 
tip + + You'll need to change the URL in the Curl example to the one provided in the [deployment dialog](../platform/how-to/deploy-public-page.md) for your service. + ## Summary There are various ways to connect to Quix, and how you do so depends on the nature of the service and data you are connecting. In many cases Quix has a [suitable connector](../platform/connectors/index.md) you can use with minor configuration. @@ -406,4 +480,4 @@ If you want some example code you can use as a starting point for connecting you Low-frequency data from REST APIs can be [polled](#polling) from Quix using a library such as `requests`. -Quix also provides the [streaming writer](../apis/streaming-writer-api/intro.md) and [streaming reader](../apis/streaming-reader-api/intro.md) APIs with both HTTP and websockets interfaces. If a continous connection is not required you can use the HTTP interface. Faster data from web servers, browser clients, and IoT devices can interface [using websockets](#websockets), where a continuous connection is required. +Quix also provides the [streaming writer](../apis/streaming-writer-api/intro.md) and [streaming reader](../apis/streaming-reader-api/intro.md) APIs with both HTTP and WebSockets interfaces. If a continuous connection is not required you can use the HTTP interface. Faster data from web servers, browser clients, and IoT devices can interface [using WebSockets](#websockets), where a continuous connection is required. diff --git a/docs/platform/integrations/kafka/confluent-cloud.md b/docs/platform/integrations/kafka/confluent-cloud.md index 4e004c42..dfd4f7ed 100644 --- a/docs/platform/integrations/kafka/confluent-cloud.md +++ b/docs/platform/integrations/kafka/confluent-cloud.md @@ -1,8 +1,8 @@ # Connect to Confluent Cloud -Quix requires Kafka to provide streaming infrastructure for your Quix workspace. +Quix requires Kafka to provide streaming infrastructure for your Quix environment. 
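The Curl-based test of the `/data` endpoint shown above can also be scripted in Python. The following is a minimal sketch using only the standard library; the deployment URL is the same placeholder used in the Curl example, and the request is built but deliberately not sent, since the placeholder does not resolve:

```python
import json
import urllib.request

# Placeholder URL: substitute your deployment's public URL from the deploy dialog
url = "https://app-workspace-project-branch.deployments.quix.ai/data"
payload = {"id": "device-012-ABC", "temp": 123, "press": 456}

# Build the same POST request the Curl example sends
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would perform the POST; it is not called here
# because the placeholder URL does not exist.
print(req.get_method(), req.get_full_url())
```

Swapping `urlopen` in for the final `print` sends the request for real once the URL points at your deployed service.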
-When you create a new Quix workspace, there are three hosting options: +When you create a new Quix environment, there are three hosting options: 1. **Quix Broker** - Quix hosts Kafka for you. This is the simplest option as Quix provides hosting and configuration. 2. **Self-Hosted Kafka** - This is where you already have existing Kafka infrastructure that you use, and you want to enable Quix to provide the stream processing platform on top of it. You can configure Quix to work with your existing Kafka infrastructure using this option. @@ -16,7 +16,7 @@ If you do not already have Confluent Cloud account, you can [sign up for a free ## Selecting Confluent Cloud to host Quix -When you create a new Quix workspace, you can select your hosting option in the `Broker settings` dialog, as shown in the following screenshot: +When you create a new Quix environment, you can select your hosting option in the `Broker settings` dialog, as shown in the following screenshot: ![Broker Settings](../../images/integrations/confluent/confluent-broker-settings.png) @@ -32,4 +32,4 @@ All the required configuration information can be found in your Confluent Cloud !!! note - If you already have topics created in your Confluent Cloud, you can synchronize these with your Quix workspace. The `Synchronize Topics` checkbox is enabled by default. + If you already have topics created in your Confluent Cloud, you can synchronize these with your Quix environment. The `Synchronize Topics` checkbox is enabled by default. diff --git a/docs/platform/quickstart.md b/docs/platform/quickstart.md index bd17c15e..73223d0c 100644 --- a/docs/platform/quickstart.md +++ b/docs/platform/quickstart.md @@ -2,11 +2,15 @@ This Quickstart is designed to show you how to get your data into Quix and display it, in **less than 10 minutes**. -## Video +## Watch a video -Watch the video showing what you're going to build. +Create your first project and environment: -
+
+ +Get data into Quix and display it: + +
## Peek at the code @@ -34,7 +38,7 @@ If you're just curious, click the box to see the complete code. cpu_load = psutil.cpu_percent(interval=1) return cpu_load - # Obtain client library token from portal + # Obtain streaming token from portal client = qx.QuixStreamingClient(token) # (5) # Open a topic to publish data to @@ -71,6 +75,10 @@ If you're just curious, click the box to see the complete code. 7. Create a Quix stream to write to. You can think of a stream as a channel within a topic. 8. Publish your data to the stream. +## Download the code + +The complete code for the Quickstart can be found in the [Quix Tutorials GitHub repository](https://github.com/quixio/tutorial-code/tree/main/quickstart){target=_blank}. + ## Prerequisites To complete the Quickstart you'll need the following: @@ -96,11 +104,42 @@ You're going to use the [Quix Streams](../client-library-intro.md) library to pu You use the `psutil` module to retrieve the CPU load on your laptop. -You use `python-dotenv` as you securely store your client library token (previously known as the SDK token) in a `.env` file. +!!! tip + + You use `python-dotenv` as you securely store your streaming token (previously known as the SDK token) in a `.env` file. + +## 2. Create your project and environment + +You'll need to create a project and an environment. You can watch a video on how to do this: + +
+ +## 3. Get your token + +You'll need a streaming token to connect your client code to your Quix environment: + +1. Log in to the Quix Portal and enter the `Develop` environment. +2. Click `Settings` and then click `Develop` again to display the environment settings. +3. Click `APIs and tokens`. +4. Click `Streaming Token`. +5. Copy the streaming token to the clipboard using the button provided. + +## 4. Create your `.env` file -## 2. Write your code +You'll store your streaming token securely in a `.env` file on your computer in the same directory as your Python code. To create the `.env` file: -1. Open up a terminal on your laptop, make a new directory for your project, and then create a new file `cpu_load.py`. +1. Open up a terminal on your laptop, make a new directory for your code. +2. Using your editor, create a `.env` file in your project directory. On the first line add the text `STREAMING_TOKEN=`. +3. Paste the streaming token from the clipboard into the `.env` file _immediately_ after the `=` (there should be no space between the `=` and the token). +4. Save the file. + +Your streaming token is now safely stored in your `.env` file for your Python code to use. + +## 5. Write your code + +You'll now write the Python code that runs on your computer, and publishes your CPU load into a Quix topic. + +1. Create a new file `cpu_load.py`. 2. Copy and paste in the following code: ```python @@ -118,7 +157,7 @@ You use `python-dotenv` as you securely store your client library token (previou cpu_load = psutil.cpu_percent(interval=1) return cpu_load - # Obtain client library token from portal + # Obtain streaming token from portal client = qx.QuixStreamingClient(token) # Open a topic to publish data to @@ -148,20 +187,7 @@ You use `python-dotenv` as you securely store your client library token (previou 3. Save the file. -## 3. Get your token - -1. Log in to the Quix Portal. -2. Click `Settings`. -3. Click `APIs and tokens`. -4. Click `Streaming Token`. -5. 
Copy the streaming token to the clipboard. -6. Create a `.env` file in the same directory as your Python code. On the first line add `STREAMING_TOKEN=` -7. Paste the streaming token from the clipboard into the `.env` file _immediately_ after the `=` (there should be no space between the `=` and the token). -8. Save the file. - -Your streaming token is now safely stored in your `.env` file for the project. - -## 4. Run your code +## 6. Run your code Run your code with the following command in your terminal: @@ -169,27 +195,31 @@ Run your code with the following command in your terminal: python cpu_load.py ``` -The code runs and, after creating the `cpu-load` topic, displays your CPU load. The code is now publishing data to the Quix topic `cpu-load`. - !!! tip If you're on Mac and using Homebrew, you may have multiple Python versions installed. In this case you may have to use the command `python3` to run your code. -## 5. See the data in Quix +The code runs and, after creating the `cpu-load` topic, displays your CPU load. The code is now publishing data to the Quix topic `cpu-load`. + +## 7. See the data in Quix -1. Switch back to the Quix Portal. +1. Switch back to the Quix Portal and enter your `Develop` environment. 2. Click on `Topics` in the main left-hand navigation. 3. You see the `cpu-load` topic. Note the vertical green bars representing inbound data. -4. Hover the mouse over the `Data` colum. You see the tool tip text `View live data`. +4. Hover the mouse over the `Data` column. You see the tool tip text `View live data`. 5. Click the mouse where the tool tip text is displayed. You are taken to the Quix Data Explorer in a new tab. 6. Under `SELECT STREAMS` select the box `Quickstart CPU Load - Server 1`. -7. Under `SELECT PARAMETERS OR EVENTS` select `CPU_Load`. Your real-time CPU load is displayed as a waveform. You can also take a look at the table view, and the message view. +7. Under `SELECT PARAMETERS OR EVENTS` select `CPU_Load`. 
+ +Your real-time CPU load is displayed as a waveform. You can also take a look at the table view, and the message view. ## Conclusion That concludes the Quickstart! In this Quickstart you've learned the following: -* How to push data into Quix Platform from the command line. +* How to create a project and an environment. +* How to obtain the streaming token for your environment. +* How to publish data into a Quix topic from the command line using Quix Streams. * How to view real-time data in a topic using the Quix Data Explorer. ## Next steps diff --git a/docs/platform/quixtour/images/architecture.png b/docs/platform/quixtour/images/architecture.png deleted file mode 100644 index bc5263f9..00000000 Binary files a/docs/platform/quixtour/images/architecture.png and /dev/null differ diff --git a/docs/platform/quixtour/ingest-push.md b/docs/platform/quixtour/ingest-push.md index 1dc6db05..6c6dad5f 100644 --- a/docs/platform/quixtour/ingest-push.md +++ b/docs/platform/quixtour/ingest-push.md @@ -1,8 +1,10 @@ -# Push your data into Quix Platform using Quix Streams +# Publish your data into a Quix topic using Quix Streams -There are [many ways](../ingest-data.md) to get your data into Quix, a process usually known as ingestion. Data can be loaded using CSV files, by polling external web services, websockets and so on. The option you use depends on your use case. +There are [many ways](../ingest-data.md) to get your data into Quix, a process usually known as ingestion. Data can be loaded using CSV files, by polling external web services, WebSockets and so on. The option you use depends on your use case. -In this part you'll learn how to send data into Quix from your laptop using Quix Streams to push data into Quix Platform. You'll write a short Python program to retrieve your CPU load and publish that data into a Quix topic in real time. 
+In this part of the Quix Tour, you'll learn how to send data into Quix using Quix Streams to publish data into a topic hosted in the Quix Platform. + +You'll write a short Python program to retrieve your CPU load and publish that data into a Quix topic in real time. !!! tip @@ -10,7 +12,7 @@ In this part you'll learn how to send data into Quix from your laptop using Quix ## 1. Install the Python modules -Once you have Python installed, open up a terminal and install the following modules using `pip`: +Once you have Python installed, open up a terminal, and install the following modules using `pip`: ``` pip install quixstreams @@ -22,15 +24,46 @@ pip install python-dotenv If you're on Mac and using Homebrew, you may have multiple Python versions installed. In this case you may have to use the command `pip3` to install your modules. -You're going to use the [Quix Streams](../../client-library-intro.md) library to push data into Quix Platform. This is just one of [many ways](../ingest-data.md) to get your data into Quix. You could for example simply log into Quix and use one of our already available [connectors](../connectors/index.md). +You're going to use the [Quix Streams](../../client-library-intro.md) library to publish data into Quix Platform. This is just one of [many ways](../ingest-data.md) to get your data into Quix. You could for example simply log into Quix and use one of our already available [connectors](../connectors/index.md). You use the `psutil` module to retrieve the CPU load on your laptop. -You use `python-dotenv` as you securely store your client library token (previously known as the SDK token) in a `.env` file. +!!! tip + + You use `python-dotenv` as you securely store your client library token (previously known as the SDK token) in a `.env` file. + +## 2. Create your project and environment + +You'll need to create a project and an environment. You can watch a video on how to do this: + +
+
+## 3. Get your token
+
+You'll need a streaming token to connect your client code to your Quix environment:
+
+1. Log in to the Quix Portal and enter the `Develop` environment.
+2. Click `Settings` and then click `Develop` again to display the environment settings.
+3. Click `APIs and tokens`.
+4. Click `Streaming Token`.
+5. Copy the streaming token to the clipboard using the button provided.
+
+## 4. Create your `.env` file
 
-## 2. Write your code
+You'll store your streaming token securely in a `.env` file on your computer, in the same directory as your Python code. To create the `.env` file:
 
-1. Open up a terminal on your laptop, create a directory for your new project, and then create a new file `cpu_load.py`.
+1. Open up a terminal on your laptop, and make a new directory for your code.
+2. Using your editor, create a `.env` file in your project directory. On the first line add the text `STREAMING_TOKEN=`.
+3. Paste the streaming token from the clipboard into the `.env` file _immediately_ after the `=` (there should be no space between the `=` and the token).
+4. Save the file.
+
+Your streaming token is now safely stored in your `.env` file for your Python code to use.
+
+## 5. Write your code
+
+You'll now write the Python code that runs on your computer and publishes your CPU load into a Quix topic.
+
+1. Create a new file `cpu_load.py`.
 2. Copy and paste in the following code:
 
     ```python
@@ -48,7 +81,7 @@ You use `python-dotenv` as you securely store your client library token (previou
         cpu_load = psutil.cpu_percent(interval=1)
         return cpu_load
 
-    # Obtain client library token from portal
+    # Obtain streaming token from portal
     client = qx.QuixStreamingClient(token)
 
     # Open a topic to publish data to
@@ -78,18 +111,7 @@ You use `python-dotenv` as you securely store your client library token (previou
 
 3. Save the file.
 
-## 3. Get your token
-
-1. Log in to the Quix Portal.
-2. Click `Settings`.
-3. Click `APIs and tokens`.
-4. Click `Streaming Token`.
-5. 
Copy the streaming token to the clipboard. -6. Create a `.env` file in the same directory as your Python code. On the first line add `STREAMING_TOKEN=` -7. Paste the streaming token from the clipboard into the `.env` file after the `=`. -8. Save the file. - -## 4. Run your code +## 6. Run your code Run your code with the following command in your terminal: @@ -97,9 +119,13 @@ Run your code with the following command in your terminal: python cpu_load.py ``` +!!! tip + + If you're on Mac and using Homebrew, you may have multiple Python versions installed. In this case you may have to use the command `python3` to run your code. + The code runs and, after creating the `cpu-load` topic, displays your CPU load. The code is now publishing data to the Quix topic `cpu-load`. -## 5. Create an external source +## 7. Create an external source At this point you have an external program sending data into the Quix Platform, and it is writing into a topic. However, you can't currently see this in the Pipeline view. To help you visualize what you've created, you can add an external source component, to provide a visual entity in the pipeline view. To do this: @@ -114,7 +140,7 @@ This now appears in the pipeline view as a reminder (visual cue) as to the natur Watch a video on adding an external source: -
+
## 🏃‍♀️ Next step diff --git a/docs/platform/quixtour/overview.md b/docs/platform/quixtour/overview.md index 94eaba54..6293a6a5 100644 --- a/docs/platform/quixtour/overview.md +++ b/docs/platform/quixtour/overview.md @@ -10,6 +10,10 @@ Watch the video showing what you're going to build:
+## The code + +The complete code for the Quix Tour can be found in the [Quix Tutorials GitHub repository](https://github.com/quixio/tutorial-code/tree/main/quixtour){target=_blank}. + ## The parts The Quix Tour is split into three parts. These parts represent the typical stream processing **pipeline**: @@ -20,13 +24,13 @@ The Quix Tour is split into three parts. These parts represent the typical strea This general stream processing architecture is illustrated in the following diagram: -![Architecture](./images/architecture.png) +![Stream Processing Architecture](../images/stream-processing-architecture.png) ## CPU overload detection pipeline The pipeline you will implement: -1. **Ingest** - you push data from your laptop into Quix Platform using the Quix client library, Quix Streams. You're going to push your real-time CPU load. You could alternatively push data from a CSV file, or any other source required for your use case. If you needed to connect to an external service, you could alternatively use one of Quix's many [connectors](../connectors/index.md). +1. **Ingest** - you publish data from your laptop into Quix Platform using the Quix client library, Quix Streams. You're going to publish your real-time CPU load. You could alternatively publish data from a CSV file, or any other source required for your use case. If you needed to connect to an external service, you could alternatively use one of Quix's many [connectors](../connectors/index.md). 2. **Process** - in this step, you process your data. There are many [types of processing](../concepts/types-of-processing.md), one of which is the transform. There are many possible [types of transform](../concepts/types-of-transform.md). Here you create a transform that performs threshold detection. You publish a message to the transform's output topic. 3. **Serve** - when you receive a message indicating CPU load has exceeded the threshold you (optionally) send an SMS to the system administrator. 
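Stripped of the Quix Streams plumbing, the heart of the **Process** step above is simple threshold detection. The following sketch illustrates the logic only (the 70% threshold is an arbitrary example value, not a Quix default):

```python
CPU_THRESHOLD = 70  # percent; an arbitrary example value

def check_cpu(cpu_load):
    """Return an alert message when the CPU load exceeds the threshold,
    otherwise None. The deployed transform publishes the message to its
    output topic instead of returning it."""
    if cpu_load > CPU_THRESHOLD:
        return f"CPU spike of {cpu_load}% detected!"
    return None

print(check_cpu(45))  # -> None
print(check_cpu(85))  # -> CPU spike of 85% detected!
```

In the full pipeline, the **Serve** step subscribes to the output topic and reacts to these messages, for example by sending an SMS.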
diff --git a/docs/platform/quixtour/process-threshold.md b/docs/platform/quixtour/process-threshold.md index 71e1e05a..2d63ecb8 100644 --- a/docs/platform/quixtour/process-threshold.md +++ b/docs/platform/quixtour/process-threshold.md @@ -4,20 +4,20 @@ In this part of the tour you'll learn how to create a transform. The transform d ## Watch the video -
+
## Create the transform To create the threshold detection transform: -1. Click on `Code Samples` in the main left-hand navigation. +1. In your `Develop` environment, click on `Code Samples` in the main left-hand navigation. 2. Select the `Python`, `Transformation`, and `Basic templates` filters. 3. For `Starter transformation` click `Preview code`. 4. Click `Edit code`. 5. Name the transform "CPU Threshold". 6. Select the input topic `cpu-load`. 7. For the output topic, add a new topic called `cpu-spike`. -8. Click `Save as Project`. +8. Click `Save as Application`. 9. In the project view click on `main.py` to edit it. 10. Replace all the code in `main.py` with the following: @@ -48,7 +48,7 @@ To create the threshold detection transform: qx.App.run() ``` -11. Tag the project as `v1` and deploy as a service (watch the [video](#watch-the-video) if you're not sure how to do this). +11. Tag the project as `process-v1` and deploy as a service (watch the [video](#watch-the-video) if you're not sure how to do this). 12. Monitor the logs for the deployed process. ## Generate a CPU spike @@ -59,10 +59,6 @@ You can generate a CPU spike by starting up several large applications. In the l CPU spike of 71% detected! ``` -This video also demonstrates testing the transform: - -
- ## 🏃‍♀️ Next step Create a destination to log events and send a notification SMS! diff --git a/docs/platform/quixtour/serve-sms.md b/docs/platform/quixtour/serve-sms.md index a1651ac0..a034da43 100644 --- a/docs/platform/quixtour/serve-sms.md +++ b/docs/platform/quixtour/serve-sms.md @@ -4,23 +4,25 @@ In this part of the tour you'll learn how to create a simple destination. This d ## Watch the video -
+
## Prerequisites +If you've completed this tutorial so far, you should have all the prerequisites already installed. + **Optionally:** You can sign up for a [free Vonage account](https://developer.vonage.com/sign-up), to be able to send an SMS. If you would like to try this, simply set `send_sms_bool = True` in the `main.py` code you create later, to switch this feature **on**. ## Create the destination To create the SMS alert destination: -1. Click on `Code Samples` in the main left-hand navigation. +1. In your `Develop` environment, click on `Code Samples` in the main left-hand navigation. 2. Select the `Python`, `Destination`, and `Basic templates` filters. 3. For `Starter destination` click `Preview code`. 4. Click `Edit code`. 5. Name the destination "CPU Alert SMS". 6. Select the input topic `cpu-spike`. -7. Click `Save as Project`. +7. Click `Save as Application`. 8. In the project view click on `main.py` to edit it. 9. Replace all the code in `main.py` with the following: @@ -28,18 +30,19 @@ To create the SMS alert destination: import quixstreams as qx import os import pandas as pd - import vonage # add vonage to requirements.txt to pip install it - from dotenv import load_dotenv # add python-dotenv to requirement.txt - load_dotenv() - vonage_key = os.getenv("VONAGE_API_KEY") - vonage_secret = os.getenv("VONAGE_API_SECRET") - to_number = os.getenv("TO_NUMBER") - send_sms_bool = False # Set this to True if you want to actually send an SMS (you'll need a free Vonage account) + # Set this to True if you want to actually send an SMS (you'll need a free Vonage account) + send_sms_bool = False + if send_sms_bool: + import vonage # add vonage module to requirements.txt to pip install it + vonage_key = os.environ["VONAGE_API_KEY"] + vonage_secret = os.environ["VONAGE_API_SECRET"] + to_number = os.environ["TO_NUMBER"] - client = vonage.Client(key=vonage_key, secret=vonage_secret) - sms = vonage.Sms(client) + client = vonage.Client(key=vonage_key, 
secret=vonage_secret) + sms = vonage.Sms(client) + # function to send an SMS def send_sms(message): print("Sending SMS message to admin...") responseData = sms.send_message( @@ -76,39 +79,48 @@ To create the SMS alert destination: qx.App.run() ``` -11. Click the add file icon to add a new file to your project - name it `.env`. -12. Add the following to the file: +## Send an SMS (optional) - ``` - TO_NUMBER= - VONAGE_API_KEY= - VONAGE_API_SECRET= - ``` +This section is **optional**. + +If you want to send an alert SMS follow these steps: +1. Change the variable `send_sms_bool` to `True` in your `main.py`. +2. In the `Environment variables` panel, click `+ Add`. The `Add Variable` dialog is displayed. +3. Complete the information for the following environment variables (you obtain these from your Vonage developer dashboard): + + | Variable name | Variable type | + |----|----| + | VONAGE_API_KEY | `text - hidden` | + | VONAGE_API_SECRET | `text - hidden` | + | TO_NUMBER | `text - hidden` | + !!! tip - While this example shows you how to use a `.env` file, you could also create environment variables in Quix, and use those rather than load your variables from the `.env` file. To use this approach, open the code view for your service, and in the `Environment variables` panel, click `+ Add`. The `Add Variable` dialog is displayed. Complete the information for the environment variable. You can select properties such as `Text Hidden` for variables that represent API secrets, keys, and passwords. If necessary, you can also make a variable required. - - Once the variable has been created, you can then access the variable in your code using `os.environ["variable"]`. For example, to access the environment variable `VONAGE_API_SECRET`, your code would be `vonage_secret = os.environ["VONAGE_API_SECRET"]`. + You can select properties such as `Text Hidden` for variables that represent API secrets, keys, and passwords. If necessary, you can also make a variable required. 
+ + See also [how to add environment variables](../how-to/environment-variables.md). + +4. You now need to add the `vonage` module to the `requirements.txt` file in your project. Click to open it and add a line for `vonage`. This ensures the module is built into the deployment. + +## Tag and deploy your SMS alert service - See also [how to add environment variables](../how-to/environment-variables.md). +You can now tag and deploy your code: -13. If you've enabled the SMS alert feature, then paste your information into the `.env` file (you can get all of this information from your Vonage API dashboard). If you don't want to use this feature, just leave the file as shown. -14. You now need to add your modules to the `requirements.txt` file in your project. Click to open it and add lines for `vonage` and `python-dotenv`. This ensures these modules are built into the deployment. -15. Tag the project as `v1` and deploy as a service (watch the video if you're not sure how to do this). -16. Monitor the logs for the deployed process. +1. Tag the project as `sms-v1` and deploy as a service (watch the video if you're not sure how to do this). +2. Monitor the logs for the deployed process. ## Generate an alert Again generate a CPU spike by opening several large applications on your laptop. If you have SMS alert enabled, you'll receive an SMS. If not, you can check the logs. -You can watch a video that shows how to test your service: +## Conclusion -
+You've now completed the Quix Tour. You've built a simple but complete stream processing pipeline. ## Next steps -You've now completed the Quix Tour. You've built a simple but complete stream processing pipeline. To continue your Quix learning journey, you may want to consider some of the following resources: +To continue your Quix learning journey, you may want to consider some of the following resources: * [Real-time event detection tutorial](../tutorials/event-detection/index.md) * [Real-time Machine Learning (ML) predictions tutorial](../tutorials/data-science/index.md) diff --git a/docs/platform/samples/code-samples.png b/docs/platform/samples/code-samples.png new file mode 100644 index 00000000..67260d37 Binary files /dev/null and b/docs/platform/samples/code-samples.png differ diff --git a/docs/platform/samples/library.png b/docs/platform/samples/library.png deleted file mode 100644 index 97d51347..00000000 Binary files a/docs/platform/samples/library.png and /dev/null differ diff --git a/docs/platform/samples/samples.md b/docs/platform/samples/samples.md index 97e6c3b8..b4b74e97 100644 --- a/docs/platform/samples/samples.md +++ b/docs/platform/samples/samples.md @@ -2,8 +2,16 @@ The Quix Portal includes Quix Code Samples, a collection of templates and sample projects that you can use to start working with the platform. -Quix allows you to explore the Code Samples and save them as a new project and immediately run or deploy them. If you don't have a Quix account yet, go [sign-up to Quix](https://portal.platform.quix.ai/self-sign-up?xlink=docs){target=_blank} and create one. +![Code Samples](../samples/code-samples.png) + +Quix allows you to explore the Code Samples and save them as a new application and immediately run or deploy them. + +If you don't have a Quix account yet, go [sign-up to Quix](https://portal.platform.quix.ai/self-sign-up?xlink=docs){target=_blank} and create one. 
The backend of the Code Samples is handled by a public [Open source repository](https://github.com/quixio/quix-samples){target=_blank} on GitHub. You can become a contributor of our Code Samples by generating new samples or updating existing ones. -![Library.png](library.png) +!!! important + + Note that when you use a public code sample in the Quix Portal, it is added to your private repository, so any changes you make can be kept private if you so wish. Of course, if you are working in a public repository, then any code samples you add or modify will also be public. + + diff --git a/docs/platform/troubleshooting/datacataloguewarning.jpg b/docs/platform/troubleshooting/datacataloguewarning.jpg deleted file mode 100644 index 6ba6497c..00000000 Binary files a/docs/platform/troubleshooting/datacataloguewarning.jpg and /dev/null differ diff --git a/docs/platform/troubleshooting/quix-data-store.png b/docs/platform/troubleshooting/quix-data-store.png new file mode 100644 index 00000000..ce7b0004 Binary files /dev/null and b/docs/platform/troubleshooting/quix-data-store.png differ diff --git a/docs/platform/troubleshooting/site-cant-be-reached.png b/docs/platform/troubleshooting/site-cant-be-reached.png new file mode 100644 index 00000000..9da3a4ac Binary files /dev/null and b/docs/platform/troubleshooting/site-cant-be-reached.png differ diff --git a/docs/platform/troubleshooting/sitecantbereached.jpg b/docs/platform/troubleshooting/sitecantbereached.jpg deleted file mode 100644 index 182557f4..00000000 Binary files a/docs/platform/troubleshooting/sitecantbereached.jpg and /dev/null differ diff --git a/docs/platform/troubleshooting/troubleshooting.md b/docs/platform/troubleshooting/troubleshooting.md index cbb489ee..87b32e49 100644 --- a/docs/platform/troubleshooting/troubleshooting.md +++ b/docs/platform/troubleshooting/troubleshooting.md @@ -1,35 +1,24 @@ # Troubleshooting -This section contains solutions, fixes, hints and tips to help you solve -the most common issues 
encountered when using Quix.
+This section contains solutions, fixes, hints and tips to help you solve the most common issues encountered when using Quix.
 
 ## Data is not being received into a Topic
 
-  - Ensure the Topic Name or Id is correct in Topics option of Quix
-    Portal.
+If data is not being received in a topic:
 
-  - You can check the data in / out rates on the Topics tab.
+* Ensure the Topic Name or Id is correct in the `Topics` option of Quix Portal.
 
-  - If you want to see the data in the Data Catalogue please make sure
-    you are persisting the data to the Topic otherwise it may appear
-    that there is no data.
+* You can check the data in/out rates on the `Topics` tab.
 
-  - If you are using a consumer group, check that no other services are
-    using the same group. If you run your code locally and deployed
-    somewhere and they are both using the same consumer group one of
-    them may consume all of the data.
+* If you want to see the data in the Quix data store, make sure you are persisting the data to the topic, otherwise it may appear that there is no data.
+
+* If you are using a consumer group, check that no other services are using the same group. If you run your code locally and also have it deployed somewhere, and both use the same consumer group, one of them may consume all of the data.
 
 ## Topic Authentication Error
 
-If you see errors like these in your service or job logs then you may
-have used the wrong credentials or it could be that you have specified
-the wrong Topic Id.
+If you see errors like these in your service or job logs, then you may have used the wrong credentials, or you may have specified the wrong Topic Id.
-Authentication failed during authentication due to invalid credentials -with SASL mechanism SCRAM-SHA-256 Exception receiving -package from Kafka 3/3 brokers are -down Broker: Topic authorization -failed +Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 Exception receiving package from Kafka 3/3 brokers are down Broker: Topic authorization failed Check very carefully each of the details. @@ -43,9 +32,7 @@ These can all be found in Topics option of Quix Portal. ## Broker Transport Failure -If you have deployed a service or job and the logs mention *broker -transport failure* then check the workspace name and password in the -SecurityOptions. +If you have deployed a service or job and the logs mention *broker transport failure* then check the environment name and password in the SecurityOptions. Also check the broker address list. You should have these by default: @@ -53,12 +40,9 @@ kafka-k1.quix.ai:9093,kafka-k2.quix.ai:9093,kafka-k3.quix.ai:9093 ## 401 Error -When attempting to access the web APIs you may encounter a 401 error. -Check that the bearer token is correct and has not expired. If necessary -generate a new bearer token. +When attempting to access the web APIs you may encounter a 401 error. Check that the bearer token is correct and has not expired. If necessary generate a new bearer token. -Example of the error received when trying to connect to the Streaming -Reader API with an expired bearer token +Example of the error received when trying to connect to the Streaming Reader API with an expired bearer token signalrcore.hub.errors.UnAuthorizedHubError @@ -70,29 +54,25 @@ The APIs that require a valid bearer token are: 2. Streaming Writer API - - https://writer-[YOUR_ORGANIZATION_ID]-[YOUR_WORKSPACE_ID].platform.quix.ai/index.html + - https://writer-[YOUR_ORGANIZATION_ID]-[YOUR_ENVIRONMENT_ID].platform.quix.ai/index.html 3. 
Telemetry Query API - - https://telemetry-query-[YOUR_ORGANIZATION_ID]-[YOUR_WORKSPACE_ID].platform.quix.ai/swagger/index.html + - https://telemetry-query-[YOUR_ORGANIZATION_ID]-[YOUR_ENVIRONMENT_ID].platform.quix.ai/swagger/index.html ## Error Handling in the client library callbacks -Errors generated in the client library callback can be swallowed or hard to read. -To prevent this and make it easier to determine the root cause you -should use a -[traceback](https://docs.python.org/3/library/traceback.html){target=_blank} +Errors generated in the client library callback can be swallowed or hard to read. To prevent this and make it easier to determine the root cause you should use a [traceback](https://docs.python.org/3/library/traceback.html){target=_blank} Begin by importing traceback -``` python +```python import traceback ``` -Then, inside the client library callback where you might have an issue place code -similar to this: +Then, inside the client library callback where you might have an issue place code similar to this: -``` python +```python def read_stream(new_stream: StreamReader): def on_parameter_data_handler(data: ParameterData): @@ -107,10 +87,9 @@ def read_stream(new_stream: StreamReader): input_topic.on_stream_received += read_stream ``` -Notice that the try clause is within the handler and the except clause -prints a formatted exception (below) +Notice that the try clause is within the handler and the except clause prints a formatted exception (below) -``` python +```python Traceback (most recent call last): File "main.py", line 20, in on_parameter_data_handler data.timestamps[19191919] @@ -121,25 +100,19 @@ IndexError: list index out of range ## Service keeps failing and restarting -If your service continually fails and restarts you will not be able to -view the logs. Redeploy your service as a job instead. This will allow -you to inspect the logs and get a better idea about what is happening. 
+If your service continually fails and restarts, you will not be able to view the logs. Redeploy your service as a job instead. This will allow you to inspect the logs and get a better idea about what is happening.
 
 ## Possible DNS Propagation Errors
 
-There are currently 2 scenarios in which you might encounter an issue
-caused by DNS propagation.
+There are currently two scenarios in which you might encounter an issue caused by DNS propagation:
 
-  - 1\. Data catalogue has been deployed but DNS entries have not fully
-    propagated. In this scenario you might see a banner when accessing
-    the data catalogue.
+1. The Quix data store has been deployed, but DNS entries have not fully propagated. In this scenario you might see a banner when accessing the Quix data store.
 
-![troubleshoot/datacataloguewarning.jpg](datacataloguewarning.jpg)
+    ![Quix data store warning](quix-data-store.png)
 
-  - 2\. A dashboard or other publicly visible deployment is not yet
-    accessible, again due to DNS propagation.
+2. A dashboard or other publicly visible deployment is not yet accessible, again due to DNS propagation.
 
-![troubleshoot/sitecantbereached.jpg](sitecantbereached.jpg)
+    ![Site can't be reached](site-cant-be-reached.png)
 
 !!! tip
 
@@ -147,30 +120,25 @@ caused by DNS propagation.
 
 ## Python Version
 
-If you get strange errors when trying to compile your Python code
-locally please check that you are using Python >=3.6 and <4
+If you get strange errors when trying to run your Python code locally, check that you are using Python >=3.6 and <4.
 
-For example you may encounter a *ModuleNotFoundError*
+For example, you may encounter a *ModuleNotFoundError*:
 
 ``` python
 ModuleNotFoundError: No module named 'quixstreams'
 ```
 
-For information on using the Quix Client Library please check out this [section](../../client-library/quickstart.md) in the client library
-documentation. 
+For information on using the Quix Client Library, check out this [section](../../client-library/quickstart.md) in the client library documentation.
 
 ## Jupyter Notebooks
 
-If you are having trouble with Jupyter Notebooks or another consumer of
-Quix data try using aggregation to reduce the number of records
-returned.
+If you are having trouble with Jupyter Notebooks or another consumer of Quix data, try using aggregation to reduce the number of records returned.
 
 For more info on aggregation check out this [short video](https://youtu.be/fnEPnIunyxA).
 
 ## Process Killed or Out of memory
 
-If your deployment’s logs report "Killed" or "Out of memory" then you
-may need to increase the amount of memory assigned to the deployment.
+If your deployment’s logs report "Killed" or "Out of memory", you may need to increase the amount of memory assigned to the deployment.
 
 You may experience this:
 
@@ -181,23 +149,15 @@ You may experience this:
 
 ## Missing Dependency in online IDE
 
-Currently the [online IDE](../../platform/glossary.md#online-ide) does
-not use the same docker image as the one used for deployment due to time
-it would take to build it and make it available to you. (Likely feature
-for future however) Because of this you might have some OS level
-dependencies that you need to install from within your python code to be
-able to make use of the **Run** feature in the IDE. The section
-below should give you guidance how to achieve this.
+Currently, the [online IDE](../../platform/glossary.md#online-ide) does not use the same Docker image as the one used for deployment, because of the time it would take to build the image and make it available to you (this is likely to change in future). Because of this, you might have some OS-level dependencies that you need to install from within your Python code to be able to use the **Run** feature in the IDE. The following section gives you guidance on how to achieve this.
-In your `main.py` (or similar) file, add as the first line: `import -preinstall`. Now create the file `preinstall.py` and add content based -on example below: +In your `main.py` (or similar) file, add as the first line: `import preinstall`. Now create the file `preinstall.py` and add content based on example below: - TA-Lib This script will check if TA-Lib is already installed (like from docker deployment). If not then installs it. - ``` python + ```python import os import sys @@ -257,8 +217,4 @@ on example below: print("Installed TA-Lib pip package") ``` - - -With this, the first time you press **Run**, the dependency should -install. Any subsequent run should already work without having to -install. +With this, the first time you press **Run**, the dependency should install. Any subsequent run should already work without having to install. diff --git a/docs/platform/tutorials/currency-alerting/currency-alerting.md b/docs/platform/tutorials/currency-alerting/currency-alerting.md index f76ee1e6..077be3a7 100644 --- a/docs/platform/tutorials/currency-alerting/currency-alerting.md +++ b/docs/platform/tutorials/currency-alerting/currency-alerting.md @@ -26,11 +26,11 @@ To complete this tutorial you will need the following accounts: The objective of this tutorial is to create a pipeline that resembles the following example: -![Alt text](currency-pipeline.png) +![Alt text](./images/currency-pipeline.png) The colors describe the role of the microservice that is being deployed. The possible roles are as follows: -
Source — enables streaming of data into the Quix platform from any external source, such as an API or websocket.
+
Source — enables streaming of data into the Quix platform from any external source, such as an API or WebSocket.
Transformation — implements the processing of data, for example, cleaning data or implementing a Machine Learning (ML) model.
Destination — enables streaming of processed data to an external destination, such as a database or dashboard.
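The `Threshold Alert` transformation you'll configure below suppresses repeat alerts for a configurable interval (its `msecs_before_recheck` setting). The underlying debounce idea can be sketched in plain Python. This is a simplified illustration of the concept, not the sample's actual code:

```python
class ThresholdAlerter:
    """Minimal sketch of the alert debounce: fire when the tracked value
    reaches the threshold, then stay quiet for msecs_before_recheck."""

    def __init__(self, threshold, msecs_before_recheck):
        self.threshold = threshold
        self.quiet_msecs = msecs_before_recheck
        self.last_alert_ms = None  # timestamp of the last alert, if any

    def check(self, value, now_ms):
        if value < self.threshold:
            return None
        if self.last_alert_ms is not None and now_ms - self.last_alert_ms < self.quiet_msecs:
            return None  # still inside the quiet period
        self.last_alert_ms = now_ms
        return f"PRICE {value} is above threshold {self.threshold}"

# Five-minute quiet period; the threshold matches the tutorial's example value.
alerter = ThresholdAlerter(threshold=16300, msecs_before_recheck=300_000)
print(alerter.check(16350, now_ms=0))        # alert fires
print(alerter.check(16400, now_ms=60_000))   # -> None (inside quiet period)
print(alerter.check(16500, now_ms=400_000))  # alert fires again
```

The quiet period is what prevents a flood of alerts when the price hovers around the threshold.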
@@ -38,7 +38,7 @@ The colors describe the role of the microservice that is being deployed. The pos
 
 In this section you will learn how to set up the source sample and deploy it in your pipeline as a microservice.
 
-This sample, when deployed as a microservice in the Quix pipeline, connects a live stream of updates for the currency pair: `BTC/USD`. This real-time exchange rate data is streamed in from the [CoinAPI](https://www.coinapi.io/){target=_blank} through its [Websocket](https://en.wikipedia.org/wiki/WebSocket){target=_blank} interface. The free [sandbox version](https://docs.coinapi.io/#endpoints-2){target=_blank} is used for the purposes of this tutorial.
+This sample, when deployed as a microservice in the Quix pipeline, connects a live stream of updates for the currency pair: `BTC/USD`. This real-time exchange rate data is streamed in from the [CoinAPI](https://www.coinapi.io/){target=_blank} through its [WebSocket](https://en.wikipedia.org/wiki/WebSocket){target=_blank} interface. The free [sandbox version](https://docs.coinapi.io/#endpoints-2){target=_blank} is used for the purposes of this tutorial.
 
 To summarize this functionality:
 
@@ -51,35 +51,35 @@ To set up the CoinAPI source, follow these steps:
 
 2. In the search box on the Code Samples page, enter "CoinAPI - Exchange Rate Feed".
 
-    You will see the Coin API sample appear in the search results: ![CoinAPI sample](coinapi.png "CoinAPI sample")
+    You will see the Coin API sample appear in the search results: ![CoinAPI sample](./images/coinapi.png "CoinAPI sample")
 
-3. Click the `Preview code` button, and on the page that appears, click the `Edit code` button. When you choose to edit a sample, Quix prompts you to create a copy of it as a project, as samples are read-only.
+3. Click the `Preview code` button, and on the page that appears, click the `Edit code` button. When you choose to edit a sample, Quix prompts you to create a copy of it as an application, as samples are read-only.
- Optionally, you could have clicked the `Setup & deploy` button, which would have deployed the microservice directly. However, in this tutorial, you are given the opportunity to first look at the code, and modify it if necessary. + Optionally, you could have clicked the `Deploy` button, which would have deployed the microservice directly. However, in this tutorial, you are given the opportunity to first look at the code, and modify it if necessary. -4. In the `Setup project` form, configure the following environment variables: +4. In the `Setup` form, configure the following environment variables: | Field | Value | | --- | --- | - | `Name` | Enter a project name or keep the default suggestion. | + | `Name` | Enter an application name or keep the default suggestion. | | `output` | Select the output topic. In this case, select `currency-exchange-rates` from the list. | | `coin_api_key` | The API key that you use to access CoinAPI. | | `asset_id_base` | The short code for the _base_ currency that you want to track, for example BTC. | | `asset_id_quote` | The short code for the _target_ currency in which prices will be quoted, for example, USD. | -5. Click `Save as project`. You now have a copy of the CoinAPI sample in your workspace. +5. Click `Save as Application`. You now have a copy of the CoinAPI sample in your environment. 6. Click the `Deploy` button. The sample is deployed as a service and automatically started. - Once the sample has been deployed, you’ll be redirected to the workspace home page, where you can see the service in the pipeline context, as was illustrated previously. + Once the sample has been deployed, you’ll be redirected to the portal home page, where you can see the service in the pipeline context, as was illustrated previously. 7. 
Click the CoinAPI service card to inspect the logs: - ![CoinAPI Step](pipeline-coinstep.png) + ![CoinAPI Step](./images/pipeline-coinstep.png) A successful deployment will resemble the following example: -![CoinAPI Step](success-coinapi.png) +![CoinAPI Step](./images/success-coinapi.png) If there is an issue with the service, you can also inspect the `build logs` in the `Lineage` panel to check for any traces of a syntax error or other build issues. @@ -104,24 +104,24 @@ To set up the Threshold Alert sample, follow these steps: You will see the `Threshold Alert` sample appear in the search results: - ![Threshold Alert](threshold-alerts.png "Threshold Alert") + ![Threshold Alert](./images/threshold-alerts.png "Threshold Alert") 3. Click the `Preview code` button, and on the page that appears, click the `Edit code` button. -4. In the `Setup project` form, set the following environment variables: +4. In the `Setup application` form, set the following environment variables: | Field | Value | | --- | --- | - | `Name` | As usual, enter a project name or keep the default suggestion. | + | `Name` | As usual, enter an application name or keep the default suggestion. | | `input` | Select the input topic. In this case, select `currency-exchange-rates` from the list. | | `output` | Select the output topic. In this case, select `currency-rate-alerts` from the list. | | `parameterName` | Set this to `PRICE`. | | `thresholdValue` | The price in USD that you'd like to get alerted about. For example, on the day that this tutorial was written, BTC was hovering around $16,300 so we entered `16300`. This increases the likelihood that some alerts are generated soon after deploying (otherwise it's hard to tell if it's working). | | `msecs_before_recheck` | Enter the minimum delay in milliseconds between alerts. The default is 300 milliseconds (5 minutes), as this prevents numerous alerts when the price hovers around the threshold. | -5. Click `Save as project`. +5. 
Click `Save as Application`. - You now have a copy of the Threshold Alert sample in your workspace. + You now have a copy of the Threshold Alert sample in your environment. 6. Click the `Deploy` button. @@ -131,7 +131,7 @@ To set up the Threshold Alert sample, follow these steps: A successful deployment will resemble the following screenshot: -![CoinAPI Step](success-threshold.png) +![CoinAPI Step](./images/success-threshold.png) In the `Lineage` panel, you will notice that the two services are connected by a line, which indicates that they're both using the same topic, `currency-exchange-rates`. The CoinAPI service is _writing_ to `currency-exchange-rates`, and the Threshold Alert service is _reading_ from it. @@ -153,31 +153,31 @@ To set up the push nonfiction microservice, follow these steps: You will see the `Threshold Alert` sample appear in the search results: - ![Pushover Notifications](library-pushover.png "Pushover Notifications") + ![Pushover Notifications](./images/library-pushover.png "Pushover Notifications") 3. Click the `Preview code` button, and on the page that appears, click the `Edit code` button. -4. On the `Project Creation` page, complete the following fields: +4. On the `Setup application` page, complete the following fields: | Field | Value | | --- | --- | - | `Name` | Enter a project name or keep the default suggestion. | + | `Name` | Enter an application name or keep the default suggestion. | | `Input` | Select the input topic. In this case, select `currency-rate-alerts` from the list.Every message will be read from this topic, and turned into a push notification. | | `base_url` | Leave the default value, `https://api.pushover.net/1/messages.json?`. If you decide to use another push notification app, your can always update this value. | | `api_token` | Enter the API token that you generated for this application in your Pushover dashboard. For example: `azovmnbxxdxkj7j4g4wxxxdwf12xx4`. 
|
    | `user_key` | Enter the user key that you received when you signed up with Pushover. For example: `u721txxxgmvuy5dxaxxxpzx5xxxx9e` |
 
-5. Click the `Save as project`. You now have a copy of the Pushover notification sample in your workspace.
+5. Click `Save as Application`. You now have a copy of the Pushover notification sample in your environment.
 
 6. Click the `Deploy` button.
 
 You will now start receiving Pushover notifications on your phone, as shown here:
 
-![Pushover Notification Example](pushover_notification.png){width=60%}
+![Pushover Notification Example](./images/pushover_notification.png){width=60%}
 
 Depending on your threshold value and the price fluctuations, it might take a few minutes for you to get a notification. While you are waiting to receive a notification, you can inspect the logs, as shown previously.
 
-![Pushover Logs](success-pushover.png)
+![Pushover Logs](./images/success-pushover.png)
 
 * Don't worry if the logs only show "_Listening to Stream_" initially — remember that the Threshold service only writes a message to the `currency-rate-alerts` topic when the threshold has been crossed.
 * This means that the `currency-rate-alerts` stream might be empty for a short while.
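For readers who want to see the shape of the alerting logic before deploying it, the threshold-plus-debounce behaviour configured above (`thresholdValue` and `msecs_before_recheck`) can be sketched in plain Python. This is an illustrative re-implementation only, not the actual sample code:

``` python
from typing import Optional


class ThresholdAlert:
    """Sketch of a threshold alert with a debounce window: once an alert
    fires, further alerts are suppressed for `msecs_before_recheck`
    milliseconds, so a price hovering around the threshold does not
    flood the output topic with messages."""

    def __init__(self, threshold: float, msecs_before_recheck: int):
        self.threshold = threshold
        self.msecs_before_recheck = msecs_before_recheck
        self._last_alert_ms: Optional[int] = None  # time of the last alert

    def check(self, price: float, now_ms: int) -> Optional[str]:
        """Return an alert message, or None if no alert should fire."""
        if price < self.threshold:
            return None
        in_window = (self._last_alert_ms is not None
                     and now_ms - self._last_alert_ms < self.msecs_before_recheck)
        if in_window:
            return None  # still inside the debounce window
        self._last_alert_ms = now_ms
        return f"PRICE {price} reached threshold {self.threshold}"


alerts = ThresholdAlert(threshold=16300, msecs_before_recheck=300_000)
assert alerts.check(16250.0, now_ms=0) is None            # below threshold
assert alerts.check(16350.0, now_ms=1_000) is not None    # first alert
assert alerts.check(16400.0, now_ms=2_000) is None        # debounced
assert alerts.check(16400.0, now_ms=302_000) is not None  # window elapsed
```

The real sample also reads the parameter name to watch (`parameterName`, here `PRICE`) from its environment variables and publishes alerts to the output topic with Quix Streams; those details are omitted in this sketch.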
diff --git a/docs/platform/tutorials/currency-alerting/CoinAPIer.png b/docs/platform/tutorials/currency-alerting/images/CoinAPIer.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/CoinAPIer.png rename to docs/platform/tutorials/currency-alerting/images/CoinAPIer.png diff --git a/docs/platform/tutorials/currency-alerting/coinapi.png b/docs/platform/tutorials/currency-alerting/images/coinapi.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/coinapi.png rename to docs/platform/tutorials/currency-alerting/images/coinapi.png diff --git a/docs/platform/tutorials/currency-alerting/currency-pipeline.png b/docs/platform/tutorials/currency-alerting/images/currency-pipeline.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/currency-pipeline.png rename to docs/platform/tutorials/currency-alerting/images/currency-pipeline.png diff --git a/docs/platform/tutorials/currency-alerting/library-icon.png b/docs/platform/tutorials/currency-alerting/images/library-icon.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/library-icon.png rename to docs/platform/tutorials/currency-alerting/images/library-icon.png diff --git a/docs/platform/tutorials/currency-alerting/library-pushover.png b/docs/platform/tutorials/currency-alerting/images/library-pushover.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/library-pushover.png rename to docs/platform/tutorials/currency-alerting/images/library-pushover.png diff --git a/docs/platform/tutorials/currency-alerting/logs1.png b/docs/platform/tutorials/currency-alerting/images/logs1.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/logs1.png rename to docs/platform/tutorials/currency-alerting/images/logs1.png diff --git a/docs/platform/tutorials/currency-alerting/pipeline-coinstep.png b/docs/platform/tutorials/currency-alerting/images/pipeline-coinstep.png similarity index 
100% rename from docs/platform/tutorials/currency-alerting/pipeline-coinstep.png rename to docs/platform/tutorials/currency-alerting/images/pipeline-coinstep.png diff --git a/docs/platform/tutorials/currency-alerting/pipeline.png b/docs/platform/tutorials/currency-alerting/images/pipeline.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/pipeline.png rename to docs/platform/tutorials/currency-alerting/images/pipeline.png diff --git a/docs/platform/tutorials/currency-alerting/pushover_notification.png b/docs/platform/tutorials/currency-alerting/images/pushover_notification.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/pushover_notification.png rename to docs/platform/tutorials/currency-alerting/images/pushover_notification.png diff --git a/docs/platform/tutorials/currency-alerting/success-coinapi.png b/docs/platform/tutorials/currency-alerting/images/success-coinapi.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/success-coinapi.png rename to docs/platform/tutorials/currency-alerting/images/success-coinapi.png diff --git a/docs/platform/tutorials/currency-alerting/success-pushover.png b/docs/platform/tutorials/currency-alerting/images/success-pushover.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/success-pushover.png rename to docs/platform/tutorials/currency-alerting/images/success-pushover.png diff --git a/docs/platform/tutorials/currency-alerting/success-threshold.png b/docs/platform/tutorials/currency-alerting/images/success-threshold.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/success-threshold.png rename to docs/platform/tutorials/currency-alerting/images/success-threshold.png diff --git a/docs/platform/tutorials/currency-alerting/threshold-alerts.png b/docs/platform/tutorials/currency-alerting/images/threshold-alerts.png similarity index 100% rename from 
docs/platform/tutorials/currency-alerting/threshold-alerts.png rename to docs/platform/tutorials/currency-alerting/images/threshold-alerts.png diff --git a/docs/platform/tutorials/currency-alerting/twilio.png b/docs/platform/tutorials/currency-alerting/images/twilio.png similarity index 100% rename from docs/platform/tutorials/currency-alerting/twilio.png rename to docs/platform/tutorials/currency-alerting/images/twilio.png diff --git a/docs/platform/tutorials/data-science/1-bikedata.md b/docs/platform/tutorials/data-science/1-bikedata.md index 8d5562d5..43088c0c 100644 --- a/docs/platform/tutorials/data-science/1-bikedata.md +++ b/docs/platform/tutorials/data-science/1-bikedata.md @@ -8,7 +8,7 @@ You won't need to write lots of code, as you will use the Quix Code Samples to d ![NY Bikes sample tile](./images/ny-bikes-library-tile.png){width=200px} -2. Click `Setup and deploy`: +2. Click `Deploy`: a. Leave the `Name` as it is. @@ -16,6 +16,6 @@ You won't need to write lots of code, as you will use the Quix Code Samples to d 3. Click `Deploy`. - The precompiled service is deployed to your workspace and begins running immediately. + The precompiled service is deployed to your environment and begins running immediately. [Part 2 - Weather data :material-arrow-right-circle:{ align=right }](2-weatherdata.md) \ No newline at end of file diff --git a/docs/platform/tutorials/data-science/2-weatherdata.md b/docs/platform/tutorials/data-science/2-weatherdata.md index 8f50e6d0..5cbae218 100644 --- a/docs/platform/tutorials/data-science/2-weatherdata.md +++ b/docs/platform/tutorials/data-science/2-weatherdata.md @@ -22,7 +22,7 @@ You can now deploy the VisualCrossing connector from the Quix Code Samples: 1. Search the Code Samples for `weather` and select the `VisualCrossing Weather` tile. -2. Click `Setup and deploy`. +2. Click `Deploy`. 3. Leave the `Name` as it is. @@ -32,7 +32,7 @@ You can now deploy the VisualCrossing connector from the Quix Code Samples: 6. 
Click `Deploy`.
 
-    The precompiled service is deployed to your workspace and begins running immediately.
+    The precompiled service is deployed to your environment and begins running immediately.
 
 !!! warning "Visual Crossing usage limitation"
 
diff --git a/docs/platform/tutorials/data-science/5-run.md b/docs/platform/tutorials/data-science/5-run.md
index 132a96ec..e05290ce 100644
--- a/docs/platform/tutorials/data-science/5-run.md
+++ b/docs/platform/tutorials/data-science/5-run.md
@@ -1,6 +1,6 @@
 # 5. Run the model
 
-Quix has has already trained model artifacts and these have been included as pickle files in the prediction code project. This project is included in the open source Code Samples. You will use the Code Sample to run the model.
+Quix has already trained model artifacts, and these have been included as pickle files in the prediction code application. This application is included in the open source Code Samples. You will use the Code Sample to run the model.
 
 ## Prediction service code
 
@@ -20,9 +20,9 @@ Get the code for the prediction service:
 
 7. Ensure the `output` is set to `NY-bikes-prediction`.
 
-8. Click `Save as project`.
+8. Click `Save as Application`.
 
-    This will save the code for this service to your workspace.
+    This will save the code for this service to your environment.
 
 !!! note "Free Models"
 
diff --git a/docs/platform/tutorials/data-science/6-conclusion.md b/docs/platform/tutorials/data-science/6-conclusion.md
index 6dafe7ac..c55b5b50 100644
--- a/docs/platform/tutorials/data-science/6-conclusion.md
+++ b/docs/platform/tutorials/data-science/6-conclusion.md
@@ -18,7 +18,7 @@ Here are some suggested next steps to continue on your Quix learning journey:
 
 * If you decide to build your own connectors and apps, you can contribute something to the Code Samples. Visit the [GitHub Code Samples repository](https://github.com/quixio/quix-samples){target=_blank}. Fork our Code Samples repo and submit your code, updates, and ideas.
 
-What will you build?
Let us know! Quix would like to feature your project or use case in our [newsletter](https://www.quix.io/community/){target=_blank}. +What will you build? Let us know! Quix would like to feature your application or use case in our [newsletter](https://www.quix.io/community/){target=_blank}. ## Getting help diff --git a/docs/platform/tutorials/data-science/index.md b/docs/platform/tutorials/data-science/index.md index ec8ad249..227cd044 100644 --- a/docs/platform/tutorials/data-science/index.md +++ b/docs/platform/tutorials/data-science/index.md @@ -1,12 +1,12 @@ # Real-time Machine Learning (ML) predictions -In this tutorial you will learn how to deploy a real-time **data science** project into a scalable self-maintained solution. You create a service that predicts bicycle availability in New York, by building the raw data ingestion pipelines, Extract Transform Load (ETL), and predictions. +In this tutorial you will learn how to deploy a real-time **data science** application into a scalable self-maintained solution. You create a service that predicts bicycle availability in New York, by building the raw data ingestion pipelines, Extract Transform Load (ETL), and predictions. ## Aim The Quix Platform enables you to harness complex, efficient real-time infrastructure in a quick and simple way. You are going to build an application that uses real-time New York bicycle data and weather data to predict the future availability of bikes in New York. -You will complete all the typical phases of a data science project: +You will complete all the typical phases of a data science application: - Build pipelines to gather bicycle and weather data. 
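The prediction step in this tutorial loads trained artifacts from pickle files shipped alongside the application code. The pattern can be sketched as follows; the artifact contents and the toy formula below are invented for illustration and are not the sample's real model:

``` python
import pickle

# Stand-in "trained artifact": a dict of linear coefficients, serialized
# the way a real model would be saved to a .pkl file (values invented).
coefficients = {"intercept": 40.0, "temp": 1.0, "rain_penalty": -15.0}
artifact = pickle.dumps(coefficients)

# The prediction service loads the artifact once at startup...
model = pickle.loads(artifact)


def predict_bikes(temp_c: float, raining: bool) -> int:
    """...and applies it to incoming weather features (toy formula)."""
    value = model["intercept"] + model["temp"] * temp_c
    if raining:
        value += model["rain_penalty"]
    return max(0, round(value))


print(predict_bikes(20.0, raining=False))  # prints 60
```

In the deployed service, the loaded model would be applied to each incoming message and the prediction published to the output topic; here the load-then-predict pattern is all that is shown.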
diff --git a/docs/platform/tutorials/data-stream-processing/data-stream-processing.md b/docs/platform/tutorials/data-stream-processing/data-stream-processing.md
index cb9f8889..d7b6d853 100644
--- a/docs/platform/tutorials/data-stream-processing/data-stream-processing.md
+++ b/docs/platform/tutorials/data-stream-processing/data-stream-processing.md
@@ -20,11 +20,11 @@ By the end you will have:
 
 If you need any help, please sign up to the [Quix community forum](https://forum.quix.io/){target=_blank}.
 
-## Project Architecture
+## Application architecture
 
 ![The demo's architecture](architecture.png)
 
-The solution has 3 main elements:
+The solution has three main elements:
 
 - Two services to process data
 
@@ -32,7 +32,11 @@ However, this is all running with the Quix Serverless environment.
 
-You have to create and deploy 3 projects, we have: . Created an always on high performance back-end . Created APIs and Services focused on performance . Opened firewall ports and optimized DNS propagation
+You have to create and deploy three applications. Quix has:
+
+1. Created an always-on, high-performance back-end
+2. Created APIs and services focused on performance
+3. Opened firewall ports and optimized DNS propagation
 
 ## Prerequisites
 
@@ -55,7 +59,7 @@ This walk through covers the following:
 
 ## Getting Started
 
-Login to Quix and open your Workspace, you get one workspace on the free tier, more on higher tiers. A Quix Workspace is a container to help you manage all the data, topics, models and services related to a single solution so we advise using a new, clean one for this tutorial.
+Log in to Quix and open your project. You get one project on the free tier, and more on higher tiers. A Quix project is a container that helps you manage all the data, topics, models, and services related to a single solution, so we advise using a new, clean one for this tutorial.
 
 ### Code Samples
 
@@ -65,17 +69,17 @@ Navigate to `Code Samples` and search for `Streaming Demo`.
You will see 3 resul
 
 ![Code Samples search results](library-items.png)
 
-You will save the code for each of these to your workspace and deploy the two services and the UI.
+You will save the code for each of these to your environment and deploy the two services and the UI.
 
 ### Input
 
-First you will select, build and deploy the input project, this project handles and transforms data from your phone.
+First you will select, build, and deploy the input application, which handles and transforms data from your phone.
 
-Don't worry, all the code you'll need is in the `Streaming Demo - Input` project.
+Don't worry, all the code you'll need is in the `Streaming Demo - Input` application.
 
 #### Save the code
 
-Follow these steps to get the code and deploy the project as a microservice.
+Follow these steps to get the code and deploy the application as a microservice.
 
 1. Click the tile
 
 2. Click `Edit code`
 
@@ -84,11 +88,11 @@ Follow these steps to get the code and deploy the project as a microservice.
 
     Leave the name, input and output as they are.
 
-    The input and output values are [Topics](../../glossary.md#topics). These have been pre-configured in this and the other projects in this tutorial to allow the services to communicate with each other.
+    The input and output values are [Topics](../../glossary.md#topics). These have been pre-configured in this and the other applications in this tutorial to allow the services to communicate with each other.
 
-3. Click `Save as project`
+3. Click `Save as Application`
 
-    This will save a copy of this code to your workspace.
+    This will save a copy of this code to your environment.
 
 ???- info "About the code"
     The code's main purpose is to listen for and respond in real-time to data and events being streamed to it via the `gamedata` topic.
 
@@ -103,7 +107,7 @@ Follow these steps to get the code and deploy the project as a microservice.
 
 4. Tag this version of the code by clicking the small tag icon at the top of the code window.
- - Type `v1.0` into the input box + - Type `game-v1.0` into the input box - Hit enter Next you will deploy the code as a microservice. @@ -114,7 +118,7 @@ To deploy a microservice in Quix is a very simple three step process and step on 1. Click `Deploy` near the top right hand corner of the screen. -2. Select the `v1.0` tag you created earlier +2. Select the `game-v1.0` tag you created earlier 3. Click `Deploy` @@ -134,15 +138,15 @@ To deploy a microservice in Quix is a very simple three step process and step on Now that you have the first service up and running it's time for the next one. -Follow the same process as above and deploy the `Streaming Demo - Control` project. +Follow the same process as above and deploy the `Streaming Demo - Control` application. Remember the steps are: 1. Search the Code Samples for `Streaming Demo` -2. Select the `Streaming Demo - Control` project +2. Select the `Streaming Demo - Control` application -3. Save it to your workspace +3. Save it to your environment. 4. Tag it @@ -171,7 +175,7 @@ You should be familiar with the process by now. 2. Select the `Streaming Demo - UI` tile. -3. Save it to your workspace +3. Save it to your environment 4. Tag it @@ -194,13 +198,13 @@ You should be familiar with the process by now. !!! success - 🚀 You deployed the UI to your workspace and can now proceed to the fun part of the tutorial + 🚀 You deployed the UI to your environment and can now proceed to the fun part of the tutorial ???- info "About the code" This UI code is Javascript and HTML, it displays the track and car and subscribes to data coming from the topics to keep the car where it's supposed to be or at least where you drive it! - The most relevant part of the code is where websockets are used via Microsoft's [SignalR](https://dotnet.microsoft.com/en-us/apps/aspnet/signalr){target=_blank}. 
+    The most relevant part of the code is where WebSockets are used via Microsoft's [SignalR](https://dotnet.microsoft.com/en-us/apps/aspnet/signalr){target=_blank}.
 
     For example these lines subscribe to various parameter values on the `car-game-control` topic.
 
diff --git a/docs/platform/tutorials/event-detection/conclusion.md b/docs/platform/tutorials/event-detection/conclusion.md
index a868e824..5f1065ec 100644
--- a/docs/platform/tutorials/event-detection/conclusion.md
+++ b/docs/platform/tutorials/event-detection/conclusion.md
@@ -10,7 +10,7 @@ Here are some suggested next steps to continue on your Quix learning journey:
 
 * Build a pipeline to perform [real-time sentiment analysis](../sentiment-analysis/index.md) on text, including high volume messages from Twitter.
 
-What will you build? Let us know! We’d love to feature your project or use case in our [newsletter](https://www.quix.io/community/).
+What will you build? Let us know! We’d love to feature your application or use case in our [newsletter](https://www.quix.io/community/).
 
 ## Getting help
 
diff --git a/docs/platform/tutorials/event-detection/crash-detection-ui.md b/docs/platform/tutorials/event-detection/crash-detection-ui.md
index d6a5b5b0..8470c18a 100644
--- a/docs/platform/tutorials/event-detection/crash-detection-ui.md
+++ b/docs/platform/tutorials/event-detection/crash-detection-ui.md
@@ -8,13 +8,13 @@ The UI you will deploy is shown in the following screenshot:
 
 ## Deploying the UI
 
-The following steps demonstrate how to select the UI from the Code Samples and deploy it to your Quix workspace.
+The following steps demonstrate how to select the UI from the Code Samples and deploy it to your Quix environment.
 
 Follow these steps to deploy the prebuilt UI:
 
 1. Navigate to `Code Samples` and search for `Event Detection Demo UI`.
 
-2. Click the `Setup & deploy` button.
+2. Click the `Deploy` button.
 
 3. Ensure that the `topic` input box contains `phone-data`.
@@ -25,7 +25,7 @@ Follow these steps to deploy the prebuilt UI: This topic will be subscribed to and will contain any events generated by the crash event detection service you deployed earlier. 5. Click `Deploy` and wait while the UI is deployed and started. - You will be redirected to your workspace homepage once it's completed. + You will be redirected to your environment homepage once it's completed. !!! success diff --git a/docs/platform/tutorials/event-detection/crash-detection.md b/docs/platform/tutorials/event-detection/crash-detection.md index ed2ace10..58465fa1 100644 --- a/docs/platform/tutorials/event-detection/crash-detection.md +++ b/docs/platform/tutorials/event-detection/crash-detection.md @@ -22,9 +22,9 @@ Follow these steps to create the event detection service: 6. Enter `phone-out` into the output field. -7. Click `Save as project`. +7. Click `Save as Application`. -You now have the basic template for the service saved to your workspace. +You now have the basic template for the service saved to your environment. ## Test the template @@ -216,7 +216,7 @@ You can once again run the code in the development environment to test the funct 5. Observe the `Console` tab. You should see a message saying "Crash detected". -4. On the `Messages` tab select `output : phone-out` from the first drop-down. +4. On the `Messages` tab select `output : phone-out` from the first dropdown. 5. Gently shake your phone, or wait for another crash event from the CSV data, and observe that crash events are streamed to the output topic. You can click these rows to investigate the event data, for example: @@ -240,16 +240,16 @@ Now that you have verified the service is working you can go ahead and deploy th 1. Tag the code by clicking the `add tag` icon at the top of the code panel. -2. Enter a tag such as `v1`. +2. Enter a tag such as `crash-v1`. 3. Now click the `Deploy` button near the top right of the code panel. -4. 
From the `Version tag` drop-down, select the tag you created. +4. From the `Version tag` dropdown, select the tag you created. 5. Click `Deploy`. !!! success - You now have a data source and the crash detection service running in your workspace. + You now have a data source and the crash detection service running in your environment. Next you’ll deploy a real-time UI to visualize the route being taken, the location of any crash events and also to see some of the sensor data. diff --git a/docs/platform/tutorials/event-detection/data-acquisition.md b/docs/platform/tutorials/event-detection/data-acquisition.md index 2d96b470..44914eda 100644 --- a/docs/platform/tutorials/event-detection/data-acquisition.md +++ b/docs/platform/tutorials/event-detection/data-acquisition.md @@ -26,7 +26,7 @@ To add an external source: 3. Locate the `External Source` sample and click `Add external source`. -4. Enter `phone-data` in the `Output` field and click `Add new topic` in the drop-down. +4. Enter `phone-data` in the `Output` field and click `Add new topic` in the dropdown. 5. Enter `Quix companion web gateway` in the `Name` field. @@ -34,7 +34,7 @@ To add an external source: ### Install and configure the apps -To stream data from your phone you’ll need to install the `Quix Companion App` on your Android phone and deploy the QR Settings Share app to your Quix workspace. +To stream data from your phone you’ll need to install the `Quix Companion App` on your Android phone and deploy the QR Settings Share app to your Quix environment. Follow these steps: @@ -58,7 +58,7 @@ Follow these steps: 8. Navigate to the `Code Samples` and search for `QR Settings Share`. -9. Click `Setup & deploy`. +9. Click `Deploy`. 10. Paste the token into the `token` field. @@ -98,7 +98,7 @@ Follow these steps: 20. Click the `START` button. - This will open a connection to your Quix workspace and start streaming data from your phone. 
+    This will open a connection to your Quix environment and start streaming data from your phone.
 
 ### Verify the live data
 
@@ -119,7 +119,7 @@ Follow these steps to ensure that everything is working as expected:
 
 7. Move or gently shake your phone and notice that the waveform reflects whatever movement your phone is experiencing.
 
 !!! success
-    You have connected the Quix Companion App to your workspace and verified the connection using the Live Data Explorer.
+    You have connected the Quix Companion App to your environment and verified the connection using the Live Data Explorer.
 
 ## CSV data
 
@@ -137,7 +137,7 @@ Follow these instructions to deploy the data source:
 
 5. Change the `output` field to `phone-data`.
 
-6. Click `Save as project`.
+6. Click `Save as Application`.
 
 7. Open the `requirements.txt` file and add `urllib3` to a new line.
 
diff --git a/docs/platform/tutorials/image-processing/add-service.md b/docs/platform/tutorials/image-processing/add-service.md
new file mode 100644
index 00000000..8ce61473
--- /dev/null
+++ b/docs/platform/tutorials/image-processing/add-service.md
@@ -0,0 +1,235 @@
+# 👩‍🔬 Lab - Add a new service
+
+In this lab you use everything you've learned so far to add a new service to the pipeline. Specifically, you add a service to publish the number of cars captured by the TfL cams to a new topic. You will then observe the number of cars change in real time using the waveform view of the Quix Data Explorer. This service could be useful if you want to easily store the number of cars, or perhaps create an alarm if the number of cars rises above a certain threshold. This service is a simple example of filtering, where you filter out data you are not interested in for subsequent processing.
+
+You develop this service on a feature branch, and then you create a PR to merge your new feature into the develop branch.
This is a common pattern for development - you can test your new service on the feature branch, and then test again on the develop branch, before final integration into the production `main` branch. + +## Create an environment + +To create a new environment (and branch): + +1. Click `+ New environment` to create a new environment (note, your screen will look slightly different to the one shown here): + + ![New environment](./images/new-environment.png) + +2. Create a new environment called `Cars Only`. + +3. Create a new branch called `cars-only`. To do this, from the branch dropdown click `+ New branch` which displays the New branch dialog: + + ![New branch](./images/new-branch.png) + + !!! important + + Make sure you branch from the `develop` branch, not `main`, as you are going to merge your changes onto the `develop` branch. + +4. Complete creation of the environment using the default options. + +5. On the projects screen, click your newly created environment, `Cars Only`. + +## Sync the environment + +You now see that the Quix environment is out of sync with the Git repository. You need to synchronize the Quix view of the environment, with that stored in the repository. + +To synchronize Quix with the repository: + +1. Click `Sync environment`: + + ![Sync environment](./images/sync-environment.png) + + The sync environment dialog is displayed, showing you the changes that are to be made to the `quix.yaml` file, which is the configuration file that defines the pipeline. + +2. Click `Sync environment`, and then `Go to pipeline`. + + In the pipeline view, you see the services building. Ensure all services are "Running" before continuing. + +## Add a transform + +You now add a transform to the output of the stream merge service. 
This is a convenient point, as the multiple streams are now merged to one stream (all cameras are merged into one stream), and this will make viewing the number of cars easier in the waveform view of the data explorer, as there is only one stream to examine. + +To create the transform: + +1. Click the small `+` on the output of the stream merge service, and then select `Transformation` from the dropdown list. + +2. Click `Preview code` for the `Starter transformation` in the Code Samples view. + +3. Click `Edit code`, and enter an application name of `Cars Only` and leave the path as the default, then click `Save`. + +4. Replace the complete `main.py` code with the following: + + ``` python + import quixstreams as qx + import os + import pandas as pd + import datetime + + client = qx.QuixStreamingClient() + + topic_consumer = client.get_topic_consumer(os.environ["input"], consumer_group = "empty-transformation") + topic_producer = client.get_topic_producer(os.environ["output"]) + + def on_dataframe_received_handler(stream_consumer: qx.StreamConsumer, df: pd.DataFrame): + d = df.to_dict() + if 'car' in d: + # Create a clean data frame + data = qx.TimeseriesData() + data.add_timestamp(datetime.datetime.utcnow()) \ + .add_value("Cars", d['car'][0]) + + stream_producer = topic_producer.get_or_create_stream(stream_id = stream_consumer.stream_id) + stream_producer.timeseries.buffer.publish(data) + + def on_stream_received_handler(stream_consumer: qx.StreamConsumer): + stream_consumer.timeseries.on_dataframe_received = on_dataframe_received_handler + + # subscribe to new streams being received + topic_consumer.on_stream_received = on_stream_received_handler + + print("Listening to streams. Press CTRL-C to exit.") + + # Handle termination signals and provide a graceful exit + qx.App.run() + ``` + + ??? example "Understand the code" + + The code is a little different to the starter transform. 
The handler for event data has been removed, along with its registration code, as you are only interested in time series data in this transform. This time series data is received in a pandas dataframe format. For ease of manipulation this is converted to a Python dictionary, so the car data can be simply extracted. + + If you want to check the format of the message processed here, you can use the message view for the stream merge service output, or the Data Explorer message view, to examine it in great detail. You will see something similar to the following: + + ``` json + { + "Epoch": 0, + "Timestamps": [ + 1694788651367069200 + ], + "NumericValues": { + "truck": [ + 1 + ], + "car": [ + 3 + ], + "lat": [ + 51.55164 + ], + "lon": [ + -0.01853 + ], + "delta": [ + -0.43226194381713867 + ] + }, + "StringValues": { + "image": [ + "iVBOR/snip/QmCC" + ] + }, + "BinaryValues": {}, + "TagValues": { + "parent_streamId": [ + "JamCams_00002.00820" + ] + } + } + ``` + + A new pandas dataframe is then created, as the data published to the output topic is only going to consist of a timestamp and the number of cars on it. This is an example of simple filtering. + + Once prepared, the dataframe is then published to the output topic. + +5. Edit environment variables, so that the input topic is `image-processed-merged` and the output topic is a new topic called `cars-only`, as shown in the following screenshot: + + ![Edit environment variables](./images/edit-env-variables.png) + + !!! tip + + These environment variables are used by the code. For example, the input topic is read by the code with the Python code `os.environ["input"]`. + +6. Click the tag icon (see screenshot), and give the code a tag such as `cars-only-v1`: + + ![Tag icon](./images/tag.png) + +7. Click the `Deploy` button and select the version tag `cars-only-v1` from the `Version tag` dropdown, and leaving all other values at their defaults, click `Deploy`. 
+
+## View the data in real time
+
+You now use the Quix Data Explorer to view the cars data in real time.
+
+1. In the left-hand navigation, click `Data explorer`.
+
+    !!! tip
+
+        While this is the most direct way to access the Data Explorer, it's not the only way. You learn about other methods in other tutorials. You can, for example, click on the topic you want to view in the pipeline view, and then select `View live data` - that takes you into the Data Explorer.
+
+2. Click `Live data` and make sure the `cars-only` topic is selected.
+
+3. Check the `image-feed` stream checkbox, and also the `Cars` parameter data checkbox.
+
+4. Make sure Waveform view is selected.
+
+    !!! tip
+
+        If no data is visible, stop and start the TfL Camera Feed service, as it may be sleeping.
+
+    You see the waveform showing the number of cars detected:
+
+    ![Cars waveform](./images/cars-waveform.png)
+
+## Merge the feature
+
+Once the changes on your feature branch are tested, you can merge them into the develop branch. Here your changes undergo further tests before finally being merged into production.
+
+To merge your feature branch `cars-only` into `develop`:
+
+1. Select `Merge request` from the menu as shown:
+
+    ![Merge request menu](./images/merge-request-menu.png)
+
+2. In the `Merge request` dialog, set the `cars-only` branch to merge into the `develop` branch, as shown:
+
+    ![Merge request dialog](./images/merge-request-dialog.png)
+
+You are going to create a pull request, rather than perform a direct merge. This enables you to have the PR reviewed in GitHub (or another Git provider). You are also going to do a squash and merge, as much of the feature branch history is not required.
+
+To create the pull request:
+
+1. Click `Create pull request`. You are taken to your Git provider, in this case GitHub.
+
+2. Click the `Pull request` button:
+
+    ![Pull request GitHub](./images/pull-request-github.png)
+
+3. 
Add your description, and then click `Create pull request`:
+
+    ![Pull request description](./images/pr-add-description.png)
+
+4. Get your PR reviewed and approved. Then squash and merge the commits:
+
+    ![Squash and merge](./images/squash-and-merge.png)
+
+    You can replace the prefilled description with something more succinct. Then click `Confirm squash and merge`.
+
+    !!! tip
+
+        You don't have to squash and merge - you can simply merge, which retains the complete commit history for your service while it was being developed. Squash and merge is used here by way of example, as the commit messages generated during development were not considered useful.
+
+## Resync the Develop environment
+
+You have now merged your new feature into the `develop` branch in the Git repository. Your Quix view in the Develop environment is now out of sync with the Git repository. If you click on your Develop environment in Quix, you'll see it is now a commit (the merge commit) behind:
+
+![Develop behind](./images/develop-behind.png)
+
+You now need to make sure your Develop environment in Quix is synchronized with the Git repository. To do this:
+
+1. Click on `Sync environment`. The `Sync environment` dialog is displayed.
+
+2. Review the changes and click `Sync environment`.
+
+3. Click `Go to pipeline`.
+
+Your new service will build and start in the Develop environment, where you can now carry out further testing. When you are satisfied that this feature can be released to production, repeat the previous process to merge your changes into the production branch, `main`. 
+ +## 🏃‍♀️ Next step + +[Part 8 - Summary :material-arrow-right-circle:{ align=right }](summary.md) + diff --git a/docs/platform/tutorials/image-processing/connect-video-tfl.md b/docs/platform/tutorials/image-processing/connect-video-tfl.md deleted file mode 100644 index 806f29e4..00000000 --- a/docs/platform/tutorials/image-processing/connect-video-tfl.md +++ /dev/null @@ -1,21 +0,0 @@ -# 4. Connect the TfL video feeds - -In this part of the tutorial you connect your pipeline to the TfL traffic cam video feeds. - -Follow these steps to deploy the **traffic camera feed service**: - -1. Navigate to the `Code Samples` and locate `TfL Camera Feed`. - -2. Click `Deploy`. - -3. Paste your TfL API Key into the appropriate input. - -4. Click `Deploy` again. - - Deploying will start the service in the Quix pre-provisioned infrastructure. This service will stream data from the TfL cameras to the `tfl-cameras` topic. - - At this point your pipeline view has one service deployed. When it has started the arrow pointing out of the service will be green. This indicates that data is flowing out of the service into a topic. Now, you need to deploy something to consume the data that is streaming into that topic. - -5. Once deployed successfully, stop the service. You will restart it later, but for now it can be stopped. - -[Part 5 - Frame grabber :material-arrow-right-circle:{ align=right }](tfl-frame-grabber.md) diff --git a/docs/platform/tutorials/image-processing/connect-video-webcam.md b/docs/platform/tutorials/image-processing/connect-video-webcam.md deleted file mode 100644 index 962402db..00000000 --- a/docs/platform/tutorials/image-processing/connect-video-webcam.md +++ /dev/null @@ -1,25 +0,0 @@ -# 1. Connect the webcam video feed - -In this part of the tutorial you connect your webcam video feed. - -Follow these steps to deploy the **webcam service**: - -1. Navigate to the Samples and locate `Image processing - Webcam input`. - -2. Click `Deploy`. - -3. 
Once again, click `Deploy`. - - This service will stream data from your webcam to the `image-base64` topic. - -4. Click the `Public URL` icon, in the webcam service tile. - - ![image processing web UI](./images/webcam-public-url.png) - - This opens the deployed website which uses your webcam to stream images to Quix. - - !!! note - - Your browser may prompt you to allow access to your webcam. You can allow access. - -[Part 2 - Decode images :material-arrow-right-circle:{ align=right }](decode.md) diff --git a/docs/platform/tutorials/image-processing/decode.md b/docs/platform/tutorials/image-processing/decode.md deleted file mode 100644 index 70bf4f11..00000000 --- a/docs/platform/tutorials/image-processing/decode.md +++ /dev/null @@ -1,114 +0,0 @@ -# 2. Decode images - -In this part of the tutorial you decode the base64 encoded images coming from the webcam. - -## Create the base64 decoder service - -Follow these steps to deploy the **base64 decoder service**: - -1. Navigate to the `Code Samples` and locate the Python `Starter transformation`. - - !!! tip - You can use the filters on the left hand side to select `Python` and `Transformation` then select `Starter transformation` in the resulting filtered items. - -2. Click `Edit code`. - -3. Enter `base64 decoder service` as the name for the project. - -4. Select or enter `image-base64` for the input. - -5. Select or enter `image-raw` for the output. - -6. Click `Save as project`. - - -## Update the code - -The code is now saved to your workspace and you can edit it to perform any actions you need it to. - -Using the following steps, update the default code so it decodes the web cam images being received on the `image-base64` topic. Then publish the decoded images to the `image-raw` topic. - -1. Add `import base64` to the imports at the top of `main.py` - -2. Update the `on_dataframe_received_handler` method by adding the following line to base64 decode the images. 
- - ```py - df['image'] = df["image"].apply(lambda x: base64.b64decode(x)) - ``` - - This should go immediately before this line: - - ```py - stream_producer.timeseries.buffer.publish(df) - ``` - -???- note "The completed `main.py` should look like this" - - ```py - import quixstreams as qx - import os - import pandas as pd - import base64 # Added import (1) - - client = qx.QuixStreamingClient() - - topic_consumer = client.get_topic_consumer(os.environ["input"], consumer_group = "empty-transformation") - topic_producer = client.get_topic_producer(os.environ["output"]) - - - def on_dataframe_received_handler(stream_consumer: qx.StreamConsumer, df: pd.DataFrame): - - # Transform data frame here in this method. You can filter data or add new features. - # Pass modified data frame to output stream using stream producer. - # Set the output stream id to the same as the input stream or change it, - # if you grouped or merged data with different key. - stream_producer = topic_producer.get_or_create_stream(stream_id = stream_consumer.stream_id) - df['image'] = df["image"].apply(lambda x: base64.b64decode(x)) # Added code (2) - stream_producer.timeseries.buffer.publish(df) - - - # Handle event data from samples items that emit event data - def on_event_data_received_handler(stream_consumer: qx.StreamConsumer, data: qx.EventData): - print(data) - # handle your event data here - - - def on_stream_received_handler(stream_consumer: qx.StreamConsumer): - # subscribe to new DataFrames being received - # if you aren't familiar with DataFrames there are other callbacks available - # refer to the docs here: https://docs.quix.io/client-library/subscribe.html - stream_consumer.events.on_data_received = on_event_data_received_handler # register the event data callback - stream_consumer.timeseries.on_dataframe_received = on_dataframe_received_handler - - - # subscribe to new streams being received - topic_consumer.on_stream_received = on_stream_received_handler - - print("Listening to 
streams. Press CTRL-C to exit.") - - # Handle termination signals and provide a graceful exit - qx.App.run() - ``` - - 1. Import base64 which will be used to decode the images - 2. Call `base64.b64decode` and store the resulting data in the dataframe - - -## Deploy - -Now it's time to deploy this microservice. - -Follow these steps: - -1. Tag the code by clicking `add tag` at the top of the code panel. Enter `v1.0` for your tag. - -1. Click `Deploy` near the top right hand corner of the screen. - -2. Select the `v1.0` from the verison tag drop down. - -3. Click `Deploy`. - - You will be redirected to the homepage and the code will be built and deployed and your microservice will be started. - - -[Part 3 - Object detection :material-arrow-right-circle:{ align=right }](object-detection.md) \ No newline at end of file diff --git a/docs/platform/tutorials/image-processing/get-project.md b/docs/platform/tutorials/image-processing/get-project.md new file mode 100644 index 00000000..1832665e --- /dev/null +++ b/docs/platform/tutorials/image-processing/get-project.md @@ -0,0 +1,135 @@ +# Get the project + +While you can try out the live demo, or experiment using the ungated product experience, it can be useful to learn how to get a project up and running in Quix. + +Once you have the project running in your Quix account, you can modify the project as required, and save your changes to your forked copy of the project. With a forked copy of the repository, you can also receive upstream bug fixes and improvements if you want to, by syncing the fork with the upstream repository. + +In the following sections you learn how to: + +1. Fork an existing project repository, in this case the image processing template project. +2. Create a new project (and environment) in Quix linked to your forked repository. 
+
+In later parts of the tutorial you explore the project pipeline using the Quix data explorer and other tools, viewing code, examining data structures, and getting a practical feel for the Quix Portal.
+
+## 💡 Key ideas
+
+The key ideas on this page:
+
+* Forking a public template project repository
+* Connecting Quix to an external Git repository, in this case the forked repository
+* Quix projects, environments, and applications
+* Pipeline view of project
+* Synchronizing an environment
+
+## Fork the project repository
+
+Quix provides the image processing template project as a [public GitHub repository](https://github.com/quixio/computer-vision-demo){target="_blank"}. If you want to use this template as a starting point for your own project, then the best way to accomplish this is to fork the project. Forking allows you to create a complete copy of the project, while also benefiting from future bug fixes and improvements through upstream changes.
+
+To fork the repository:
+
+1. Navigate to the [Quix GitHub repository](https://github.com/quixio/computer-vision-demo){target="_blank"}.
+
+2. Click the `Fork` button to fork the repo into your GitHub account (or equivalent Git provider if you don't have a GitHub account). Make sure you fork all branches, as you will be looking at the `develop` branch.
+
+    !!! tip
+
+        If you don't have a GitHub account you can use another Git provider, such as GitLab or Bitbucket. If using Bitbucket, for example, you could import the repository - this would act as a clone (a static snapshot) of the repository. This is a simple option for Bitbucket, but you would not receive upstream changes from the original repository once it has been imported. You would, however, have a copy of the project that you could then modify to suit your use case. Other providers support similar options; check the documentation for your Git provider. 
+
+## Create your Quix project
+
+Now that you have a forked copy of the repository in your GitHub account, you can link your Quix account to it. Doing this enables you to build and deploy the project in your Quix account, and examine the pipeline much more closely.
+
+To link Quix to this forked repository:
+
+1. Log into your Quix account.
+
+2. Click `+ Create project`.
+
+3. Give your project a name. For example, "Computer Vision".
+
+4. Select `Connect to your own Git repo`, and follow the setup guide for your provider.
+
+    !!! tip
+
+        A setup guide is provided for each of the common Git providers. Other Git providers are supported, as long as they support SSH keys.
+
+    The setup guide for GitHub is shown here:
+
+    ![Git setup guide](../../images/git-setup-guide.png)
+
+5. Assuming you are connecting to a GitHub account, you'll now need to copy the SSH key provided by Quix into your GitHub account. See the setup guide for further details.
+
+    !!! important
+
+        It is recommended that you create a new user in your Git provider for managing your Quix projects. You are reminded of this when you create a project (the notice is shown in the following screenshot).
+
+        ![Create new user](../../images/create-new-github-user.png)
+
+
+6. Click `Validate` to test the connection between Quix and GitHub.
+
+    !!! tip
+
+        If errors occur you need to address them before continuing. For example, make sure you have the correct link to the repository, and that you have added the provided SSH key to your provider account, as outlined in the setup guide for that provider.
+
+7. Click `Done` to proceed.
+
+You now need to add an environment to your project. This is explained in the following section.
+
+## Create your Develop environment
+
+A Quix project contains at least one branch. For the purposes of this tutorial you will examine the `develop` branch of the project. In a Quix project, a branch is encapsulated in an environment. 
You'll now create a `Develop` environment, mapped to the `develop` branch of the repository:
+
+1. Enter the environment name `Develop`.
+
+2. Select the `develop` branch from the dropdown.
+
+    Make sure the branch is protected, as shown in the following screenshot:
+
+    ![Protected branch](./images/protected-branch.png){width=70%}
+
+    !!! tip
+
+        Making a branch protected ensures that developers cannot commit directly into the branch. Developers have to raise pull requests (PRs), which need to be approved before they can be merged into the protected branch.
+
+3. Click `Continue` and then select the Quix Broker and Standard storage options to complete creation of the environment, and the project.
+
+4. Go to the pipeline view. You will see that Quix is out of sync with the repository.
+
+5. Click the `Sync` button to synchronize the environment, and then click `Go to pipeline`. You will see the pipeline building.
+
+At this point you can wait a few minutes for the pipeline services to completely build and start running.
+
+## Configure credentials
+
+As some services in the project require API credentials, you'll now need to configure your details for these services.
+
+### TfL camera feed
+
+Open the service and edit the environment variable as shown here:
+
+![TfL credentials](./images/tfl-credentials.png){width=60%}
+
+### Web UI service
+
+When testing the UI you might find Google Maps does not load correctly for you - this is because the code contains the Quix Google Maps API key. To work around this, you can set the Google Maps API key to an empty string, and then enable "developer mode" in your browser - the maps then display correctly. 
+
+To set the Google Maps API key to an empty string, you need to edit `app.module.ts` and modify the `apiKey` field in `AgmCoreModule.forRoot` to the following:
+
+``` typescript
+AgmCoreModule.forRoot({
+    apiKey: ''
+  }),
+```
+
+Other optional services may require similar configuration. For example, the Quix Amazon S3 connector service requires your S3 credentials if you want to use it.
+
+## See also
+
+If you are new to Quix it is worth reviewing the [recent changes page](../../changes.md), as it contains very useful information about significant recent changes, and also has a number of useful videos you can watch to gain familiarity with Quix.
+
+## 🏃‍♀️ Next step
+
+[Part 2 - TfL camera feed :material-arrow-right-circle:{ align=right }](tfl-camera-feed.md)
diff --git a/docs/platform/tutorials/image-processing/images/cars-waveform.png b/docs/platform/tutorials/image-processing/images/cars-waveform.png
new file mode 100644
index 00000000..0cfcada3
Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/cars-waveform.png differ
diff --git a/docs/platform/tutorials/image-processing/images/detected-objects.png b/docs/platform/tutorials/image-processing/images/detected-objects.png
new file mode 100644
index 00000000..526e540b
Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/detected-objects.png differ
diff --git a/docs/platform/tutorials/image-processing/images/develop-behind.png b/docs/platform/tutorials/image-processing/images/develop-behind.png
new file mode 100644
index 00000000..f306b131
Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/develop-behind.png differ
diff --git a/docs/platform/tutorials/image-processing/images/edit-env-variables.png b/docs/platform/tutorials/image-processing/images/edit-env-variables.png
new file mode 100644
index 00000000..f2fe4f9a
Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/edit-env-variables.png differ
diff --git 
a/docs/platform/tutorials/image-processing/images/external-link.png b/docs/platform/tutorials/image-processing/images/external-link.png new file mode 100644 index 00000000..53e0f678 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/external-link.png differ diff --git a/docs/platform/tutorials/image-processing/images/merge-request-dialog.png b/docs/platform/tutorials/image-processing/images/merge-request-dialog.png new file mode 100644 index 00000000..4f74759c Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/merge-request-dialog.png differ diff --git a/docs/platform/tutorials/image-processing/images/merge-request-menu.png b/docs/platform/tutorials/image-processing/images/merge-request-menu.png new file mode 100644 index 00000000..a3867ae3 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/merge-request-menu.png differ diff --git a/docs/platform/tutorials/image-processing/images/new-branch.png b/docs/platform/tutorials/image-processing/images/new-branch.png new file mode 100644 index 00000000..d6b0f1aa Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/new-branch.png differ diff --git a/docs/platform/tutorials/image-processing/images/new-environment.png b/docs/platform/tutorials/image-processing/images/new-environment.png new file mode 100644 index 00000000..34fd5fc0 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/new-environment.png differ diff --git a/docs/platform/tutorials/image-processing/images/object-detection-code.png b/docs/platform/tutorials/image-processing/images/object-detection-code.png new file mode 100644 index 00000000..16c0bb95 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/object-detection-code.png differ diff --git a/docs/platform/tutorials/image-processing/images/object-detection-logs.png b/docs/platform/tutorials/image-processing/images/object-detection-logs.png new file mode 100644 
index 00000000..729a3f69 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/object-detection-logs.png differ diff --git a/docs/platform/tutorials/image-processing/images/object-detection-pipeline-segment.png b/docs/platform/tutorials/image-processing/images/object-detection-pipeline-segment.png new file mode 100644 index 00000000..94404e83 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/object-detection-pipeline-segment.png differ diff --git a/docs/platform/tutorials/image-processing/images/other-services-pipeline-segment.png b/docs/platform/tutorials/image-processing/images/other-services-pipeline-segment.png new file mode 100644 index 00000000..06c0a903 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/other-services-pipeline-segment.png differ diff --git a/docs/platform/tutorials/image-processing/images/pipeline-overview-1.png b/docs/platform/tutorials/image-processing/images/pipeline-overview-1.png new file mode 100644 index 00000000..53f8d19e Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/pipeline-overview-1.png differ diff --git a/docs/platform/tutorials/image-processing/images/pipeline-overview-2.png b/docs/platform/tutorials/image-processing/images/pipeline-overview-2.png new file mode 100644 index 00000000..fba72a2a Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/pipeline-overview-2.png differ diff --git a/docs/platform/tutorials/image-processing/images/pipeline-overview.png b/docs/platform/tutorials/image-processing/images/pipeline-overview.png deleted file mode 100644 index 3a506907..00000000 Binary files a/docs/platform/tutorials/image-processing/images/pipeline-overview.png and /dev/null differ diff --git a/docs/platform/tutorials/image-processing/images/pr-add-description.png b/docs/platform/tutorials/image-processing/images/pr-add-description.png new file mode 100644 index 00000000..a38e7eed Binary files 
/dev/null and b/docs/platform/tutorials/image-processing/images/pr-add-description.png differ diff --git a/docs/platform/tutorials/image-processing/images/protected-branch.png b/docs/platform/tutorials/image-processing/images/protected-branch.png new file mode 100644 index 00000000..33639647 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/protected-branch.png differ diff --git a/docs/platform/tutorials/image-processing/images/pull-request-github.png b/docs/platform/tutorials/image-processing/images/pull-request-github.png new file mode 100644 index 00000000..e612d256 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/pull-request-github.png differ diff --git a/docs/platform/tutorials/image-processing/images/road-capacity.png b/docs/platform/tutorials/image-processing/images/road-capacity.png new file mode 100644 index 00000000..75ac77b2 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/road-capacity.png differ diff --git a/docs/platform/tutorials/image-processing/images/squash-and-merge.png b/docs/platform/tutorials/image-processing/images/squash-and-merge.png new file mode 100644 index 00000000..be8ac05e Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/squash-and-merge.png differ diff --git a/docs/platform/tutorials/image-processing/images/sync-environment.png b/docs/platform/tutorials/image-processing/images/sync-environment.png new file mode 100644 index 00000000..63d8e07c Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/sync-environment.png differ diff --git a/docs/platform/tutorials/image-processing/images/tag.png b/docs/platform/tutorials/image-processing/images/tag.png new file mode 100644 index 00000000..286d93e4 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/tag.png differ diff --git a/docs/platform/tutorials/image-processing/images/tfl-camera-feed-message-view.png 
b/docs/platform/tutorials/image-processing/images/tfl-camera-feed-message-view.png new file mode 100644 index 00000000..11afead7 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/tfl-camera-feed-message-view.png differ diff --git a/docs/platform/tutorials/image-processing/images/tfl-camera-feed-pipeline-segment.png b/docs/platform/tutorials/image-processing/images/tfl-camera-feed-pipeline-segment.png new file mode 100644 index 00000000..85fee716 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/tfl-camera-feed-pipeline-segment.png differ diff --git a/docs/platform/tutorials/image-processing/images/tfl-camera-feed-tile.png b/docs/platform/tutorials/image-processing/images/tfl-camera-feed-tile.png new file mode 100644 index 00000000..31475473 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/tfl-camera-feed-tile.png differ diff --git a/docs/platform/tutorials/image-processing/images/tfl-credentials.png b/docs/platform/tutorials/image-processing/images/tfl-credentials.png new file mode 100644 index 00000000..2a9d52f8 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/tfl-credentials.png differ diff --git a/docs/platform/tutorials/image-processing/images/tfl-frame-grabber-pipeline-segment.png b/docs/platform/tutorials/image-processing/images/tfl-frame-grabber-pipeline-segment.png new file mode 100644 index 00000000..56ec7546 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/tfl-frame-grabber-pipeline-segment.png differ diff --git a/docs/platform/tutorials/image-processing/images/tfl-frame-grabber-tile.png b/docs/platform/tutorials/image-processing/images/tfl-frame-grabber-tile.png new file mode 100644 index 00000000..bd5bc6a2 Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/tfl-frame-grabber-tile.png differ diff --git a/docs/platform/tutorials/image-processing/images/web-ui-pipeline-segment.png 
b/docs/platform/tutorials/image-processing/images/web-ui-pipeline-segment.png
new file mode 100644
index 00000000..d2ab8642
Binary files /dev/null and b/docs/platform/tutorials/image-processing/images/web-ui-pipeline-segment.png differ
diff --git a/docs/platform/tutorials/image-processing/images/web-ui.png b/docs/platform/tutorials/image-processing/images/web-ui.png
index 00054c4d..83b4af15 100644
Binary files a/docs/platform/tutorials/image-processing/images/web-ui.png and b/docs/platform/tutorials/image-processing/images/web-ui.png differ
diff --git a/docs/platform/tutorials/image-processing/index.md b/docs/platform/tutorials/image-processing/index.md
index 9c40759b..b57ac777 100644
--- a/docs/platform/tutorials/image-processing/index.md
+++ b/docs/platform/tutorials/image-processing/index.md
@@ -1,42 +1,85 @@
 # Real-time image processing
-In this tutorial you learn how to build a real-time image processing pipeline in Quix, using the Transport for London (TfL) traffic cameras, known as Jam Cams, the webcam on your laptop or phone, and a [YOLO v3](https://viso.ai/deep-learning/yolov3-overview/) machine learning model.
+In this tutorial you learn about a real-time image processing pipeline, using a [Quix template project](https://github.com/quixio/computer-vision-demo){target=_blank}.
-You'll use prebuilt Code Samples to build the pipeline. A prebuilt UI is also provided that shows you where the recognized objects are located around London.
+The pipeline uses the Transport for London (TfL) traffic cameras, known as Jam Cams, as the video input. The [YOLO v8](https://docs.ultralytics.com/) machine learning model is used to identify various objects such as types of vehicles. Additional services count the vehicles, and finally the data is displayed on a map, as part of the web UI that has been created specially for this project. 
-The following screenshot shows the pipeline you build in this tutorial: +You'll fork the complete project from GitHub, and then create a Quix project from the forked repo, so you have a copy of the full pipeline code running in your Quix account. You then examine the data flow through the pipeline, using tools provided by the Quix Portal. -![pipeline overview](./images/pipeline-overview.png) +## Technologies used +Some of the technologies used by this template project are listed here. + +**Infrastructure:** + +* [Quix](https://quix.io/){target=_blank} +* [Docker](https://www.docker.com/){target=_blank} +* [Kubernetes](https://kubernetes.io/){target=_blank} + +**Backend:** + +* [Apache Kafka](https://kafka.apache.org/){target=_blank} +* [Quix Streams](https://github.com/quixio/quix-streams){target=_blank} +* [Flask](https://flask.palletsprojects.com/en/2.3.x/#){target=_blank} +* [pandas](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html){target=_blank} + +**Video capture:** + +* [TfL API](https://api-portal.tfl.gov.uk){target=_blank} +* [OpenCV](https://opencv.org/){target=_blank} + +**Object detection:** + +* [YOLOv8](https://github.com/ultralytics/ultralytics){target=_blank} + +**Frontend:** + +* [Angular](https://angular.io/){target=_blank} +* [Typescript](https://www.typescriptlang.org/){target=_blank} +* [Microsoft SignalR](https://learn.microsoft.com/en-us/aspnet/signalr/){target=_blank} +* [Google Maps](https://developers.google.com/maps){target=_blank} + +## Live demo + +You can see the project running live on Quix: -This is the tutorial running live on Quix:
- +
-You can interact with it here, on this page, or open the page to view it more clearly [here](https://tfl-image-processing-ui-quix-realtimeimageprocessingtutorial.deployments.quix.ai/){target="_blank"}. -## Getting help +You can interact with it here, on this page, or open the page to view it more clearly [here](https://app-demo-computervisiondemo-prod.deployments.quix.ai/){target="_blank"}. -If you need any assistance while following the tutorial, we're here to help in [The Stream community](https://join.slack.com/t/stream-processing/shared_invite/zt-13t2qa6ea-9jdiDBXbnE7aHMBOgMt~8g), our public Slack channel. +## Watch a video -## Tutorial live stream +Explore the pipeline: -If you'd rather watch a live stream, where one of our developers steps through this tutorial, you can view it here: +**Loom video coming soon.** - -
- -
+??? Transcript + + **Transcript** + +## GitHub repository + +The complete code for this project can be found in the [Quix GitHub repository](https://github.com/quixio/computer-vision-demo){target="_blank"}. + +## Getting help + +If you need any assistance while following the tutorial, we're here to help in the [Quix forum](https://forum.quix.io/){target="_blank"}. ## Prerequisites To get started make sure you have a [free Quix account](https://portal.platform.quix.ai/self-sign-up). -You'll also need a [free TfL account](https://api-portal.tfl.gov.uk). +If you are new to Quix it is worth reviewing the [recent changes page](../../changes.md), as that contains very useful information about the significant recent changes, and also has a number of useful videos you can watch to gain familiarity with Quix. + +### TfL account and API key + +You'll also need a [free TfL account](https://api-portal.tfl.gov.uk){target=_blank}. Follow these steps to locate your TfL API key: - 1. Register for an account. + 1. Register for a [free TfL account](https://api-portal.tfl.gov.uk){target=_blank}. 2. Login and click the `Products` menu item. @@ -48,62 +91,88 @@ Follow these steps to locate your TfL API key: 6. You can now find your API Keys in the profile page. -## Code Samples +Later, you'll need to configure the TfL service with your own TfL API key. To do this, open the service and edit the environment variable as shown here: + +![TfL credentials](./images/tfl-credentials.png){width=60%} + +### Google Maps API key + +When testing the project you might find Google Maps does not load correctly for you - this is because the code has the Quix Google Maps API key. To work around this, you can set the Google Maps API key to an empty string, and then enable "developer mode" in your browser - the maps then display correctly. 
+
+To set the Google Maps API key to an empty string, you need to edit `app.module.ts` and modify the `apiKey` field in `AgmCoreModule.forRoot` to the following:
+
+``` typescript
+AgmCoreModule.forRoot({
+    apiKey: ''
+  }),
+```

-The Code Samples is a collection of ready-to-use components you can leverage to build your own real-time streaming solutions. Typically these components require minimal configuration.
+### Git provider

-Most of the code you need for this tutorial has already been written, and is located in the `Code Samples`.
+You also need to have a Git account. This could be GitHub, Bitbucket, GitLab, or any other Git provider you are familiar with, and that supports SSH keys. The simplest option is to create a free [GitHub account](){target=_blank}.

-When you are logged into the Quix Portal, click on the `Code Samples` icon in the left-hand navigation, to access the Code Samples.

!!! tip

-## The pipeline you will create

    While this tutorial uses an external Git account, Quix can also provide a Quix-hosted Git solution using Gitea for your own projects. You can watch a video on [how to create a project using Quix-hosted Git](https://www.loom.com/share/b4488be244834333aec56e1a35faf4db?sid=a9aa124a-a2b0-45f1-a756-11b4395d0efc){target=_blank}.

-There are five stages to the processing pipeline you build in this tutorial:

+If you want to use the Quix AWS S3 service (optional), you'll need to provide your credentials for accessing AWS S3.

-1. Video feeds
-
-    - Webcam image capture
-    - TfL Camera feed or "Jam Cams"

+## The pipeline

-2. Frame grabber
-
-    - Grab frames from TfL video feed

+The following screenshots show the pipeline you build in this tutorial.

-3. Object detection

+The first part of the pipeline is:

-    - Detect objects within images

![pipeline overview](./images/pipeline-overview-1.png)

-4. Stream merge

+The second part of the pipeline is:

-    - Merge the separate data streams into one

![pipeline overview](./images/pipeline-overview-2.png)

-5. 
Web UI configuration
+There are several *main* stages in the pipeline:

-    - A simple UI showing:
+1. *TfL camera feed* - TfL Camera feed or "Jam Cams". This service retrieves the raw data from the TfL API endpoint. A list of all JamCams is retrieved, along with the camera data. The camera data contains a link to a video clip from the camera. These video clips are hosted by TfL in MP4 format on AWS S3. A stream is created for each camera, and the camera data is published to this stream. Using multiple streams in this way enables a solution capable of horizontal scaling, through additional topic partitions and, optionally, replicated services in a consumer group. Once the camera list has been scanned, the service sleeps for two minutes, and then repeats the process. This reduces the load, and also means the API limit of 500 requests per minute is not exceeded. Messages are passed to the frame grabber.

-    - Images with identified objects
-    - Map with count of objects at each camera's location
+2. *TfL traffic camera frame grabber* - this service grabs frames from a TfL video file (MP4 format) at the rate specified. By default the grabber extracts one frame every 100 frames, which is typically one per five seconds of video. Messages are passed to the object detection service.

-Now that you know which components will be needed in the image processing pipeline, the following sections will step through the creation of the required microservices.
+3. *Object detection* - this service uses the YOLOv8 computer vision algorithm to detect objects within a given frame.
+
+4. *Stream merge* - merges the separate data streams (one for each camera) back into one, prior to sending to the UI.
+
+5. *Web UI* - a UI that displays: frames with the objects that have been identified, and a map with a count of objects at each camera's location. 
The web UI is a web client app that uses the [Quix Streaming Reader API](../../../apis/streaming-reader-api/intro.md) to read data from a Quix topic.
+
+There are also some additional services in the pipeline:
+
+1. *Cam vehicles* - calculates the total vehicles, where a vehicle is defined as one of: car, bus, truck, motorbike. This number is published to its output topic. The *Max vehicle window* service subscribes to this topic.
+
+2. *Max vehicle window* - calculates the maximum number of vehicles over a time window of one day. This service publishes messages to its output topic.
+
+3. *Data API* - this REST API service provides two endpoints: one returns the *Max vehicle window* values for the specified camera, and the other endpoint returns camera data for the specified camera. This API is called by the UI to obtain useful data.
+
+4. *S3* - stores objects in Amazon Web Services (AWS) S3. This service enables you to persist any data or results you might like to keep more permanently.
+
+More details are provided on all these services later in the tutorial.

## The parts of the tutorial

This tutorial is divided up into several parts, to make it a more manageable learning experience. The parts are summarized here:

-1. **Connect the webcam video feed**. You learn how to quickly connect a video feed from your webcam, using a prebuilt sample.
+1. [Get the project](get-project.md) - you get the project up and running in your Quix account.
+
+2. [TfL camera feed](tfl-camera-feed.md) service. You examine the code and then see how to view the message data format used in the service, in real time.

-2. **Decode images**. You decode the base64 encoded images coming from the webcam.
+3. [Frame grabber](tfl-frame-grabber.md) service. You examine the code and then see how to view the message data format used in the service, in real time.

-3. **Object detection**. You use a computer vision sample to detect a chosen type of object. You'll preview these events in the live preview. 
The object type to detect can be selected through a web UI, which is described later. +4. [Object detection](object-detection.md) service. This is the YOLO v8 logic that identifies and annotates the objects identified in the frame. You examine the code and then see how to view the message data format used in the service, in real time. -4. **Connect the TfL video feed**. You learn how to quickly connect the TfL traffic cam feeds, using a prebuilt sample. You can perform object detection across these feeds, as they are all sent into the objection detection service in this tutorial. +5. [Web UI](web-ui.md) service. This is a JavaScript web client app that uses the Quix Streaming Reader API to read data from a Quix topic (the output of the stream merge service). There are various UI components that are beyond the scope of this tutorial. -5. **Frame grabber**. You use a standard sample to grab frames from the TfL video feed. +6. [Other services](other-services.md). The other services are fairly simple so are collected together for discussion. You can optionally investigate the message data format and code. -6. **Stream merge**. You use a standard sample to merge the different streams into one. +7. [Add new service](add-service.md). You add a new service to a feature branch, test it, and then merge to the develop branch. -7. **Deploy the web UI**. You the deploy a prebuilt web UI. This UI enables you to select an object type to detect across all of your input video feeds. It displays the location pof object detection and object detection count on a map. +8. [Summary](summary.md). In this concluding part you are presented with a summary of the work you have completed, and also some next steps for more advanced learning about the Quix Platform. -8. **Summary**. In this [concluding](summary.md) part you are presented with a summary of the work you have completed, and also some next steps for more advanced learning about the Quix Platform. 
+## 🏃‍♀️ Next step -[Part 1 - Connect the webcam feed :material-arrow-right-circle:{ align=right }](connect-video-webcam.md) +[Part 1 - Get the project :material-arrow-right-circle:{ align=right }](get-project.md) diff --git a/docs/platform/tutorials/image-processing/object-detection.md b/docs/platform/tutorials/image-processing/object-detection.md index f4351262..b3ddffdd 100644 --- a/docs/platform/tutorials/image-processing/object-detection.md +++ b/docs/platform/tutorials/image-processing/object-detection.md @@ -1,67 +1,118 @@ -# 3. Object detection +# Object detection -In this part of the tutorial you add an object detection service into the pipeline. This service detects objects in any video feeds connected to its input. This service uses a [YOLO v3](https://viso.ai/deep-learning/yolov3-overview/) machine learning model for object detection. +This service takes frames from the frame grabber and detects objects in each frame. This service uses the [YOLOv8 object detection library](https://github.com/ultralytics/ultralytics){target=_blank}. -In a later stage of the pipeline you add a simple UI which enables you to select the type of object to detect. +![Object detection](./images/object-detection-pipeline-segment.png) -Follow these steps to deploy the **object detection service**: +## 💡 Key ideas -1. Navigate to the `Code Samples` and locate `Computer Vision object detection`. +The key ideas on this page: -2. Click `Deploy`. +* Using the YOLOv8 library to detect objects in a frame +* Intro to Data frame handler: `on_dataframe_received_handler` +* How to view logs +* How to view the code of a Quix Application +* Using pipeline view to examine topics -3. Click `Deploy` again. +## What it does - This service receives data from the `image-raw` topic and streams data to the `image-processed` topic. +The key thing this service does is detect objects in frames passed to it. 
You will remember from the previous part of this tutorial, the frame grabber, that the frame grabber service outputs time series data, rather than event data. A different handler is invoked for time series data: -??? example "Understand the code" +``` python +def on_dataframe_received_handler(stream_consumer: qx.StreamConsumer, df: pd.DataFrame): +``` - Here's the code in the file `quix_function.py`: +This callback receives the time series data in pandas dataframe format. Each dataframe received in the stream causes this handler to be invoked. - ```python - # Callback triggered for each new parameter data. (1) - def on_parameter_data_handler(self, data: ParameterData): - - # Loop every row in incoming data. (2) - for timestamp in data.timestamps: +Objects are detected in the frame by the YOLOv8 code. Data is published to the output stream. The messages on the output topic have the following format: - binary_value = timestamp.parameters['image'].binary_value - source_img = self.image_processor.img_from_base64(binary_value) - start = time.time() +``` json +{ + "Epoch": 0, + "Timestamps": [ + 1694003142728625200 + ], + "NumericValues": { + "car": [ + 5 + ], + "truck": [ + 2 + ], + "person": [ + 2 + ], + "traffic light": [ + 1 + ], + "lat": [ + 51.4739 + ], + "lon": [ + -0.09045 + ], + "delta": [ + -2.597770929336548 + ] + }, + "StringValues": {}, + "BinaryValues": { + "image": [ + "(Binary of 152.47 KB)" + ] + }, + "TagValues": {} +} +``` - # We call YOLO3 model with binary values of the image - # and receive objects with confidence values. (3) - img, class_ids, confidences = self.image_processor.process_image(source_img) - delta = start - time.time() # (4) +The key data here is the count of each vehicle type in the frame. Further, an annotated image (detected objects are marked with a green rectangle) is also included as binary data. 
The annotated image is used by the UI to display detected objects, as shown in the following screenshot: - # We count how many times each class ID is present in the picture. (5) - counter = Counter(class_ids) +![Detected object](./images/detected-objects.png) - print("New image in {0} at {1}".format(self.input_stream.stream_id, timestamp.timestamp)) +## 👩‍🔬 Lab - Examine the logs - # Starts by creating new row with timestamp that we carry from input. (6) - row = self.output_stream.parameters.buffer.add_timestamp_nanoseconds(timestamp.timestamp_nanoseconds) +In this section, you learn how to examine the logs for the service. The logs are a very useful resource when debugging a service - you can see trace messages output from the service, and any errors that are generated. - # For each class ID we sent column with number of occurrences in the picture. (7) - for key, value in counter.items(): - print("Key:{}".format(key)) - row = row.add_value(key, value) +To view the logs for a service: - # Attach image column with binary data, GPS coordinates and model performance metrics. (8) - row.add_value("image", self.image_processor.img_to_binary(img)) \ - .add_value("lat", timestamp.parameters["lat"].numeric_value) \ - .add_value("lon", timestamp.parameters["lon"].numeric_value) \ - .add_value("delta", delta) \ - .write() - ``` +1. In the pipeline view, click on the object detection service tile. - 1. Each time a new parameter data arrives, this callback is invoked. - 2. Parameter data can be thought of as data in a tabular form. This code loops over all rows in the table. - 3. The object detection model is called with the source image. It returns an annoted image, an array containing the ids of the types of objects detected (for example: ['bus', 'car', 'truck', 'car', 'car', 'person', 'car', 'car', 'person', 'car', 'car', 'car', 'person', 'car', 'person', 'person']), and the confidence of the detection. - 4. 
The `delta` is a variable used to record how long it takes for the object detection. This is used as a measure of performance.
-    5. `Counter` is a dictionary that object type counts, for example, `{'truck': 3, 'car': 3}`.
-    6. Timestamp TDB
-    7. Add the class ID (object type detected) and the count for that object type.
-    8. The row is written out. The row includes the image binary, geolocation, and delta as a measure of performance for object detection.

-[Part 4 - TfL video :material-arrow-right-circle:{ align=right }](connect-video-tfl.md)
+3. You can now see the log messages being produced by the service:
+
+    ![Object detection logs](./images/object-detection-logs.png)
+
+    !!! tip
+
+        There is a pause button to allow you to pause the logs (see the screenshot). There is also a button you can use to download the logs for the service.
+
+There are also some tasks for you to carry out in the following sections.
+
+## 👩‍🔬 Lab - Examine the application code
+
+You now learn how to examine the code for the service. You may want to fix bugs in it, or otherwise improve the code for a service. Once the code is edited, you can use the `Redeploy` button to redeploy the service, even if it is already running.
+
+1. Click the panel indicated in the screenshot:
+
+    ![object detection code](./images/object-detection-code.png){width=60%}
+
+    This takes you to the code view. You can view or edit the complete code for this service here.
+
+2. You could, for example, make some changes, and then redeploy the service using the `Redeploy` button, or simply test your changes using the `Run` button.
+
+3. In the code view, click the `History` tab to see the complete revision history for changes to the code.
+
+### Task - Check the output message format for this service
+
+Using what you have learned in previous parts of this tutorial, check the format of the messages published by this service. 
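As a reference point when checking the message format, here is a minimal, illustrative sketch (not the service's actual code) of how a list of detected class labels, such as those reported by YOLOv8, can be turned into the per-class counts seen in the `NumericValues` section of the example message above. The label list and coordinates are placeholder values:

``` python
from collections import Counter

import pandas as pd

# Hypothetical class labels for one frame, as an object detector might report them
detected = ["car", "car", "truck", "person", "car",
            "traffic light", "car", "truck", "person", "car"]

# Count occurrences of each class - this mirrors the per-class
# columns (car, truck, person, traffic light) in the message format
counts = Counter(detected)

# Build a single-row DataFrame: one column per detected class,
# plus the camera's coordinates (placeholder values)
row = {label: [count] for label, count in counts.items()}
row["lat"] = [51.4739]
row["lon"] = [-0.09045]
df = pd.DataFrame(row)
```

Publishing a DataFrame shaped like this with Quix Streams is what produces the per-class numeric values shown in the message above.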
+
+## See also
+
+For more information refer to:
+
+* [Quix Streams](../../../client-library-intro.md) - More about streams, publishing, consuming, events and much more.
+
+## 🏃‍♀️ Next step
+
+[Part 5 - Web UI :material-arrow-right-circle:{ align=right }](web-ui.md)
diff --git a/docs/platform/tutorials/image-processing/other-services.md b/docs/platform/tutorials/image-processing/other-services.md
new file mode 100644
index 00000000..8a7c3cab
--- /dev/null
+++ b/docs/platform/tutorials/image-processing/other-services.md
@@ -0,0 +1,228 @@
+# Other services
+
+There are some additional services in the pipeline that provide useful functionality. These range from S3 storage of data to calculation of the maximum number of vehicles per day in a specific location.
+
+![Other services](./images/other-services-pipeline-segment.png)
+
+Briefly, these services are:
+
+* *Stream merge* - merges all the traffic cam streams into a single stream to make things easier to process in the UI.
+
+* *Cam vehicles* - calculates the total vehicles, where a vehicle is defined as one of: car, bus, truck, motorbike. This number is fed into the *Max vehicle window* service.
+
+* *Max Vehicle Window* - calculates the maximum number of vehicles over a time window of one day. This service sends messages to the Data API service.
+
+* *Data buffer* - this provides a one-second data buffer. This helps reduce load on the Data API service.
+
+* *Data API* - this REST API service provides two endpoints: one returns the *Max vehicle window* values for the specified camera, and the other endpoint returns camera data for the specified camera. This API is called by the UI to obtain useful data.
+
+* *S3* - stores objects in Amazon Web Services (AWS) S3. This service enables you to persist any data or results you might like to keep more permanently.
+
+!!! 
tip
+
+    If you ever need to obtain the stream ID, and it is not in the messages available to the service, it is available through the stream object by using the `stream_id` property, for example, `stream_id = stream_consumer.stream_id`.
+
+## Stream merge
+
+This service prepares data for ease of processing by the UI. It merges all streams into a single stream. The input topic is `image-processed`, and the output topic is `image-processed-merged`. Note the code also decodes the image and then does a Base64 encode prior to passing it to the output topic. The UI uses the Quix Streaming Reader to read the messages from `image-processed-merged`, including the Base64 encoded image data.
+
+The key code:
+
+``` python
+    # Callback triggered for each new parameter data.
+    def on_dataframe_handler(self, stream_consumer: qx.StreamConsumer, df: pd.DataFrame):
+
+        df["TAG__parent_streamId"] = self.consumer_stream.stream_id
+        df['image'] = df["image"].apply(lambda x: str(base64.b64encode(x).decode('utf-8')))
+
+        self.producer_topic.get_or_create_stream("image-feed") \
+            .timeseries.buffer.publish(df)
+```
+
+## Cam vehicles
+
+This service adds together the counts of the following object types: car, bus, truck, motorbike, to obtain a total number of vehicles. It classes these objects as vehicles. The message output to the next stage in the pipeline, max vehicles, is as follows:
+
+``` json
+{
+  "Epoch": 0,
+  "Timestamps": [
+    1694077540745375700
+  ],
+  "NumericValues": {
+    "truck": [
+      1
+    ],
+    "car": [
+      2
+    ],
+    "lat": [
+      51.4075
+    ],
+    "lon": [
+      -0.19236
+    ],
+    "delta": [
+      -2.177236557006836
+    ],
+    "vehicles": [
+      3
+    ]
+  },
+  "StringValues": {},
+  "BinaryValues": {
+    "image": [
+      "(Binary of 157.97 KB)"
+    ]
+  },
+  "TagValues": {}
+}
+```
+
+In this example there are 2 cars and 1 truck, giving a `vehicles` count of 3. 
+ +The main code is: + +``` python +def on_dataframe_received_handler(stream_consumer: qx.StreamConsumer, df: pd.DataFrame): + # List of vehicle columns + vehicle_columns = ['car', 'bus', 'truck', 'motorbike'] + + # Calculate the total vehicle count based on existing columns + total_vehicle_count = df.apply(lambda row: sum(row.get(column, 0) for column in vehicle_columns), axis=1) + + # Store vehicle count in the data frame + df["vehicles"] = total_vehicle_count + stream_producer = topic_producer.get_or_create_stream(stream_id = stream_consumer.stream_id) + # Publish data frame to the producer stream + stream_producer.timeseries.buffer.publish(df) +``` + +You can find out more about pandas DataFrames in the [pandas documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html){target=_blank}. + +## Max Vehicle Window + +The max vehicles service takes the total vehicle count and finds the maximum value over a one day window. This value is made available to the Data API service. The message passed to the Data API has the following format: + +``` json +{ + "Epoch": 0, + "Timestamps": [ + 1694088514402644000 + ], + "NumericValues": { + "max_vehicles": [ + 8 + ] + }, + "StringValues": {}, + "BinaryValues": {}, + "TagValues": { + "window_start": [ + "2023-09-06 12:08:12.394372" + ], + "window_end": [ + "2023-09-07 12:08:12.394372" + ], + "window": [ + "1d 0h 0m" + ], + "cam": [ + "JamCams_00001.08959" + ] + } +} +``` + +You can see the exact time window is recorded, along with the maximum vehicle count during that time window. This provides a crude measure of the capacity of the road. This capacity can then be used by the UI to calculate a percentage of capacity. 
For example, if there are 8 cars on a road, and the maximum seen is 10, then the road is considered to be at 80% capacity, and this is displayed on the UI, as shown in the following screenshot:
+
+![Road capacity](./images/road-capacity.png)
+
+This service uses [state](https://quix.io/docs/client-library/state-management.html), as you need to save the maximum count reached during the time window.
+
+## Data buffer
+
+This service provides a one-second data buffer. This reduces load on the Data API service. There are three input topics to the service, `max-vehicles`, `processed-images`, and `vehicle-counts`, and one output topic, `buffered-data`.
+
+See the documentation on [using buffers](https://quix.io/docs/client-library/publish.html#using-a-buffer).
+
+## Data API
+
+The Data API service offloads calculations that could be done in the web client, and instead provides key data only when the UI needs it. The UI requests this data through the REST API of the Data API service.
+
+The Data API provides these endpoints:
+
+* max_vehicles
+* detected_objects
+
+These are used by the UI to obtain and then display the data on the web interface.
+
+### Max Vehicles
+
+Returns the maximum number of "vehicles" seen on a camera, where a vehicle is a car, bus, truck, or motorbike.
+
+For a `GET` on the endpoint `/max_vehicles`, the response is a dictionary with one item per camera:
+
+* Key=camera name
+* Value=max vehicle count
+
+Example response JSON:
+
+``` json
+{
+    "JamCams_00001.01251":2.0,
+    "JamCams_00001.01252":1.0
+}
+```
+
+This service is implemented as a simple [Flask web app](https://flask.palletsprojects.com/en/2.3.x/quickstart/){target=_blank} hosted in Quix.
+
+### Detected Objects
+
+Returns a dictionary of all the data for a given camera (except for the images, as these are quite large to store, even temporarily). 
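The nested `{"0": ...}` structure you can see in the per-camera data below is characteristic of a single-row pandas DataFrame converted with `to_dict()` and then serialized to JSON - the inner key is the row index. This is a plausible origin for the format, shown here as an illustrative sketch rather than the service's actual code:

``` python
import json

import pandas as pd

# A one-row DataFrame shaped like the per-camera data in this pipeline
df = pd.DataFrame({"car": [3.0], "lat": [51.5596], "lon": [-0.07424]})

# to_dict() nests each column's value under the row index (0),
# and JSON serialization turns that integer key into the string "0"
payload = json.dumps(df.to_dict())
```

The resulting JSON has the shape `{"car": {"0": 3.0}, ...}`, matching the nesting in the example response.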
+
+For a `GET` on the endpoint `/detected_objects`, the response is a dictionary of:
+
+* Key=camera name
+* Value=dictionary of the data
+
+Where the data dictionary contains:
+
+* object counts (for example, car: 3, bus: 11)
+* lat
+* lon
+* timestamp
+
+Using this you can plot/display every camera and its count as soon as you get this data.
+
+Example response JSON:
+
+``` json
+{
+  "JamCams_00001.01419": {
+    "car":{"0":3.0},
+    "delta":{"0":-1.0003459453582764},
+    "image":{"0":""},"lat":{"0":51.5596},
+    "lon":{"0":-0.07424},"person":{"0":3.0},
+    "timestamp":{"0":1692471825406959867},
+    "traffic light":{"0":1.0},
+    "truck":{"0":1.0}
+  },
+  ...
+}
+```
+
+## S3
+
+This is the standard Quix code sample [AWS S3 destination connector](https://quix.io/docs/library_readmes/connectors/s3-destination.html). It takes messages on the input topic and writes them to S3. There is an optional batching facility whereby you can batch messages and then write them to S3 in a batch - this can be more efficient for higher-frequency data. You can control batching based on time interval or message count.
+
+## See also
+
+For more information refer to:
+
+* [Connectors](../../connectors/index.md) - connectors, both source and destination.
+* [Quix Streams](../../../client-library-intro.md) - the client library.
+
+## 🏃‍♀️ Next step
+
+[Part 7 - Add a new service :material-arrow-right-circle:{ align=right }](add-service.md)
diff --git a/docs/platform/tutorials/image-processing/stream-merge.md b/docs/platform/tutorials/image-processing/stream-merge.md
deleted file mode 100644
index bd62fe90..00000000
--- a/docs/platform/tutorials/image-processing/stream-merge.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# 6. Stream merge
-
-In this part of the tutorial you add a stream merge service into the pipeline. This service merges the inbound streams into one outbound stream. 
This is required because the images from each traffic camera are published to a different stream, allowing the image processing services to be scaled up if needed. Once all the image processing is completed and to allow the UI to easily use the data generated by the processing stages, the data from each stream is merged into one stream. - -Follow these steps to deploy the **Stream merge service**: - -1. Navigate to the `Code Samples` and locate `Stream merge`. - -2. Click `Deploy`. - -3. And again, click `Deploy`. - - This service receives data from the `image-processed` topic and streams data to the `image-processed-merged` topic. - -??? example "Understand the code" - - Here's the code in the file `quix_function.py`: - - ```python - # Callback triggered for each new event. (1) - def on_event_data_handler(self, stream_consumer: qx.StreamConsumer, data: qx.EventData): - print(data.value) - - # All of the data received by this event data handler is published to the same predefined topic (2) - self.producer_topic.get_or_create_stream("image-feed").events.publish(data) - - # Callback triggered for each new parameter data. (3) - def on_dataframe_handler(self, stream_consumer: qx.StreamConsumer, df: pd.DataFrame): - - # Add a tag for the parent stream (4) - df["TAG__parent_streamId"] = self.consumer_stream.stream_id - - # add the base64 encoded image to the dataframe (5) - df['image'] = df["image"].apply(lambda x: str(base64.b64encode(x).decode('utf-8'))) - - # All of the data received by this dataframe handler is published to the same predefined topic (6) - self.producer_topic.get_or_create_stream("image-feed") \ - .timeseries.buffer.publish(df) - ``` - - 1. `on_event_data_handler` handles each new event on the topic that is subscribed to. - 2. All events are published to the output topic in a single stream called `image-feed`. - 3. `on_dataframe_handler` handles each new dataframe or timeseries data on the topic that is subscribed to. - 4. 
Add a tag to preserve the parent stream id.
-    5. Add an `image` column to the dataframe and set the value to the base64 encoded image.
-    6. All data is published to the output topic in a single stream called `image-feed`.

-[Part 7 - Web UI :material-arrow-right-circle:{ align=right }](web-ui.md)
\ No newline at end of file
diff --git a/docs/platform/tutorials/image-processing/summary.md b/docs/platform/tutorials/image-processing/summary.md
index b5e933ca..e1e48eb1 100644
--- a/docs/platform/tutorials/image-processing/summary.md
+++ b/docs/platform/tutorials/image-processing/summary.md
@@ -1,28 +1,37 @@
-# 8. Summary
+# Summary

-In this tutorial you have learned that it is possible to quickly build a real-time image processing pipeline, using prebuilt Code Samples. You have seen how to can connect to multiple types of video feed, perform object detection, and display the locations of the detected objects on a map, using the prebuilt UI.
+In this tutorial you have learned that it is possible to fork a complete Quix project and get it up and running in your Quix account very quickly. This allows you to start your own project by using one of our templates. You can then modify our code, keeping your project in a public or private Git repository as required. Alternatively, you can build your own custom pipeline from a mix of our code samples, and your own custom sources, transforms, and destinations. 
+
+In addition, you have seen how in the Quix portal you can:
+
+* Navigate your pipeline visually
+* View the logs of a service as an aid to debugging
+* View live data in the application view, or using the Quix Data Explorer
+* Host your project in a Git repository
+* Examine the raw message data in the messages view
+* Examine and edit the code of a service

## Code Samples used

-Here is a list of the Quix open source Code Samples used in this tutorial, with links to their code in GitHub:
+While you forked a ready-to-go pipeline from GitHub and then explored it in this tutorial, it is possible to build your own pipeline in Quix using ready-made services called [Quix Code Samples](../../samples/samples.md). These code samples enable you to quickly build your own pipeline from scratch, using tested open source components.
+
+Here is a list of the Quix open source Code Samples related to this tutorial, with links to their code in the [Quix Code Samples GitHub repository](https://github.com/quixio/quix-samples){target=_blank}:

-* [TfL traffic cam video feed](https://github.com/quixio/quix-samples/tree/main/python/sources/TFL-Camera-Feed)
-* [TfL traffic cam frame grabber](https://github.com/quixio/quix-samples/tree/main/python/transformations/TFL-Camera-Frame-Extraction)
-* [Webcam interface](https://github.com/quixio/quix-samples/tree/main/applications/image-processing/webcam-input)
+* [TfL traffic camera feed](https://github.com/quixio/quix-samples/tree/main/python/sources/TFL-Camera-Feed)
+* [TfL frame grabber](https://github.com/quixio/quix-samples/tree/main/python/transformations/TFL-Camera-Frame-Extraction)
* [Computer vision object detection](https://github.com/quixio/quix-samples/tree/main/python/transformations/Image-processing-object-detection)
-* [Stream merge](https://github.com/quixio/quix-samples/tree/develop/python/transformations/Stream-Merge)
* [Web 
UI](https://github.com/quixio/quix-samples/tree/main/nodejs/advanced/Image-Processing-UI) +## Getting help + +If you need any assistance, we're here to help in the [Quix forum](https://forum.quix.io/){target="_blank"}. + ## Next Steps Here are some suggested next steps to continue on your Quix learning journey: +* Build something with our code samples. You could take the camera feed, frame grabber, and object detection code samples, and add your own service to these, for example add a custom UI, or a simple service that just logs vehicles for a road that you have a specific interest in (maybe you live there). * Try the [sentiment analysis tutorial](../sentiment-analysis/index.md). - * If you decide to build your own connectors and apps, you can contribute something to the Code Samples. Visit the [GitHub Code Samples repository](https://github.com/quixio/quix-samples){target=_blank}. Fork our Code Samples repo and submit your code, updates, and ideas. -What will you build? Let us know! We’d love to feature your project or use case in our [newsletter](https://www.quix.io/community/). - -## Getting help - -If you need any assistance, we're here to help in [The Stream](https://join.slack.com/t/stream-processing/shared_invite/zt-13t2qa6ea-9jdiDBXbnE7aHMBOgMt~8g){target=_blank}, our free Slack community. Introduce yourself and then ask any questions in `quix-help`. +What will you build? Let us know! We’d love to feature your application or use case in our [newsletter](https://www.quix.io/community/). diff --git a/docs/platform/tutorials/image-processing/tfl-camera-feed.md b/docs/platform/tutorials/image-processing/tfl-camera-feed.md new file mode 100644 index 00000000..d4eb7279 --- /dev/null +++ b/docs/platform/tutorials/image-processing/tfl-camera-feed.md @@ -0,0 +1,174 @@ +# TfL camera feed + +In this part of the tutorial you take a look at the TfL camera feed service. 
The main function of this service is to retrieve camera data from the TfL API and pass it to the frame grabber service. + +![TfL camera feed](./images/tfl-camera-feed-pipeline-segment.png) + +!!! tip + + In the pipeline view, you can always determine a topic name by hovering over the connecting line that represents that topic. You can also click the connecting line, to see its name, and optionally to jump to the Data Explorer to view live data for the topic. + +## 💡 Key ideas + +The key ideas on this page: + +* Reading external REST API +* Publishing event data with Quix Streams +* Multiple streams within a topic +* Passing video file URLs through the pipeline +* Explore raw message format using Quix + +## What it does + +The key thing this service does is retrieve the camera feeds from the TfL API endpoint, using your TfL API key. This is done using the `requests` library and a simple REST `GET`: + +``` python +cameras = requests.get( + "https://api.tfl.gov.uk/Place/Type/JamCam/?app_id=QuixFeed&app_key={}".format(api_key)) +``` + +With this data the code loops, writing the data for each camera to its own stream in the output topic, `tfl-cameras`. The code also adds a timestamp and the camera data, as a value called `camera`: + +``` python +producer_topic.get_or_create_stream(camera_id).events.add_timestamp_nanoseconds(time.time_ns()) \ + .add_value("camera", json.dumps(camera)) \ + .publish() +``` + +Note the stream name is derived from the camera ID, which has the format `JamCams_00001.01606`. + +!!! tip + + It is a common pattern to publish data to its own stream when it is from a different device or source. For example, if you had multiple IoT devices each with its own ID these would publish to their own stream. As streams are mapped to partitions by Quix Streams, the messages are guaranteed to be delivered in order. Publishing to multiple streams enables you to horizontally scale too. 
If you increased the number of partitions in the topic, the streams would be spread across all available partitions, enabling increased throughput and fault tolerance. Further, multiple consumer replicas could be used, and stream data would be processed by all available replicas in the consumer group. In the TfL camera feed service, data for each camera is published to its own stream for these reasons, the stream name being based on the camera ID. If you ever need to obtain the stream ID, and it is not in the messages available to that service, it is available through the stream object by using the `stream_id` property, for example, `stream_id = stream_consumer.stream_id`.
+
+The code then sleeps for two minutes. This prevents the TfL API limit of 500 requests from being exceeded.
+
+The `publish` method from the previous code publishes data to the output topic in the following format:
+
+``` json
+[
+  {
+    "Timestamp": 1693925495304353500,
+    "Tags": {},
+    "Id": "camera",
+    "Value": ""
+  }
+]
+```
+
+The `Value` field has a format as shown in the following example:
+
+``` json
+{
+  "$type": "Tfl.Api.Presentation.Entities.Place, Tfl.Api.Presentation.Entities",
+  "id": "JamCams_00001.03766",
+  "url": "/Place/JamCams_00001.03766",
+  "commonName": "A20 Sidcup Bypass/Perry St",
+  "placeType": "JamCam",
+  "additionalProperties": [
+    {
+      "$type": "Tfl.Api.Presentation.Entities.AdditionalProperties, Tfl.Api.Presentation.Entities",
+      "category": "payload",
+      "key": "available",
+      "sourceSystemKey": "JamCams",
+      "value": "false",
+      "modified": "2023-08-31T15:46:06.093Z"
+    },
+    {
+      "$type": "Tfl.Api.Presentation.Entities.AdditionalProperties, Tfl.Api.Presentation.Entities",
+      "category": "payload",
+      "key": "imageUrl",
+      "sourceSystemKey": "JamCams",
+      "value": "https://s3-eu-west-1.amazonaws.com/jamcams.tfl.gov.uk/00001.03766.jpg",
+      "modified": "2023-08-31T15:46:06.093Z"
+    },
+    {
+      "$type": "Tfl.Api.Presentation.Entities.AdditionalProperties, Tfl.Api.Presentation.Entities",
+ 
"category": "payload",
+      "key": "videoUrl",
+      "sourceSystemKey": "JamCams",
+      "value": "https://s3-eu-west-1.amazonaws.com/jamcams.tfl.gov.uk/00001.03766.mp4",
+      "modified": "2023-08-31T15:46:06.093Z"
+    },
+    {
+      "$type": "Tfl.Api.Presentation.Entities.AdditionalProperties, Tfl.Api.Presentation.Entities",
+      "category": "cameraView",
+      "key": "view",
+      "sourceSystemKey": "JamCams",
+      "value": "West - A222 Perry St Twds Chislehurst",
+      "modified": "2023-08-31T15:46:06.093Z"
+    },
+    {
+      "$type": "Tfl.Api.Presentation.Entities.AdditionalProperties, Tfl.Api.Presentation.Entities",
+      "category": "Description",
+      "key": "LastUpdated",
+      "sourceSystemKey": "JamCams",
+      "value": "Aug 31 2023 3:46PM",
+      "modified": "2023-08-31T15:46:06.093Z"
+    }
+  ],
+  "children": [],
+  "childrenUrls": [],
+  "lat": 51.4183,
+  "lon": 0.09822
+}
+```
+
+There is much useful data here, including a link to the camera's video stream, which is an MP4 video file stored on AWS S3, for example `https://s3-eu-west-1.amazonaws.com/jamcams.tfl.gov.uk/00001.03766.mp4`.
+
+!!! tip
+
+    It is more efficient to pass a link to a video through the pipeline than the video itself, as the video file size can be relatively large.
+
+This message is passed on to the next service in the pipeline, the frame grabber.
+
+## 👩‍🔬 Lab - Examine the data
+
+In this section, you learn how to use the Quix Portal to examine the message data format. There are various ways of doing this, and several are shown in later parts of this tutorial. Having clarity on the message format enables better understanding of the data flow in the pipeline.
+
+To see the message format on the output topic of the service:
+
+1. In the pipeline view, click on the TfL camera feed service tile.
+
+2. Click the `Messages` tab and then click on a message. 
You will see something similar to the following screenshot: + + ![Message view](./images/tfl-camera-feed-message-view.png) + + You might see messages that have the format: + + ``` json + { + "Name": null, + "Location": null, + "Metadata": {}, + "Parents": [], + "TimeOfRecording": null + } + ``` + + These are stream metadata messages and are not used in this tutorial. + +3. You can now see the message format in the right-hand pane: + + ``` json + [ + { + "Timestamp": 1693925495304353500, + "Tags": {}, + "Id": "camera", + "Value": "" + } + ] + ``` + + This is the data published to the output topic, and passed on to the frame grabber. + +## See also + +For more information refer to: + +* [Quix Streams](../../../client-library-intro.md) - More about streams, publishing, consuming, events and much more. + +## 🏃‍♀️ Next step + +[Part 3 - Frame grabber :material-arrow-right-circle:{ align=right }](tfl-frame-grabber.md) diff --git a/docs/platform/tutorials/image-processing/tfl-frame-grabber.md b/docs/platform/tutorials/image-processing/tfl-frame-grabber.md index 2caa6507..cb9335b7 100644 --- a/docs/platform/tutorials/image-processing/tfl-frame-grabber.md +++ b/docs/platform/tutorials/image-processing/tfl-frame-grabber.md @@ -1,17 +1,132 @@ -# 5. Frame extraction +# TfL frame grabber -In this part of the tutorial you add a frame extraction service. +In this part of the tutorial you learn about the TfL frame grabber service. The main job of the frame grabber is to grab frames from the TfL video feed file, and then pass this on to the object detection service. -The frame extraction service grabs single frames from the video feeds, so that object detection can be performed in the next stage of the pipeline. +![TfL frame grabber](./images/tfl-frame-grabber-pipeline-segment.png) -Follow these steps to deploy the **frame extraction service**: +## 💡 Key ideas -1. Navigate to the `Code Samples` and locate `TfL traffic camera frame grabber`. +The key ideas on this page: -2. 
Click `Deploy`.
+* Using a computer vision library to extract frames from a video file
+* Publishing time series data
+* Publishing binary data
+* Using the Quix Data Explorer to examine raw message format
-3. Click `Deploy` once more.
+## What it does
-    This service receives data from the `tfl-cameras` topic and streams data to the `image-raw` topic.
+The key thing this service does is extract frames from the TfL video file. By default the frame grabber grabs one frame in every 100 frames, which is typically one frame for every five seconds of video. This is done using the [OpenCV](https://opencv.org/){target=_blank} Python library.
-[Part 6 - Stream merge :material-arrow-right-circle:{ align=right }](stream-merge.md)
+The frame grabber needs to obtain the video URL, as that is where it's going to grab frames from. Much of the other information can be ignored, so it is filtered out by the following code:
+
+``` python
+camera_video_feed = list(filter(lambda x: x["key"] == "videoUrl", camera["additionalProperties"]))[0]
+```
+
+This creates the `camera_video_feed`, which consists of the following data:
+
+``` json
+{
+    "$type": "Tfl.Api.Presentation.Entities.AdditionalProperties, Tfl.Api.Presentation.Entities",
+    "category": "payload",
+    "key": "videoUrl",
+    "sourceSystemKey": "JamCams",
+    "value": "https://s3-eu-west-1.amazonaws.com/jamcams.tfl.gov.uk/00001.03766.mp4",
+    "modified": "2023-08-31T15:46:06.093Z"
+},
+```
+
+The code then publishes the frames as binary data:
+
+``` python
+self.stream_producer.timeseries.buffer.add_timestamp_nanoseconds(time.time_ns()) \
+    .add_value("image", bytearray(frame_bytes)) \
+    .add_value("lon", lon) \
+    .add_value("lat", lat) \
+    .publish()
+```
+
+Notice the data is now sent as time series data, rather than event data, with the addition of a timestamp.
+
+Geolocation information from the camera data is also added to the message. 
The message then has the format:
+
+``` json
+{
+  "Epoch": 0,
+  "Timestamps": [
+    1693998068342837200
+  ],
+  "NumericValues": {
+    "lon": [
+      0.22112
+    ],
+    "lat": [
+      51.50047
+    ]
+  },
+  "StringValues": {},
+  "BinaryValues": {
+    "image": [
+      "(Binary of 31.67 KB)"
+    ]
+  },
+  "TagValues": {}
+}
+```
+
+This can be used by later stages of the pipeline to locate the capacity information and frame thumbnail on the map.
+
+## 👩‍🔬 Lab - Examine the data
+
+In this section, you learn how to use the Quix Data Explorer to examine data output from this service. The Data Explorer enables you to view data in real time. This is very useful when debugging a pipeline, or ensuring the data you are receiving is what you expect.
+
+To examine the data published by the service:
+
+1. In the pipeline view, click on the arrow (representing the output topic) on the right side of the frame grabber service tile, and select `Explore live data`. This opens a new tab and displays the Data Explorer.
+
+2. In the Data Explorer, ensure that live data is selected (it should be selected by default), and then click on `Messages` to see the raw messages.
+
+3. Click on a message to see its data structure in JSON format.
+
+    !!! tip
+
+        There are two types of message here: stream metadata messages and actual data messages. The messages you're interested in have `timestamp` in them. You can ignore the metadata messages in this tutorial, as they are not used.
+
+4. Examine the data format that is being sent to the next stage of the pipeline, the object detection service. It should be similar to the following:
+
+``` json
+{
+  "Epoch": 0,
+  "Timestamps": [
+    1693998068342837200
+  ],
+  "NumericValues": {
+    "lon": [
+      0.22112
+    ],
+    "lat": [
+      51.50047
+    ]
+  },
+  "StringValues": {},
+  "BinaryValues": {
+    "image": [
+      "(Binary of 31.67 KB)"
+    ]
+  },
+  "TagValues": {}
+}
+```
+
+Here you see the timestamp, geolocation information, and the binary data of the frame that was sent. 
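To make the message structure concrete, here is a minimal illustrative sketch in plain Python (not the actual service code; the function name, frame bytes, and coordinates are placeholders) that assembles a payload in the same shape as the example above:

``` python
import time

def build_frame_message(frame_bytes: bytes, lon: float, lat: float) -> dict:
    """Assemble a timeseries-style payload in the shape shown above."""
    return {
        "Epoch": 0,
        "Timestamps": [time.time_ns()],      # one nanosecond timestamp per sample
        "NumericValues": {"lon": [lon], "lat": [lat]},
        "StringValues": {},
        "BinaryValues": {"image": [frame_bytes]},  # raw bytes of the grabbed frame
        "TagValues": {},
    }

# Placeholder frame bytes standing in for a real grabbed frame
message = build_frame_message(b"\x00\x01\x02", lon=0.22112, lat=51.50047)
print(sorted(message.keys()))
```

Each value field holds a list with one entry per timestamp, which is why even single samples are wrapped in lists.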
+
+## See also
+
+For more information refer to:
+
+* [Quix Streams](../../../client-library-intro.md) - More about streams, publishing, consuming, event data, time series data, and much more.
+* [OpenCV](https://opencv.org/){target=_blank} - More on how to use the OpenCV library.
+
+## 🏃‍♀️ Next step
+
+[Part 4 - Object detection :material-arrow-right-circle:{ align=right }](object-detection.md)
diff --git a/docs/platform/tutorials/image-processing/web-ui.md b/docs/platform/tutorials/image-processing/web-ui.md
index 1c1cb91f..83134693 100644
--- a/docs/platform/tutorials/image-processing/web-ui.md
+++ b/docs/platform/tutorials/image-processing/web-ui.md
@@ -1,31 +1,137 @@
-# 7. Deploy the web UI
+# Web UI
-In this part of the tutorial you add a service to provide a simple UI with which to monitor and control the pipeline.
+In this part of the tutorial you learn about the web UI service.
-The following screenshot shows the last image processed from one of the video streams, as well as the map with a count of all the objects detected so far, and their location:
+![Web UI pipeline](./images/web-ui-pipeline-segment.png)
+
+This provides the rather fancy interface for you to interact with this project.
+
+The following screenshot shows vehicle density at various points in London:
 ![image processing web UI](./images/web-ui.png)
-!!! tip
+## 💡 Key ideas
+
+The key ideas on this page:
+
+* How a web client can read data from a Quix topic using the Quix Streaming Reader API
+* WebSockets as a way of streaming data into a web client
+* Microsoft SignalR as the WebSockets technology used in the Streaming Reader API
+* Accessing an external web application from the pipeline view
+
+## What it does
+
+The key thing this service does is provide a UI that enables you to see vehicle data in real time, displayed on a Google map.
+
+The UI is an Angular web client written in TypeScript. The most important thing to understand is how this service obtains data from the Quix pipeline. 
This is done through use of the [Quix Streaming Reader API](../../../apis/streaming-reader-api/intro.md).
+
+The Streaming Reader API has both an HTTP and a WebSockets interface you can use to interact with a Quix topic. This web client uses the WebSockets interface. This enables data to be streamed from the Quix topic into the web client with good performance, and is more efficient than the request-response approach of HTTP.
+
+The WebSockets interface uses Microsoft SignalR technology. You can read more about that in the [Quix SignalR documentation](../../../apis/streaming-reader-api/signalr.md) for the Reader API.
+
+In essence the code to read a topic needs to:
+
+1. Connect to the Quix SignalR hub.
+2. Subscribe to parameter data, rather than event data, as that is the format used for inbound data in this case.
+3. Handle "parameter data received" events using a callback.
+
+The web UI code to do this is similar to the following:
+
+``` javascript
+ngAfterViewInit(): void {
+  this.getInitialData();
+
+  this.quixService.initCompleted$.subscribe((topicName) => {
+    this._topicName = topicName;
+
+    this.quixService.ConnectToQuix().then(connection => {
+      this.connection = connection;
+      this.connection.on('ParameterDataReceived', (data: ParameterData) => {
+        this._parameterDataReceived$.next(data);
+      });
+      this.subscribeToData();
+
+      this.connection.onreconnected((connectionId?: string) => {
+        if (connectionId) this.subscribeToData();
+      });
+    });
+  });
+}
+```
+
+Simplifying, after connecting to the Quix topic, the corresponding callback (event) handler is invoked whenever a `ParameterDataReceived` event occurs. There are other events that can be subscribed to. You can read more about events and subscription in the [subscription and event documentation](../../../apis/streaming-reader-api/subscriptions.md).
+
+!!! 
note + + A web client can also write data into a Quix topic using the [Quix Streaming Writer API](../../../apis/streaming-writer-api/intro.md), but in this app you only consume (read) data. + +The data read from the topic is as follows: + +``` json +{ + "Epoch": 0, + "Timestamps": [ + 1693573934793346000 + ], + "NumericValues": { + "car": [ + 7 + ], + "traffic light": [ + 1 + ], + "person": [ + 3 + ], + "lat": [ + 51.5107 + ], + "lon": [ + -0.11512 + ], + "delta": [ + -4.353343725204468 + ] + }, + "StringValues": { + "image": [ + "iVBO…to/v37HG18UyZ1Qz/fby/+yXUGc5UVWZfIHnX0iqM6aEAAAAASUVORK5CYII=" + ] + }, + "BinaryValues": {}, + "TagValues": { + "parent_streamId": [ + "JamCams_00001.02500" + ] + } +} +``` + +The interesting thing here is that the detected object image to be displayed by the UI is passed to it in Base64 encoded format, as the HTTP interface of the streaming reader API is being used. + +## Understand the code + +You learned how to explore the code for a service in previous parts of this tutorial. The web UI is a fairly standard web client using Angular. + +For more details on using the Quix Streaming Reader API see the [API documentation](../../../apis/streaming-reader-api/intro.md). - At this point, make sure that all the services in your pipeline are running. +## 👩‍🔬 Lab - Explore the UI -Follow these steps to deploy the **web UI service**: +If you have not done so, explore the web UI. -1. Navigate to the `Code Samples` and locate `TFL image processing UI`. +1. In your pipeline view, click on the `external link` icon: -2. Click `Deploy`. + ![External link](./images/external-link.png) -3. Click `Deploy` again. +2. Now interact with the web UI. -4. Once deployed, click the service tile. +Have fun! -5. Click the `Public URL` to launch the UI in a new browser tab. +## See also - ![image processing web UI](./images/ui-public-url.png) +For more information refer to: -You have now deployed the web UI. 
+* [Quix Streaming Reader API](../../../apis/streaming-reader-api/intro.md) - read about the API used by clients external to Quix to read data from a Quix topic. -You can select the type of object you want to detect, and the locations at which that object are detected are displayed on the map. The number of occurrences of detection at that location are also displayed in the map pin. +## 🏃‍♀️ Next step -[Part 7 - Summary :material-arrow-right-circle:{ align=right }](summary.md) +[Part 6 - Other services :material-arrow-right-circle:{ align=right }](other-services.md) diff --git a/docs/platform/tutorials/index.md b/docs/platform/tutorials/index.md index 8982d9fe..89d6e05a 100644 --- a/docs/platform/tutorials/index.md +++ b/docs/platform/tutorials/index.md @@ -12,7 +12,7 @@ Each tutorial is divided into parts, so that you can leave a tutorial at a conve --- - Deploy a real-time **data science** project into a scalable self-maintained solution. + Deploy a real-time **data science** application into a scalable self-maintained solution. [:octicons-arrow-right-24: Data Science](./data-science/index.md) @@ -35,7 +35,7 @@ Each tutorial is divided into parts, so that you can leave a tutorial at a conve --- - Deploy a real-time data science project into a scalable self-maintained solution. + Deploy a real-time data science application into a scalable self-maintained solution. 
[:octicons-arrow-right-24: ML Predictions](./data-science/index.md) diff --git a/docs/platform/tutorials/matlab/images/code_samples.png b/docs/platform/tutorials/matlab/images/code_samples.png deleted file mode 100644 index d78a4115..00000000 Binary files a/docs/platform/tutorials/matlab/images/code_samples.png and /dev/null differ diff --git a/docs/platform/tutorials/matlab/images/matlab_data_explorer.png b/docs/platform/tutorials/matlab/images/matlab_data_explorer.png index e05e1c28..cf20a57c 100644 Binary files a/docs/platform/tutorials/matlab/images/matlab_data_explorer.png and b/docs/platform/tutorials/matlab/images/matlab_data_explorer.png differ diff --git a/docs/platform/tutorials/matlab/images/matlab_deployment_details.png b/docs/platform/tutorials/matlab/images/matlab_deployment_details.png index 53d59739..e89d3992 100644 Binary files a/docs/platform/tutorials/matlab/images/matlab_deployment_details.png and b/docs/platform/tutorials/matlab/images/matlab_deployment_details.png differ diff --git a/docs/platform/tutorials/matlab/images/matlab_deployment_dialog.png b/docs/platform/tutorials/matlab/images/matlab_deployment_dialog.png index 566d62e3..4c1103e6 100644 Binary files a/docs/platform/tutorials/matlab/images/matlab_deployment_dialog.png and b/docs/platform/tutorials/matlab/images/matlab_deployment_dialog.png differ diff --git a/docs/platform/tutorials/matlab/images/matlab_deployment_tag.png b/docs/platform/tutorials/matlab/images/matlab_deployment_tag.png index 3d9f8c1f..ace18706 100644 Binary files a/docs/platform/tutorials/matlab/images/matlab_deployment_tag.png and b/docs/platform/tutorials/matlab/images/matlab_deployment_tag.png differ diff --git a/docs/platform/tutorials/matlab/images/matlab_pipeline_view.png b/docs/platform/tutorials/matlab/images/matlab_pipeline_view.png index 731fa36a..c6c880e7 100644 Binary files a/docs/platform/tutorials/matlab/images/matlab_pipeline_view.png and 
b/docs/platform/tutorials/matlab/images/matlab_pipeline_view.png differ diff --git a/docs/platform/tutorials/matlab/images/matlab_pkg_upload.png b/docs/platform/tutorials/matlab/images/matlab_pkg_upload.png deleted file mode 100644 index 94243ba6..00000000 Binary files a/docs/platform/tutorials/matlab/images/matlab_pkg_upload.png and /dev/null differ diff --git a/docs/platform/tutorials/matlab/images/matlab_project_preview.png b/docs/platform/tutorials/matlab/images/matlab_project_preview.png deleted file mode 100644 index b4f65ba4..00000000 Binary files a/docs/platform/tutorials/matlab/images/matlab_project_preview.png and /dev/null differ diff --git a/docs/platform/tutorials/matlab/images/matlab_set_up_project.png b/docs/platform/tutorials/matlab/images/matlab_set_up_application.png similarity index 100% rename from docs/platform/tutorials/matlab/images/matlab_set_up_project.png rename to docs/platform/tutorials/matlab/images/matlab_set_up_application.png diff --git a/docs/platform/tutorials/matlab/images/matlab_starter_application_creation.png b/docs/platform/tutorials/matlab/images/matlab_starter_application_creation.png new file mode 100644 index 00000000..2e7bc092 Binary files /dev/null and b/docs/platform/tutorials/matlab/images/matlab_starter_application_creation.png differ diff --git a/docs/platform/tutorials/matlab/images/matlab_starter_project_creation.png b/docs/platform/tutorials/matlab/images/matlab_starter_project_creation.png deleted file mode 100644 index b00146de..00000000 Binary files a/docs/platform/tutorials/matlab/images/matlab_starter_project_creation.png and /dev/null differ diff --git a/docs/platform/tutorials/matlab/images/pipeline_view.png b/docs/platform/tutorials/matlab/images/pipeline_view.png index c3782dde..e881bc7f 100644 Binary files a/docs/platform/tutorials/matlab/images/pipeline_view.png and b/docs/platform/tutorials/matlab/images/pipeline_view.png differ diff --git 
a/docs/platform/tutorials/matlab/images/simulink_code_samples.png b/docs/platform/tutorials/matlab/images/simulink_code_samples.png deleted file mode 100644 index 90fe1bde..00000000 Binary files a/docs/platform/tutorials/matlab/images/simulink_code_samples.png and /dev/null differ diff --git a/docs/platform/tutorials/matlab/images/simulink_data_explorer.png b/docs/platform/tutorials/matlab/images/simulink_data_explorer.png index 0411d3dd..3261722c 100644 Binary files a/docs/platform/tutorials/matlab/images/simulink_data_explorer.png and b/docs/platform/tutorials/matlab/images/simulink_data_explorer.png differ diff --git a/docs/platform/tutorials/matlab/images/simulink_deployment_details.png b/docs/platform/tutorials/matlab/images/simulink_deployment_details.png index 25094e84..0e2bea9e 100644 Binary files a/docs/platform/tutorials/matlab/images/simulink_deployment_details.png and b/docs/platform/tutorials/matlab/images/simulink_deployment_details.png differ diff --git a/docs/platform/tutorials/matlab/images/simulink_deployment_dialog.png b/docs/platform/tutorials/matlab/images/simulink_deployment_dialog.png index bdb697f7..3544666d 100644 Binary files a/docs/platform/tutorials/matlab/images/simulink_deployment_dialog.png and b/docs/platform/tutorials/matlab/images/simulink_deployment_dialog.png differ diff --git a/docs/platform/tutorials/matlab/images/simulink_deployment_tag.png b/docs/platform/tutorials/matlab/images/simulink_deployment_tag.png index 910d2a43..c000d351 100644 Binary files a/docs/platform/tutorials/matlab/images/simulink_deployment_tag.png and b/docs/platform/tutorials/matlab/images/simulink_deployment_tag.png differ diff --git a/docs/platform/tutorials/matlab/images/simulink_set_up_project.png b/docs/platform/tutorials/matlab/images/simulink_set_up_project.png deleted file mode 100644 index f0763ca4..00000000 Binary files a/docs/platform/tutorials/matlab/images/simulink_set_up_project.png and /dev/null differ diff --git 
a/docs/platform/tutorials/matlab/images/starter_project_creation.png b/docs/platform/tutorials/matlab/images/starter_project_creation.png deleted file mode 100644 index 4845b0f7..00000000 Binary files a/docs/platform/tutorials/matlab/images/starter_project_creation.png and /dev/null differ diff --git a/docs/platform/tutorials/matlab/images/starter_source.png b/docs/platform/tutorials/matlab/images/starter_source.png deleted file mode 100644 index e77439d0..00000000 Binary files a/docs/platform/tutorials/matlab/images/starter_source.png and /dev/null differ diff --git a/docs/platform/tutorials/matlab/matlab-and-simulink.md b/docs/platform/tutorials/matlab/matlab-and-simulink.md index a63b6c09..a9477dcc 100644 --- a/docs/platform/tutorials/matlab/matlab-and-simulink.md +++ b/docs/platform/tutorials/matlab/matlab-and-simulink.md @@ -10,16 +10,18 @@ This section describes the steps for deploying a MATLAB function that rotates 2D ### Prerequisites - - A Quix account. You can sign up for a free account from the Quix [website](https://quix.io/product/){target=_blank}. - - MathWorks licenses for MATLAB, MATLAB Compiler and the MATLAB Compiler SDK. +* A Quix account. You can sign up for a free account from the Quix [website](https://quix.io/product/){target=_blank}. +* It is assumed you have created a Quix project and environment in which to contain your application. Alternatively, you can use a legacy workspace, but this is not recommended. +* MathWorks licenses for MATLAB, MATLAB Compiler and the MATLAB Compiler SDK. This tutorial uses MATLAB R2023a. Please refer to the [Working with different MATLAB versions](#working-with-different-matlab-versions) section for information on how to use a different version of MATLAB. ### Preparing a MATLAB function for deployment -This section describes the process for packaging MATLAB functions for deployment. The project templates in the Quix Portal have pre-built MATLAB packages. 
To deploy the default packages without compiling them, go to [deploying a MATLAB function](#deploying-a-matlab-function) section. +This section describes the process for packaging MATLAB functions for deployment. The application templates in the Quix Portal have pre-built MATLAB packages. To deploy the default packages without compiling them, go to the [deploying a MATLAB function](#deploying-a-matlab-function) section.
 
 1. In MATLAB, create a new `*.m` file with the following function and save it as `rot.m`:
+
 ```
 function M = rot(v, theta)
     R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
@@ -46,27 +48,19 @@ This section describes the process for packaging MATLAB functions for deployment
 ### Deploying a MATLAB function
 
- 1. Sign in to your workspace on the [Quix Portal](https://portal.platform.quix.ai/){target=_blank}.
-
- 2. Click on the `Code Samples` on the left navigation panel and search for `matlab` in the search bar on the top left to filter code samples:
+ 1. Sign in to your environment in the [Quix Portal](https://portal.platform.quix.ai/){target=_blank}.
 
-    ![code samples view on Quix](./images/code_samples.png){width=600}
+ 2. Click on `Code Samples` in the left navigation panel, and search for `matlab` in the search box on the top left to filter code samples.
 
 3. Select the MATLAB template for your programming language of choice. Optionally, use the `LANGUAGES` filter on the left to filter templates based on the programming language.
 
- 4. Click on the `Preview code` button to open a preview of the project, and then click on the `Edit code` button to generate a project from the template:
-
-    ![project preview on Quix](./images/matlab_project_preview.png){width=600}
-
- 5. Enter `Rotation Transform` as the project name. Enter `matlab-input` for input topic and `matlab-output` for output topic, and click `Save as Project.`
-
-    ![create project from template](./images/matlab_set_up_project.png){width=600}
+ 4. 
Click on the `Preview code` button to open a preview of the application, and then click on the `Edit code` button to generate an application from the template.
 
- 6. To use your own MATLAB packages, replace the contents of the `MATLAB` directory in the project with your assets by clicking on the upload icon in the project explorer.
+ 5. Enter `Rotation Transform` as the application name. Enter `matlab-input` for input topic and `matlab-output` for output topic, and click `Save as Application`.
 
-    ![upload MATLAB packages](./images/matlab_pkg_upload.png){width=600}
+ 6. To use your own MATLAB packages, replace the contents of the `MATLAB` directory in the application with your assets by clicking on the upload icon in the application explorer.
 
- 7. Assign a tag to the project by clicking on the `add tag` icon and typing in a release tag such as `1.0`. Click on `Deploy` on the top right to open the deployment dialog:
+ 7. Assign a tag to the application by clicking on the `add tag` icon and typing in a release tag such as `1.0`. Click on `Deploy` on the top right to open the deployment dialog:
 
    ![create a release tag](./images/matlab_deployment_tag.png){width=600}
@@ -78,17 +72,15 @@ This section describes the process for packaging MATLAB functions for deployment
 
 This section describes the steps to test the MATLAB function by deploying a service to generate test data. In production environments, data from sensors or the output of another function or simulation takes the place of this service.
 
- 1. Click on the `Code Samples` on the left navigation, select `Python` under languages, and `Source` under the pipeline stage. Then type `starter` in the search box to filter the starter project for a data source. Follow the steps described in the previous section to create a project named `2D Vector Source` based on this template by clicking on `Preview code` followed by `Edit code`:
+ 1. 
Click on the `Code Samples` on the left navigation, select `Python` under languages, and `Source` under the pipeline stage. Then type `starter` in the search box to filter the starter template for a data source. Follow the steps described in the previous section to create an application named `2D Vector Source` based on this template by clicking on `Preview code` followed by `Edit code`. - ![create project from starter template](./images/starter_source.png){width=600} + 2. Set the output topic of the application to the input topic of the deployment containing the MATLAB transformation: - 2. Set the output topic of the project to the input topic of the deployment containing the MATLAB transformation: + ![starter application configuration](./images/matlab_starter_application_creation.png){width=600} - ![starter project configuration](./images/matlab_starter_project_creation.png){width=600} + 3. Replace the contents of the `main.py` file of the application with the following script, which generates a random stream of 2D unit vectors: - 3. Replace the contents of the `main.py` file of the project with the following script, which generates a random stream of 2D unit vectors: - - ``` + ```python import quixstreams as qx import time import datetime @@ -113,7 +105,7 @@ This section describes the steps to test the MATLAB function by deploying a serv time.sleep(0.5) ``` - 4. Create a tag and deploy the `2D Vector Source`. + 4. Create a tag and deploy the `2D Vector Source`. Click `Pipeline` in the main navigation to display your pipeline: ![pipeline](./images/matlab_pipeline_view.png){width=600} @@ -121,7 +113,7 @@ This section describes the steps to test the MATLAB function by deploying a serv ![deployment details](./images/matlab_deployment_details.png){width=600} - 6. 
Next, click on `Data explorer` on the left navigation panel, select `Live data` from the top menu, and select the output topic of the MATLAB transformation (for example, `matlab-output`), the stream, and the parameters as shown in the figure below to view the live transformation:
+ 6. Next, click on `Data explorer` on the left-hand navigation, select `Live data` from the top menu, and select the output topic of the MATLAB transformation (for example, `matlab-output`), the stream, and the parameters as shown in the figure below to view the live transformation:
 
    ![data explorer](./images/matlab_data_explorer.png){width=600}
@@ -139,11 +131,12 @@ This tutorial uses MATLAB R2023a. Please refer to the [Working with different MA
 ### Preparing a Simulink model for deployment
 
 1. Download the MATLAB and Simulink assets for the internal combustion engine from the MathWorks [site](https://www.mathworks.com/help/simulink/slref/modeling-engine-timing-using-triggered-subsystems.html){target=_blank} or the `samples` directory in the Quix `Code Samples` for MATLAB.
- 2. Convert the input and output of the Simulink model to workspace variables. For other means of interacting with Simulink programmatically, refer to [How to Bring Data from MATLAB Into Simulink](https://www.youtube.com/watch?v=kM2qL__YxBQ){target=_blank} and [Simulate a Simulink® model from Python](https://github.com/mathworks/Call-Simulink-from-Python){target=_blank}.
+ 2. Convert the input and output of the Simulink model to MATLAB workspace variables. For other means of interacting with Simulink programmatically, refer to [How to Bring Data from MATLAB Into Simulink](https://www.youtube.com/watch?v=kM2qL__YxBQ){target=_blank} and [Simulate a Simulink® model from Python](https://github.com/mathworks/Call-Simulink-from-Python){target=_blank}.
 
    ![Simulink](./images/simulink_console.png){width=600}
 
 3. 
Create a MATLAB function in `engine.m` file with the following content to bootstrap the Simulink model and prepare it for deployment: + ``` function R = engine(throttle_angle, time) ta = timeseries(throttle_angle, time); @@ -154,12 +147,14 @@ This tutorial uses MATLAB R2023a. Please refer to the [Working with different MA R = sout.engine_speed.Data(end); end ``` - 4. If you use workspace variables to pass arguments to Simulink, create them in the workspace before compiling the model. Run the following commands on the MATLAB command window to seed some input data and run the Simulink model using the `engine.m` function: + 4. If you use MATLAB workspace variables to pass arguments to Simulink, create them in the MATLAB workspace before compiling the model. Run the following commands in the MATLAB command window to seed some input data and run the Simulink model using the `engine.m` function: + ``` throttle_a = [0.2, 0.23, 1.2, 4.2, 5.3 ]; ts = [1, 2, 3, 4, 5]; engine(throttle_a, ts); ``` + 5. On the MATLAB command window, type the following command to compile the MATLAB function to your preferred runtime environment. For information on compiling MATLAB functions for deployment, please refer to [MATLAB compiler documentation](https://www.mathworks.com/help/compiler/mcc.html#d124e20858){target=_blank}: === "Python" @@ -178,29 +173,23 @@ This tutorial uses MATLAB R2023a. Please refer to the [Working with different MA ### Deploying a Simulink model - 1. Sign in to the workspace on the [Quix Portal](https://portal.platform.quix.ai/){target=_blank}. + 1. Sign in to the environment on the [Quix Portal](https://portal.platform.quix.ai/){target=_blank}. 2. Click on the `Code Samples` on the left navigation panel and search for `simulink` in the search bar on the top left to filter code samples (templates for MATLAB and Simulink are the same): - ![code samples view on Quix](./images/simulink_code_samples.png){width=600} - 3. Click on the template for your programming language of choice.
You can also use the `LANGUAGES` filters on the left to filter templates based on the programming language. - 4. Click on the `Preview code` button to open a preview of the project, and then click on the `Edit code` button to generate a new project from the template: + 4. Click on the `Preview code` button to open a preview of the application, and then click on the `Edit code` button to generate a new application from the template. - ![project preview on Quix](./images/matlab_project_preview.png){width=600} + 5. Enter `Engine Model` for the application name. Enter `simulink-input` for input topic and `simulink-output` for output topic, and click `Save as Application.` - 5. Enter `Engine Model` for the project name. Enter `simulink-input` for input topic and `simulink-output` for output topic, and click `Save as Project.` - - ![create project from template](./images/simulink_set_up_project.png){width=600} - - 6. To deploy your own functions and models, replace the contents of the `MATLAB` directory in the project with your packages. + 6. To deploy your own functions and models, replace the contents of the `MATLAB` directory in the application with your packages. 7. Replace the contents of the main function (`main.py` for Python, `Program.cs` for C#) with the following for your target programming language. They are responsible for calling the `engine` function with the correct arguments: === "Python" - ``` python + ```python import quixstreams as qx import os import quixmatlab @@ -235,7 +224,7 @@ This tutorial uses MATLAB R2023a. Please refer to the [Working with different MA === "C\#" - ``` cs + ```cs using System; using MathWorks.MATLAB.Runtime; using MathWorks.MATLAB.Types; @@ -278,7 +267,7 @@ This tutorial uses MATLAB R2023a. Please refer to the [Working with different MA } ``` - 8. Assign a tag to the project by clicking on the `add tag` icon and typing in a release tag such as `1.0`, and clicking on `Deploy` on the top right to open the deployment dialog: + 8. 
Assign a tag to the application by clicking on the `add tag` icon and typing in a release tag such as `1.0`, and clicking on `Deploy` on the top right to open the deployment dialog: ![create a release tag](./images/simulink_deployment_tag.png){width=600} @@ -290,18 +279,13 @@ This tutorial uses MATLAB R2023a. Please refer to the [Working with different MA This section describes the steps to deploy a service to generate test data for the model. In production environments, data from sensors or output of another simulation takes the place of this service. - 1. Click on the `Code Samples` on the left navigation, select `Python` under languages, and `Source` under the pipeline stage. Then type `starter` in the search box to filter the starter project for a data source. Follow the steps described in the previous section to create a project based on this template by clicking on `Preview code` followed by `Edit code`: - - - ![create project from starter template](./images/starter_source.png){width=600} + 1. Click on the `Code Samples` on the left navigation, select `Python` under languages, and `Source` under the pipeline stage. Then type `starter` in the search box to filter the starter template for a data source. Follow the steps described in the previous section to create an application based on this template by clicking on `Preview code` followed by `Edit code`. - 2. Set the output topic of the `Engine Data Source` to the input topic of the deployment containing the Simulink model: + 2. Set the output topic of the `Engine Data Source` to the input topic of the deployment containing the Simulink model, `simulink-input`. - ![starter project configuration](./images/starter_project_creation.png){width=600} + 3. Replace the contents of the `main.py` file in the `Engine Data Source` application with the following script, which randomly generates a throttle angle once every second: - 3. 
Replace the contents of the `main.py` file in the `Engine Data Source` project with the following script, which randomly generates a throttle angle once every second: - - ``` + ```python import quixstreams as qx import time import datetime @@ -337,7 +321,7 @@ Quix supports all versions of MATLAB with support for MATLAB Runtime API. To red === "Python" - ``` python + ```dockerfile FROM ubuntu:22.04 ENV PYTHONUNBUFFERED=1 ENV PYTHONIOENCODING=UTF-8 @@ -505,7 +489,8 @@ Quix supports all versions of MATLAB with support for MATLAB Runtime API. To red ``` === "C\# (SDK)" - ``` cs + + ```dockerfile FROM mcr.microsoft.com/dotnet/sdk:6.0-jammy # MathWorks base dependencies @@ -525,7 +510,8 @@ Quix supports all versions of MATLAB with support for MATLAB Runtime API. To red ``` === "C\# (Runtime)" - ``` cs + + ```dockerfile FROM mcr.microsoft.com/dotnet/runtime:6.0-jammy # MathWorks base dependencies @@ -545,6 +531,7 @@ Quix supports all versions of MATLAB with support for MATLAB Runtime API. To red ``` === "Base dependencies" + ``` ca-certificates libasound2 libc6 libcairo2 libcairo-gobject2 libcap2 libcrypt1 libcrypt-dev libcups2 libdrm2 libdw1 libgbm1 libgdk-pixbuf2.0-0 libgl1 libglib2.0-0 libgomp1 libgstreamer1.0-0 libgstreamer-plugins-base1.0-0 libgtk-3-0 libice6 libnspr4 libnss3 libodbc1 libpam0g libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0 libsndfile1 libsystemd0 libuuid1 libwayland-client0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxft2 libxinerama1 libxrandr2 libxt6 libxtst6 libxxf86vm1 linux-libc-dev locales locales-all make net-tools odbcinst1debian2 procps sudo unzip wget zlib1g ``` \ No newline at end of file diff --git a/docs/platform/tutorials/nocode-sentiment/nocode-sentiment-analysis.md index 507624d6..436c9c39 100644 --- a/docs/platform/tutorials/nocode-sentiment/nocode-sentiment-analysis.md +++ b/docs/platform/tutorials/nocode-sentiment/nocode-sentiment-analysis.md @@ -1,12 +1,7 @@ # No
code sentiment analysis -This tutorial shows how to build a data processing pipeline without -code. You’ll analyze tweets that contain information about Bitcoin and -stream both raw and transformed data into -[Snowflake](https://www.snowflake.com/){target=_blank}, a storage platform, -using the Twitter, -[HuggingFace](https://huggingface.co/){target=_blank} and Snowflake -connectors. +This tutorial shows how to build a data processing pipeline without code. You’ll analyze tweets that contain information about Bitcoin and stream both raw and transformed data into [Snowflake](https://www.snowflake.com/){target=_blank}, a storage platform, +using the Twitter, [HuggingFace](https://huggingface.co/){target=_blank} and Snowflake connectors. I’ve made a video of this tutorial if you prefer watching to reading. @@ -25,16 +20,13 @@ I’ve made a video of this tutorial if you prefer watching to reading. ## Step one: create a database -Sign in to your Snowflake account to create the Snowflake database which -will receive your data. Call this "demodata" and click "Create." +Sign in to your Snowflake account to create the Snowflake database which will receive your data. Call this "demodata" and click "Create." ## Step two: get your data -In the Quix Portal, navigate to the `Code Samples` and search for the Twitter source -connector. +In the Quix Portal, navigate to the `Code Samples` and search for the Twitter source connector. -Click "Add new." This adds the source to your pipeline and brings you -back to the Code Samples. +Click "Add new." This adds the source to your pipeline and brings you back to the Code Samples. 
Fill in the necessary fields: @@ -60,7 +52,7 @@ Click "Deploy" - In the Code Samples, search for "HuggingFace" - - Click "Set up and deploy" on the HuggingFace connector + - Click "Deploy" on the HuggingFace connector - Choose "Twitter data" as the input topic @@ -83,7 +75,7 @@ Click "Deploy" - Search the Code Samples for the Snowflake connector - - Click "Set up and deploy" on the connector + - Click "Deploy" on the connector - Fill in the necessary fields: @@ -93,26 +85,17 @@ Click "Deploy" "hugging-face-output". This means it will receive the data being output by the sentiment analysis model. -To fill in the Snowflake locator and region (these are similar to a -unique ID for your Snowflake instance), navigate to your Snowflake -account. Copy the locator and region from the URL and paste them into -the corresponding fields in the connector setup in Quix. Lastly, input -your username and password. +To fill in the Snowflake locator and region (these are similar to a unique ID for your Snowflake instance), navigate to your Snowflake account. Copy the locator and region from the URL and paste them into the corresponding fields in the connector setup in Quix. Lastly, input your username and password. ![image](image2.png) -Click "Deploy" on the Snowflake connector. If the credentials and -connection details are correct, you’ll see the "Connected" status in the -log and will be redirected to your workspace. +Click "Deploy" on the Snowflake connector. If the credentials and connection details are correct, you’ll see the "Connected" status in the log and will be redirected to your environment. ![image](image3.png) -Congratulations\! You built a no-code pipeline that filters and collects -data from Twitter, transforms it with a HuggingFace model and delivers -it to a Snowflake database. +Congratulations! You built a no-code pipeline that filters and collects data from Twitter, transforms it with a HuggingFace model and delivers it to a Snowflake database. 
-You can now go back over to Snowflake and find the "Databases" menu. -Expand the "demodata" database and then find the tables under "public". +You can now go back over to Snowflake and find the "Databases" menu. Expand the "demodata" database and then find the tables under "public". ![image](snowflake.png) diff --git a/docs/platform/tutorials/rss-tutorial/rss-processing-pipeline.md b/docs/platform/tutorials/rss-tutorial/rss-processing-pipeline.md index a0bbf51a..21417c17 100644 --- a/docs/platform/tutorials/rss-tutorial/rss-processing-pipeline.md +++ b/docs/platform/tutorials/rss-tutorial/rss-processing-pipeline.md @@ -19,7 +19,7 @@ This tutorial has three parts What you need - A free [Quix account](https://quix.io/signup){target=_blank}. It - comes with enough credits to create this project. + comes with enough credits to create this application. - A Slack account with access to create a webhook. ([This guide](https://api.slack.com/messaging/webhooks){target=_blank} can help you with this step.) @@ -29,7 +29,7 @@ What you need In your Quix account, navigate to the `Code Samples` and search for `RSS Data Source.` (Hint: you can watch Steve prepare this code in the video tutorial if you’re like to learn more about it.) -Click `Setup & deploy` on the `RSS Data Source` sample. (The card has a blue line across its top that indicates it’s a source connector.) +Click `Deploy` on the `RSS Data Source` sample. (The card has a blue line across its top that indicates it’s a source connector.) ![RSSTutorial/image1.png](image1.png) @@ -40,7 +40,7 @@ Enter the following URL into the rss_url field: [https://stackoverflow.com/feeds/tag/python](https://stackoverflow.com/feeds/tag/python){target=_blank} Click `Deploy` and wait a few seconds for the pre-built connector to be -deployed to your workspace. +deployed to your environment. You will then begin to receive data from the RSS feed. The data then goes into the configured output topic. Don’t worry, you won’t lose data. 
@@ -56,11 +56,11 @@ questions with certain tags get delivered to you. #### 1\. Get the `RSS Data Filtering` connector Return to the `Code Samples` and search for `RSS Data Filtering.` -Click `Setup & deploy` on the card. +Click `Deploy` on the card. -If you created a new workspace for this project, the fields -automatically populate. If you’re using the workspace for other -projects, you may need to specify the input topic as `rss-data.` +If you created a new environment for this application, the fields +automatically populate. If you’re using the environment for other +applications, you may need to specify the input topic as `rss-data`. You might also want to customize the tag_filter. It is automatically populated with a wide range of tags related to Python. This works well @@ -74,11 +74,11 @@ connector will begin processing the data that’s been building up in the rss-data topic. Have a look in the logs by clicking the Data Filtering Model tile (pink -outlined) on the workspace home page. +outlined) on the environment home page. ![RSSTutorial/image2.png](image2.png) -The transformation stage is now complete. Your project is now sending +The transformation stage is now complete. Your application is now sending the filtered and enhanced data to the output topic. ### Sending alerts @@ -106,8 +106,8 @@ developing this. Trust me. #### 2\. Modify and deploy the `Slack Notification` connector -Enter your webhook into the webhook_url field. Click `Save as project.` -This will save the code to your workspace, which is a GitLab repository. +Enter your webhook into the webhook_url field. Click `Save as Application`. +This will save the code to your environment, which is a Gitea repository. Once saved, you’ll see the code again. The quix_function.py file should be open. This is what you’ll alter. The default code dumps everything in @@ -120,7 +120,7 @@ The code picks out several field values from the parameter data and combines them to form the desired Slack alert.
Copy the code and paste it over the quix_function.py file in your -project in the Quix portal. +application in the Quix portal. Save it by clicking `CTRL+S` or `Command + S` or click the tick in the top right. diff --git a/docs/platform/tutorials/sentiment-analysis/analyze.md b/docs/platform/tutorials/sentiment-analysis/analyze.md deleted file mode 100644 index 420348f8..00000000 --- a/docs/platform/tutorials/sentiment-analysis/analyze.md +++ /dev/null @@ -1,51 +0,0 @@ -# 2. Analyzing sentiment - -In [Part 1](sentiment-demo-ui.md) you deployed the Sentiment Demo UI, interacted with the UI to send messages and view messages of other users, and saw those messages displayed in the UI in real time. - -In this part of the tutorial you analyze the sentiment of the conversation by adding a new node to the processing pipeline. - -This sentiment analysis microservice utilizes a prebuilt model from [huggingface.co](https://huggingface.co/){target=_blank} to analyze the sentiment of each message flowing through the microservice. - -The microservice subscribes to data from the `messages` topic and publishes sentiment results to the `sentiment` topic. - -!!! tip - - While this tutorial uses a prebuilt sentiment analysis sample, it is also possible to build one from a basic template available in the Code Samples. If you are interested in building your own service, you can refer to an optional part of this tutorial, where you learn how to [code a sentiment analysis service](./code-and-deploy-sentiment-service.md) from the basic template. - -## Deploying the sentiment analysis service - -The sentiment of each message will be evaluated by this new microservice in your message processing pipeline. - -Follow these steps to deploy the prebuilt sentiment analysis microservice: - -1. Navigate to the `Code Samples` and search for `Sentiment analysis`. - -2. Click the `Setup & deploy` button. - -3. Ensure the "input" is set to `messages`. 
- - This is the topic that is subscribed to for messages to analyze. - -4. Ensure the "output" is set to `sentiment`. - - This is the topic that sentiment results are published to. - -5. Click the `Deploy` button. - - This deploys the service using the default settings. If you later find that this microservice is not performing as expected, then you can subsequently edit the deployment, and increase the resources allocated. - -6. Navigate to the web page for the UI project you deployed in [Part 1](sentiment-demo-ui.md). - -7. Enter values for `Room` and `Name` and click `CONNECT`, or re-enter the room. - -8. Now enter chat messages and see the sentiment being updated in real time each time a message is posted. An example of this is shown in the following screenshot: - - ![Positive and negative sentiment chats](./sentiment-analysis-media/image2.png){width=200px} - -The sentiment analysis service you just deployed subscribes to the `messages` topic. The sentiment is returned to the UI through the `sentiment` topic, and displayed both in the chart and next to the comment in the chat window by colorizing the chat user's name. - -!!! success - - You have added to the pipeline by building and deploying a microservice to analyze the chat messages in real time. - -[Subscribe to Tweets from Twitter by following Part 3 of this tutorial :material-arrow-right-circle:{ align=right }](twitter-data.md) \ No newline at end of file diff --git a/docs/platform/tutorials/sentiment-analysis/code-and-deploy-sentiment-service.md b/docs/platform/tutorials/sentiment-analysis/code-and-deploy-sentiment-service.md deleted file mode 100644 index 971e8138..00000000 --- a/docs/platform/tutorials/sentiment-analysis/code-and-deploy-sentiment-service.md +++ /dev/null @@ -1,420 +0,0 @@ -# Sentiment analysis microservice - -In this optional tutorial part, you learn how to code a sentiment analysis microservice, starting with a template from the Code Samples. 
Templates are useful building blocks the Quix platform provides, and which give you a great starting point from which to build your own microservices. - -!!! note - The code shown here is kept as simple as possible for learning purposes. Production code would require more robust error handling. - -## Prerequisites - -It is assumed that you have a data source such as the [Sentiment Demo UI](sentiment-demo-ui.md) used in the Sentiment Analysis tutorial. It supplies data to a `messages` topic and has a `chat-message` column in the dataset. - -Follow the steps below to code, test, and deploy a new microservice to your workspace. - -## Select the template - -Follow these steps to locate and save the code to your workspace: - -1. Navigate to the `Code Samples` and apply the following filters: - - 1. Languages = `Python` - - 2. Pipeline Stage = `Transformation` - - 3. Type = `Basic templates` - -2. Select `Starter transformation`. - - - This is a simple example of how to subscribe to and publish messages to Quix. - - You can't edit anything here, this is a read-only view so you can explore the files in the template and see what each one does. - -3. Click `Preview code` then `Edit code`. - -4. Change the name to `Sentiment analysis`. - -5. Ensure the "input" is set to `messages`. - - This is the topic that is subscribed to for messages to analyze. - -6. Ensure the "output" is set to `sentiment`. - - This is the topic that sentiment results are published to. - -7. Click `Save as project`. - - The code is now saved to your workspace, you can edit and run it as needed before deploying it into the Quix production-ready, serverless, and scalable environment. - -## Development lifecycle - -You're now located in the Quix online development environment, where you will develop the code to analyze the sentiment of each message passing through the pipeline. The following sections step through the development process for this tutorial: - -1. Running the unedited code -2. 
Creating a simple transformation to test your code -3. Implementing the sentiment analysis code -4. Running the sentoment analysis code - -### Running the code - -Begin by running the code as it is, using the following steps: - -1. To get started with this code, click the `run` button near the top right of the code window. - - You'll see the message below in the console output: - - ```sh - Opening input and output topics - Listening to streams. Press CTRL-C to exit. - ``` - -2. Open the Chat App UI you deployed in part 1 of this tutorial and send some messages. - - You will see output similar to this: - - ``` sh - Opening input and output topics - Listening to streams. Press CTRL-C to exit. - time ... TAG__email - 0 1670349744309000000 ... - - [1 rows x 7 columns] - ``` - - This is the Panda DataFrame printed to the console. - -3. To enable you to view the messages more easily you can click the "Messages" tab and send another message from the UI. - - You will see messages arriving in the messages tab: - - ![Messages tab](./sentiment-analysis-media/sentiment-messages.png){width=250px} - - Now click one of the messages. You will see the [JSON](https://www.w3schools.com/whatis/whatis_json.asp){target=_blank} formatted message showing the various parts of the message payload, for example, the "chat-message" and "room": - - ![Expanded message](./sentiment-analysis-media/sentiment-message-expanded.png){width=250px} - -### Creating a simple transformation - -Now that you know the code can subscribe to messages, you need to transform the messages and publish them to an output topic. - -1. If your code is still running, stop by clicking the same button you used to run it. - -2. Locate the `on_pandas_frame_handler` in `quix_function.py`. - -3. Replace the comment `# Here transform your data.` with the code below: - - ```python - # transform "chat-message" column to uppercase - df["chat-message"] = df["chat-message"].str.upper() - ``` - -4. 
Run the code again, and send some more chat messages from the UI. - -5. The messages in the UI are now all in uppercase as a result of your transformation. - -Don't forget to stop the code again. - -### Sentiment analysis - -Now it's time to update the code to perform the sentiment analysis. - -#### requirements.txt - -1. Select the `requirements.txt` file. - -2. Add a new line, insert the following text and save the file: - - ```sh - transformers[torch] - ``` - -#### main.py - -Follow these steps to make the necessary changes: - -1. Locate the file `main.py`. - -2. Import `pipeline` from `transformers`: - - ```python - from transformers import pipeline - ``` - -3. Create the `classifier` property and set it to a new pipeline: - - ```python - classifier = pipeline('sentiment-analysis') - ``` - - ???- info "What's this `pipeline` thing?" - - The pipeline object comes from the transformers library. It's a library used to integrate [huggingface.co](https://huggingface.co/){target=_blank} models. - - The pipeline object contains several transformations in series, including cleaning and transforming to using the prediction model, hence the term `pipeline`. - - When you initialize the pipeline object you specify the model you want to use for predictions. - - You specified `sentiment-analysis` which directs huggingface to provide their standard one for sentiment analysis. - -4. 
Locate the `read_stream` method and pass the `classifier` property into the `QuixFunction` initializer as the last parameter: - - The `QuixFunction` initialization should look like this: - ```python - # handle the data in a function to simplify the example - quix_function = QuixFunction(input_stream, output_stream, classifier) - ``` - -???- info "The completed `main.py` should look like this" - - ```python - from quixstreaming import QuixStreamingClient, StreamEndType, StreamReader, AutoOffsetReset - from quixstreaming.app import App - from quix_function import QuixFunction - import os - from transformers import pipeline - - classifier = pipeline('sentiment-analysis') - - - # Quix injects credentials automatically to the client. Alternatively, you can always pass an SDK token manually as an argument. - client = QuixStreamingClient() - - # Change consumer group to a different constant if you want to run model locally. - print("Opening input and output topics") - - input_topic = client.open_input_topic(os.environ["input"], auto_offset_reset=AutoOffsetReset.Latest) - output_topic = client.open_output_topic(os.environ["output"]) - - - # Callback called for each incoming stream - def read_stream(input_stream: StreamReader): - - # Create a new stream to output data - output_stream = output_topic.create_stream(input_stream.stream_id) - output_stream.properties.parents.append(input_stream.stream_id) - - # handle the data in a function to simplify the example - quix_function = QuixFunction(input_stream, output_stream, classifier) - - # React to new data received from input topic. - input_stream.events.on_read += quix_function.on_event_data_handler - input_stream.parameters.on_read_pandas += quix_function.on_pandas_frame_handler - - # When input stream closes, we close output stream as well. 
- def on_stream_close(endType: StreamEndType): - output_stream.close() - print("Stream closed:" + output_stream.stream_id) - - input_stream.on_stream_closed += on_stream_close - - # Hook up events before initiating read to avoid losing out on any data - input_topic.on_stream_received += read_stream - - # Hook up to termination signal (for docker image) and CTRL-C - print("Listening to streams. Press CTRL-C to exit.") - - # Handle graceful exit of the model. - App.run() - ``` - -#### quix_function.py - -You have completed the changes needed in `main.py`, now you need to update `quix_function.py`. - -##### imports - -1. Select the `quix_function.py` file. - -2. Add the following to the top of the file under the existing imports: - - ```python - from transformers import Pipeline - ``` - -##### init function - -1. Add the following parameter to the `__init__` function: - - ```python - classifier: Pipeline - ``` - - You will pass this in from the `main.py` file in a moment. - -2. Initialize the `classifier` property with the passed in parameter: - - ```python - self.classifier = classifier - ``` - -3. Initialize `sum` and `count` properties: - - ```python - self.sum = 0 - self.count = 0 - ``` - - !!! info "__init__" - - The completed `__init__` function should look like this: - - ```python - def __init__(self, input_stream: StreamReader, output_stream: StreamWriter, classifier: Pipeline): - self.input_stream = input_stream - self.output_stream = output_stream - self.classifier = classifier - - self.sum = 0 - self.count = 0 - ``` - -##### on_pandas_frame_handler function - -Now, following these steps, edit the code to calculate the sentiment of each chat message using the classifier property you set in the init function. - -1. Locate the `on_pandas_frame_handler` function you added code to earlier. - -2. Change the `on_pandas_frame_handler` function to the following code: - - ```python - # Callback triggered for each new parameter data. 
- def on_pandas_frame_handler(self, df_all_messages: pd.DataFrame): - - # Use the model to predict sentiment label and confidence score on received messages - model_response = self.classifier(list(df_all_messages["chat-message"])) - - # Add the model response ("label" and "score") to the pandas dataframe - df = pd.concat([df_all_messages, pd.DataFrame(model_response)], axis=1) - - # Iterate over the df to work on each message - for i, row in df.iterrows(): - - # Calculate "sentiment" feature using label for sign and score for magnitude - df.loc[i, "sentiment"] = row["score"] if row["label"] == "POSITIVE" else - row["score"] - - # Add average sentiment (and update memory) - self.count = self.count + 1 - self.sum = self.sum + df.loc[i, "sentiment"] - df.loc[i, "average_sentiment"] = self.sum/self.count - - # Output data with new features - self.output_stream.parameters.write(df) - ``` - - This is the heart of the sentiment analysis processing code. It analyzes the sentiment of each message and tracks the average sentiment of the whole conversation. The code works as follows: - - 1. Pass a list of all of the "chat messages" in the data frame to the classifier (the sentiment analysis model) and store the result in memory. - - 2. Concatenate (or add) the model response data to the original data frame. - - 3. For each row in the data frame: - - 1. Use the `label`, obtained from running the model, which is either `POSITIVE` or `NEGATIVE` together with the `score` to assign either `score` or `- score` to the `sentiment` column. - - 2. Maintain the count of all messages and total of the sentiment for all messages so that the average sentiment can be calculated. - - 3. Calculate and assign the average sentiment to the `average_sentiment` column in the data frame. 
- -???- info "The completed `quix_function.py` should look like this" - - ```python - from quixstreaming import StreamReader, StreamWriter, EventData, ParameterData - import pandas as pd - from transformers import Pipeline - - class QuixFunction: - def __init__(self, input_stream: StreamReader, output_stream: StreamWriter, classifier: Pipeline): - self.input_stream = input_stream - self.output_stream = output_stream - self.classifier = classifier - - self.sum = 0 - self.count = 0 - - # Callback triggered for each new event. - def on_event_data_handler(self, data: EventData): - print(data.value) - - print("events") - - # Callback triggered for each new parameter data. - def on_pandas_frame_handler(self, df_all_messages: pd.DataFrame): - - # Use the model to predict sentiment label and confidence score on received messages - model_response = self.classifier(list(df_all_messages["chat-message"])) - - # Add the model response ("label" and "score") to the pandas dataframe - df = pd.concat([df_all_messages, pd.DataFrame(model_response)], axis=1) - - # Iterate over the df to work on each message - for i, row in df.iterrows(): - - # Calculate "sentiment" feature using label for sign and score for magnitude - df.loc[i, "sentiment"] = row["score"] if row["label"] == "POSITIVE" else - row["score"] - - # Add average sentiment (and update memory) - self.count = self.count + 1 - self.sum = self.sum + df.loc[i, "sentiment"] - df.loc[i, "average_sentiment"] = self.sum/self.count - - # Output data with new features - self.output_stream.parameters.write(df) - ``` - -### Running the completed code - -Now that the code is complete you can `Run` it one more time, just to be certain it's doing what you expect. - -!!! note - - This time, when you run the code, it will start-up and then immediately download the `sentiment-analysis` model from [huggingface.co](https://huggingface.co/){target=_blank} - -1. Click `Run`. - -2. 
Click the `Messages` tab and select the `output` topic called `sentiment`. - -3. Send some "Chat" messages from the Chat App UI. - -4. Now select a row in the `Messages` tab and inspect the JSON message. - - You will see the `sentiment` and `average_sentiment` in the `NumericValues` section and the `chat-message` and `label` in the `StringValues` section: - - ![Message JSON value](./sentiment-analysis-media/final-message-json.png){width=350px} - -5. You can also verify that the Web Chat UI shows an indication of the sentiment for each message as well as showing the average sentiment in the graph: - - ![Final UI showing sentiment](./sentiment-analysis-media/end-result.gif){width=450px} - -## Deploying your sentiment analysis code - -Now that the sentiment analysis stage is working as expected you can deploy it to the Quix serverless environment. - -!!! info - - If you're thinking that it's already running, so why do you need to bother with this extra step, you should know that the code is currently running in a development sandbox environment. This is separate from the production environment, and is not scalable or resilient. Its main purpose is to allow you to iterate on the development cycle of your Python code, and make sure it runs without error, before deployment. - -Tag the code and deploy the service: - -1. Click the `+tag` button at the top of the code file. - -2. Enter `v1` and press ++enter++. - - This tags the code with a specific identifier and allows you to know exactly which version of the code you are deploying. - -3. Click `Deploy` near the top right corner. - -4. Select `v1` under the `Version Tag`. - - This is the same tag you created in step 2. - -5. In `Deployment settings` change the CPU to 1 and the Memory to 1. - - This ensures the service has enough resources to download and store the hugging face model and to efficiently process the messages. 
If you are on the free tier, you can try things out with your settings on the maximum for CPU and Memory. - -6. Click `Deploy`. - - - Once the service has been built and deployed it will be started. - - The first thing it will do is download the hugging face model for `sentiment-analysis`. - - Then the input and output topics will be opened and the service will begin listening for messages to process. - -7. Go back to the UI, and make sure everything is working as expected. Your messages will have a color-coded sentiment, and the sentiment will displayed on the graph. - -You have now completed this optional tutorial part. You have learned how to create your own sentiment analysis microservice from the Code Samples. diff --git a/docs/platform/tutorials/sentiment-analysis/conclusion.md b/docs/platform/tutorials/sentiment-analysis/conclusion.md deleted file mode 100644 index 6e4f65f2..00000000 --- a/docs/platform/tutorials/sentiment-analysis/conclusion.md +++ /dev/null @@ -1,21 +0,0 @@ -# Conclusion - -You’ve just made extensive use of the Code Samples, our collection of open source connectors, and examples, to deploy a UI and sentiment analysis microservice, and subscribe to Tweets. - -Congratulations, that's quite an achievement! - -## Next Steps - -Here are some suggested next steps to continue on your Quix learning journey: - -* If you want to build your own sentiment analysis service, rather than use a prebuilt service, you can learn how in the optional tutorial part [how to code a sentiment analysis service](code-and-deploy-sentiment-service.md). - -* If you want to customize the Sentiment Demo UI, you can learn how in the optional tutorial part [how to customize the UI](customize-the-ui.md). - -* If you decide to build your own connectors and apps, you can contribute something to the Code Samples. Visit the [Quix GitHub](https://github.com/quixio/quix-samples){target=_blank}. Fork our Code Samples repo and submit your code, updates, and ideas. 
- -What will you build? Let us know! We’d love to feature your project or use case in our [newsletter](https://www.quix.io/community/). - -## Getting help - -If you need any assistance, we're here to help in [The Stream](https://join.slack.com/t/stream-processing/shared_invite/zt-13t2qa6ea-9jdiDBXbnE7aHMBOgMt~8g){target=_blank}, our free Slack community. Introduce yourself and then ask any questions in `quix-help`. diff --git a/docs/platform/tutorials/sentiment-analysis/customize-the-ui.md b/docs/platform/tutorials/sentiment-analysis/customize-the-ui.md index 4bb3bf71..510fd1dd 100644 --- a/docs/platform/tutorials/sentiment-analysis/customize-the-ui.md +++ b/docs/platform/tutorials/sentiment-analysis/customize-the-ui.md @@ -1,62 +1,127 @@ -# Customizing the Sentiment Demo UI +# 👩‍🔬 Lab - Customize the UI -In this optional tutorial part, you learn how to customize the Sentiment Demo UI. +In this lab you use everything you've learned so far, to add a customization to the pipeline. Specifically, you change the name of the chat room in the web UI. -If you want to customize the Sentiment Demo UI, you would follow three main steps: +You develop this change on a feature branch, and then you create a PR to merge your new feature into the develop branch. -1. Create the new project from the existing UI code. -2. Modify the code in your next project as required to customize the UI. -3. Deploy the modified UI. +This is a common pattern for development - you can test your new service on the feature branch, and then test again on the develop branch, before final integration into the production `main` branch. -## Create the project +## Create an environment -1. Navigate to the `Code Samples` and locate `Sentiment Demo UI`. +To create a new environment (and branch): -2. Click `Preview code` and then `Edit code`. +1. Click `+ New environment` to create a new environment (**note, your screen will look slightly different to the one shown here**): -3. 
Ensure that the `sentiment` input box contains `sentiment`. + ![New environment](./images/new-environment.png) - This topic will be subscribed to and will contain the sentiment scores from the sentiment analysis service, you'll deploy this in a later part of this tutorial. +2. Create a new environment called `Rename Chat Room`. -3. Ensure that the `messages` input contains `messages`. +3. Create a new branch called `rename-chat-room`. To do this, from the branch dropdown click `+ New branch` which displays the New branch dialog: - - This topic will contain all the chat messages. - - The UI will subscribe to this topic, to display new messages, as well as publishing to the topic when a user sends a message using the 'send' button in the UI. - - Later, the sentiment analysis service will also subscribe to messages on this topic to produce sentiment scores. + ![New branch](./images/new-branch.png) -3. Click `Save as project`. + !!! important - The code for this Angular UI is now saved to your workspace. + Make sure you branch from the `develop` branch, not `main`, as you are going to merge your changes onto the `develop` branch. -You have created the project and you can now modifiy the code as required. +4. Complete creation of the environment using the default options. -## Modify the code +5. On the projects screen, click your newly created environment, `Rename Chat Room`. -At this stage if you want to customize the code you can do so. You can also deploy what you have and customize it later by repeating the steps in the following section. +## Sync the environment -## Deploy your modified code +You now see that the Quix environment is out of sync with the Git repository. You need to synchronize the Quix view of the environment, with that stored in the repository. -1. Click the `+tag` button at the top of any code file. +To synchronize Quix with the repository: -2. Enter `v1` and press ++enter++. +1. Click `Sync environment`: -3. 
Click `Deploy` near the top right corner. + ![Sync environment](./images/sync-environment.png) -4. In the deployment dialog, select your tag, for example, `v1` under the `Version Tag`. - - This is the tag you just created. + The sync environment dialog is displayed, showing you the changes that are to be made to the `quix.yaml` file, which is the configuration file that defines the pipeline. -5. Click `Service` in `Deployment Settings`. - - This ensures the service runs continuously. +2. Click `Sync environment`, and then `Go to pipeline`. -6. Click the toggle in `Public Access`. + In the pipeline view, you see the services building. Ensure all services are "Running" before continuing. - This enables access from anywhere on the internet. +## Edit the code -7. Click `Deploy`. - - - The UI will stream data from the `sentiment` and `messages` topics as well as send messages to the `messages` topic. - - The `sentiment` topic will be used later for sentiment analysis. +You are now going to edit the code for the UI to rename the chat room. To do this: -In this tutorial you've learned how you can modify the Sentiment Demo UI. +1. Click `Applications` in the left-hand navigation. Locate the UI application in the list and click on it. The code view loads. + +2. Locate the file `room.service.ts` and click it. You can then change the room name to something like `Support Chat Room`: + + ![Edit code](./images/edit-code.png) + +3. Click `Commit` to save your changes (or use your usual Save hotkey such as Command-s). + +4. Click the tag icon, and enter a tag value such as `rename-room-v1`: + + ![Tag icon](./images/tag.png) + +5. Now click the `Redeploy` button on the top right of the code screen. + +6. In the `Edit deployment` dialog select the tag `rename-room-v1` from the `Version tag` dropdown, and then click `Redeploy`. + +At this point the redeployment will restart. You see the spinner as the service rebuilds. 
After some time, the spinner will disappear and you can test the UI again. The name of the chat room has changed:
+
+![New name](./images/new-name.png)
+
+Once you're happy with your change, you can move on to merging it into the `develop` branch.
+
+## Merge the feature
+
+Once the changes on your feature branch are tested, you can merge them into the `develop` branch. Here your changes undergo further tests before finally being merged into production.
+
+To merge your feature branch, `rename-chat-room`, into `develop`:
+
+1. Select `Merge request` from the menu as shown:
+
+    ![Merge request menu](./images/merge-request-menu.png)
+
+2. In the `Merge request` dialog, set the `rename-chat-room` branch to merge into the `develop` branch, as shown:
+
+    ![Merge request dialog](./images/merge-request-dialog.png)
+
+You are going to create a pull request, rather than perform a direct merge. This enables you to have the PR reviewed in GitHub (or another Git provider). You are also going to do a squash and merge, as much of the feature branch history is not required.
+
+## Create the pull request
+
+To create the pull request:
+
+1. Click `Create pull request` in Quix. You are taken to your Git provider, in this case GitHub.
+
+2. Click the `Pull request` button.
+
+3. Add your description, and then click `Create pull request`.
+
+4. Get your PR reviewed and approved. Then squash and merge the commits.
+
+    ![Squash and merge](./images/squash-and-merge.png)
+
+    You can replace the prefilled description with something more succinct. Then click `Confirm squash and merge`.
+
+    !!! tip
+
+        You can just merge, you don't have to squash and merge. You would then retain the complete commit history for your service while it was being developed. Squash and merge is used here by way of example, as the commit messages generated during development were deemed not useful.
+
+## Resync the Develop environment
+
+You have now merged your new feature into the `develop` branch in the Git repository. Your Quix view in the Develop environment is now out of sync with the Git repository. If you click on your Develop environment in Quix, you'll see it is now a commit (the merge commit) behind:
+
+![Develop behind](./images/develop-behind.png)
+
+You now need to make sure your Develop environment in Quix is synchronized with the Git repository. To do this:
+
+1. Click on `Sync environment`. The `Sync environment` dialog is displayed.
+
+2. Review the changes and click `Sync environment`.
+
+3. Click `Go to pipeline`.
+
+Your new service will build and start in the Develop environment, where you can now carry out further testing. When you are satisfied that the feature can be released to production, repeat the previous process to merge your changes into the production `main` branch.
+
+## 🏃‍♀️ Next step
+
+[Part 7 - Summary :material-arrow-right-circle:{ align=right }](summary.md)
diff --git a/docs/platform/tutorials/sentiment-analysis/get-project.md b/docs/platform/tutorials/sentiment-analysis/get-project.md
new file mode 100644
index 00000000..fcec9525
--- /dev/null
+++ b/docs/platform/tutorials/sentiment-analysis/get-project.md
@@ -0,0 +1,109 @@
+# Get the project
+
+While you can try out the live demo, or experiment using the ungated product experience, it can be useful to learn how to get a project up and running in Quix.
+
+Once you have the project running in your Quix account, you can modify the project as required, and save your changes to your forked copy of the project. With a forked copy of the repository, you can also receive upstream bug fixes and improvements if you want to, by syncing the fork with the upstream repository.
+
+In the following sections you learn how to:
+
+1. Fork an existing project repository, in this case the sentiment analysis template project.
+2. 
Create a new project (and environment) in Quix linked to your forked repository.
+
+In later parts of the tutorial you explore the project pipeline using the Quix data explorer and other tools, viewing code, examining data structures, and getting a practical feel for the Quix Portal.
+
+## 💡 Key ideas
+
+The key ideas on this page:
+
+* Forking a public template project repository
+* Connecting Quix to an external Git repository, in this case the forked repository
+* Quix projects, environments, and applications
+* Pipeline view of a project
+* Synchronizing an environment
+
+## Fork the project repository
+
+Quix provides the sentiment analysis template project as a [public GitHub repository](https://github.com/quixio/chat-demo-app){target="_blank"}. If you want to use this template as a starting point for your own project, then the best way to accomplish this is to fork the project. Forking allows you to create a complete copy of the project, while still benefiting from future bug fixes and improvements through the upstream changes.
+
+To fork the repository:
+
+1. Navigate to the [Quix GitHub repository](https://github.com/quixio/chat-demo-app){target="_blank"}.
+
+2. Click the `Fork` button to fork the repo into your GitHub account (or equivalent Git provider if you don't have a GitHub account). Make sure you fork all branches, as you will be looking at the `develop` branch.
+
+    !!! tip
+
+        If you don't have a GitHub account you can use another Git provider, such as GitLab or Bitbucket. If using Bitbucket, for example, you could import the repository - this would act as a clone (a static snapshot) of the repository. This is a simple option for Bitbucket, but you would not receive upstream changes from the original repository once it has been imported. You would, however, have a copy of the project you could then modify to suit your use case. Other providers support other options; check the documentation for your Git provider. 
+
+## Create your Quix project
+
+Now that you have a forked copy of the repository in your GitHub account, you can link your Quix account to it. Doing this enables you to build and deploy the project in your Quix account, and examine the pipeline much more closely.
+
+To link Quix to this forked repository:
+
+1. Log into your Quix account.
+
+2. Click `+ Create project`.
+
+3. Give your project a name. For example, "Sentiment Analysis".
+
+4. Select `Connect to your own Git repo`, and follow the setup guide for your provider.
+
+    !!! tip
+
+        A setup guide is provided for each of the common Git providers. Other Git providers are supported, as long as they support SSH keys.
+
+    The setup guide for GitHub is shown here:
+
+    ![Git setup guide](../../images/git-setup-guide.png)
+
+5. Assuming you are connecting to a GitHub account, you'll now need to copy the SSH key provided by Quix into your GitHub account. See the setup guide for further details.
+
+    !!! important
+
+        It is recommended that you create a new user in your Git provider for managing your Quix projects. You are reminded of this when you create a project (the notice is shown in the following screenshot).
+
+        ![Create new user](../../images/create-new-github-user.png)
+
+
+6. Click `Validate` to test the connection between Quix and GitHub.
+
+    !!! tip
+
+        If errors occur you need to address them before continuing. For example, make sure you have the correct link to the repository, and that you have added the provided SSH key to your provider account, as outlined in the setup guide for that provider.
+
+7. Click `Done` to proceed.
+
+You now need to add an environment to your project. This is explained in the following section.
+
+## Create your Develop environment
+
+A Quix project contains at least one branch. For the purposes of this tutorial you will examine the `develop` branch of the project. In a Quix project a branch is encapsulated in an environment. 
You'll create a `Develop` environment mapped to the `develop` branch of the repository.
+
+Now create an environment called `Develop` which uses the `develop` branch:
+
+1. Enter the environment name `Develop`.
+
+2. Select the `develop` branch from the dropdown.
+
+    Ensure the branch is protected by selecting the `This branch is protected` checkbox.
+
+    !!! tip
+
+        Making a branch protected ensures that developers cannot commit directly into the branch. Developers have to raise pull requests (PRs), which need to be approved before they can be merged into the protected branch.
+
+3. Click `Continue` and then select the Quix Broker and Standard storage options to complete creation of the environment, and the project.
+
+4. Go to the pipeline view. You will see that Quix is out of sync with the repository.
+
+5. Click the `Sync` button to synchronize the environment, and then click `Go to pipeline`. You will see the pipeline building.
+
+At this point you can wait a few minutes for the pipeline services to completely build and start running.
+
+## See also
+
+If you are new to Quix it is worth reviewing the [recent changes page](../../changes.md), as that contains very useful information about the significant recent changes, and also has a number of useful videos you can watch to gain familiarity with Quix. 
+ +## 🏃‍♀️ Next step + +[Part 2 - Try the UI :material-arrow-right-circle:{ align=right }](try-the-ui.md) diff --git a/docs/platform/tutorials/sentiment-analysis/images/click-code-tile.png b/docs/platform/tutorials/sentiment-analysis/images/click-code-tile.png new file mode 100644 index 00000000..9ab654b1 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/click-code-tile.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/develop-behind.png b/docs/platform/tutorials/sentiment-analysis/images/develop-behind.png new file mode 100644 index 00000000..f306b131 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/develop-behind.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/edit-code.png b/docs/platform/tutorials/sentiment-analysis/images/edit-code.png new file mode 100644 index 00000000..246ef083 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/edit-code.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/merge-request-dialog.png b/docs/platform/tutorials/sentiment-analysis/images/merge-request-dialog.png new file mode 100644 index 00000000..6c371082 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/merge-request-dialog.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/merge-request-menu.png b/docs/platform/tutorials/sentiment-analysis/images/merge-request-menu.png new file mode 100644 index 00000000..a3867ae3 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/merge-request-menu.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/messages-view.png b/docs/platform/tutorials/sentiment-analysis/images/messages-view.png new file mode 100644 index 00000000..b81fd40d Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/messages-view.png differ diff --git 
a/docs/platform/tutorials/sentiment-analysis/images/new-branch.png b/docs/platform/tutorials/sentiment-analysis/images/new-branch.png new file mode 100644 index 00000000..2cc4f09d Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/new-branch.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/new-environment.png b/docs/platform/tutorials/sentiment-analysis/images/new-environment.png new file mode 100644 index 00000000..34fd5fc0 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/new-environment.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/new-name.png b/docs/platform/tutorials/sentiment-analysis/images/new-name.png new file mode 100644 index 00000000..46311a80 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/new-name.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/pipeline-view.png b/docs/platform/tutorials/sentiment-analysis/images/pipeline-view.png new file mode 100644 index 00000000..89c9d92a Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/pipeline-view.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/running-ui.png b/docs/platform/tutorials/sentiment-analysis/images/running-ui.png new file mode 100644 index 00000000..41781424 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/running-ui.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/sentiment-analysis-pipeline-segment.png b/docs/platform/tutorials/sentiment-analysis/images/sentiment-analysis-pipeline-segment.png new file mode 100644 index 00000000..e5aff16e Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/sentiment-analysis-pipeline-segment.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/squash-and-merge.png b/docs/platform/tutorials/sentiment-analysis/images/squash-and-merge.png new file 
mode 100644 index 00000000..be8ac05e Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/squash-and-merge.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/sync-environment.png b/docs/platform/tutorials/sentiment-analysis/images/sync-environment.png new file mode 100644 index 00000000..aa6d5f4a Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/sync-environment.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/tag.png b/docs/platform/tutorials/sentiment-analysis/images/tag.png new file mode 100644 index 00000000..286d93e4 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/tag.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/topics-view-live-data.png b/docs/platform/tutorials/sentiment-analysis/images/topics-view-live-data.png new file mode 100644 index 00000000..7b80bc14 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/topics-view-live-data.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/twitch-channels.png b/docs/platform/tutorials/sentiment-analysis/images/twitch-channels.png new file mode 100644 index 00000000..92252ee6 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/twitch-channels.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/images/twitch-credentials.png b/docs/platform/tutorials/sentiment-analysis/images/twitch-credentials.png new file mode 100644 index 00000000..37037735 Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/twitch-credentials.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/ui-tile.png b/docs/platform/tutorials/sentiment-analysis/images/ui-tile.png similarity index 100% rename from docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/ui-tile.png rename to 
docs/platform/tutorials/sentiment-analysis/images/ui-tile.png diff --git a/docs/platform/tutorials/sentiment-analysis/images/web-ui-pipeline-segment.png b/docs/platform/tutorials/sentiment-analysis/images/web-ui-pipeline-segment.png new file mode 100644 index 00000000..74cff9cb Binary files /dev/null and b/docs/platform/tutorials/sentiment-analysis/images/web-ui-pipeline-segment.png differ diff --git a/docs/platform/tutorials/sentiment-analysis/index.md b/docs/platform/tutorials/sentiment-analysis/index.md index a74a2221..04ef0077 100644 --- a/docs/platform/tutorials/sentiment-analysis/index.md +++ b/docs/platform/tutorials/sentiment-analysis/index.md @@ -1,34 +1,124 @@ # Sentiment analysis -In this tutorial you will learn how to build a real-time sentiment analysis pipeline. You'll deploy a UI, a sentiment analysis service, and then connect to Twitter to analyze a high-volume of Twitter data. +In this tutorial you learn about a real-time sentiment analysis pipeline, using a [Quix template project](https://github.com/quixio/chat-demo-app){target=_blank}. -This is the message processing pipeline you will build in this tutorial: +Sentiment analysis is performed on chat messages. The project includes a chat UI, where you can type chat messages. You can also connect to Twitch and perform sentiment analysis on large volumes of messages. -![The pipeline being built in this tutorial](./sentiment-analysis-media/pipeline-view.png) +The completed application is illustrated in the following screenshot: -The completed project is capable of the sentiment analysis of a high volume of tweets, or your own chat messages, as illustrated in the following screenshot: +![Chat with sentiment analysis](./images/running-ui.png) -![The completed project. 
Chats on screen with sentiment and overall sentiment](./sentiment-analysis-media/running-ui.png){width=450px} +You learn how to get the project, try out the UI, look more deeply into the UI and the sentiment analysis service, and then customize the UI. -There are also optional parts of the tutorial where you can learn how to build your own sentiment analysis service rather than use a prebuilt service, and customize your UI, which all help to enhance your learning of key Quix concepts. +## Technologies used -!!! tip - If you need any assistance, we’re here to help in [The Stream](https://join.slack.com/t/stream-processing/shared_invite/zt-13t2qa6ea-9jdiDBXbnE7aHMBOgMt~8g){target=_blank}, our free Slack community. Introduce yourself and then ask any questions in `quix-help`. +Some of the technologies used by this template project are listed here. + +**Infrastructure:** + +* [Quix](https://quix.io/){target=_blank} +* [Docker](https://www.docker.com/){target=_blank} +* [Kubernetes](https://kubernetes.io/){target=_blank} + +**Backend:** + +* [Apache Kafka](https://kafka.apache.org/){target=_blank} +* [Quix Streams](https://github.com/quixio/quix-streams){target=_blank} +* [Flask](https://flask.palletsprojects.com/en/2.3.x/#){target=_blank} +* [pandas](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html){target=_blank} + +**Sentiment analysis:** + +* [Hugging Face](https://huggingface.co/){target=_blank} + +**Frontend:** + +* [Angular](https://angular.io/){target=_blank} +* [Typescript](https://www.typescriptlang.org/){target=_blank} +* [Microsoft SignalR](https://learn.microsoft.com/en-us/aspnet/signalr/){target=_blank} + +**Data warehousing:** + +* [BigQuery](https://cloud.google.com/bigquery/){target=_blank} + +## Live demo + +You can see the project running live on Quix: + +
+ +
+
+You can interact with it here, on this page, or [open the page](https://sentimentdemoui-demo-chatappdemo-prod.deployments.quix.ai/chat){target="_blank"} to view it more clearly.
+
+## Watch a video
+
+Explore the pipeline:
+
+**Loom video coming soon.**
+
+??? Transcript
+
+    **Transcript - coming soon**
+
+## GitHub repository
+
+The complete code for this project can be found in the [Quix GitHub repository](https://github.com/quixio/chat-demo-app){target="_blank"}.
+
+## Getting help
+
+If you need any assistance while following the tutorial, we're here to help in the [Quix forum](https://forum.quix.io/){target="_blank"}.
+
+## Prerequisites
+
+To get started, make sure you have a [free Quix account](https://portal.platform.quix.ai/self-sign-up).
+
+If you are new to Quix it is worth reviewing the [recent changes page](../../changes.md), as that contains very useful information about the significant recent changes, and also has a number of useful videos you can watch to gain familiarity with Quix.
+
+You'll also need an API key for the [Twitch](https://dev.twitch.tv/docs/api/) service (optional), if you want to try Twitch-related features.
+
+If you want to use the Quix BigQuery service (optional), you'll need to provide your credentials for accessing [BigQuery](https://cloud.google.com/bigquery/){target=_blank}.
+
+### Git provider
+
+You also need to have a Git account. This could be GitHub, Bitbucket, GitLab, or any other Git provider you are familiar with, and that supports SSH keys. The simplest option is to create a free [GitHub account](){target=_blank}.
+
+!!! tip
+
+    While this tutorial uses an external Git account, Quix can also provide a Quix-hosted Git solution using Gitea for your own projects. You can watch a video on [how to create a project using Quix-hosted Git](https://www.loom.com/share/b4488be244834333aec56e1a35faf4db?sid=a9aa124a-a2b0-45f1-a756-11b4395d0efc){target=_blank}. 
+
+## The pipeline
+
+This is the message processing pipeline for this project:
+
+![The pipeline](./images/pipeline-view.png)
+
+The main services in the pipeline are:
+
+1. *UI* - provides the chat UI, and shows the sentiment being applied to the chat messages.
+
+2. *Sentiment analysis* - uses the [Hugging Face](https://huggingface.co/) model to perform sentiment analysis on the chat messages.
+
+3. *Twitch data source* - an alternative to typing chat messages - you select a Twitch channel and then perform sentiment analysis on Twitch messages.
 
 ## The parts of the tutorial
 
 This tutorial is divided up into several parts, to make it a more manageable learning experience. The parts are summarized here:
 
-1. **Build your UI**. You deploy the [Sentiment Demo UI](sentiment-demo-ui.md). This is the UI for the tutorial, it allows the user to see messages from all of the users of the app and, in later parts of the tutorial, allow the users to see the sentiment of the chat messages.
+1. [Get the project](get-project.md) - you get the project up and running in your Quix account.
+
+2. [Try the UI](try-the-ui.md) - you try the UI, typing in chat messages and observing the sentiment analysis in operation.
+
+3. [Explore the UI service](ui-service.md) - you explore the UI service and its associated gateways.
 
-2. **Deploy a sentiment analysis microservice**. You configure and deploy a microservice in your pipeline capable of [Analyzing](analyze.md) the sentiment of the messages sent through the Sentiment Demo UI.
+4. [Explore the sentiment analysis service](sentiment-analysis-service.md) - you take a closer look at the sentiment analysis service, its code, and messages.
 
-3. **Extend your pipeline to handle Twitter data**. In this part, you can increase the volume of messages by using the [Twitter integration](twitter-data.md). You deploy a data source that subscribes to Twitter messages and then publishes them to the Sentiment Demo UI. Sentiment is then determined in real-time.
+5. 
[Explore the Twitch service](twitch-service.md) - you explore the service that interfaces Quix with Twitch using the [Twitch API](https://dev.twitch.tv/docs/api/){target=_blank}.
-4. **Summary**. In this [concluding](conclusion.md) part you are presented with a summary of the work you have completed, and also some next steps for more advanced learning about the Quix Platform. These additional items are listed next.
+6. [Customize the UI](customize-the-ui.md) - you carry out a simple customization to the chat UI on a feature branch, and then merge your changes onto the develop branch.
-5. **Build a sentiment analysis microservice**. In this optional part, you'll build your own sentiment analysis microservice, rather than use a prebuilt service.
+7. [Summary](summary.md) - you are presented with a summary of the work you have completed.
-6. **Customize the UI**. In this optional part you learn how to customize the Sentiment Demo UI.
+## 🏃‍♀️ Next step
-[Deploy the first part of the solution by following step 1 :material-arrow-right-circle:{ align=right }](sentiment-demo-ui.md)
+[Part 1 - Get the project :material-arrow-right-circle:{ align=right }](get-project.md)
diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/end-result.gif b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/end-result.gif
deleted file mode 100644
index b3d9ee83..00000000
Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/end-result.gif and /dev/null differ
diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/end-result.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/end-result.png
deleted file mode 100644
index f0675314..00000000
Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/end-result.png and /dev/null differ
diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/final-message-json.png
b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/final-message-json.png deleted file mode 100644 index c06439fb..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/final-message-json.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/finale-enter-chat-room.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/finale-enter-chat-room.png deleted file mode 100644 index 471d8076..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/finale-enter-chat-room.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image1.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image1.png deleted file mode 100644 index eb89f7aa..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image1.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image2.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image2.png deleted file mode 100644 index 9923854d..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image2.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image3.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image3.png deleted file mode 100644 index 3e7a1b12..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image3.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image4.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image4.png deleted file mode 100644 index 576deb36..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image4.png and 
/dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image5.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image5.png deleted file mode 100644 index 28cd8518..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/image5.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/pipeline-view-twitter-branch.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/pipeline-view-twitter-branch.png deleted file mode 100644 index 92d63366..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/pipeline-view-twitter-branch.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/pipeline-view.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/pipeline-view.png deleted file mode 100644 index e59f65c8..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/pipeline-view.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/running-ui.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/running-ui.png deleted file mode 100644 index a3165648..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/running-ui.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/sentiment-message-expanded.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/sentiment-message-expanded.png deleted file mode 100644 index 75daa017..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/sentiment-message-expanded.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/sentiment-messages.png 
b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/sentiment-messages.png deleted file mode 100644 index f4fd0fe8..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/sentiment-messages.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/web-gateway.png b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/web-gateway.png deleted file mode 100644 index f911dbbd..00000000 Binary files a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-media/web-gateway.png and /dev/null differ diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-service.md b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-service.md new file mode 100644 index 00000000..1ac0137a --- /dev/null +++ b/docs/platform/tutorials/sentiment-analysis/sentiment-analysis-service.md @@ -0,0 +1,70 @@ +# Sentiment analysis service + +In this part of the tutorial you learn about the sentiment analysis service. + +![Sentiment analysis](./images/sentiment-analysis-pipeline-segment.png) + +This service uses the Hugging Face model to calculate sentiment for messages, and these are then displayed on the web UI. + +## 💡 Key ideas + +The key ideas on this page: + +* Hugging Face model is used to generate sentiment values +* Sentiment analysis service subscribes to two topics and publishes to two topics +* How to examine message formats + +## What it does + +The sentiment analysis service uses a prebuilt model from [Hugging Face](https://huggingface.co/){target=_blank} to analyze the sentiment of each message flowing through the service. + +The sentiment analysis service subscribes to the `chat-messages` and `drafts` topics. The messages and draft messages are generated by the web UI. Draft messages are messages while the user is typing them, before they are sent. These are used to generate sentiment while the user is typing. 
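The exact inference code isn't shown on this page. As a rough sketch of what the service computes for each message, here is a toy lexicon-based scorer standing in for the Hugging Face model (an illustration only; the real service's scores come from a transformer model, not a word list):

``` python
# Toy stand-in for the Hugging Face sentiment model (illustration only):
# score each chat message in the range [-1, 1] from a tiny word lexicon.

LEXICON = {"love": 1.0, "great": 0.8, "good": 0.5, "bad": -0.5, "awful": -0.8, "hate": -1.0}

def sentiment(message: str) -> float:
    """Average the lexicon scores of the words found in the message."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("I love this, it is great!"))  # 0.9
print(sentiment("this is awful"))              # -0.8
```

In the real pipeline, a score like this is attached to each message (and each draft) and published to the sentiment topics, where the UI picks it up.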
+
+After sentiment analysis is performed by the Hugging Face model, the sentiment values are published to the `chat-with-sentiment` and `drafts_sentiment` topics. The UI subscribes to these topics, and can then display the sentiment values.
+
+## 👩‍🔬 Lab - Examine messages
+
+There are several ways to view live data. This lab shows one way to do it.
+
+1. Click on `Topics` in the main left-hand navigation.
+
+2. Where you see live data for the `messages` topic, click in that area, as shown in the screenshot:
+
+    ![View live data](./images/topics-view-live-data.png){width=80%}
+
+    You are taken to the live view of the Quix Data Explorer.
+
+3. The `messages` topic is preselected for you. The stream names are the user names that you entered, or are user names from the Twitch service. Select any one and then select the `chat-message` parameter.
+
+4. Click the `Messages` view, and then click on any real-time message displayed. In the message code view you see something similar to the following:
+
+    ``` json
+    {
+      "Epoch": 0,
+      "Timestamps": [
+        1695303751958000000
+      ],
+      "NumericValues": {},
+      "StringValues": {
+        "chat-message": [
+          "Can you check on my order please?"
+        ]
+      },
+      "BinaryValues": {},
+      "TagValues": {
+        "room": [
+          "channel"
+        ],
+        "name": [
+          "gabbybe"
+        ],
+        "role": [
+          "Customer"
+        ]
+      }
+    }
+    ```
+
+## 🏃‍♀️ Next step
+
+[Part 5 - Explore the Twitch service :material-arrow-right-circle:{ align=right }](twitch-service.md)
\ No newline at end of file
diff --git a/docs/platform/tutorials/sentiment-analysis/sentiment-demo-ui.md b/docs/platform/tutorials/sentiment-analysis/sentiment-demo-ui.md
deleted file mode 100644
index 310fff61..00000000
--- a/docs/platform/tutorials/sentiment-analysis/sentiment-demo-ui.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# 1.
Sentiment Demo UI - -The Sentiment Demo UI is the UI for the tutorial and enables the user to see messages from all of the users of the app and, in later parts of this tutorial, allows the users to see the sentiment of the chat messages. - -The UI you will build in this part of the tutorial is shown in the following screenshot: - -![The sentiment analysis demo page](./sentiment-analysis-media/image3.png){width=550px} - -## Creating the gateways - -Gateways provide a way for external apps to subscribe and publish to topics, and help visualize those connections in the pipeline view of the Quix platform. An example of their use is shown in the following screenshot: - -![Chat messages webgateway](./sentiment-analysis-media/web-gateway.png){width=550px} - -In this scenario, the Sentiment Demo UI is the external app since it is using the [Quix websockets API](../../how-to/webapps/read.md). - -Follow these steps to create the messages and sentiment web gateways: - -1. Make sure the `sentiment` topic is available. Click `Topics` on the main left-hand navigation, and click `+ Create topic`, enter `sentiment`, and then click `Create`. - - This topic needs to be available so you can select it in a later step. - -2. Navigate to the `Code Samples` and locate `External Source`. - -3. Click `Add external source`. - -4. In the `name` field enter `Chat messages WebGateway`. - -5. Select or enter `messages` in the `output` field. - -6. Click `Add external Source` to create the external source. - -7. Navigate to the `Code Samples` and locate `External Destination`. - -8. Click `Add external destination`. - -9. In the `name` field enter `Chat sentiment WebGateway`. - -10. Select `sentiment` in the `input` field. - -11. Click `Add external Destination` to create the external destination. - -You've now created the gateways needed for this tutorial. 
- -## Locating and deploying the Sentiment Demo UI - -The following steps demonstrate how to select the demo UI Sample and deploy it to your Quix workspace. - -Follow these steps to deploy the prebuilt UI: - -1. Navigate to the `Code Samples` and locate `Sentiment Demo UI`. - -2. Click the `Setup & deploy` button. - -3. Ensure that the `sentiment` input box contains `sentiment`. - - This topic will be subscribed to and will contain the sentiment scores from the sentiment analysis service, you'll deploy this in a later part of this tutorial. - -4. Ensure that the `messages` input contains `messages`. - - - This topic will contain all the chat messages. - - The UI will subscribe to this topic, to display new messages, as well as publishing to the topic when a user sends a message using the `send` button in the UI. - - Later, the sentiment analysis service will also subscribe to messages on this topic to produce sentiment scores. - -5. Click `Deploy`. - -You've now Deployed the Sentiment Demo UI. - -## Trying out the UI - -Now try out the UI you just deployed. - -1. Find the URL for the deployed UI by navigating to the homepage and locating the tile representing the deployed UI, as shown in the following screenshot: - - ![Deployed UI tile](./sentiment-analysis-media/ui-tile.png){width=250px} - -2. Click the `open in new window` icon ![Open in new window icon](../../../platform/images/general/open_in_new_window.png){width=18px}. - - This is the user interface for the demo. The view you’ll see after creating a `room` to chat in is shown in the following screenshot: - - ![The sentiment analysis demo page](./sentiment-analysis-media/image3.png){width=550px} - -3. Now enter some messages. They will be displayed in the chat list. - -4. To make the demo more entertaining, use your phone to scan the QR code, or send a link to this page to a friend or colleague. When they interact you'll see their chat messages appear in your UI in real time! - -!!! 
success
-
-    You have successfully deployed and tested the UI.
-
-[Analyze the sentiment of your messages by following Part 2 of this tutorial :material-arrow-right-circle:{ align=right }](analyze.md)
diff --git a/docs/platform/tutorials/sentiment-analysis/summary.md b/docs/platform/tutorials/sentiment-analysis/summary.md
new file mode 100644
index 00000000..5145901d
--- /dev/null
+++ b/docs/platform/tutorials/sentiment-analysis/summary.md
@@ -0,0 +1,23 @@
+# Summary
+
+In this tutorial you have:
+
+* Forked a GitHub repository and integrated it with Quix
+* Examined the code for several services in the pipeline
+* Learned that web apps can read from and write to Quix topics using the [Quix Streaming APIs](../../../apis/index.md)
+* Used the message view in the Quix UI to examine messages flowing on topics
+* Created a customization to the UI in a feature environment (branch), and then merged your changes to the develop branch
+
+## Next steps
+
+Here are some suggested next steps to continue on your Quix learning journey:
+
+* Try the [image processing tutorial](../image-processing/index.md).
+
+* If you decide to build your own connectors and apps, you can contribute to the [Code Samples repository](https://github.com/quixio/quix-samples){target=_blank}. Fork our Code Samples repo and submit your code, updates, and ideas.
+
+What will you build? Let us know! We’d love to feature your application or use case in our [newsletter](https://www.quix.io/community/).
+
+## Getting help
+
+If you need any assistance while following the tutorial, we're here to help in the [Quix forum](https://forum.quix.io/){target="_blank"}.
diff --git a/docs/platform/tutorials/sentiment-analysis/try-the-ui.md b/docs/platform/tutorials/sentiment-analysis/try-the-ui.md new file mode 100644 index 00000000..990c4f2a --- /dev/null +++ b/docs/platform/tutorials/sentiment-analysis/try-the-ui.md @@ -0,0 +1,39 @@ +# Try the UI + +In this part of the tutorial you try out the UI to get a feel for the project and what it does. + +!!! tip + + In the pipeline view, you can always determine a topic name by hovering over the connecting line that represents that topic. You can also click the connecting line, to see its name, and optionally to jump to the Data Explorer to view live data for the topic. + +Now try out the UI you just deployed. To do this: + +1. In the pipeline view find the UI service tile, as shown in the following screenshot: + + ![Deployed UI tile](./images/ui-tile.png){width=200px} + +2. In the service tile, click the external link icon to launch the UI in a new tab: + + ![The sentiment analysis demo page](./images/running-ui.png) + +3. Enter your username (it can be anything) and then type in some messages. Note that the typing indicator displays the sentiment as you type your message. + +4. Type various messages and check the sentiment is as expected. Also note that the sentiment analysis is shown in the real-time graph display. + +5. Select a different source from the `Message source` dropdown, and observe the messages and corresponding sentiment analysis graph change in real time. + +## 👩‍🔬 Lab - examine the code + +There are various ways you can view the code for this service. For example: + +1. Click `Pipeline` in the left-hand navigation to go to the pipeline view. + +2. Click the UI code panel as shown: + + ![Code panel](./images/click-code-tile.png){width=60%} + +3. The code is displayed in the built-in editor. You can navigate the codebase using the file explorer. The code view also has Intellisense built in - hover over a code construct to see more details. 
+
+## 🏃‍♀️ Next step
+
+[Part 3 - Explore the UI service :material-arrow-right-circle:{ align=right }](ui-service.md)
\ No newline at end of file
diff --git a/docs/platform/tutorials/sentiment-analysis/twitch-service.md b/docs/platform/tutorials/sentiment-analysis/twitch-service.md
new file mode 100644
index 00000000..e08700a0
--- /dev/null
+++ b/docs/platform/tutorials/sentiment-analysis/twitch-service.md
@@ -0,0 +1,70 @@
+# Twitch service
+
+In the UI, as well as sending messages using the chat interface, you can also select Twitch channels and perform sentiment analysis on the messages published there:
+
+![Twitch channels](./images/twitch-channels.png)
+
+## 💡 Key ideas
+
+The key ideas on this page:
+
+* A Quix service can use external APIs to retrieve data and then publish that data into a Quix topic
+* How to publish time series data to a topic
+
+## Twitch credentials
+
+To run the Twitch service, you'll need to provide your Twitch API credentials. You can configure these as [secret variables](../../how-to/environment-variables.md#secrets-management). The credentials required are shown in the following screenshot:
+
+![Twitch credentials](../sentiment-analysis/images/twitch-credentials.png)
+
+## What it does
+
+The Twitch service uses the [Twitch API](https://dev.twitch.tv/docs/api/){target=_blank} to read messages from some of the most popular channels. It then publishes these messages to the output topic, `messages`.
+
+In the following code you can see that a time series object is created, with a timestamp, the chat message, and other data, and then published to the output stream:
+
+``` python
+def publish_chat_message(user: str, message: str, channel: str, timestamp: datetime, role: str = "Customer"):
+    timeseries_data = qx.TimeseriesData()
+    timeseries_data \
+        .add_timestamp(timestamp) \
+        .add_value("chat-message", message) \
+        .add_tags({"room": "channel", "name": user, "role": role})
+
+    stream_producer = topic_producer.get_or_create_stream(channel)
+    stream_producer.timeseries.publish(timeseries_data)
+```
+
+The message format on the output `messages` topic:
+
+``` json
+{
+  "Epoch": 0,
+  "Timestamps": [
+    1695378597074000000
+  ],
+  "NumericValues": {},
+  "StringValues": {
+    "chat-message": [
+      "@CaalvaVoladora Boomerdemons is also up"
+    ]
+  },
+  "BinaryValues": {},
+  "TagValues": {
+    "room": [
+      "channel"
+    ],
+    "name": [
+      "benkebultsax"
+    ],
+    "role": [
+      "Customer"
+    ]
+  }
+}
+```
+
+## 🏃‍♀️ Next step
+
+[Part 6 - Customize the UI :material-arrow-right-circle:{ align=right }](customize-the-ui.md)
+
diff --git a/docs/platform/tutorials/sentiment-analysis/twitter-data.md b/docs/platform/tutorials/sentiment-analysis/twitter-data.md
deleted file mode 100644
index 87481ca2..00000000
--- a/docs/platform/tutorials/sentiment-analysis/twitter-data.md
+++ /dev/null
@@ -1,188 +0,0 @@
-# 3. Adding Twitter data
-
-In the [previous part](analyze.md) of this tutorial you deployed a microservice to analyze the sentiment of messages in the chat.
-
-In this part of the tutorial you will learn how to:
-
-1. Deploy a data source that subscribes to Twitter messages.
-2. Deploy a new microservice to normalize the Twitter messages, making them compatible with the sentiment analysis microservice and UI you have already deployed. The sentiment of all the messages will be determined in real time.
- -The objective is to show you how to integrate with an external system, and demonstrate the sentiment analysis service processing a higher volume of messages. - -![Twitter branch of the sentiment analysis pipeline](./sentiment-analysis-media/pipeline-view-twitter-branch.png){width=450px} - -If you're asking "Why Twitter?" it's a good question. Quix has a great Twitter connector and want to show it off! Plus it allows you to source real-world data at volume (if you choose the right search parameters). - -There are two steps in this part of the tutorial: - -1. Fetching the tweets. -2. Transforming the tweets to ensure they're compatible with the Sentiment Demo UI. - -## Prerequisites - -To complete this part of the tutorial you'll need a [Twitter developer account](https://developer.twitter.com/en/portal/petition/essential/basic-info){target=_blank}. - -You can follow [this tutorial to set up a developer account](https://developer.twitter.com/en/support/twitter-api/developer-account){target=_blank}. - -## Fetching the tweets - -You are going to be using a prebuilt sample for fetching the tweets. The default search parameters for the sample are set to search for anything relating to Bitcoin, using the search term `(#BTC OR btc OR #btc OR BTC)`. It's a high-traffic subject and great for this demo. However, if you are on the Quix free tier, you might find it better to use a lower-traffic subject, as less CPU and Memory resource can be allocated to a deployment on this tier. To do this, you can edit the `twitter_search_params` field in the sample to contain a different search term, such as `(#rail OR railway)`. This will create less load on the sentiment analysis microservice. - -Follow these steps to deploy the Twitter data source: - -1. Navigate to the `Code Samples` and locate the `Twitter` data source. - -2. Click the `Setup & deploy` button. - -3. Enter your Twitter bearer token into the `twitter_bearer_token` field. - -4. Click `Deploy`. 
- - This service receives data from Twitter and streams it to the `twitter-data` topic. You can verify this by clicking the `Twitter` data source service in the pipeline and then viewing the `Logs` or `Messages` tab. - -!!! note - The default Twitter search criteria is looking for Bitcoin tweets, it's a high traffic subject and great for the demo. However, because of the volume of Bitcoin tweets it will use up your Twitter Developer Account credits in a matter of days. So stop the Twitter feed when you're finished with it. - - Feel free to change the search criteria once you’ve got the demo working. - - -## Building the transformation - -In the first part of this part of the tutorial, [Fetching the tweets](#fetching-the-tweets) you deployed a microservice which subscribes to tweets on a predefined subject and publishes them to a topic. - -In order to get the tweets to appear in the Sentiment Demo UI, and have their sentiment analyzed, you now need to transform the Twitter data into a format that the Sentiment Demo UI can understand. - -This service will subscribe to the `twitter-data` topic and publish data to the `messages` topic. It will transform the incoming data to make it compatible with the UI and sentiment analysis service. - -### Creating the project - -Follow these steps to code and deploy the tweet-to-chat conversion stage: - -1. Navigate to the `Code Samples` and apply the following filters: - - 1. Languages = `Python` - - 2. Pipeline Stage = `Transformation` - - 3. Type = `Basic templates` - -2. Select `Starter transformation`. - -3. Click `Preview code` then `Edit code`. - -4. Change the name to `tweet-to-chat`. - -5. Change the input to `twitter-data` by either selecting it or typing it. - -6. Ensure the output is set to `messages`. - -7. Click `Save as project`. - - The code for this transformation is now saved to your workspace. - -### Editing the code - -Once saved, you'll be redirected to the online development environment. 
This is where you can edit, run and test the code before deploying it to production. - -Follow these steps to create the tweet-to-chat service. - -1. Locate `main.py`. - -2. Add `import pandas as pd` to the imports at the top of the file. - -3. Locate the line of code that creates the output stream: - - ``` python - output_stream = output_topic.create_stream(input_stream.stream_id) - ``` - -4. Change this line to get or create a stream called `tweets`: - - ``` python - output_stream = output_topic.get_or_create_stream("tweets") - ``` - - This will ensure that any messages published by this service go into a stream called `tweets`. You'll use the `tweets` room later on to see all of the tweets and their sentiment. - -5. Now locate `quix_function.py`. - - Alter `on_pandas_frame_handler` to match the code below: - - ``` python - def on_pandas_frame_handler(self, df: pd.DataFrame): - - df = df.rename(columns={'text': 'chat-message'}) - df["TAG__name"] = "Twitter" - df["TAG__role"] = "Customer" - - self.output_stream.parameters.write(df) - ``` - - This will take `text` from incoming `twitter-data` and stream it to the output topics `tweets` stream, as parameter or table values, with a column name of `chat-message`, which the other stages of the pipeline will recognize. - -6. Click `Run` near the top right of the code window. - -7. Click the `Messages` tab. - - In the default view showing `input` messages you will see the incoming `twitter-data` messages. - - Select a message and you will see that these have `"text"` in the string values. This is the tweet text. - - ```sh - "StringValues": { - "tweet_id": ["1600540408815448064"], - "text": ["Some message about @BTC"] - } - ``` - -8. Select the "output" messages from the messages dropdown list. These are messages being published from the code. - - Select a message, and you will see that the output messages have a different structure. 
- - The string values section of the JSON message now contains "chat-message" instead of "text": - - ```sh - "StringValues": { - "tweet_id": ["1600541061583192066"], - "chat-message": ["Some message about @BTC"] - } - ``` - -9. Stop the running code and proceed to the next section. - -### Deploying the Twitter service - -You'll now tag the code and deploy the service with these steps: - -1. Click the `+tag` button at the top of any code file. - -2. Enter `v1` and press ++enter++. - -3. Click `Deploy` near the top right corner. - -4. In the deployment dialog, select `v1` under the `Version Tag`. - - There is no need to allocate much resource to this service, it is very light weight. - -5. Click `Deploy`. - -6. Navigate to, or reload, the Sentiment Demo UI you deployed in the first part of this tutorial. - -7. Enter the chat room called `tweets`. Using `tweets` for the chat room name will ensure you are seeing the messages coming from Twitter. For `Name`, you can use any name you want. The dialog is show here: - - ![Entering the chat room](./sentiment-analysis-media/finale-enter-chat-room.png){width=350px} - -8. You can now see messages arriving from Twitter and their sentiment being analyzed in real-time. - - ![Chats on screen with sentiment and overall sentiment](./sentiment-analysis-media/end-result.png){width=550px} - - -!!! success - - You will see 'Bitcoin' tweets arriving in the chat along with the calculated average sentiment in a chart. - - Your pipeline is now complete, you can send and view chat messages, receive tweets, and analyze the sentiment of all of the messages. - - Share the QR code with colleagues and friends, to talk about anything you like while Quix analyzes the sentiment in the room in real time. 
-
-[Conclusion and next steps :material-arrow-right-circle:{ align=right }](conclusion.md)
\ No newline at end of file
diff --git a/docs/platform/tutorials/sentiment-analysis/ui-service.md b/docs/platform/tutorials/sentiment-analysis/ui-service.md
new file mode 100644
index 00000000..4cbc1a18
--- /dev/null
+++ b/docs/platform/tutorials/sentiment-analysis/ui-service.md
@@ -0,0 +1,79 @@
+# UI service
+
+In this part of the tutorial you learn about the web UI service.
+
+![Web UI pipeline](./images/web-ui-pipeline-segment.png)
+
+This provides the rather fancy interface for you to interact with this project.
+
+The following screenshot shows some chat taking place:
+
+![Sentiment analysis web UI](./images/running-ui.png)
+
+## 💡 Key ideas
+
+The key ideas on this page:
+
+* How a web client can read data from a Quix topic using the Quix Streaming Reader API
+* How a web client can write data to a Quix topic using the Quix Streaming Writer API
+* WebSockets as a way of streaming data into a web client
+* Microsoft SignalR is the WebSockets technology used in the Streaming Reader API
+* How to access an external web application from the pipeline view
+
+## What it does
+
+The UI is an Angular web client written in TypeScript.
+
+The key thing this service does is provide a UI that implements the chat interface, a typing indicator with sentiment value, a sentiment value and emoticon for each message, and a real-time chart showing sentiment.
+
+The most important thing to understand is how this service reads and writes data to and from the Quix pipeline.
+
+This is done through the use of two APIs:
+
+* [Quix Streaming Reader API](../../../apis/streaming-reader-api/intro.md)
+* [Quix Streaming Writer API](../../../apis/streaming-writer-api/intro.md)
+
+The Streaming Writer is used to write both published and draft messages to the sentiment analysis service.
Note the UI provides sentiment analysis of messages as they are being typed, that is, while in the draft state, before they have been sent to the chat room.
+
+The Streaming Reader is used to read the sentiment from the sentiment analysis service for both sent messages and draft messages.
+
+The four topics involved are:
+
+* `chat-messages` - topic for the sent messages
+* `drafts` - topic for draft messages
+* `chat-with-sentiment` - topic with sentiment for sent messages
+* `drafts_sentiment` - topic with sentiment for draft messages
+
+So, the web UI uses the Writer API to write to both `chat-messages` and `drafts`, and the Reader API to read from both `chat-with-sentiment` and `drafts_sentiment`.
+
+The Streaming Reader API has both an HTTP and a WebSockets interface you can use to connect to a Quix topic. This web client uses the WebSockets interface, as it enables data to be streamed from the Quix topic into the web client with good performance. This is more efficient than the request-response approach of HTTP.
+
+The WebSockets interface uses Microsoft SignalR technology. You can read more about that in the [Quix SignalR documentation](../../../apis/streaming-reader-api/signalr.md) for the Reader API.
+
+In essence, the code to read a topic needs to:
+
+1. Connect to the Quix SignalR hub.
+2. Subscribe to parameter data, rather than event data, as that is the format used for inbound data in this case.
+3. Handle "parameter data received" events using a callback.
+
+So, simplifying, after connection to the Quix topic, on a `ParameterDataReceived` event, the corresponding callback (event) handler is invoked. There are other events that can be subscribed to. You can read more about events and subscription in the [subscription and event documentation](../../../apis/streaming-reader-api/subscriptions.md).
An example for the handler is shown here: + +``` typescript +// Listen for parameter data and emit +readerHubConnection.on("ParameterDataReceived", (payload: ParameterData) => { + this.paramDataReceived.next(payload); +}); +``` + +The Streaming Writer API is [used in a similar way](../../../apis/streaming-writer-api/signalr.md). + +## See also + +For more information refer to: + +* [Quix Streaming Reader API](../../../apis/streaming-reader-api/intro.md) - read about the API used by clients external to Quix to read data from a Quix topic. +* [Quix Streaming Writer API](../../../apis/streaming-writer-api/intro.md) - read about the API used by clients external to Quix to write data to a Quix topic. + +## 🏃‍♀️ Next step + +[Part 4 - Explore the sentiment analysis service :material-arrow-right-circle:{ align=right }](sentiment-analysis-service.md) diff --git a/docs/platform/tutorials/slack-alerting/slack-alerting.md b/docs/platform/tutorials/slack-alerting/slack-alerting.md index 8647567f..ac9e4b2e 100644 --- a/docs/platform/tutorials/slack-alerting/slack-alerting.md +++ b/docs/platform/tutorials/slack-alerting/slack-alerting.md @@ -18,9 +18,9 @@ By the end you will have: If you need any help, please sign up to the [Quix community forum](https://forum.quix.io/){target=_blank}. -## Project Architecture +## Application Architecture -![project architecture](architecture.png) +![application architecture](architecture.png) The solution has 2 main elements: @@ -59,7 +59,7 @@ To proceed with this tutorial you need: 6. You can now find your API Keys in the profile page. !!! tip - Check out the projects README.md later on in the tutorial if you need help creating a Slack WebHook + Check out the application's README.md later on in the tutorial if you need help creating a Slack WebHook ## Overview @@ -92,7 +92,7 @@ However, there is a much easier way to achieve the same outcome. ![TFL BikePoint sample tile](tfl-bikepoint-library-tile.png){width=300px} -3. Click `Setup & deploy`. 
+3. Click `Deploy`. 4. Paste your TFL API keys into the `tfl_primary_key` and `tfl_secondary_key` input fields. @@ -114,7 +114,7 @@ Ensure you’re logged into the [Slack web portal](https://api.slack.com/messagi 2. On the popup, select `From Scratch`. -3. Enter a name and choose your workspace. +3. Enter a name and choose your environment. 4. Click `Create App`. @@ -142,7 +142,7 @@ The time has come to actually connect Quix and Slack. Once again, with the help 2. Search for `Slack`. -3. Click `Setup & deploy`. +3. Click `Deploy`. 4. Ensure that the `input` is set to `tfl-bikepoint-data`. @@ -163,9 +163,9 @@ In this part of the tutorial you will replace the current Slack connector with a !!! note Begin by stopping the existing Slack connector from the home page. -### Slack connector project +### Slack connector application -Follow these steps to save the connector code to your workspace. +Follow these steps to save the connector code to your environment. 1. Navigate to the `Code Samples` and search for `Slack`. @@ -176,8 +176,8 @@ Follow these steps to save the connector code to your environment. 4. Ensure that the `input` field is set to `tfl-bikepoint-data` and paste your Slack WebHook URL into the appropriate field. -5. Click `Save as project`. - The code is now saved to your workspace and you can now edit the code and make any modifications you need. +5. Click `Save as Application`. + The code is now saved to your environment and you can edit the code and make any modifications you need.
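Customizing the connector usually comes down to changing the JSON body the service posts to your Slack WebHook. As a rough sketch (the WebHook URL, station name, and message wording below are placeholders for illustration, not values from the tutorial), a Slack incoming WebHook accepts a simple JSON payload with a `text` field:

``` python
import json
import urllib.request

# Placeholder -- use the WebHook URL you created earlier
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_message(available_bikes: int, station: str) -> dict:
    """Build the JSON payload Slack expects for an incoming WebHook."""
    return {"text": f"{station}: only {available_bikes} bikes available"}

def send_to_slack(payload: dict) -> None:
    """POST the payload to the WebHook as JSON."""
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```

Slack renders the `text` field as the message body; richer formatting options, such as blocks and attachments, are described in Slack's WebHook documentation.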
### Customize the message diff --git a/docs/platform/tutorials/train-and-deploy-ml/conclusion.md b/docs/platform/tutorials/train-and-deploy-ml/conclusion.md index f655a566..0083ea68 100644 --- a/docs/platform/tutorials/train-and-deploy-ml/conclusion.md +++ b/docs/platform/tutorials/train-and-deploy-ml/conclusion.md @@ -16,7 +16,7 @@ Here are some suggested next steps to continue on your Quix learning journey: * [Real-time Machine Learning (ML) predictions](../data-science/index.md) - In this tutorial you use data science to build a real-time bike availability pipeline. -What will you build? Let us know! We’d love to feature your project or use case in our [newsletter](https://www.quix.io/community/). +What will you build? Let us know! We’d love to feature your application or use case in our [newsletter](https://www.quix.io/community/). ## Getting help diff --git a/docs/platform/tutorials/train-and-deploy-ml/create-data.md b/docs/platform/tutorials/train-and-deploy-ml/create-data.md index da7b68a0..fc6f67b8 100644 --- a/docs/platform/tutorials/train-and-deploy-ml/create-data.md +++ b/docs/platform/tutorials/train-and-deploy-ml/create-data.md @@ -30,7 +30,7 @@ Now generate the actual data for use later in the tutorial by completing the fol 2. Find the `Demo Data` source. This service streams F1 Telemetry data into a topic from a recorded game session. -3. Click the `Setup & deploy` button in the `Demo Data` panel. +3. Click the `Deploy` button in the `Demo Data` panel. 4. You can leave `Name` as the default value. @@ -38,6 +38,6 @@ Now generate the actual data for use later in the tutorial by completing the fol Once this service is deployed it will run as a [job](../../glossary.md#job) and generate real-time data to the `f1-data` topic, and this data will be persisted. -This data is retrieved later in this tutorial using Python code that uses the [Data Catalogue API](../../../apis/data-catalogue-api/intro.md), generated for you by Quix.
+This data is retrieved later in this tutorial using Python code that uses the [Query API](../../../apis/query-api/intro.md), generated for you by Quix. [Import data into Jupyter Notebook :material-arrow-right-circle:{ align=right }](./import-data.md) diff --git a/docs/platform/tutorials/train-and-deploy-ml/deploy-ml.md b/docs/platform/tutorials/train-and-deploy-ml/deploy-ml.md index 78b2acaf..212817b2 100644 --- a/docs/platform/tutorials/train-and-deploy-ml/deploy-ml.md +++ b/docs/platform/tutorials/train-and-deploy-ml/deploy-ml.md @@ -26,21 +26,21 @@ Ensure you are logged into the Quix Portal, then follow these steps to create a 8. Leave output as `hard-braking` (its default value). -9. Click `Save as Project`. The code is now saved to your workspace. +9. Click `Save as Application`. The code is now saved to your environment. !!! tip - You can see a list of projects at any time by clicking `Projects` in the left-hand navigation. + You can see a list of applications at any time by clicking `Applications` in the left-hand navigation. ## Upload the model Now you need to upload your ML model and edit your transform code to run the model. -1. Click on `Projects` and select `Prediction Model` to display your project code. +1. Click on `Applications` and select `Prediction Model` to display your application code. 2. Click the `Upload File` icon at the top of the file list, as shown in the following screenshot: - ![Upload file to project](./images/upload-file-to-project.png) + ![Upload file to application](./images/upload-file-to-application.png) 3. Find the Pickle file containing your ML model. It's named `decision_tree_5_depth.sav` and is in the same directory as your Jupyter Notebook files. @@ -169,7 +169,7 @@ To see the output of your model in real time you can use the Data Explorer. To u 2. If it's not already selected click the `Live` data tab at the top. -3. Ensure the `hard-braking` topic is selected from the `Select a topic` drop-down list. +3. 
Ensure the `hard-braking` topic is selected from the `Select a topic` dropdown list. 4. Select a stream (you should only have one). diff --git a/docs/platform/tutorials/train-and-deploy-ml/images/connect-python.png b/docs/platform/tutorials/train-and-deploy-ml/images/connect-python.png index 839ff101..1c4bead2 100644 Binary files a/docs/platform/tutorials/train-and-deploy-ml/images/connect-python.png and b/docs/platform/tutorials/train-and-deploy-ml/images/connect-python.png differ diff --git a/docs/platform/tutorials/train-and-deploy-ml/images/upload-file-to-project.png b/docs/platform/tutorials/train-and-deploy-ml/images/upload-file-to-application.png similarity index 100% rename from docs/platform/tutorials/train-and-deploy-ml/images/upload-file-to-project.png rename to docs/platform/tutorials/train-and-deploy-ml/images/upload-file-to-application.png diff --git a/docs/platform/tutorials/train-and-deploy-ml/images/visualize-result.png b/docs/platform/tutorials/train-and-deploy-ml/images/visualize-result.png index 1c40257b..35aa668c 100644 Binary files a/docs/platform/tutorials/train-and-deploy-ml/images/visualize-result.png and b/docs/platform/tutorials/train-and-deploy-ml/images/visualize-result.png differ diff --git a/docs/platform/tutorials/train-and-deploy-ml/import-data.md b/docs/platform/tutorials/train-and-deploy-ml/import-data.md index edc11b62..9f4f69f5 100644 --- a/docs/platform/tutorials/train-and-deploy-ml/import-data.md +++ b/docs/platform/tutorials/train-and-deploy-ml/import-data.md @@ -1,6 +1,6 @@ # Import data into Jupyter Notebook -From a Jupyter Notebook, you retrieve the data that was generated in Quix in the [previous part](./create-data.md), and which was persisted into the [Quix Data Catalogue](../../../apis/data-catalogue-api/intro.md). +From a Jupyter Notebook, you retrieve the data that was generated in Quix in the [previous part](./create-data.md), and which was persisted into the [Quix data store](../../../apis/query-api/intro.md). 
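The Quix code generator produces this connection code for you, but it can help to know roughly what the underlying request looks like. The sketch below is an illustration under stated assumptions: the base URL follows the workspace-specific `telemetry-query-{organization}-{workspace}` pattern (shown here with example names), the token is a placeholder, and the payload mirrors the `/parameters/data` query format with a parameter name, aggregation type, and time bucketing. Your generated code remains the authoritative version.

``` python
import json
import urllib.request

# Assumed example values -- substitute your own workspace URL and token
BASE_URL = "https://telemetry-query-acme-weather.platform.quix.ai"
TOKEN = "<your-personal-access-token>"

def build_payload(parameter: str = "Speed") -> dict:
    """Query payload: mean-average the parameter into 2-second buckets."""
    return {
        "numericParameters": [
            {"parameterName": parameter, "aggregationType": "Mean"}
        ],
        "groupByTime": {"timeBucketDuration": 2_000_000_000},  # nanoseconds
    }

def query_data() -> dict:
    """POST the query and return the decoded JSON response."""
    request = urllib.request.Request(
        f"{BASE_URL}/parameters/data",
        data=json.dumps(build_payload()).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```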
## Run Jupyter Notebook @@ -59,7 +59,7 @@ The Quix Portal has a code generator that can generate code to connect your Jupy 10. Select the `Code` tab. -11. Select `Python` from the the `LANGUAGE` drop-down. +11. Select `Python` from the `LANGUAGE` dropdown. ![Generated code to retrieve data](./images/connect-python.png) @@ -77,6 +77,6 @@ The Quix Portal has a code generator that can generate code to connect your Jupy !!! tip - If you want to use this generated code for more than 30 days, replace the temporary token with a **PAT token**. See [authenticate your requests](../../../apis/data-catalogue-api/authenticate.md) for how to do that. + If you want to use this generated code for more than 30 days, replace the temporary token with a **PAT token**. See [authenticate your requests](../../../apis/query-api/authenticate.md) for how to do that. [Train your ML model :material-arrow-right-circle:{ align=right }](./train-ml.md) \ No newline at end of file diff --git a/docs/platform/what-is-quix.md b/docs/platform/what-is-quix.md index 0e1f1d2b..228c5477 100644 --- a/docs/platform/what-is-quix.md +++ b/docs/platform/what-is-quix.md @@ -1,34 +1,60 @@ -# What is Quix Platform? +# What is Quix? -Quix Platform is a complete system that enables you to develop, debug, and deploy real-time streaming data applications. Quix also provides an online IDE and an open source streams processing library called Quix Streams. Quix Streams is the client library that you use in your Python or C# code to develop custom elements of your processing pipeline. +Quix is a complete end-to-end solution for building, deploying, and monitoring event streaming applications. -Quix Platform was built on top of a message broker, specifically [Kafka](../client-library/kafka.md), rather than on top of a database, as databases introduce latency that can result in problems in real-time applications, and can also present scaling issues.
Scaling databases for real-time applications also introduces more cost and complexity. Quix Platform helps abstract these issues, providing you with a scaleable and cost effective solution. +!!! important -Broker technology enables you to work with data in memory in real time, rather than data retrieved using complex queries from disk and then batch processed. This is much faster and therefore more suited to real-time applications. However, the problem with broker technologies is that they are more complex to use. Quix Platform and Quix Streams provide abstractions and tools so you can work directly with your data and not the underlying broker technology. + For recent significant changes to Quix Platform, please see the [changes documentation](../platform/changes.md). -Quix also treats Python developers as first-class citizens, making it easier for Python developers to work with real-time data using the abstractions and tools they are already familiar with, such as using the pandas library and data frame format. +Streaming data applications, where you need to process time series or event data in order to make decisions in real time, are what Quix is designed for. -## The Quix stack +With its roots in the demanding world of Formula 1 racing, where performance is paramount, Quix is built to deliver results. -Quix provides everything a developer needs to build low-latency real-time streaming applications. +Such intelligent real-time decision making has many use cases, including increasing engagement with social media and digital content, monitoring vast arrays of sensors, fraud prevention, and of course Formula 1 race car telemetry systems. -From the top-down, the Quix stack provides the following: +Quix has excellent synergy with Machine Learning (ML) systems too. You can quickly deploy your ML model and monitor its performance in real time, modify the model, and redeploy it with a single click.
-* Quix Portal, the web-based Integrated Development Environment (IDE). [Sign up for free](https://portal.platform.quix.ai/self-sign-up). -* [REST and websocket APIs](../apis/index.md) -* [Quix Streams](../client-library-intro.md) +!!! tip + + [Sign up for free](https://portal.platform.quix.ai/self-sign-up){target=_blank}. + +## Reducing complexity + +Quix is also designed to remove as much complexity as possible from the process of creating, deploying, and monitoring your event streaming applications. + +Quix leverages industry-standard technologies, such as Kafka to provide the core functionality for streaming data, Kubernetes for scaling your deployments, InfluxDB and MongoDB for data persistence, Git for revision control, and Python as the main language for programming your solutions. + +The following sections take a look at the key components of creating your streaming data solutions: + +* Connecting your data to Quix +* Developing your application +* Deploying (and scaling) pipelines +* Monitoring and managing your data + +While this short introduction to Quix is intentionally brief, there are abundant links for more detailed information you can follow to increase your knowledge of Quix. Alternatively, simply drop into our [Community](https://forum.quix.io/){target=_blank} and ask any question you may have. + +## The Quix Platform + +The Quix Platform provides everything a developer needs to build industrial-strength event streaming applications. -These allow developers to: +The components that make up the Quix Platform enable developers to: -* Use a full web-based IDE with version control and logging, to build their applications. +* Use a full web-based IDE with version control and logging, to build, deploy, and monitor their event streaming applications. * Have abstracted access to underlying broker infrastructure, including fully-managed Kafka topics. 
-* Access the Quix serverless compute environment for hosting your web-based real-time streaming applications. +* Deploy with a single click to the Quix serverless compute environment for hosting your web-based real-time streaming applications. * Connect existing web applications and IoT clients. -* Access the real-time data catalogue, which is a time-series database. -![640](images/about/Product.png) +In addition to providing a complete solution, Quix also enables you to leverage third-party providers if your use case requires it. For example, while Quix can host all your Git repositories, you can also configure your environments to use third-party providers for this purpose, such as GitHub, Bitbucket, and Azure DevOps. -## Quix Portal +Similarly, Quix provides Quix-hosted Kafka, but you can also use Confluent Cloud or self-hosted Kafka options. + +## Quix architecture + +This section describes the main technical components and architecture of Quix. + +![Quix Technical Architecture](./images/quix-technical-architecture.png) + +### Quix Portal **Quix Portal** strives to present an intuitive software experience that facilitates DevOps/MLOps best practices for development teams. The goals of Quix Portal are to:
-## APIs - -Quix provides four APIs to help you work with streaming data. These include: - -* [**Stream Writer API**](../apis/streaming-writer-api/intro.md): enables you to send any data to a Kafka topic in Quix using HTTP. This API handles encryption, serialization, and conversion to the Quix Streams format, ensuring efficiency and performance of down-stream processing regardless of the data source. - -* [**Stream Reader API**](../apis/streaming-reader-api/intro.md): enables you to push live data from a Quix topic to your application, ensuring low latency by avoiding any disk operations. - -* [**Data Catalogue API**](../apis/data-catalogue-api/intro.md): enables you to query historical data streams in the data catalogue, in order to train ML models, build dashboards, and export data to other systems. - -* [**Portal API**](../apis/portal-api.md): enables you to automate Quix Portal tasks such as creating workspaces, topics, and deployments. - -## Quix Streams - -Python is the dominant language for data science, data engineering, and machine learning, but it needs to be interfaced carefully with streaming technologies, such as [Kafka](../client-library/kafka.md), which are predominantly written in Java and Scala. - -[Quix Streams](../client-library-intro.md) provides Python and C# developers with a client library that abstracts the complexities of building streaming applications. +**Watch the video to see the Quix web-based IDE** -For Python developers, Quix Streams can provide streaming data packaged in a data frame, so you can write data processing logic and connect it directly to the abstracted broker. Developers can read about the most important streaming concepts in the [Quix Streams introduction](../client-library-intro.md). +### Git integration -## In-memory processing +Quix has the ability to create projects where all code and configuration is contained in a Git repository. 
This Git repository can be hosted by Quix (using Gitea), or on any third-party Git provider, such as GitHub, or Bitbucket, where you can configure the Quix SSH public key provided to you for Git provider authentication. This helps integrate Quix with your existing workflows. -Traditional architectures for applications that need to process data have always been very database-centric. Typically you write data to the database, retrieve it with complicated queries, process the data, and then write it back to a complex database schema. This approach does not scale to real-time uses cases, especially when large amounts of data are involved. +### Kafka integration -In use cases where you need results in seconds, and where you may potentially have large amounts of data (for example from thousands of IoT devices transmitting telemetry data), a real-time stream processing approach is required, and that is what Quix was designed for. +Quix requires Kafka to provide streaming infrastructure for your solutions. -![Traditional architecture for data processing](./images/in-memory-processing-legacy.png) +When you create a new Quix environment, there are three hosting options: -Quix uses an underlying message broker and it puts it at the very center of the application, enabling a new approach for processing data without the need to save and pass all the information through a database. By using in-memory processing, you can persist only the data you're really interested in keeping. +1. Quix Broker - Quix hosts Kafka for you. This is the simplest option as Quix provides hosting and configuration. +2. Self-Hosted Kafka - This is where you already have existing Kafka infrastructure that you use, and you want to enable Quix to provide the event stream processing platform on top of it. You can configure Quix to work with your existing Kafka infrastructure using this option. +3. 
Confluent Cloud - If you use Confluent Cloud for your Kafka infrastructure, then you can configure Quix to connect to your existing Confluent Cloud account. -![Quix approach for data processing](./images/in-memory-processing-quix.png) +This enables you to select a Kafka hosting option according to your requirements. For example, your production environment may be hosted on your own Kafka infrastructure, while your develop environment is hosted by Quix. -This approach lowers the complexity and cost of real-time data processing and is the only possible approach when you need to process a huge amount of data per second, with low latency requirements. +!!! tip -## Serverless compute + To get your event streaming application to the testing stage as quickly as possible, the Quix-hosted Kafka option is recommended, as it requires zero configuration to get Kafka running. You can focus on your application code, without needing to do the up-front work of creating a powerful scalable Kafka cluster, as that work has already been done for you by Quix. -Quix provides an easy way to run code in an elastic serverless compute environment. It automatically builds your code into a docker image, and deploys containers to Kubernetes. This usually complicated procedure is simplified using the Quix Portal UI. +### Docker integration -### Architecture +When you create an application from a template in Quix, a Dockerfile is provided for you in the `build` directory of the application. This uses a base image, and additions only need to be made in the case of special requirements, such as to include a library that is not part of the base image. The Dockerfile, combined with your code and configuration, is used to drive the Quix build service, which ensures the generated Docker image is registered with the Quix Docker registry. This image is then submitted to the Quix serverless engine, as part of the deployment process.
All this is largely transparent to the developer - you simply click `Deploy` to build and deploy your application, and then monitor the deployment using the tools provided, such as the Data Explorer, logs, CPU monitor, and memory monitor. -![about/serverless-environment.png](images/about/serverless-environment.png) +### Infrastructure as code -### Git integration +When you develop your event streaming solution, you will build a pipeline of services that can be viewed in the Quix Portal. Each service in the pipeline is individually coded, using a standard component, a modified template, or even completely from scratch. Depending on the use case, pipelines can be quite complex, and in the past, this has made them time consuming to recreate. Now Quix supports infrastructure as code. Your entire pipeline can be defined by a single `quix.yaml` file, and this file can be used to completely and easily reconstruct a pipeline from its corresponding repository. -Source code for workspace projects (models, connectors and services) is hosted in Git repositories. Developers can check out these repositories and develop locally and collaborate using Git. +## Interfacing with Quix -Code is deployed to the Quix serverless environment using Git tags. Quix builds the selected Git tag or commit into a docker image which is then deployed. +There are [various ways](../platform/ingest-data.md) to connect your data to Quix. Quix provides a number of [connectors](../platform/connectors/index.md) that you can use with only some simple configuration. In addition, there are a range of [APIs](#apis), both REST and WebSockets that are available. There is also the [Quix Streams](#quix-streams) client library, that can be used to get data quickly and easily into Quix. -### Docker integration +For a simple example of getting data from your laptop into Quix, see the [Quickstart](../platform/quickstart.md). 
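As an illustration of the REST route, here is a hedged sketch of sending a single timestamped value with the Stream Writer API. The URL pattern, endpoint path, and payload field names below are assumptions made for illustration; consult the Stream Writer API documentation for the exact details of your environment.

``` python
import json
import time
import urllib.request

# Assumed example values -- substitute your own environment URL and token
WRITER_URL = "https://writer-acme-weather.platform.quix.ai"
TOKEN = "<your-streaming-token>"

def build_parameter_data(temperature: float) -> dict:
    """One timestamped numeric value, in an assumed parameter-data shape."""
    return {
        "timestamps": [time.time_ns()],  # nanosecond epoch timestamps
        "numericValues": {"Temperature": [temperature]},
    }

def publish(topic: str, stream_id: str, temperature: float) -> None:
    """POST the value to the (assumed) stream parameter-data endpoint."""
    request = urllib.request.Request(
        f"{WRITER_URL}/topics/{topic}/streams/{stream_id}/parameters/data",
        data=json.dumps(build_parameter_data(temperature)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(request)
```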
-Each code example included in the Quix [Code Samples](https://github.com/quixio/quix-samples) is shipped with a `Dockerfile` that is designed to work in the Quix serverless compute environment powered by Kubernetes. You can alter this file if necessary. When you deploy a service with Quix, a code reference to Git with a build request is sent to the build queue. The build service builds a docker image and saves it in the docker registry. This image is then deployed to Kubernetes. +Quix provides numerous standard [connectors](../platform/connectors/index.md) for both source and destination functions. In addition, a number of transforms are available. !!! tip - If there is any problem with the docker build process, you can check the **build logs**. + To see available transforms, log into Quix. Open your project, and then an environment, and click on `Code Samples`. Then under `PIPELINE STAGE` select `Transformation`. -### Kubernetes integration ### APIs -Quix manages an elastic compute environment so you don’t need to worry about such details as servers, nodes, memory, and CPUs. Quix ensures that your container is deployed to the right server in the cluster. Quix provides several APIs to help you work with streaming data. These include: -Quix provides the following integrations with Kubernetes: +* [**Stream Writer API**](../apis/streaming-writer-api/intro.md): enables you to send any data to a Kafka topic in Quix using HTTP. This API handles encryption, serialization, and conversion to the Quix Streams format, ensuring efficiency and performance of down-stream processing regardless of the data source. +* [**Stream Reader API**](../apis/streaming-reader-api/intro.md): enables you to push live data from a Quix topic to your application, ensuring low latency by avoiding any disk operations. +* [**Query API**](../apis/query-api/intro.md): enables you to query persisted data streams. This is provided primarily for testing purposes.
+* [**Portal API**](../apis/portal-api.md): enables you to automate Quix Portal tasks such as creating environments, topics, and deployments. -* **Logs** from the container accessible in the portal or using the Portal API. +### Quix Streams -* **Environment variables** allows passing variables into the Docker image deployment. This enables code to be configured using these parameters. +As you will notice as you explore the various open source code samples and connectors that come with Quix, Quix also provides a complete client library, [Quix Streams](../client-library-intro.md), to reduce development times, and provide advanced features such as automatic scaling through Streams. -* **Replica** number for horizontal scale. +Python is the dominant language for data science, data engineering, and machine learning, but it needs to be interfaced carefully with streaming technologies, such as [Kafka](../client-library/kafka.md), which are predominantly written in Java and Scala. -* **CPU** limit. +[Quix Streams](../client-library-intro.md) provides Python and C# developers with a client library that abstracts the complexities of building streaming applications. -* **Memory** limit. +For Python developers, Quix Streams can provide streaming data packaged in a data frame, so you can write data processing logic and connect it directly to the abstracted broker. Developers can read about the most important streaming concepts in the [Quix Streams introduction](../client-library-intro.md). -* **Deployment type** - Options of a one-time job or a continuously running service, +## Building with Quix -* **Ingress** - Optional ingress mapped to port 80. +The basic flow that pipelines follow is ingestion, processing, and serving of data. These correlate to source, transform, and destination components within Quix. -!!! 
tip + If a deployment reference is already built and deployed to a service, the build process is skipped and the docker image from the container registry is used instead. ### DNS integration -The Quix serverless environment offers DNS routing for services on port 80. That means that any API or frontend can be hosted in Quix with no extra complexity. Load balancing is achieved by increasing the replica count to provide resiliency to your deployed API or frontend. +### Pipelines -!!! warning +Event stream processing is implemented by building pipelines consisting of a series of applications deployed to Kafka and Kubernetes clusters. These processing pipelines are now described by a single YAML file, `quix.yaml`. With just this file, you can reconstruct any pipeline. - A newly deployed service with DNS routing takes up to 10 minutes to propagate to all DNS servers in the network. +Further, changes to this file can be merged from one environment into another, enabling you to test changes in one environment before deploying them to another, while the change history is retained in Git. -## Managed Kafka topics +An example pipeline is shown in the following screenshot: -Quix provides fully managed Kafka topics which are used to stream data and build data processing pipelines by daisy-chaining models together. +![Pipeline View](../platform/tutorials/image-processing/images/pipeline-overview-2.png) -Our topics are multi-tenant which means you don’t have to build and maintain an entire cluster. Instead, you can start quickly and cheaply by creating one topic for your application and only pay for the resources consumed when streaming that data.
When your solution grows in data volume or complexity you can add more topics without concern for the underlying infrastructure, which is managed by Quix. +You can see how to build a simple pipeline in the [Quix Tour](../platform/quixtour/overview.md). You can also [watch the video](https://www.loom.com/share/5b0a88d2185c4cfea8fd2917d3898964?sid=b58b2b0c-5814-494a-82ea-2a2ba4d4dac0). -Together with [Quix Streams](../client-library-intro.md) and serverless compute, you can connect your models directly to Quix topics to read and write data using the pub/sub pattern. This keeps the data in-memory to deliver low-latency and cost-effective stream processing capabilities. +### Multiple environments -!!! note +In Quix, you create a project to contain your event stream processing pipelines. A project corresponds to a Git repository, either hosted by Quix or by an external Git provider such as GitHub. Within a project you can create multiple environments, each associated with a Git branch, so that you can embrace the full Git workflow, with, for example, production, staging, and development branches. You can also configure your preferred Kafka hosting option for each environment: Quix-hosted Kafka, self-hosted Kafka, or Confluent Cloud. - Quix also provides the ability to connect Quix Portal to external infrastructure components such as your own message broker infrastructure. +Environments are a new feature of Quix, and you can read more about them in the [documentation](../platform/changes.md#environments). -## Data Catalogue ### Monitoring and managing your data -Quix provides a data catalogue for long-term storage, analytics, and data science activities. +Quix provides a suite of tools to enable you to monitor and manage your data. These include: -The Quix data catalogue combines the best database technologies for each data type into a unified catalogue.
There’s a timeseries database for recording your events and parameter values, blob storage for your binary data, and a NoSQL database for recording your metadata. +* Data Explorer - The Data Explorer enables you to view your data graphically in real time, with graph, table, and messages views. +* Logs - Real-time logging information is displayed in a console tab. You also have the option of downloading your logs. +* CPU monitor - You can monitor the CPU utilization of your deployment in real time. +* Memory monitor - You can monitor the memory usage of the deployment in real time. -The Quix data catalogue technology has two advantages: +[See the Data Explorer in action](https://www.loom.com/share/0e3c24fb5f8c48038fe5cf02859b7ebc?sid=743fbdf7-fad5-4c26-831d-b6dad78b9b06). -1. It allocates each data type to the optimal database technology for that type. This increases read/write and query performance, which reduces operating costs. ## Next steps -2. It uses your metadata to record your context. This makes your data more accessible across your organization, as users only need to know your business context in order to navigate vast quantities of data. +* [Quickstart](../platform/quickstart.md) - get data into Quix and display it in less than 10 minutes +* [Quix Tour](../platform/quixtour/overview.md) - build a complete pipeline in less than 30 minutes +* Watch [a video](https://www.youtube.com/watch?v=0cr19MfATfY) on the art of the possible with Quix diff --git a/mkdocs.yml b/mkdocs.yml index 254d2b96..3537ba8c 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -14,6 +14,7 @@ edit_uri: tree/dev/docs nav: - Intro: 'index.md' + - 'Recent changes 🔥': 'platform/changes.md' - 'What is Quix?': 'platform/what-is-quix.md' - 'Quickstart 🚀': 'platform/quickstart.md' - 'Ingest data': 'platform/ingest-data.md' @@ -25,7 +26,10 @@ nav: - '3.
Serve': 'platform/quixtour/serve-sms.md' - Platform: - 'How To': - - 'Get Workspace ID': 'platform/how-to/get-workspace-id.md' + - 'Create a project': 'platform/how-to/create-project.md' + - 'Project structure': 'platform/how-to/project-structure.md' + - 'Create an application': 'platform/how-to/create-application.md' + - 'Get Environment ID': 'platform/how-to/get-environment-id.md' - 'Get streaming token': 'platform/how-to/streaming-token.md' - 'Get personal access token (PAT)': 'platform/how-to/personal-access-token-pat.md' - 'Ingest from CSV': 'platform/how-to/ingest-csv.md' @@ -37,6 +41,8 @@ nav: - 'Create a Dead Letter Queue': 'platform/how-to/create-dlq.md' - 'Deploy public services': 'platform/how-to/deploy-public-page.md' - 'Add environment variables': 'platform/how-to/environment-variables.md' + - 'Testing using Quix data store': 'platform/how-to/testing-data-store.md' + - 'Configure deployments': 'platform/how-to/yaml-variables.md' - 'Tutorials': - 'platform/tutorials/index.md' - 'Real-time ML predictions': @@ -56,22 +62,23 @@ nav: - 'Conclusion': 'platform/tutorials/train-and-deploy-ml/conclusion.md' - 'Real-time image processing': - platform/tutorials/image-processing/index.md - - '1. Connect webcam video': platform/tutorials/image-processing/connect-video-webcam.md - - '2. Decode images': platform/tutorials/image-processing/decode.md - - '3. Object detection': platform/tutorials/image-processing/object-detection.md - - '4. Connect TfL video': platform/tutorials/image-processing/connect-video-tfl.md - - '5. Frame grabber': platform/tutorials/image-processing/tfl-frame-grabber.md - - '6. Stream merge': platform/tutorials/image-processing/stream-merge.md - - '7. Deploy the UI': platform/tutorials/image-processing/web-ui.md + - '1. Get the project': platform/tutorials/image-processing/get-project.md + - '2. TfL camera feed': platform/tutorials/image-processing/tfl-camera-feed.md + - '3. 
Frame grabber': platform/tutorials/image-processing/tfl-frame-grabber.md + - '4. Object detection': platform/tutorials/image-processing/object-detection.md + - '5. Web UI': platform/tutorials/image-processing/web-ui.md + - '6. Other services': platform/tutorials/image-processing/other-services.md + - '7. Add service': platform/tutorials/image-processing/add-service.md - '8. Summary': platform/tutorials/image-processing/summary.md - 'Sentiment analysis': - platform/tutorials/sentiment-analysis/index.md - - '1. Sentiment Demo UI': 'platform/tutorials/sentiment-analysis/sentiment-demo-ui.md' - - '2. Analyzing sentiment': 'platform/tutorials/sentiment-analysis/analyze.md' - - '3. Twitter data': 'platform/tutorials/sentiment-analysis/twitter-data.md' - - '4. Conclusion': 'platform/tutorials/sentiment-analysis/conclusion.md' - - 'Sentiment analysis microservice': 'platform/tutorials/sentiment-analysis/code-and-deploy-sentiment-service.md' - - 'Customize the UI': 'platform/tutorials/sentiment-analysis/customize-the-ui.md' + - '1. Get the project': platform/tutorials/sentiment-analysis/get-project.md + - '2. Try the UI': 'platform/tutorials/sentiment-analysis/try-the-ui.md' + - '3. UI service': 'platform/tutorials/sentiment-analysis/ui-service.md' + - '4. Sentiment analysis service': 'platform/tutorials/sentiment-analysis/sentiment-analysis-service.md' + - '5. Twitch service': 'platform/tutorials/sentiment-analysis/twitch-service.md' + - '6. Customize the UI': 'platform/tutorials/sentiment-analysis/customize-the-ui.md' + - '7. Summary': 'platform/tutorials/sentiment-analysis/summary.md' - 'Real-time event detection': - 'platform/tutorials/event-detection/index.md' - '1. 
Data acquisition': 'platform/tutorials/event-detection/data-acquisition.md' @@ -97,18 +104,18 @@ nav: - 'Client Library': '!import https://github.com/quixio/quix-streams?branch=main' - API: - 'Index': 'apis/index.md' - - 'Data Catalogue API': - - 'Introduction': 'apis/data-catalogue-api/intro.md' - - 'Authenticate': 'apis/data-catalogue-api/authenticate.md' - - 'Getting Swagger URL': 'apis/data-catalogue-api/get-swagger.md' - - 'Forming a request': 'apis/data-catalogue-api/request.md' - - 'Paged streams': 'apis/data-catalogue-api/streams-paged.md' - - 'Filtered streams': 'apis/data-catalogue-api/streams-filtered.md' - - 'Streams with models': 'apis/data-catalogue-api/streams-models.md' - - 'Raw data': 'apis/data-catalogue-api/raw-data.md' - - 'Aggregate data by time': 'apis/data-catalogue-api/aggregate-time.md' - - 'Aggregate data by tags': 'apis/data-catalogue-api/aggregate-tags.md' - - 'Tag filtering': 'apis/data-catalogue-api/filter-tags.md' + - 'Query API': + - 'Introduction': 'apis/query-api/intro.md' + - 'Authenticate': 'apis/query-api/authenticate.md' + - 'Getting Swagger URL': 'apis/query-api/get-swagger.md' + - 'Forming a request': 'apis/query-api/request.md' + - 'Paged streams': 'apis/query-api/streams-paged.md' + - 'Filtered streams': 'apis/query-api/streams-filtered.md' + - 'Streams with models': 'apis/query-api/streams-models.md' + - 'Raw data': 'apis/query-api/raw-data.md' + - 'Aggregate data by time': 'apis/query-api/aggregate-time.md' + - 'Aggregate data by tags': 'apis/query-api/aggregate-tags.md' + - 'Tag filtering': 'apis/query-api/filter-tags.md' - 'Streaming Writer API': - 'Introduction': 'apis/streaming-writer-api/intro.md' - 'Authenticate': 'apis/streaming-writer-api/authenticate.md' @@ -165,6 +172,9 @@ plugins: 'platform/tutorials/quick-start/quick-start.md': 'platform/quickstart.md' 'platform/how-to/use-sdk-token.md': 'platform/how-to/streaming-token.md' 'platform/how-to/connect-to-quix.md': 'platform/ingest-data.md' + 
'apis/data-catalogue-api/intro.md': 'apis/query-api/intro.md' + 'platform/tutorials/image-processing/connect-video-tfl.md': 'platform/tutorials/image-processing/tfl-camera-feed.md' + theme: name: 'material'
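For reviewers less familiar with the plugin syntax in the last hunk: the new redirect entries extend the `redirect_maps` of the mkdocs-redirects plugin, which this repo already uses for earlier renames. A minimal sketch of the surrounding `mkdocs.yml` fragment (assuming the plugin is installed, e.g. via `pip install mkdocs-redirects`; indentation is illustrative) looks like:

```yaml
# Sketch of the plugins section that the new redirect entries extend.
# Assumes the mkdocs-redirects plugin is installed.
plugins:
  - redirects:
      redirect_maps:
        # old page path (relative to docs/) : new page path
        'apis/data-catalogue-api/intro.md': 'apis/query-api/intro.md'
        'platform/tutorials/image-processing/connect-video-tfl.md': 'platform/tutorials/image-processing/tfl-camera-feed.md'
```

Each key is an old page location and each value its replacement; at build time the plugin emits a stub page at the old URL that redirects to the new one, so external links to the Data Catalogue API docs keep working after the rename to Query API.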