Azure DevOps plugin
Visit our website to see the data that you can access if you use this plugin to add the data source to SquaredUp:
Monitor the Builds and Releases from your Azure DevOps environment. On this page you will find:
How to add an Azure DevOps data source
Using the Azure DevOps data streams
How to add an Azure DevOps data source
To add a data source, click the + next to Data Sources in the left-hand menu in SquaredUp. Search for the data source and click on it to open the Configure data source page.
- Display Name: Enter a name for your data source. This helps you to identify this data source in the list of your data sources.
- Organization Name: Enter the name of the Azure DevOps organization you want to use for this data source.
- Personal Access Token: Enter the personal access token you created in Azure DevOps.
  If you need help creating a personal access token, please refer to the Azure documentation: https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate
  While most data streams will work with read-only permissions, some may require additional permissions to work correctly. For example, the Agent Usage and Job Queues data streams fail when the Personal Access Token used in the configuration is set to read-only.
- Install Sample Dashboards: Select whether you would like to install sample dashboards with the data source. By default, this is set to on.
  If you choose to install sample dashboards, two out-of-the-box dashboards (My DevOps Organization and Pipeline Overview) are generated to help you get started. A short two-minute video gives an overview of these starter dashboards.
- Optionally, select whether you would like to restrict access to this data source instance. By default, restricted access is set to off.
The term data source here really means data source instance. For example, a user may configure two instances of the AWS data source, one for their development environment and one for production. In that case, each data source instance has its own access control settings.
By default, Restrict access to this data source is set to off. The data source can be viewed, edited and administered by anyone. If you would like to control who has access to this data source, switch Restrict access to this data source to on.
Use the Restrict access to this data source dropdown to control who has access to the data source:
By default, the user setting the permissions for the data source will be given Full Control and the Everyone group will be given Link to workspace permissions.
Tailor access to the data source, as required, by selecting individual users or user groups from the dropdown and giving them Link to workspace or Full Control permissions.
If the user is not available from the dropdown, you are able to invite them to the data source by typing in their email address and then clicking Add. The new user will then receive an email inviting them to create an account on SquaredUp. Once the account has been created, they will gain access to the organization.
At least one user or group must be given Full Control.
Admin users can edit the configuration, modify the Access Control List (ACL) and delete the data source, regardless of the ACL chosen.
Access Level:
Link to workspace
- User can link the data source to any workspace they have at least Editor permissions for.
- Data from the data source can then be viewed by anyone with any access to the workspace.
- User can share the data source data with anyone they want.
- User cannot configure the data source in any way, or delete it.
Full Control - User can change the data source configuration, ACL, and delete the data source.
See Access control for more information.
- Click Test and add to validate the data source configuration.
Testing passed – a success message will be displayed and then the configuration will be saved.
Testing passed with warnings – warnings will be listed and potential fixes suggested. You can still use the data source with warnings. Select Save with warnings if you believe that you can still use the data source as required with the warnings listed. Alternatively, address the issues listed and then select Rerun tests to validate the data source configuration again. If the validation now passes, click Save.
Testing Failed – errors will be listed and potential fixes suggested. You cannot use the data source with errors. You are able to select Save with errors if you believe that a system outside of SquaredUp is causing the error that you need to fix. Alternatively, address the issues listed and then select Rerun tests to validate the data source configuration again. If the validation now passes, click Save.
You can edit any data source configurations at any time from Settings > Data Sources.
You can also add a data source from Settings > Data Sources > Add data source, but sample dashboards are not added when using this method.
Using the Azure DevOps data streams
Data streams standardize data from all the different shapes and formats your tools use into a straightforward tabular format. While creating a tile you can tweak data streams by grouping or aggregating specific columns. Depending on the kind of data, SquaredUp will automatically suggest how to visualize the result, for example as a table or line graph.
Data streams can be either global or scoped:
Global data streams are unscoped and return information of a general nature (e.g. "Get the current number of unused hosts").
A scoped data stream gets information relevant to the specific set of objects supplied in the tile scope (e.g. "Get the current session count for these hosts").
See Data Streams for more information.
You can use the data streams installed with the data source (described below), or write a custom data stream (advanced use); see Writing a custom data stream (advanced users).
Every data stream can use a filter object in the dataSourceConfig for basic filtering. More information about the required structure of this object can be found here: GitHub: Expression. For this filter functionality, there is a user-defined function hasValue, which uses the following syntax:
"hasValue": ["key", "value"]
This function should be used in cases where the value at the given key could be undefined.
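For example, a dataSourceConfig filter that only returns completed runs whose result has the value failed might look like this (result and status are example keys taken from the Build Runs data; use whichever properties your data stream returns):
"dataSourceConfig": {
    "filter": {
        "and": [
            { "hasValue": ["result", "failed"] },
            { "status": "completed" }
        ]
    }
}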
Some data streams can use scopeFilter and stageFilter objects for more advanced filtering. More information about these features can be found in the Writing a custom data stream (advanced users) section.
This data stream calls the /{{projectName}}/_apis/pipelines/{{buildId}}/runs
endpoint, and allows you to enter a custom filter.
In the tile editor, filter by the Azure DevOps data source, select Build Runs from the data stream list and then click Next.
Select the objects that you want to use and then click Next.
- Configuration Type: Optionally, filter your data using one of the following methods:
  - Query: Use the following fields to adjust the API query before sending:
    - Top: Returns a specific number of results. Useful if you only want the latest build, latest failed build, etc.
    - Trigger: Returns builds run by a specific cause. For example, you may only want to retrieve builds run manually.
    - Result: Returns builds with a specific result. For example, you may only want to retrieve failed builds.
    - Status: Returns builds with a specific status. For example, you may only want to retrieve In Progress builds.
      Some status filters cannot be used with Result filters. For example, an In Progress build will never have a result as it has not finished yet.
  - Filter: Enter a JSON Filter Expression, for example:
    { "and": [ { "hasValue": ["result", "failed"] }, { "status": "completed" } ] }
This data stream calls the analytics.dev.azure.com service (for global calls this resolves to https://analytics.dev.azure.com/{{org}}/_odata/v4.0-preview/{{endpoint}}/) and is used to query data from the Analytics service.
Scoped access should normally be used over global, unless there is a specific reason not to (such as needing data from across multiple projects). Scoped access is faster and less likely to run into permissions issues (for example, by accidentally trying to access a project you do not have access to).
In the tile editor, filter by the Azure DevOps data source, select Analytics (Global) from the data stream list and then click Next.
Select the objects that you want to use and then click Next.
- Endpoint: Enter the endpoint to call, for example /PipelineRuns. This is effectively the table from which you want to start your query. It is the only mandatory field; however, the API will give you a warning if you do not configure any query options as, in some cases, the response may be too large to handle.
- Query Style: Select whether to be guided through query creation or to write the query manually. If you select Manual, the Query field displays, where you can paste or enter your query. If you select Guided, entry fields display that allow you to create the query using those parameters. For more information on creating a query, see the Azure DevOps documentation.
Manual
Enter or paste your query in the Query field. You can use mustache parameters to replace statically defined values like the pipeline ID and dates, allowing you to tie queries tightly to a specific object or dashboard timeframe.
A mustache parameter is a dynamic value; the actual value will be inserted to replace the field in curly braces. For example, {{timeframe.start}} will insert the start time based on the timeframe configured within the tile, and {{name}} will insert the name of the object(s) in scope.
The timeframe object has two main property formats, which are the two formats that Analytics uses for its dates:
ISO: {{timeframe.start}} or {{timeframe.end}}, resulting in something like 2023-11-02T11:27:00.0000Z
YYYYMMDD: {{timeframe.startYMD}} or {{timeframe.endYMD}}, resulting in something like 20231102
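As a sketch, a manual query against the /PipelineRuns endpoint that returns the ten most recent failed runs within the dashboard timeframe might look like the following; the property names (CompletedDateSK, RunOutcome, CompletedDate) are assumptions based on the Azure DevOps Analytics schema and may differ in your organization:
$filter=CompletedDateSK ge {{timeframe.startYMD}} and RunOutcome eq 'Failed'&$orderby=CompletedDate desc&$top=10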
Guided
Select: Select properties you want returned from the results.
Filter: Filters the list of returned resources.
Order By: Applies sorting to the data.
Top: Returns the top X results from the expression (max 10,000).
Skip: Skips the top X results from the expression.
Apply: Applies transformations to the data.
Compute: Uses supported OData functions to compute properties for use in later expressions.
Expand: Expand related objects found via navigation properties.
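For example, a guided query against the /PipelineRuns endpoint might use values like the following, from which SquaredUp builds the equivalent OData query (again, the property names are assumptions and may differ in your Analytics schema):
Select: PipelineId, RunOutcome, CompletedDate
Filter: RunOutcome eq 'Failed'
Order By: CompletedDate desc
Top: 100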
This data stream calls the analytics.dev.azure.com service (for scoped calls this resolves to https://analytics.dev.azure.com/{{org}}/{{projectName}}/_odata/v4.0-preview/{{endpoint}}/) and is used to query data from the Analytics service.
The scoped data stream accesses a single project and can be used to quickly find data from that project. When using the scoped variant, you also have access to properties from the object it has been scoped to for use in your query.
Scoped access should normally be used over global, unless there is a specific reason not to (such as needing data from across multiple projects). Scoped access is faster and less likely to run into permissions issues (for example, by accidentally trying to access a project you do not have access to).
In the tile editor, filter by the Azure DevOps data source, select Analytics (Scoped) from the data stream list and then click Next.
Select the objects that you want to use and then click Next.
- Endpoint: Enter the endpoint to call, for example /PipelineRuns. This is effectively the table from which you want to start your query. It is the only mandatory field; however, the API will give you a warning if you do not configure any query options as, in some cases, the response may be too large to handle.
- Query Style: Select whether to be guided through query creation or to write the query manually. If you select Manual, the Query field displays, where you can paste or enter your query. If you select Guided, entry fields display that allow you to create the query using those parameters. For more information on creating a query, see the Azure DevOps documentation.
Manual
Enter or paste your query in the Query field. You can use mustache parameters to replace statically defined values like the pipeline ID and dates, allowing you to tie queries tightly to a specific object or dashboard timeframe.
A mustache parameter is a dynamic value; the actual value will be inserted to replace the field in curly braces. For example, {{timeframe.start}} will insert the start time based on the timeframe configured within the tile, and {{name}} will insert the name of the object(s) in scope.
The timeframe object has two main property formats, which are the two formats that Analytics uses for its dates:
ISO: {{timeframe.start}} or {{timeframe.end}}, resulting in something like 2023-11-02T11:27:00.0000Z
YYYYMMDD: {{timeframe.startYMD}} or {{timeframe.endYMD}}, resulting in something like 20231102
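As a sketch, a scoped manual query can reference a property of the object in scope, for example restricting /PipelineRuns to the pipeline the tile is scoped to; Pipeline/PipelineName, RunOutcome and CompletedDateSK are assumptions and may differ in your Analytics schema:
$filter=Pipeline/PipelineName eq '{{name}}' and CompletedDateSK ge {{timeframe.startYMD}} and RunOutcome eq 'Failed'&$orderby=CompletedDate desc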
Guided
Select: Select properties you want returned from the results.
Filter: Filters the list of returned resources.
Order By: Applies sorting to the data.
Top: Returns the top X results from the expression (max 10,000).
Skip: Skips the top X results from the expression.
Apply: Applies transformations to the data.
Compute: Uses supported OData functions to compute properties for use in later expressions.
Expand: Expand related objects found via navigation properties.
This data stream calls the {{projectName}}/_apis/build/builds and then the {{projectName}}/_apis/build/builds/{{buildId}}/changes APIs and returns a list of commits found in build runs.
Because this data stream has the potential to make a large number of calls, we have implemented a limit on the number of builds that can be pulled in one go. We also offer the option to limit the number of builds pulled by filtering to various specific statuses.
Select the objects that you want to use and then click Next.
Use the following Query fields to adjust the API query before sending:
- Top: This field is mandatory. Returns a specific number of results. Useful if you only want the latest build, latest failed build, etc.
- Trigger: Returns builds run by a specific cause. For example, you may only want to retrieve builds run manually.
- Result: Returns builds with a specific result. For example, you may only want to retrieve failed builds.
- Status: Returns builds with a specific status. For example, you may only want to retrieve In Progress builds.
  Some status filters cannot be used with Result filters. For example, an In Progress build will never have a result as it has not finished yet.
Returns a list of deployments for a release.
This data stream calls the /{{projectName}}/_odata/v4.0-preview/TaskAgentRequestSnapshots
endpoint, and allows you to enter custom query parameters and a custom filter.
In the tile editor, filter by the Azure DevOps data source, select Agent Pool Consumption from the data stream list and then click Next.
Select the objects that you want to use and then click Next.
- Optionally, enter a set of OData Query Parameters, for example:
  { "$apply": "filter(IsRunning eq true AND IsHosted eq true)", "$orderby": "SamplingTime asc" }
- Optionally, enter a JSON Filter Expression, for example:
  { "and": [ { "IsRunning": true }, { "IsHosted": true } ] }
This data stream calls the /{{projectName}}/_odata/v4.0-preview/PipelineRunActivityResults
endpoint, and allows you to enter custom query parameters and a custom filter.
In the tile editor, filter by the Azure DevOps data source, select Pipeline Run Activity Results from the data stream list and then click Next.
Select the objects that you want to use and then click Next.
- Optionally, enter a set of OData Query Parameters, for example:
  { "$apply": "filter(Pipeline/PipelineId eq {{buildId}} and ActivityCompletedDate ge {{timeframe.start}} and (ActivityType eq null or ActivityType eq 'Task'))", "$orderby": "PipelineRunCompletedDateSK asc" }
- Optionally, enter a JSON Filter Expression, for example:
  { "and": [ { "PipelineRunOutcome": "Failed" }, { "TaskOutcome": "Failed" } ] }
Rolls up the worst state of the environments underlying a release run.
Returns the same properties as you would see for a normal release run, but also contains the state of the worst environment and the name of the environment in which it was first found.
Writing a custom data stream (advanced users)
A custom data stream is a data stream that you, as an advanced user, can write yourself.
Any data stream you create can be edited by clicking the edit button (pencil) next to it in the tile editor, and also from Settings > Advanced > Data Streams.
Go to Settings > Advanced > Data Streams.
Click Add new Data Stream.
- Enter a display name for your Data Stream.
  Note: The display name is the name that you use to identify your Data Stream in SquaredUp. It has no technical impact and doesn't need to be referenced in the Data Stream's code.
- Choose the Data Source this Data Stream is for.
  After you've chosen the data source, a new field Entry Point appears.
- Entry point and code: Each data stream uses an entry point, which can either be global (unscoped) or scoped, and this determines whether the data stream uses the tile scope.
Data streams can be either global or scoped:
Global data streams are unscoped and return information of a general nature (e.g. "Get the current number of unused hosts").
A scoped data stream gets information relevant to the specific set of objects supplied in the tile scope (e.g. "Get the current session count for these hosts").
To find out which entry point to select and get code examples for the Code field, see the help below.
Click Save to save your Data Stream.
This entry point calls the /{{projectName}}/_apis/pipelines/{{buildId}}/runs
and /{{projectName}}/_apis/build/builds/{{id}}/timeline
endpoints. It allows you to display stages as additional columns in a table of build runs.
A good starter JSON for these features is:
{
"name": "buildsWithStages",
"dataSourceConfig": {},
"rowPath": [],
"matches": {
"sourceType": { "type": "oneOf", "values": ["Azure DevOps Build Pipeline", "Azure DevOps Build Folder", "Azure DevOps Project"] }
},
"metadata": [
{ "name": "result", "displayName": "State", "shape": ["state", { "map": { "success": ["succeeded"], "warning": ["canceled"], "error": ["failed"], "unknown": ["unknown"] } }] },
{ "name": "name", "visible": false, "shape": "string", "role": "label" },
{ "name": "_links.web.href", "displayName": "Name", "shape": ["url", { "label": "{{column.name}}" }] },
{ "name": "state", "displayName": "Build Run State", "shape": "string" },
{ "name": "createdDate", "displayName": "Created On", "shape": "date" },
{ "name": "finishedDate", "displayName": "Finished On", "shape": "date" },
{ "name": "pipeline.name", "displayName": "Pipeline Name", "shape": "string" },
{ "name": "id", "displayName": "ID", "shape": "string" }
]
}
This is the normal filter logic that works with every entry point. It can be added to the dataSourceConfig
object like so:
"dataSourceConfig": {
"filter": {
"and": [
{ "key1": "value1" },
{ "key2": "value2" }
]
}
}
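For example, to keep only completed build runs that failed, a filter might look like this (state and result are properties returned for build runs, as used in the starter metadata above; adjust the keys to your data):
"dataSourceConfig": {
    "filter": {
        "and": [
            { "state": "completed" },
            { "hasValue": ["result", "failed"] }
        ]
    }
}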
This feature allows you to combine different stages from different release pipelines into a single column. Usually, if you scoped to two release pipelines and wanted to display one stage from each, and those stages had different names, you'd end up with two separate columns - one for each stage. One of those columns would then always be empty, because its stage doesn't exist for a particular release run. With this feature, both stages can be displayed in the same column.
The configuration for this feature can be added to the dataSourceConfig
like so:
"dataSourceConfig": {
"stageFilter": [
{
"name": "Pre-Production",
"matchCase": false,
"contains": ["pre-prod"]
},
{
"name": "Production",
"matchCase": false,
"contains": ["prod"],
"notContains": ["pre-prod"]
}
]
}
One limitation of this logic is that you cannot combine multiple stages from the same release pipeline into a single column. The stage that appears in the column will be whichever stage comes last out of the stages that match the given criteria. For example, if you had two stages, Pre-Prod and Prod (in that order), and set up the criteria as "contains": ["prod"], it would match both stages - however, since Prod occurs later than Pre-Prod, Prod will be used.
To display stages as columns, they need to be added to the data stream's metadata. Stages can be displayed in two ways: as a string that simply shows the status of the stage, or as a colored dot that changes color depending on the status of the stage.
To display the result as a string, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": "string" }
To display the result as a colored dot, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": ["state", { "map": { "success": ["inProgress", "scheduled", "succeeded"], "warning": ["partiallySucceeded"], "error": ["rejected"], "unknown": ["canceled", "notStarted", "queued", "undefined"] } }] }
For both of the above objects, you must replace STAGE NAME in the name with the (case-sensitive) name of the stage you want to display in that column. If you included a stage filter, make sure that it matches one of the stage names that you defined. The displayName can be set to any arbitrary string.
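For example, for a hypothetical stage (or stage filter entry) named Production, the colored-dot column definition would look like this:
{ "name": "environments.Production.status", "displayName": "Production", "shape": ["state", { "map": { "success": ["inProgress", "scheduled", "succeeded"], "warning": ["partiallySucceeded"], "error": ["rejected"], "unknown": ["canceled", "notStarted", "queued", "undefined"] } }] }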
Optional, but recommended
The metadata parameters are used to describe columns in order to tell SquaredUp what to do with them. You can do multiple things with the metadata parameters:
- Specify how SquaredUp should interpret the columns you return and, to an extent, how their content is displayed. You do this by giving each column a shape.
  The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape, SquaredUp decides how to display this column, for example displaying a URL as a clickable link.
- Filter out or just hide columns.
  Only the columns you define in metadata will be returned in the results. This helps you to filter out columns you don't need. If you need the content of a column but don't want to display it, you can use the visible parameter.
- Give columns a nicely readable display name.
- Assign a specific role to columns.
  The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization.
Note: If you don't specify any metadata, all columns will be returned and SquaredUp will do its best to determine which columns should be used for which purpose. If you're returning pretty simple data, for example just a string and a number, this can work fine. But if you're returning two columns with numbers it gets trickier for SquaredUp to figure out which one is the value and which one is just an ID or some other number.
Parameters:
Tip: Before you start specifying metadata, leave them empty at first and get all the raw data with your new data stream once. In order to do this, finish creating your custom data stream without metadata and create a tile with this data stream. The Table visualization will show you all raw data.
This will give you an overview of all columns and their content and will help you decide which columns you need and what their shapes and roles should be. It's also essential for getting the correct column name to reference in the name parameter.
Use this information to go back to the data stream configuration and specify the metadata.
name | Mandatory | Enter the name of the column you are referencing here. To find the name of a column, get the data from this data stream once without any metadata (see the tip above for how to do that). You'll see the column name when you hover over the column in the Table.
displayName | Optional | Here you can give the column a user-friendly name.
shape | Recommended | The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape, SquaredUp decides how to display this column, for example displaying a URL as a clickable link. Note: Please refer to the list of shapes below this table to see available shapes.
role | Recommended | The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization. Note: Please refer to the list of roles below this table to see available roles.
visible | Optional | Use this if you need a column's content but don't need to display the column itself. Example: Column A contains the full link to a ticket in your ticket system. Column B contains the ticket ID. You want to use the ticket ID as a label for the link, turning the long URL into a much nicer to read "Ticket 123". This is why you need the content of column B, to assign it as a label for column A. But since the URL is now displayed as the ticket ID, it would be redundant to still display column B. This is why you hide column B with "visible": false.
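As a minimal sketch of the ticket example above (ticketUrl and ticketId are hypothetical column names), the metadata could look like this:
"metadata": [
    { "name": "ticketId", "visible": false, "shape": "string" },
    { "name": "ticketUrl", "displayName": "Ticket", "shape": ["url", { "label": "{{column.ticketId}}" }] }
]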
There are many different shapes you can use for your columns, and the list of possible shapes is constantly being expanded:
- Basic types, like: boolean, date, number, string
- Currency types that get displayed with two decimal places and their currency symbol (for example $23.45), like: currency (generic currency), eur, gbp, usd
- Data types, like: bytes, kilobytes, megabytes
- Time types, like: seconds, milliseconds, timespan
- The status type: state
- Utility types, like: customUnit, url (will be displayed as a link)
Tip:
Some shapes can be configured.
If a shape is configurable, you can edit how the shape displays data in SquaredUp.
label | A column containing user-friendly names. Line Graphs use this role to group data into series, so each label will get its own line in the Line Graph.
link | A column containing a link that can be used as a drilldown in Status Blocks.
timestamp | A column containing a date to use on the X-axis of a Line Graph.
unitLabel | A column containing user-friendly labels for data series, e.g. ‘Duration’. Line Graphs can use this role to label the Y-axis.
value | A column containing the numeric value you want to use in your visualization.
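For example, a metadata definition intended for a Line Graph of build duration over time might assign roles like this (finishedDate, pipelineName and durationSeconds are hypothetical column names):
"metadata": [
    { "name": "finishedDate", "displayName": "Finished On", "shape": "date", "role": "timestamp" },
    { "name": "pipelineName", "displayName": "Pipeline", "shape": "string", "role": "label" },
    { "name": "durationSeconds", "displayName": "Duration", "shape": "seconds", "role": "value" }
]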
This entry point calls the /{{projectName}}/_apis/release/releases
endpoint. It allows you to display stages as additional columns in a table of release runs.
A good starter JSON for these features is:
{
"name": "releasesWithStages",
"dataSourceConfig": {},
"rowPath": [],
"matches": {
"sourceType": { "type": "oneOf", "values": ["Azure DevOps Release Pipeline", "Azure DevOps Release Folder", "Azure DevOps Project"] }
},
"metadata": [
{ "name": "name", "visible": false, "shape": "string", "role": "label" },
{ "name": "_links.web.href", "displayName": "Name", "shape": ["url", { "label": "{{column.name}}" }] },
{ "name": "description", "displayName": "Description", "shape": "string" },
{ "name": "releaseDefinition.name", "displayName": "Release Pipeline Name", "shape": "string" },
{ "name": "releaseDefinition.path", "displayName": "Release Pipeline Path", "shape": "string" },
{ "name": "reason", "displayName": "Reason", "shape": "string" },
{ "name": "createdOn", "displayName": "Created On", "shape": "date" },
{ "name": "createdBy.displayName", "displayName": "Created By", "shape": "string" },
{ "name": "createdBy.uniqueName", "displayName": "Created By (Unique Name)", "shape": "string" },
{ "name": "modifiedOn", "displayName": "Modified On", "shape": "date" },
{ "name": "createdFor.displayName", "displayName": "Created For", "shape": "string" },
{ "name": "craetedFor.uniqueName", "displayName": "Created For (Unique Name)", "shape": "string" },
{ "name": "id", "displayName": "ID", "shape": "string" }
]
}
This is the normal filter logic that works with every entry point. It can be added to the dataSourceConfig
object like so:
"dataSourceConfig": {
"filter": {
"and": [
{ "key1": "value1" },
{ "key2": "value2" }
]
}
}
This feature allows you to filter to only release runs where a stage associated with a given service ID has been run. It can be added to the dataSourceConfig
object like so:
"dataSourceConfig": {
"scopeFilter": {
"kubernetesServiceEndpoint": {
"equals": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
}
}
This feature allows you to combine different stages from different release pipelines into a single column. Usually, if you scoped to two release pipelines and wanted to display one stage from each, and those stages had different names, you'd end up with two separate columns - one for each stage. One of those columns would then always be empty, because its stage doesn't exist for a particular release run. With this feature, both stages can be displayed in the same column.
The configuration for this feature can be added to the dataSourceConfig
like so:
"dataSourceConfig": {
"stageFilter": [
{
"name": "Pre-Production",
"matchCase": false,
"contains": ["pre-prod"]
},
{
"name": "Production",
"matchCase": false,
"contains": ["prod"],
"notContains": ["pre-prod"]
}
]
}
One limitation of this logic is that you cannot combine multiple stages from the same release pipeline into a single column. The stage that appears in the column will be whichever stage comes last out of the stages that match the given criteria. For example, if you had two stages, Pre-Prod and Prod (in that order), and set up the criteria as "contains": ["prod"], it would match both stages - however, since Prod occurs later than Pre-Prod, Prod will be used.
To display stages as columns, they need to be added to the data stream's metadata. Stages can be displayed in two ways: as a string that simply shows the status of the stage, or as a colored dot that changes color depending on the status of the stage.
To display the result as a string, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": "string" }
To display the result as a colored dot, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": ["state", { "map": { "success": ["inProgress", "scheduled", "succeeded"], "warning": ["partiallySucceeded"], "error": ["rejected"], "unknown": ["canceled", "notStarted", "queued", "undefined"] } }] }
For both of the above objects, you must replace STAGE NAME in the name with the (case-sensitive) name of the stage you want to display in that column. If you included a stage filter, make sure that it matches one of the stage names that you defined. The displayName can be set to any arbitrary string.
Optional, but recommended
The metadata parameters are used to describe columns in order to tell SquaredUp what to do with them. You can do multiple things with the metadata parameters:
- Specify how SquaredUp should interpret the columns you return and, to an extent, how their content is displayed. You do this by giving each column a shape.
  The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape, SquaredUp decides how to display this column, for example displaying a URL as a clickable link.
- Filter out or just hide columns.
  Only the columns you define in metadata will be returned in the results. This helps you to filter out columns you don't need. If you need the content of a column but don't want to display it, you can use the visible parameter.
- Give columns a nicely readable display name.
- Assign a specific role to columns.
  The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization.
Note: If you don't specify any metadata, all columns will be returned and SquaredUp will do its best to determine which columns should be used for which purpose. If you're returning pretty simple data, for example just a string and a number, this can work fine. But if you're returning two columns with numbers it gets trickier for SquaredUp to figure out which one is the value and which one is just an ID or some other number.
Parameters:
Tip: Before you start specifying metadata, leave them empty at first and get all the raw data with your new data stream once. In order to do this, finish creating your custom data stream without metadata and create a tile with this data stream. The Table visualization will show you all raw data.
This will give you an overview of all columns and their content and will help you decide which columns you need and what their shapes and roles should be. It's also essential for getting the correct column name to reference in the name parameter.
Use this information to go back to the data stream configuration and specify the metadata.
name | Mandatory | Enter the name of the column you are referencing here. To find the name of a column, get the data from this data stream once without any metadata (see the tip above for how to do that). You'll see the column name when you hover over the column in the Table.
displayName | Optional | Here you can give the column a user-friendly name.
shape | Recommended | The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape, SquaredUp decides how to display this column, for example displaying a URL as a clickable link. Note: Please refer to the list of shapes below this table to see available shapes.
role | Recommended | The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization. Note: Please refer to the list of roles below this table to see available roles.
visible | Optional | Use this if you need a column's content but don't need to display the column itself. Example: Column A contains the full link to a ticket in your ticket system. Column B contains the ticket ID. You want to use the ticket ID as a label for the link, turning the long URL into a much nicer to read "Ticket 123". This is why you need the content of column B, to assign it as a label for column A. But since the URL is now displayed as the ticket ID, it would be redundant to still display column B. This is why you hide column B with "visible": false.
There are many different shapes you can use for your columns, and the list of possible shapes is constantly being expanded:
- Basic types, like: boolean, date, number, string
- Currency types that get displayed with two decimal places and their currency symbol (for example $23.45), like: currency (generic currency), eur, gbp, usd
- Data types, like: bytes, kilobytes, megabytes
- Time types, like: seconds, milliseconds, timespan
- The status type: state
- Utility types, like: customUnit, url (will be displayed as a link)
Tip:
Some shapes can be configured.
If a shape is configurable, you can edit how the shape displays data in SquaredUp.
label | A column containing user-friendly names. Line Graphs use this role to group data into series, so each label will get its own line in the Line Graph.
link | A column containing a link that can be used as a drilldown in Status Blocks.
timestamp | A column containing a date to use on the X-axis of a Line Graph.
unitLabel | A column containing user-friendly labels for data series, e.g. ‘Duration’. Line Graphs can use this role to label the Y-axis.
value | A column containing the numeric value you want to use in your visualization.
Both the scope and stage filters support a set of different criteria for matching different values. Criteria options are:
Criteria | Type | Description | Default | Example
matchCase | Boolean | Whether to match case-sensitively or not | | "matchCase": false
contains | Array of strings | Strings that the value should contain | [] | "contains": ["dev", "prod"]
notContains | Array of strings | Strings that the value should not contain | [] | "notContains": ["deploy", "pre-prod"]
equals | Array of strings | Strings that the value should equal | [] | "equals": ["development", "production"]
startsWith | Array of strings | Strings that the value should start with | [] | "startsWith": ["dev", "prod"]
endsWith | Array of strings | Strings that the value should end with | [] | "endsWith": ["vm", "testing"]
regex | Array of objects | Regular expressions that the value should match | [] | "regex": [{ "pattern": "", "flags": "g" }, { "pattern": "", "flags": "i" }]
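For example, a stage filter entry could combine several of these criteria (the stage names below are hypothetical, and exactly how multiple criteria combine depends on the filter implementation):
"stageFilter": [
    {
        "name": "Production",
        "matchCase": false,
        "startsWith": ["deploy"],
        "endsWith": ["prod", "production"]
    }
]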
All of the features that are used by adding objects to the dataSourceConfig
in a custom data stream can also be added to a tile itself. To do this, open the tile editor and switch to the Code tab. You can then add a custom dataSourceConfig
object to the dataStream
object in the editor. For example:
"dataStream": {
"id": "datastream-XXXXXXXXXXXXXXXXXXXX",
"dataSourceConfig": {
"scopeFilter": {
"kubernetesServiceEndpoint": {
"equals": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
},
"stageFilter": [{
"name": "My Stage",
"matchCase": false
"contains": ["prod"],
"notContains": ["pre-prod"]
}],
"filter": {
"and": [
{ "key1": "value1" },
{ "key2": "value2" }
]
}
}
}
One limitation of this approach is that if you’re using a custom data stream, the existing dataSourceConfig
gets entirely overwritten by the one on the tile, so, for example, if you have a stage filter in the custom data stream and a filter on the tile, only the filter will have any effect.