Azure DevOps plugin
Monitor the builds and releases from your Azure DevOps environment. For more information about what this plugin does and the data streams it retrieves, see the following sections on this page:
How to add an Azure DevOps data source
How to add an Azure DevOps On-Premise data source
Using the Azure DevOps data streams
Writing a custom data stream (advanced users)
How to add the data source
To add a data source, click on the + next to Data Sources on the left-hand menu in SquaredUp. Search for the data source and click on it to open the Configure data source page.
Configuring the data source
- Display Name:
Enter a name for your data source. This helps you to identify this data source in the list of your data sources.
- Product:
Select Azure DevOps Cloud.
- Organization Name:
Enter the name of the Azure DevOps organization you want to use for this data source.
- Authentication type:
You have two options for authentication: the Microsoft Entra ID sign-in button or a Personal Access Token.
- Microsoft Entra ID:
This allows the data source instance to access Azure DevOps using a user account. This gives less granular control over the permissions the data source runs with, which is not ideal from a "least privilege" point of view, but can be useful when experimenting with the data source against non-production environments. For production environments, using a Personal Access Token, as described below, is recommended.
After you click the Sign in with Microsoft button you can choose to login as an administrator of the target tenant or a non-administrator:
See Microsoft: Manage consent to applications and evaluate consent requests. For this feature to work, you must ensure that your Azure DevOps organization is connected on the Microsoft Entra tab of the Azure DevOps Settings page.
As an administrator, you can either consent for just yourself or for everyone in the organization by clicking 'Consent on behalf of your organization', see User and admin consent in Microsoft Entra ID.
The data source will then use this administrator's credentials. With this in mind, you may choose to Restrict access to this data source.
At the Approval required prompt you must enter a justification for requesting access and request approval. In SquaredUp you will see an 'access_denied - (cancel)' message until an administrator approves your request. An administrator of the target tenant can respond to the consent request in the Azure portal > Enterprise applications > Admin consent requests, see Microsoft: Review admin consent requests.
After consent has been granted, the non-administrator must return to the data source configuration and click the Sign in with Microsoft button again. This time, after signing in, the message 'Logged in as <username>' will be shown. The data source will then use this non-administrator's credentials.
When using OAuth to add the data source, you can optionally limit the plugin imports to one or more subscriptions or management groups:
- Enter as many Subscription IDs as you require.
- Enter as many Management Group IDs as you require.
- Personal Access Token:
Enter the personal access token you created in Azure DevOps. If you need help creating a personal access token, please refer to the Azure DevOps documentation: https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate

The following information lists the permissions required by the SquaredUp Azure DevOps plugin and where they are used.
While Projects and Teams is the only mandatory permission for the plugin to function, the plugin will be essentially useless without additional permissions assigned. Analytics is the next most important permission, as it is used in places other than the analytics data streams. You can limit the assigned permissions to Analytics and Projects and Teams, but you'll have to write a lot of queries yourself. The remaining permissions are optional; however, warnings will appear if they are not assigned. You can choose as many of the following permissions as you require, as long as you have both Projects and Teams and Analytics assigned.

| Permission | Usage |
| --- | --- |
| Projects and Teams (Mandatory) | Import: Projects |
| Analytics | Data Streams: Agent Pool Consumption, Agent Runs, Analytics (Global), Analytics (Scoped), Build Durations, Build Runs with Stages, Job Queues, Pipeline Run Activity Results, Task Failures |
| Build | Import: Build Pipelines, Build Folder. Data Streams: Build Failures, Build Runs, Builds in progress, Commits in Build Run |
| Release | Import: Release, Release Folder. Data Streams: Deployments, Release Run Environment State Roll-up, Release Runs |
| Code | Import: Repos, Repo folder, Branch. Data Streams: Active Pull Requests, All Pull Requests, Branch Status, Commits, Completed Pull Requests |
| Deployment Groups | Import: Deployment Groups. Data Streams: Deployment Targets |
| Environments | Import: Environments. Data Streams: Environment Deployment Build Runs, Environment Deployment Records |
| Packaging | Import: Artifact Package, Artifact Feed. Data Streams: Artifact Package Versions, Artifact Packages |
| Task Groups | Import: Task Groups. Data Streams: Task Group Contents, Task Group References, Task Group Revisions |
| Work Items | Import: Queries. Data Streams: Query Work Items, WIQL |
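Before saving the configuration, it can be useful to verify the token outside SquaredUp. The following sketch (not part of the product) checks that a PAT can list projects via the Azure DevOps core REST API, which exercises the mandatory Projects and Teams read permission; the organization name and token are placeholders you would supply.

```python
import base64
import json
import urllib.request

def pat_auth_header(pat: str) -> dict:
    """Azure DevOps accepts a PAT as the password in HTTP Basic auth;
    the username part is ignored and can be left empty."""
    token = base64.b64encode(f":{pat}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def list_projects(organization: str, pat: str) -> list:
    """Call the Projects API; this succeeds only if the PAT grants
    at least read access to Projects and Teams."""
    url = f"https://dev.azure.com/{organization}/_apis/projects?api-version=7.0"
    request = urllib.request.Request(url, headers=pat_auth_header(pat))
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return [project["name"] for project in body.get("value", [])]
```

If the token lacks the required scope, the call fails with an HTTP 401/403 rather than an empty list, which makes it easy to distinguish a permissions problem from an empty organization.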
Install Sample Dashboards:
Select whether you would like to install sample dashboards with the data source. By default, this is set to on. If you chose to install sample dashboards, two out-of-the-box dashboards (My DevOps Organization and Pipeline Overview) are generated to help you get started.
Optionally, select whether you would like to restrict access to this data source instance. By default, restricted access is set to off.
The term data source here really means data source instance. For example, a user may configure two instances of the AWS data source, one for their development environment and one for production. In that case, each data source instance has its own access control settings.
By default, Restrict access to this data source is set to off. The data source can be viewed, edited and administered by anyone. If you would like to control who has access to this data source, switch Restrict access to this data source to on.
Use the Restrict access to this data source dropdown to control who has access to the workspace:
- By default, the user setting the permissions for the data source will be given Full Control and the Everyone group will be given Link to workspace permissions.
- Tailor access to the data source, as required, by selecting individual users or user groups from the dropdown and giving them Link to workspace or Full Control permissions.
- If the user is not available from the dropdown, you are able to invite them to the data source by typing in their email address and then clicking Add. The new user will then receive an email inviting them to create an account on SquaredUp. Once the account has been created, they will gain access to the organization.
- At least one user or group must be given Full Control.
- Admin users can edit the configuration, modify the Access Control List (ACL) and delete the data source, regardless of the ACL chosen.
Access Level:
- Link to workspace:
  - User can link the data source to any workspace they have at least Editor permissions for.
  - Data from the data source can then be viewed by anyone with any access to the workspace.
  - User can share the data source data with anyone they want.
  - User cannot configure the data source in any way, or delete it.
- Full Control:
  - User can change the data source configuration, ACL, and delete the data source.
See Access control for more information.
Click Test and add to validate the data source configuration.
- Testing passed – a success message will be displayed and then the configuration will be saved.
- Testing passed with warnings – warnings will be listed and potential fixes suggested. You can still use the data source with warnings. Select Save with warnings if you believe that you can still use the data source as required with the warnings listed. Alternatively, address the issues listed and then select Rerun tests to validate the data source configuration again. If the validation now passes, click Save.
- Testing Failed – errors will be listed and potential fixes suggested. You cannot use the data source with errors. You are able to select Save with errors if you believe that a system outside of SquaredUp is causing the error that you need to fix. Alternatively, address the issues listed and then select Rerun tests to validate the data source configuration again. If the validation now passes, click Save.
You can edit any data source configurations at any time from Settings > Data Sources.
You can also add a data source from Settings > Data Sources > Add data source, but sample dashboards are not added when using this method.
How to add the on-prem data source
To add a data source, click on the + next to Data Sources on the left-hand menu in SquaredUp. Search for the data source and click on it to open the Configure data source page.
This is an on-prem data source.
An on-prem data source connects a service running in your internal network to SquaredUp, and requires an agent installed on a machine that has access to your internal network.
Before you start
Configuring and deploying an agent
If you have already created an agent in SquaredUp that you can use for this data source, you can skip this step and choose the agent group you want to use while adding the data source.
See one of the following, depending on your platform type:
Configuring the data source
- Display Name:
Enter a name for your data source. This helps you to identify this data source in the list of your data sources.
- Agent Group:
Select the Agent Group that contains the agent(s) you want to use.
- Product:
Select Azure DevOps Server.
- Base URL:
Enter the base URL of the Azure DevOps server you want to use for this data source.
- Collection Name:
Enter the name of the Azure DevOps server collection you want to use for this data source.
- Personal Access Token:
Enter the personal access token you created in Azure DevOps. If you need help creating a personal access token, please refer to the Azure DevOps documentation: https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate

While most data streams will work with read-only permissions, some may require additional permissions to work correctly. For example, the Agent Usage and Job Queues data streams fail when the Personal Access Token used in the configuration is set to read-only.
Install Sample Dashboards:
Select whether you would like to install sample dashboards with the data source. By default, this is set to on. If you chose to install sample dashboards, two out-of-the-box dashboards (My DevOps Organization and Pipeline Overview) are generated to help you get started.
Optionally, select whether you would like to restrict access to this data source instance. By default, restricted access is set to off.
The term data source here really means data source instance. For example, a user may configure two instances of the AWS data source, one for their development environment and one for production. In that case, each data source instance has its own access control settings.
By default, Restrict access to this data source is set to off. The data source can be viewed, edited and administered by anyone. If you would like to control who has access to this data source, switch Restrict access to this data source to on.
Use the Restrict access to this data source dropdown to control who has access to the workspace:
- By default, the user setting the permissions for the data source will be given Full Control and the Everyone group will be given Link to workspace permissions.
- Tailor access to the data source, as required, by selecting individual users or user groups from the dropdown and giving them Link to workspace or Full Control permissions.
- If the user is not available from the dropdown, you are able to invite them to the data source by typing in their email address and then clicking Add. The new user will then receive an email inviting them to create an account on SquaredUp. Once the account has been created, they will gain access to the organization.
- At least one user or group must be given Full Control.
- Admin users can edit the configuration, modify the Access Control List (ACL) and delete the data source, regardless of the ACL chosen.
Access Level:
- Link to workspace:
  - User can link the data source to any workspace they have at least Editor permissions for.
  - Data from the data source can then be viewed by anyone with any access to the workspace.
  - User can share the data source data with anyone they want.
  - User cannot configure the data source in any way, or delete it.
- Full Control:
  - User can change the data source configuration, ACL, and delete the data source.
See Access control for more information.
Click Test and add to validate the data source configuration.
- Testing passed – a success message will be displayed and then the configuration will be saved.
- Testing passed with warnings – warnings will be listed and potential fixes suggested. You can still use the data source with warnings. Select Save with warnings if you believe that you can still use the data source as required with the warnings listed. Alternatively, address the issues listed and then select Rerun tests to validate the data source configuration again. If the validation now passes, click Save.
- Testing Failed – errors will be listed and potential fixes suggested. You cannot use the data source with errors. You are able to select Save with errors if you believe that a system outside of SquaredUp is causing the error that you need to fix. Alternatively, address the issues listed and then select Rerun tests to validate the data source configuration again. If the validation now passes, click Save.
You can edit any data source configurations at any time from Settings > Data Sources.
You can also add a data source from Settings > Data Sources > Add data source, but sample dashboards are not added when using this method.
Using the Azure DevOps Data Streams
Data streams standardize data from all the different shapes and formats your tools use into a straightforward tabular format. While creating a tile you can tweak data streams by grouping or aggregating specific columns. Depending on the kind of data, SquaredUp will automatically suggest how to visualize the result, for example as a table or line graph.
Data streams can be either global or scoped:
- Global data streams are unscoped and return information of a general nature (e.g. "Get the current number of unused hosts").
- A scoped data stream gets information relevant to the specific set of objects supplied in the tile scope (e.g. "Get the current session count for these hosts").
See Data Streams for more information.
Data streams
The following data streams are installed with this plugin.
This data stream calls the analytics.dev.azure.com service (for global calls this resolves to https://analytics.dev.azure.com/{{org}}/_odata/v4.0-preview/{{endpoint}}/) and is used to query data from the analytics service.
Scoped access should generally be used over global, unless there is a specific reason not to (such as needing data from across multiple projects). Scoped access is faster and less likely to run into permissions issues (for example, by accidentally trying to access a project you do not have access to).
- In the tile editor, filter by the Azure DevOps data source, select Analytics (Global) from the data stream list and then click Next.
- Entity Type (Endpoint):
Specify the endpoint to call, for example PipelineRuns. This is effectively the table from which you want to start your query. It is the only mandatory field; however, the API will give you a warning if you do not configure any query options as, in some cases, the response may be too large to handle.
- Query:
Enter or paste your query in the Query field. You can use mustache parameters to replace statically defined values like pipeline IDs and dates, allowing you to peg queries tightly to a specific object or dashboard timeframe. A mustache parameter is a dynamic value; the actual value will be inserted to replace the field in curly braces. For example, {{timeframe.start}} will insert the start time based on the timeframe configured within the tile, and {{name}} will insert the name of the object(s) in scope. For more information on creating a query, see the Azure DevOps documentation. The timeframe object supports the two date formats that Analytics uses:
  - ISO: {{timeframe.start}} or {{timeframe.end}}, resulting in something like 2023-11-02T11:27:00.0000Z
  - YYYYMMDD: {{timeframe.startYMD}} or {{timeframe.endYMD}}, resulting in something like 20231102
- Select:
Optionally, select the names of the columns you want to display. This filters results and displays only the columns you have selected.
- Top:
Optionally, enter the top X results to return from the expression (max 10,000).
- Skip:
Optionally, enter the number of results to skip from the expression.
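To make the mustache substitution concrete, here is a minimal sketch of placeholder replacement, including the two timeframe date formats described above. This is an illustration only, not the plugin's actual templating engine.

```python
import re
from datetime import datetime, timezone

def render_mustache(template: str, context: dict) -> str:
    """Replace {{name}} placeholders with values from `context`;
    unknown placeholders are left untouched."""
    def lookup(match):
        key = match.group(1).strip()
        return str(context[key]) if key in context else match.group(0)
    return re.sub(r"\{\{(.*?)\}\}", lookup, template)

start = datetime(2023, 11, 2, 11, 27, tzinfo=timezone.utc)
context = {
    "timeframe.start": start.strftime("%Y-%m-%dT%H:%M:%S.0000Z"),  # ISO style
    "timeframe.startYMD": start.strftime("%Y%m%d"),                # YYYYMMDD style
}
query = "$filter=CompletedDate ge {{timeframe.start}}"
print(render_mustache(query, context))
# -> $filter=CompletedDate ge 2023-11-02T11:27:00.0000Z
```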
This data stream calls the analytics.dev.azure.com service (for scoped calls this resolves to https://analytics.dev.azure.com/{{org}}/{{projectName}}/_odata/v4.0-preview/{{endpoint}}/) and is used to query data from the analytics service.
The scoped data stream accesses a single project and can be used to quickly find data from that project. When using the scoped data stream you also have access to properties from the object it has been scoped to, for use in your query.
Scoped access should generally be used over global, unless there is a specific reason not to (such as needing data from across multiple projects). Scoped access is faster and less likely to run into permissions issues (for example, by accidentally trying to access a project you do not have access to).
- In the tile editor, filter by the Azure DevOps data source, select Analytics (Scoped) from the data stream list and then click Next.
- Select the objects that you want to use and then click Next.
- Entity Type (Endpoint):
Specify the endpoint to call, for example PipelineRuns. This is effectively the table from which you want to start your query. It is the only mandatory field; however, the API will give you a warning if you do not configure any query options as, in some cases, the response may be too large to handle.
- Query:
Enter or paste your query in the Query field. You can use mustache parameters to replace statically defined values like pipeline IDs and dates, allowing you to peg queries tightly to a specific object or dashboard timeframe. A mustache parameter is a dynamic value; the actual value will be inserted to replace the field in curly braces. For example, {{timeframe.start}} will insert the start time based on the timeframe configured within the tile, and {{name}} will insert the name of the object(s) in scope. For more information on creating a query, see the Azure DevOps documentation. The timeframe object supports the two date formats that Analytics uses:
  - ISO: {{timeframe.start}} or {{timeframe.end}}, resulting in something like 2023-11-02T11:27:00.0000Z
  - YYYYMMDD: {{timeframe.startYMD}} or {{timeframe.endYMD}}, resulting in something like 20231102
- Select:
Optionally, select the names of the columns you want to display. This filters results and displays only the columns you have selected.
- Top:
Optionally, enter the top X results to return from the expression (max 10,000).
- Skip:
Optionally, enter the number of results to skip from the expression.
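The only structural difference between the global and scoped variants is the project segment in the URL. A small sketch of the two URL shapes quoted above (the organization, project, and endpoint names are placeholders):

```python
def analytics_url(org, endpoint, project=None):
    """Build the Analytics OData URL: global calls omit the project
    segment, scoped calls include it."""
    base = f"https://analytics.dev.azure.com/{org}"
    if project:
        base += f"/{project}"
    return f"{base}/_odata/v4.0-preview/{endpoint}/"

analytics_url("myorg", "PipelineRuns")               # global
analytics_url("myorg", "PipelineRuns", "myproject")  # scoped
```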
This data stream calls the /{{projectName}}/_odata/v4.0-preview/TaskAgentRequestSnapshots
endpoint, and allows you to enter custom query parameters and a custom filter.
- In the tile editor, filter by the Azure DevOps data source, select Agent Pool Consumption from the data stream list and then click Next.
- Select the objects that you want to use and then click Next.
- Optionally, enter a set of OData Query Parameters, for example:
{ "$apply": "filter(IsRunning eq true AND IsHosted eq true)", "$orderby": "SamplingTime asc" }
- Optionally, enter a JSON Filter Expression, for example:
{ "and": [ { "IsRunning": true }, { "IsHosted": true } ] }
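These OData query parameters ultimately become a URL query string on the endpoint. A sketch of how the `$apply`/`$orderby` example above would be encoded (the organization and project names are placeholders, and the encoding shown is standard URL encoding, not necessarily byte-for-byte what the plugin sends):

```python
from urllib.parse import urlencode

params = {
    "$apply": "filter(IsRunning eq true AND IsHosted eq true)",
    "$orderby": "SamplingTime asc",
}
query_string = urlencode(params)  # "$" becomes %24, spaces become +
url = ("https://analytics.dev.azure.com/myorg/myproject"
       "/_odata/v4.0-preview/TaskAgentRequestSnapshots?" + query_string)
```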
This data stream calls the /{{projectName}}/_apis/pipelines/{{buildId}}/runs
endpoint, and allows you to enter a custom filter.
- In the tile editor, filter by the Azure DevOps data source, select Build Runs from the data stream list and then click Next.
- Select the objects that you want to use and then click Next.
- Top:
Returns a specific number of results. Useful if you only want the latest build, latest failed build, etc. - Trigger:
Returns builds run by a specific cause. For example, you may only want to retrieve builds run manually. - Result:
Returns builds with a specific result. For example, you may only want to retrieve failed builds. - Status:
Returns builds with a specific status. For example, you may only want to retrieve In Progress builds. Note that some status filters cannot be used with Result filters; for example, an In Progress build will never have a result as it has not finished yet.
- Build IDs:
Optionally, enter a comma-separated list of build IDs to filter by. The Build IDs filter can only be used with compatible filter parameters; attempting to use an incompatible filter results in an error that says 'The buildIds filter may not be used with other filter parameters'. Incompatible filter parameters include maxTime and minTime, therefore filtering by Build IDs disables timeframe functionality.
- Branch:
Optionally, specify a branch to filter by. The specified branch must start with the prefix refs/heads/ to function correctly.
- Optionally, further filter, group or sort the results on the Shaping tab.
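Because the Branch filter only works with the refs/heads/ prefix, it can help to normalize branch names before entering them. A tiny illustrative helper (not part of the product):

```python
def normalize_branch(branch: str) -> str:
    """Ensure a branch filter value carries the refs/heads/ prefix the
    Build Runs data stream expects; fully-qualified refs pass through."""
    return branch if branch.startswith("refs/") else f"refs/heads/{branch}"

normalize_branch("main")  # -> "refs/heads/main"
```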
This data stream calls the {{projectName}}/_apis/build/builds API and then the {{projectName}}/_apis/build/builds/{{buildId}}/changes API, and returns a list of commits found in build runs.
Because this data stream has the potential to make a large number of calls, we have implemented a limit on the number of builds that can be pulled in one go. We also offer the option to limit the number of builds pulled by filtering to various specific statuses.
- Select the objects that you want to use and then click Next.
- Use the following Query fields to adjust the API query before sending:
Top:
This field is mandatory. Returns a specific number of results. Useful if you only want the latest build, latest failed build, etc.
Trigger:
Returns builds run by a specific cause. For example, you may only want to retrieve builds run manually.
Result:
Returns builds with a specific result. For example, you may only want to retrieve failed builds.
Status:
Returns builds with a specific status. For example, you may only want to retrieve In Progress builds. Note that some status filters cannot be used with Result filters; for example, an In Progress build will never have a result as it has not finished yet.
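The fan-out described above (one builds call, then one changes call per build, capped by the mandatory Top field) can be sketched as follows. `fetch_builds` and `fetch_changes` are hypothetical stand-ins for the two API calls; they are not functions the plugin exposes.

```python
def commits_in_build_runs(fetch_builds, fetch_changes, top):
    """List builds, cap them at `top`, then fetch the changes
    (commits) for each build ID, keyed by build ID."""
    builds = fetch_builds()[:top]
    return {build["id"]: fetch_changes(build["id"]) for build in builds}
```

This shape makes it clear why a limit is enforced: without Top, a project with thousands of builds would trigger thousands of follow-up calls.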
Returns a list of deployments for a release.
This data stream calls the /{{projectName}}/_odata/v4.0-preview/PipelineRunActivityResults
endpoint, and allows you to enter custom query parameters and a custom filter.
- In the tile editor, filter by the Azure DevOps data source, select Pipeline Run Activity Results from the data stream list and then click Next.
- Select the objects that you want to use and then click Next.
- Optionally, enter a set of OData Query Parameters, for example:
{ "$apply": "filter(Pipeline/PipelineId eq {{buildId}} and ActivityCompletedDate ge {{timeframe.start}} and (ActivityType eq null or ActivityType eq 'Task'))", "$orderby": "PipelineRunCompletedDateSK asc" }
- Optionally, enter a JSON Filter Expression, for example:
{ "and": [ { "PipelineRunOutcome": "Failed" }, { "TaskOutcome": "Failed" } ] }
This configurable data stream calls the /{{projectName}}/_apis/release/releases
endpoint. You can use the Parameters to configure additional columns.
- In the tile editor, filter by the Azure DevOps data source, select Release Runs from the data stream list and then click Next.
- Select the objects that you want to use and then click Next.
- Configure the following Parameters as required:
- Top:
Returns a specific number of results. Useful if you only want the latest release etc. - Show stages:
Toggles whether the stage columns display. You can then select one of the following column styles:
  - Detail: Displays stage columns with headers reflecting the stage name and displays the status in the column cell.
  - Summary: Displays stage columns with numerical headers for each stage and displays the status in the column cell. This option is useful when there are release runs in the data with different stages, as it can significantly reduce the number of columns. Hover over a status cell in the column to view the stage name.
- Show artifacts:
Toggles whether the artifacts columns display. - Show tags:
Toggles whether the Tags column displays. This parameter is enabled by default.
Filtering
Every data stream can use a filter object in the dataSourceConfig for basic filtering. More information about the required structure of this object can be found here: GitHub: Expression. This filter functionality provides a user-defined function, hasValue, with the following syntax: "hasValue": ["key", "value"]. It matches when the given key has the given value, returning false when the value is undefined.

Some data streams can use scopeFilter and stageFilter objects for more advanced filtering. More information about these features can be found in the Using custom data streams with the Azure DevOps data source section.
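To make the expression shape concrete, here is a simplified evaluator for these filter objects. It illustrates the documented structure only; it is not the plugin's engine, and the exact hasValue semantics are an assumption based on the description above.

```python
def evaluate(expr, row):
    """Evaluate a simplified filter expression against one row.

    Supports the {"and": [...]} / {"or": [...]} combinators, the
    hasValue function (assumed: key present, not undefined/None, and
    equal to the given value), and plain {key: value} equality tests.
    """
    if "and" in expr:
        return all(evaluate(e, row) for e in expr["and"])
    if "or" in expr:
        return any(evaluate(e, row) for e in expr["or"])
    if "hasValue" in expr:
        key, value = expr["hasValue"]
        return row.get(key) is not None and row[key] == value
    key, value = next(iter(expr.items()))
    return row.get(key) == value

row = {"IsRunning": True, "IsHosted": True}
evaluate({"and": [{"IsRunning": True}, {"IsHosted": True}]}, row)  # True
```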
Writing a custom data stream (advanced users)
A custom data stream is a data stream that you, as an advanced user, can write yourself.
Any data stream you create can be edited by clicking the edit button (pencil) next to it in the tile editor, and also from Settings > Advanced > Data Streams.
- In SquaredUp, browse to Settings > Advanced > Data Streams.
- Click Add custom data stream.
- Add your custom data stream by entering the following settings:
- Name:
Enter a display name for your data stream. The display name is the name that you use to identify your data stream in SquaredUp. It has no technical impact and doesn't need to be referenced in the data stream's code.
- Data source:
Choose the data source this data stream is for.
After you've chosen the data source the Entry Point field displays. - Entry Point:
Specify the data stream entry point and enter the Code below. To find out which entry point to select and to get code examples for the Code field, see the help below. Each data stream uses an entry point, which can either be global (unscoped) or scoped, and this determines whether the data stream uses the tile scope.
Data streams can be either global or scoped:
- Global data streams are unscoped and return information of a general nature (e.g. "Get the current number of unused hosts").
- A scoped data stream gets information relevant to the specific set of objects supplied in the tile scope (e.g. "Get the current session count for these hosts").
- Name:
- Click Save to save your data stream.
This entry point calls the /{{projectName}}/_apis/pipelines/{{buildId}}/runs
and /{{projectName}}/_apis/build/builds/{{id}}/timeline
endpoints. It allows you to display stages as additional columns in a table of build runs.
A good starter JSON for these features is:
{
"name": "buildsWithStages",
"dataSourceConfig": {},
"rowPath": [],
"matches": {
"sourceType": { "type": "oneOf", "values": ["Azure DevOps Build Pipeline", "Azure DevOps Build Folder", "Azure DevOps Project"] }
},
"metadata": [
{ "name": "result", "displayName": "State", "shape": ["state", { "map": { "success": ["succeeded"], "warning": ["canceled"], "error": ["failed"], "unknown": ["unknown"] } }] },
{ "name": "name", "visible": false, "shape": "string", "role": "label" },
{ "name": "_links.web.href", "displayName": "Name", "shape": ["url", { "label": "{{column.name}}" }] },
{ "name": "state", "displayName": "Build Run State", "shape": "string" },
{ "name": "createdDate", "displayName": "Created On", "shape": "date" },
{ "name": "finishedDate", "displayName": "Finished On", "shape": "date" },
{ "name": "pipeline.name", "displayName": "Pipeline Name", "shape": "string" },
{ "name": "id", "displayName": "ID", "shape": "string" }
]
}
This is the normal filter logic that works with every entry point. It can be added to the dataSourceConfig
object like so:
"dataSourceConfig": {
"filter": {
"and": [
{ "key1": "value1" },
{ "key2": "value2" }
]
}
}
This feature allows you to combine different stages from different release pipelines into a single column. Usually, if you scoped to two release pipelines and wanted to display one stage from each, and the stages had different names, you'd have two separate columns, one for each stage. This leads to one column always being empty, because that stage doesn't exist for a particular release run. With this feature, both stages can be displayed in the same column.
The configuration for this feature can be added to the dataSourceConfig
like so:
"dataSourceConfig": {
"stageFilter": [
{
"name": "Pre-Production",
"matchCase": false,
"contains": ["pre-prod"]
},
{
"name": "Production",
"matchCase": false,
"contains": ["prod"],
"notContains": ["pre-prod"]
}
]
}
One limitation of this logic is that you cannot combine multiple stages from the same release pipeline into a single column. The stage that appears in the column will be whichever stage comes last out of the stages that match the given criteria. For example, if you had two stages, Pre-Prod and Prod (in that order), and set up the criteria as "contains": ["prod"], it would match both stages; however, since Prod occurs later than Pre-Prod, Prod will be used.
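The matching rules above can be sketched as follows. This mirrors the contains, notContains, and matchCase fields from the example, but it is an approximation of the documented behaviour, not the plugin's implementation.

```python
def match_stage(stage_name, rule):
    """Return True if a stage name satisfies one stageFilter rule:
    it must contain at least one `contains` substring and none of the
    `notContains` substrings, case-folded unless matchCase is true."""
    fold = (lambda s: s) if rule.get("matchCase", False) else str.lower
    name = fold(stage_name)
    contains = [fold(s) for s in rule.get("contains", [])]
    not_contains = [fold(s) for s in rule.get("notContains", [])]
    return (any(s in name for s in contains)
            and not any(s in name for s in not_contains))
```

With the rules from the example, "Pre-Prod" matches the Pre-Production rule but is excluded from the Production rule by its notContains entry.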
To display stages as columns, they need to be added to the data stream's metadata. Stages can be displayed in two ways: as a string that simply shows the status of the stage, or as a colored dot that changes color depending on the status of the stage.
To display the result as a string, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": "string" }
To display the result as a colored dot, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": ["state", { "map": { "success": ["inProgress", "scheduled", "succeeded"], "warning": ["partiallySucceeded"], "error": ["rejected"], "unknown": ["canceled", "notStarted", "queued", "undefined"] } }] }
For both of the above objects, you must replace STAGE NAME
in the name
with the (case-sensitive) name of the stage you want to display in that column. If you included a stage filter, make sure that it matches one of the stage names that you defined. The displayName
can be set to any arbitrary string.
Optional, but recommended
The metadata
parameters are used to describe columns in order to tell SquaredUp what to do with them. You can do multiple things with the metadata
parameters:
- Specify how SquaredUp should interpret the columns you return and, to an extent, how their content is displayed. You do this by giving each column a shape.
The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape, SquaredUp decides how to display this column, for example displaying a URL as a clickable link.
- Filter out or just hide columns.
Only the columns you define in metadata
will be returned in the results. This helps you to filter out columns you don't need. If you need the content of a column but don't want to display it, you can use the visible
parameter.
- Give columns a nicely readable display name.
- Assign a specific role to columns.
The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role
value
to the column that contains the actual value you want to use in your visualization.
If you don't specify any metadata, all columns will be returned and SquaredUp will do its best to determine which columns should be used for which purpose. If you're returning pretty simple data, for example just a string and a number, this can work fine. But if you're returning two columns with numbers it gets trickier for SquaredUp to figure out which one is the value and which one is just an ID or some other number.
Parameters:
Before you start specifying metadata, leave it empty at first and get all the raw data with your new data stream once.
To do this, finish creating your custom data stream without metadata and create a tile with this data stream. The Table visualization will show you all the raw data.
This will give you an overview of all columns and their content and help you decide which columns you need and what their shapes and roles should be. It's also essential for getting the correct column name to reference in the name
parameter.
Use this information to go back to the data stream configuration and specify the metadata.
name | Mandatory | Enter the name of the column you are referencing here. To find the name of a column, get the data from this data stream once without any metadata. See the tip above for how to do that. You'll see the column name when you hover over the column in the Table. |
displayName | Optional | Here you can give the column a user-friendly name |
shape | Recommended | The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape SquaredUp decides how to display this column, for example to display a URL as a clickable link. Note: Please refer to the list of shapes below this table to see available shapes. |
role | Recommended | The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization. Note: Please refer to the list of roles below this table to see available roles. |
visible | Optional | true or false Use this if you need a column's content but don't need to display the column itself. Example: Column A contains the full link to a ticket in your ticket system. Column B contains the ticket ID. You want to use the ticket ID as a label for the link, turning the long URL into a much easier to read "Ticket 123". This is why you need the content of column B, to assign it as a label for column A. But since the URL is now displayed as the ticket ID, it would be redundant to still display column B. This is why you hide column B by setting visible to false. |
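The ticket example above can be sketched as metadata entries like this. The column names ticketUrl and ticketId are hypothetical, and the label template is an assumption that follows the {{column.name}} pattern used elsewhere in this plugin's configuration:

```json
"metadata": [
  { "name": "ticketUrl", "displayName": "Ticket", "shape": ["url", { "label": "{{column.ticketId}}" }] },
  { "name": "ticketId", "visible": false, "shape": "string" }
]
```

Column A (ticketUrl) is displayed as a link labeled with the ticket ID, while column B (ticketId) is hidden but still available to supply the label.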
There are many different shapes you can use for your columns and the list of possible shapes gets expanded constantly:
- Basic types, like:
boolean
,date
,number
,string
- Currency types that get displayed with two decimal places and their currency symbol (for example $23.45), like:
currency
(generic currency),eur
,gbp
,usd
- Data types, like:
bytes
,kilobytes
,megabytes
- Time types, like:
seconds
,milliseconds
,timespan
- The status type:
state
- Utility types, like:
customUnit
,url
(will be displayed as a link)
Tip:
Some shapes can be configured.
If a shape is configurable, you can edit how the shape displays data in SquaredUp.
id | Used by data streams feeding the aggregate health stream to identify their Id column |
label | A column containing user-friendly names. Line Graphs use this role to group data into series, so each label will get its own line in the Line Graph. |
link | A column containing a link that can be used as a drilldown in Status Blocks. |
timestamp | A column containing a date to use on the X-axis of a Line Graph. |
unitLabel | A column containing user-friendly labels for data series, e.g. ‘Duration’. Line Graphs can use this role to label the Y-axis. |
value | A column containing the numeric value you want to use in your visualization. |
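Bringing shapes and roles together: a metadata array for a Line Graph of build durations might look like the following sketch. The column names finishTime, durationSeconds, and pipelineName are hypothetical and depend on what your data stream actually returns:

```json
"metadata": [
  { "name": "finishTime", "displayName": "Finished", "shape": "date", "role": "timestamp" },
  { "name": "durationSeconds", "displayName": "Duration", "shape": "seconds", "role": "value" },
  { "name": "pipelineName", "displayName": "Pipeline", "shape": "string", "role": "label" }
]
```

Here the timestamp role drives the X-axis, value drives the Y-axis, and label splits the data into one line per pipeline.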
This entry point calls the /{{projectName}}/_apis/release/releases
endpoint. It allows you to display stages as additional columns in a table of release runs.
A good starter JSON for these features is:
{
"name": "releasesWithStages",
"dataSourceConfig": {},
"rowPath": [],
"matches": {
"sourceType": { "type": "oneOf", "values": ["Azure DevOps Release Pipeline", "Azure DevOps Release Folder", "Azure DevOps Project"] }
},
"metadata": [
{ "name": "name", "visible": false, "shape": "string", "role": "label" },
{ "name": "_links.web.href", "displayName": "Name", "shape": ["url", { "label": "{{column.name}}" }] },
{ "name": "description", "displayName": "Description", "shape": "string" },
{ "name": "releaseDefinition.name", "displayName": "Release Pipeline Name", "shape": "string" },
{ "name": "releaseDefinition.path", "displayName": "Release Pipeline Path", "shape": "string" },
{ "name": "reason", "displayName": "Reason", "shape": "string" },
{ "name": "createdOn", "displayName": "Created On", "shape": "date" },
{ "name": "createdBy.displayName", "displayName": "Created By", "shape": "string" },
{ "name": "createdBy.uniqueName", "displayName": "Created By (Unique Name)", "shape": "string" },
{ "name": "modifiedOn", "displayName": "Modified On", "shape": "date" },
{ "name": "createdFor.displayName", "displayName": "Created For", "shape": "string" },
{ "name": "createdFor.uniqueName", "displayName": "Created For (Unique Name)", "shape": "string" },
{ "name": "id", "displayName": "ID", "shape": "string" }
]
}
This is the normal filter logic that works with every entry point. It can be added to the dataSourceConfig
object like so:
"dataSourceConfig": {
"filter": {
"and": [
{ "key1": "value1" },
{ "key2": "value2" }
]
}
}
This feature allows you to filter to only release runs where a stage associated with a given service ID has been run. It can be added to the dataSourceConfig
object like so:
"dataSourceConfig": {
"scopeFilter": {
"kubernetesServiceEndpoint": {
"equals": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
}
}
This feature allows you to combine stages from different release pipelines into a single column. Normally, if you scoped to two release pipelines and wanted to display one stage from each, and those stages had different names, you'd get two separate columns - one for each stage - and each column would be empty for release runs from the other pipeline. With this feature, both stages can be displayed in the same column.
The configuration for this feature can be added to the dataSourceConfig
like so:
"dataSourceConfig": {
"stageFilter": [
{
"name": "Pre-Production",
"matchCase": false,
"contains": ["pre-prod"]
},
{
"name": "Production",
"matchCase": false,
"contains": ["prod"],
"notContains": ["pre-prod"]
}
]
}
One limitation of this logic is that you cannot combine multiple stages from the same release pipeline into a single column. The stage that appears in the column will be whichever stage comes last out of the stages that match the given criteria. For example, if you had two stages, Pre-Prod
and Prod
(in that order), and set up the criteria as "contains": ["prod"]
, it would match both stages - however, since Prod
occurs later than Pre-Prod
, Prod
will be used.
To display stages as columns, they need to be added to the data stream’s metadata. Stages can be displayed in two ways: as a string that simply shows the status of the stage, or as a colored dot that changes color depending on the status of the stage.
To display the result as a string, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": "string" }
To display the result as a colored dot, add the following object to the metadata
array:
{ "name": "environments.STAGE NAME.status", "displayName": "Stage Name", "shape": ["state", { "map": { "success": ["inProgress", "scheduled", "succeeded"], "warning": ["partiallySucceeded"], "error": ["rejected"], "unknown": ["canceled", "notStarted", "queued", "undefined"] } }] }
For both of the above objects, you must replace STAGE NAME
in the name
with the (case-sensitive) name of the stage you want to display in that column. If you included a stage filter, make sure that it matches one of the stage names that you defined. The displayName
can be set to any arbitrary string.
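For example, using the Production column defined in the stage filter shown earlier, the string form of the metadata entry would be:

```json
{ "name": "environments.Production.status", "displayName": "Production", "shape": "string" }
```

Note that the name segment matches the stage filter's name ("Production") exactly, including case.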
Optional, but recommended
The metadata
parameters are used to describe columns in order to tell SquaredUp what to do with them. You can do multiple things with the metadata
parameters:
- Specify how SquaredUp should interpret the columns you return and - to an extent - how their content is displayed. You do this by giving each column a shape.
The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape SquaredUp decides how to display this column, for example to display a URL as a clickable link.
- Filter out or just hide columns.
Only the columns you define in metadata
will be returned in the results. This helps you to filter out columns you don't need. If you need the content of a column but don't want to display it, you can use the visible
parameter.
- Give columns a nicely readable display name.
- Assign a specific role to columns.
The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role
value
to the column that contains the actual value you want to use in your visualization.
If you don't specify any metadata, all columns will be returned and SquaredUp will do its best to determine which columns should be used for which purpose. If you're returning pretty simple data, for example just a string and a number, this can work fine. But if you're returning two columns with numbers it gets trickier for SquaredUp to figure out which one is the value and which one is just an ID or some other number.
Parameters:
Before you start specifying metadata, leave it empty at first and get all the raw data with your new data stream once.
To do this, finish creating your custom data stream without metadata and create a tile with this data stream. The Table visualization will show you all the raw data.
This will give you an overview of all columns and their content and help you decide which columns you need and what their shapes and roles should be. It's also essential for getting the correct column name to reference in the name
parameter.
Use this information to go back to the data stream configuration and specify the metadata.
name | Mandatory | Enter the name of the column you are referencing here. To find the name of a column, get the data from this data stream once without any metadata. See the tip above for how to do that. You'll see the column name when you hover over the column in the Table. |
displayName | Optional | Here you can give the column a user-friendly name |
shape | Recommended | The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape SquaredUp decides how to display this column, for example to display a URL as a clickable link. Note: Please refer to the list of shapes below this table to see available shapes. |
role | Recommended | The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization. Note: Please refer to the list of roles below this table to see available roles. |
visible | Optional | true or false Use this if you need a column's content but don't need to display the column itself. Example: Column A contains the full link to a ticket in your ticket system. Column B contains the ticket ID. You want to use the ticket ID as a label for the link, turning the long URL into a much easier to read "Ticket 123". This is why you need the content of column B, to assign it as a label for column A. But since the URL is now displayed as the ticket ID, it would be redundant to still display column B. This is why you hide column B by setting visible to false. |
There are many different shapes you can use for your columns and the list of possible shapes gets expanded constantly:
- Basic types, like:
boolean
,date
,number
,string
- Currency types that get displayed with two decimal places and their currency symbol (for example $23.45), like:
currency
(generic currency),eur
,gbp
,usd
- Data types, like:
bytes
,kilobytes
,megabytes
- Time types, like:
seconds
,milliseconds
,timespan
- The status type:
state
- Utility types, like:
customUnit
,url
(will be displayed as a link)
Tip:
Some shapes can be configured.
If a shape is configurable, you can edit how the shape displays data in SquaredUp.
id | Used by data streams feeding the aggregate health stream to identify their Id column |
label | A column containing user-friendly names. Line Graphs use this role to group data into series, so each label will get its own line in the Line Graph. |
link | A column containing a link that can be used as a drilldown in Status Blocks. |
timestamp | A column containing a date to use on the X-axis of a Line Graph. |
unitLabel | A column containing user-friendly labels for data series, e.g. ‘Duration’. Line Graphs can use this role to label the Y-axis. |
value | A column containing the numeric value you want to use in your visualization. |
Both the scope and stage filters support a set of different criteria for matching different values. Criteria options are:
Criteria | Type | Description | Default | Example |
matchCase | Boolean | Whether to match case-sensitively or not | false | "matchCase": false |
contains | Array of strings | Strings that the value should contain | [] | "contains": ["dev", "prod"] |
notContains | Array of strings | Strings that the value should not contain | [] | "notContains": ["deploy", "pre-prod"] |
equals | Array of strings | Strings that the value should equal | [] | "equals": ["development", "production"] |
startsWith | Array of strings | Strings that the value should start with | [] | "startsWith": ["dev", "prod"] |
endsWith | Array of strings | Strings that the value should end with | [] | "endsWith": ["vm", "testing"] |
regex | Array of objects | Regular expressions that the value should match | [] | "regex": [{ "pattern": "", "flags": "g" }, { "pattern": "", "flags": "i" }] |
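A single filter object can list several of these criteria together. As a syntax illustration only (the name and values here are hypothetical), a stage filter combining a few of them might look like:

```json
"stageFilter": [
  {
    "name": "Deployment",
    "matchCase": false,
    "startsWith": ["deploy"],
    "notContains": ["pre-prod"]
  }
]
```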
All of the features that are used by adding objects to the dataSourceConfig
in a custom data stream can also be added to a tile itself. To do this, open the tile editor and switch to the Code tab. You can then add a custom dataSourceConfig
object to the dataStream
object in the editor. For example:
"dataStream": {
"id": "datastream-XXXXXXXXXXXXXXXXXXXX",
"dataSourceConfig": {
"scopeFilter": {
"kubernetesServiceEndpoint": {
"equals": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
},
"stageFilter": [{
"name": "My Stage",
"matchCase": false,
"contains": ["prod"],
"notContains": ["pre-prod"]
}],
"filter": {
"and": [
{ "key1": "value1" },
{ "key2": "value2" }
]
}
}
}
One limitation of this approach is that if you’re using a custom data stream, the existing dataSourceConfig
gets entirely overwritten by the one on the tile, so, for example, if you have a stage filter in the custom data stream and a filter on the tile, only the filter will have any effect.
Troubleshooting
Tiles can occasionally display a warning as a result of an issue when trying to fetch data from the data source. There are several standard warning messages shown for common issues:
Version mismatch
This Azure DevOps version does not support the requested analytics version
When this error is shown, you should verify the Azure DevOps server version. Some server versions do not ship with certain analytics versions.
Entity type not available
Endpoint not available in this analytics version
This error is typically caused by differences between Azure DevOps versions, misspelled entity names, and similar issues.