The Web API plugin allows you to query data from any HTTP API that returns JSON, and then visualize that data. This is particularly useful if there isn't a SquaredUp plugin available for your data source.
For more information about what this plugin does and the data streams it retrieves, see:
To add a data source click on the + next to Data Sources on the left-hand menu in SquaredUp. Search for the data source and click on it to open the Configure data source page.
You can also add a data source by clicking Add data source on the Settings > Data Sources page, but pre-built dashboards are not added when using this method.
The Web API plugin is a "hybrid" plugin, meaning it is available in SquaredUp as both a cloud and an on-prem plugin.
Use the cloud plugin if your API is available on the internet. You do not need to configure a relay agent.
Use the on-prem plugin to access an API on your internal network. You will need to configure a relay agent before you configure the Web API on-prem plugin.
An on-prem data source uses a relay agent to connect SquaredUp to a data source running on your internal network.
A relay agent is installed on a server on your internal network, and has access to your data source.
Using a relay agent means that you don't need to open your firewall to allow access.
For an on-prem plugin you will need a relay agent that can access the server hosting your on-prem data source. You do not need a relay agent for cloud plugins.
If you have already created a relay agent in SquaredUp that can access this data source, then you can skip this step and choose the agent group you want to use while Configuring the data source.
See one of the following, depending on your platform type:
For testing you could use the JSON Placeholder API. This is a JSON REST API that will show you some sample data: http://jsonplaceholder.typicode.com
Display Name: Enter a name for your data source. This helps you to identify this data source in the list of your data sources.
For example: JSON Placeholder
Agent Group: Select the Agent Group that contains the agent(s) you want to use.
This field will only appear if you are adding the on-prem plugin.
Base URL: Enter the base URL of the API to be used for requests. For example, you could use http://jsonplaceholder.typicode.com. For this sample API no further configuration is needed, so just click Add and then go to Show data on a tile.
Query arguments: Optionally, add any parameters that should be added to the Base URL.
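For example (hypothetical values), a Base URL of http://jsonplaceholder.typicode.com combined with a query argument of userId=1 and an endpoint path of /posts (configured later on the tile) would result in requests such as:

    GET http://jsonplaceholder.typicode.com/posts?userId=1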
Authentication:
None: No authentication required.
Basic: You must enter a Username and Password.
OAuth 2: Token-based authentication according to the OAuth 2.0 standard. Many APIs use OAuth 2.0 for authorization, and will require an OAuth provider to include the additional information about how to authorize against the service. If this option is selected, follow the steps in Configure OAuth 2.0 below.
Specify the details for your required OAuth 2.0 flow. For more detailed information see OAuth 2.0 Configuration.
Token URL: Enter the URL of the third-party authentication server you are using. For example, https://oauth2.googleapis.com/token.
Client ID: Enter the ID provided by your client application.
Client Secret: Enter the secret provided by your client application.
Authorization Scope: Enter your required access scopes for the third-party application server you are using. For example, if you are using the Web API plugin to access the https://www.googleapis.com/ API and you wish to use the drive/v3/files endpoint, you can use the authorization scope https://www.googleapis.com/auth/drive.readonly. If you require more than one scope, you add them all here separated by spaces.
The Web API plugin requires offline access to be requested, which requires the correct scope to be entered, dependent on the authentication server being used:
Atlassian: Request the offline_access scope.
Google APIs: Offline access isn't requested via a scope at all. Instead, you must add the access_type = offline and include_granted_scopes = true query arguments to the Authorization URL.
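For example, Google's standard OAuth 2.0 authorization endpoint with these query arguments added would look something like the following (verify the exact endpoint against Google's current documentation):

    https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&include_granted_scopes=true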
How to send credentials to the Token URL: Select where you want the client credentials you entered above to be included in the OAuth 2.0 request.
Grant Type: Select the OAuth 2.0 flow you need to use. Choose from:
Authorization Code: Enter the Authorization URL of the authorization server you are using (for example, https://www.googleapis.com/) and then click the Sign In button below. You are redirected to the authorization sign-in page. Upon returning to SquaredUp, if the request was successful, the Sign In button shows you as logged in.
Client Credentials: No additional details are required (see the example token request below).
Password: You must enter a Username and Password.
If you need to send the authorization token in the query URL, select Send authorization with query.
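To make this concrete, the Client Credentials grant described above typically results in a token request of the following form (standard OAuth 2.0 fields; depending on the How to send credentials to the Token URL setting, your provider may expect the client credentials in a header instead of the body):

    POST <Token URL>
    Content-Type: application/x-www-form-urlencoded

    grant_type=client_credentials
    &client_id=<Client ID>
    &client_secret=<Client Secret>
    &scope=<Authorization Scope>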
Headers can be added if required.
Ignore Certificate errors:
If you activate this checkbox the data source will ignore certificate errors when accessing the server. This is useful if you have self-signed certificates.
Test endpoint: Optionally, you can tick Test endpoint to run a test request and see an example payload.
Enter the details of an endpoint you'd like to run a test against to see what is returned. The information entered here is only used for the test.
Endpoint path to test: Enter an endpoint path.
Additional headers for the test: Enter any additional header names and values to be used for the test.
HTTP method for the test: GET or POST
Query arguments for the test GET: If you chose GET you can optionally add any query arguments to be used for the test.
Body for test POST: If you chose POST you can optionally enter a JSON string representing the body of the POST request.
Click Send.
The Result box will show the resulting payload.
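For instance, testing the JSON Placeholder API with an Endpoint path to test of /posts/1 and the GET method would show a payload similar to this (abridged):

    {
      "userId": 1,
      "id": 1,
      "title": "sunt aut facere repellat provident...",
      "body": "quia et suscipit..."
    }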
Restrict access to this data source: You can enable this option if you only want certain users or groups to have access to the data source, or the permission to link it to new workspaces. See data source access control for more information.
The term data source here really means data source instance. For example, a user may configure two instances of the AWS data source, one for their development environment and one for production. In that case, each data source instance has its own access control settings.
By default, Restrict access to this data source is set to off. The data source can be viewed, edited and administered by anyone. If you would like to control who has access to this data source, switch Restrict access to this data source to on.
Use the Restrict access to this data source dropdown to control who has access to the data source:
By default, the user setting the permissions for the data source will be given Full Control and the Everyone group will be given Link to workspace permissions.
Tailor access to the data source, as required, by selecting individual users or user groups from the dropdown and giving them Link to workspace or Full Control permissions.
If the user is not available from the dropdown, you are able to invite them to the data source by typing in their email address and then clicking Add. The new user will then receive an email inviting them to create an account on SquaredUp. Once the account has been created, they will gain access to the organization.
At least one user or group must be given Full Control.
Admin users can edit the configuration, modify the Access Control List (ACL) and delete the data source, regardless of the ACL chosen.
Access levels and their permissions:
Link to workspace: The user can link the data source to any workspace they have at least Editor permissions for. Data from the data source can then be viewed by anyone with any access to that workspace, so the user can share the data source data with anyone they want. The user cannot configure the data source in any way, or delete it.
Full Control: The user can change the data source configuration, modify the ACL, and delete the data source.
On a dashboard click + and then Data to add a new data tile.
Data Stream tab: Click on HTTP Request.
If HTTP Request isn't listed then perhaps the data source was added for the organization but not this workspace. Click the Data Source menu > Add new data source and look through the data sources listed. If you still can't see the data source perhaps it doesn't exist at the organization level, so click the link at the bottom of the page to add a new data source.
Click on the Objects tab (or click Next).
Objects tab: Click on the name you gave your Data Source to tick it. If you are using the JSON Placeholder API for sample data then this might be JSON placeholder. Click on the Parameters tab (or click Next).
Parameters tab: You will see the Base URL which you configured in the Data Source setup. Depending on your API, you may see data at this point, or you may need to carry out some further configuration.
Endpoint path: Optionally, enter an endpoint path. For the JSON Placeholder API you might enter /posts or /todos.
A mustache parameter is a dynamic value, the actual value will be inserted to replace the field in curly braces. For example, {{timeframe.start}} will insert the start time based on the timeframe configured within the tile, or {{name}} will insert the name of the object(s) in scope.
This data stream supports the following timeframe parameters, shown here with an example replacement value and its type:

    {{timeframe.start}}            2022-03-13T19:45:00.000Z   (string)
    {{timeframe.unixStart}}        1647200700                 (number)
    {{timeframe.end}}              2022-03-14T19:45:00.000Z   (string)
    {{timeframe.unixEnd}}          1647287100                 (number)
    {{timeframe.enum}}             last24hours                (string)
    {{timeframe.interval}}         PT15M                      (string)
    {{timeframe.durationSeconds}}  86400                      (number)
    {{timeframe.durationMinutes}}  1440                       (number)
    {{timeframe.durationHours}}    24                         (number)
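For example (a hypothetical endpoint), you could use these parameters in the Endpoint path or query arguments to pass the dashboard timeframe through to your API:

    /metrics?from={{timeframe.unixStart}}&to={{timeframe.unixEnd}}&interval={{timeframe.interval}}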
For the JSON Placeholder API a suitable visualization is shown at this point, and further configuration is optional.
Additional headers for this request: Optionally, enter any additional header names and values to be used. You can choose to encrypt a Header Value by clicking the encrypt icon. This turns the field into a password field, so that the value is hidden and the data is stored encrypted.
Mustache parameters, including the timeframe parameters listed above, can also be used in the header values.
HTTP method: Select GET or POST.
Query arguments for GET: If you chose GET you can optionally add any query arguments to be used.
Mustache parameters, including the timeframe parameters listed above, can also be used in the query argument values.
Body for POST: If you chose POST you can optionally enter a JSON string representing the body of the POST request.
Mustache parameters, including the timeframe parameters listed above, can also be used in the POST body.
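As a sketch (the field names are hypothetical and depend entirely on your API), a POST body using mustache parameters might look like:

    {
      "query": "status:open",
      "from": "{{timeframe.start}}",
      "to": "{{timeframe.end}}"
    }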
Click Send. The Result box will show an example of the resulting payload (you may need to scroll down).
The Result box shows you a preview of the data returned.
Path to data: This is where you enter the location of the results set that is returned. Check the Result box for the location of the data returned, and use that in the Path to data.
Expand inner objects: Objects inside the requested data path will be used to make extra columns of data, for example if you have columns that show [object Object].
If Path to data points to an array (for example, values like arr or obj.sub1Arr), the data stream returns one row for each element (the ten-element limit mentioned below does not apply here).
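For illustration, assume the API returned a payload like the following (a hypothetical example consistent with the descriptions below; the property names inside sub1Arr are made up):

    {
      "arr": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
      "obj": {
        "b2": 222,
        "sub1Arr": [
          { "id": 1, "name": "first", "size": 10 },
          { "id": 2, "name": "second", "size": 20 }
        ]
      }
    }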
arr will return rows with a single column called value containing all the integers from 1 to 12 (inclusive):

    value
    1
    2
    ...
    11
    12
obj.sub1Arr will return two rows and three columns.
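With the hypothetical payload above (the id, name and size properties are made up), that would look like:

    id   name     size
    1    first    10
    2    second   20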
If Path to data points to a scalar, the data stream will return a single row with a single column called result, and the value will be the scalar. For example, obj.b2 will return:

    result
    222
If Path to data points to an object, the data stream will return a single row with columns for each of the properties.
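With the hypothetical payload above, obj would return something like:

    b2    sub1Arr
    222   [object Object]

(With Expand inner objects ticked, sub1Arr would instead be expanded into extra columns.)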
Also, nested arrays will be expanded (but only to a maximum of ten elements).
Click Next or click on the Timeframe tab.
Timeframe tab: Optionally, you can specify a timeframe here, but you may have specified a set timeframe in the query or endpoint path, or used mustache parameters to use the dashboard timeframe.
At this point you might like to change the visualization, see Visualization Settings. For the JSON placeholder sample data you could choose Donut. The Shaping and Columns sections can help you configure the visualization as you need.
Shaping tab: Shaping allows you to perform filtering, grouping and sorting operations on the data you retrieve, although you may also do this using your query.
Data can be filtered according to whether data in a column meets or does not meet specified text or numerical value conditions.
Multiple filters
You are able to add multiple filter conditions using the following operators:
AND: All conditions must be satisfied (e.g. Status-Equals-Closed AND Type-Equals-Question).
OR: Any condition can be satisfied (e.g. Status-Equals-Pending OR Status-Equals-Closed).
Available filters
The following options are available when filtering data; which ones display depends on the column type.
Equals: Checks if the value of a field is the same as the specified value. For example, a Status of Active will return results where the status is Active.
Not equals: Checks if the value of a field is not equal to the specified value. It returns true if the values are different. For example, a status of Active would return results where the status is not Active.
Contains: Returns data if the specified value exists within the field value. For example, URL Contains projects will return results where the URL includes the word "projects" anywhere in the string.
Doesn't contain: Returns data if the specified value doesn't exist within the field value. For example, URL Doesn't contain projects will return results where the URL doesn't include the word "projects" anywhere in the string.
Less than: Checks if the value of a field is below the specified value. It is used for numerical or date values. For example, Incidents Less than 50 would return results where the number of incidents is below 50.
Greater than: Checks if the value of a field is over a specified value. It is used for numerical or date values. For example, Incidents Greater than 50 would return results where the number of incidents is over 50.
Is more than: Available when working with a date/time column. Checks if the value of a field is older than a given time period. You must additionally specify a time quantity and period, and whether to measure ago or from now. For example, Due Is more than 100 days from now will return results where the due date is later than the current day + 100 days. Similarly, Due Is more than 100 days ago will return results where the due date is earlier than 100 days before the current day.
Within last: Filter records that fall within a specific time range before the current date and time. You must additionally specify a time quantity and period. For example, Submitted Within last 100 days will return results where the submitted date is between the current day and 100 days ago.
Within next: Filter records that are within a specified time range after the current date and time. You must additionally specify a time quantity and period. For example, Event date Within next 7 days would return results where the event date is within the next week.
Is empty: Returns all data without a date value. Useful for identifying records where data is missing.
Is not empty: Returns all data with a date value.
You can group and aggregate data by column.
For example, for AWS cost data you might configure the following settings to display a table or bar chart of cost per label:
Group by: label
Aggregation type: Total
Aggregation column: Amount
Which columns are available depends on the data stream you chose.
Configuring grouping enables different visualizations to be displayed, such as bar chart and donut. For example, grouping tickets by channel allows you to show a donut of how many tickets were logged by email vs web form.
Bucket by
If you group by a time column, and further grouping is possible, the Bucket by dropdown appears. Use this field to control how the time data is grouped, for example by hour, day, month etc.
Aggregation type and column
Use this dropdown to choose how to summarize your data, for example as a count, average or total. For example, you could do the following:
When creating a Bar Chart of ticket data you might configure the following settings to show a graph of tickets per day:
Group by: Date created
Bucket by: Day
Aggregation type: Count
When creating a Bar Chart of Azure Resource Group cost you could configure the following settings:
Group by: Timestamp
Bucket by: Day
Aggregation type: Total
Aggregation column: Cost
The Sort section allows you to select one or more columns to sort your data by, in either ascending or descending order.
While this sets the default sort order of the data, you can always click on a column heading to sort the data table on the fly.
To sort by multiple columns, click Add sort by to add a new row of sort fields to the list. This allows you to perform more complex sorts, such as sorting data by the date it was created, then sorting those results alphabetically.
Enabling the Top toggle allows you to specify the top n rows of data to display.
For the JSON Placeholder sample donut you could use the filter: Id Less than 30
Columns tab:
Use the Columns tab of the tile editor to format the columns of the table on the Data tab. SquaredUp automatically defines the metadata retrieved from data streams so the data is assigned the correct data type; however, in some circumstances you may want to override this.
For example, when retrieving data using the Web API plugin, scripting, or custom query data streams (such as Splunk Enterprise plugin), the assigned data type may not be quite correct or as you expect.
Formatting columns
Use the following options to format your columns.
Name: To rename a column, click the current Name value and enter a new one. Columns that can be renamed display the Rename column icon when hovered over.
Type: Select an option from the Type dropdown to change the data type of the column. If any additional options are available to configure, the dropdown is expanded below.
Value: Displays the original value of the column.
Formatted: Displays the formatted value of the column.
Add a copy of this column: Click to duplicate a column. The copied field displays Copy of [field name] above its Name.
Remove this column: Click to delete a cloned column.
Comparison columns
Comparison columns are used to compare two values, for example you may want to compare the number of tickets raised this month to the number of tickets raised last month. You can choose to show the value as an absolute change (for example, 12 more tickets) or as a percentage change (for example, a 28% increase).
When a column has a Type of Number, the Add comparison button displays at the end of the row, which you can click to open the Add comparison window. From this window, if you have multiple columns with a Type of Number, you can create a comparison column by doing the following:
Column A: Select the first column to compare against. Automatically populated with the column for which you clicked Add comparison.
Column B: Select the second column to compare against.
Output: Select how to display the comparison value. This value is displayed in the Preview field. Choose from:
Absolute: Show the numerical value of Column A - Column B.
Percentage: Show the ratio of Column B to Column A as a percent.
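For example (hypothetical numbers): if Column A is 50 and Column B is 40, Absolute would display 10 (50 - 40), and Percentage would display 80% (40 as a percentage of 50).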
Click Update to create the comparison. A new column is added to the Output table.
Additional options
Some data types have advanced settings that can be configured in the options section, which is displayed whenever you change the data Type or when you click expand next to the column Name.
Output Format: Enter a custom format to display date values as a string. Any specified output format is supported, for example dd/mm/yy, dd/mm/yyyy or d/M/Y. By default, dates and times are displayed in your local timezone to ensure the data makes sense to you.
Input Format: Enter the format that corresponds to the input date. For example, if your data has values such as 05/27/24 01:44 PM, then the Input Format should be set to MM/dd/yy hh:mm aa. Any input format is supported; however, if a custom input format is missing any time zone information, the input is always assumed to be UTC. This field is required if the date strings for the column are not ISO-8601 formatted (for example, 2024-09-09T13:52:25.281Z).
Currency: Select the currency to display the value in. This does not convert the currency value.
Decimal Places: Formats the number of decimal places for a supported data type. Enter a value between 0 and 20.
Link Text: Specify the text of the URL links in the column.
Format as duration: Toggle between displaying the time value as minutes and seconds, or as seconds only.
Map Values to States: Define the values that trigger states by selecting a value for each of the corresponding dropdowns.
A custom data stream is a data stream that you, as an advanced user, can write yourself.
Any data stream you create can be edited by clicking the edit button (pencil) next to it in the tile editor, and also from Settings > Advanced > Data Streams.
You may wish to create your own custom data stream for an HTTP Request using the information below. When writing your own data stream you can choose either a global or scoped entry point. You will need to write your own custom data stream if you want a scoped data stream, because the configurable data stream HTTP Request can only create a global data stream.
In SquaredUp, browse to Settings > Advanced > Data Streams.
Click Add custom data stream.
Add your custom data stream by entering the following settings:
Name: Enter a display name for your data stream.
The display name is the name that you use to identify your data stream in SquaredUp. It has no technical impact and doesn't need to be referenced in the data stream's code.
Data source: Choose the data source this data stream is for. After you've chosen the data source the Entry Point field displays.
Entry Point: Specify the data stream entry point and enter the Code below.
Each data stream uses an entry point, which can either be global (unscoped) or scoped, and this determines whether the data stream uses the tile scope.
Data streams can be either global or scoped:
Global data streams are unscoped and return information of a general nature (e.g. "Get the current number of unused hosts").
A scoped data stream gets information relevant to the specific set objects supplied in the tile scope (e.g. "Get the current session count for these hosts").
To find out which entry point to select and get code examples for the Code field, see the help below.
Click Save to save your data stream.
Creating a custom data stream allows you to create a scoped data stream, i.e. a data stream that makes use of objects in the scope.
Which entry point do I have to select from the dropdown?
HTTP Request (scoped)
You should set the matches statement so that your custom data stream appears when you select the appropriate object types in the tile editor. When using the scoped entry point you can use additional mustache constructs (targetNodes, sourceId, and sourceIds) in selected dataSourceConfig parameters, as shown below. The separator inserted between sourceId values in the sourceIds replacement string can be changed by setting idSeparator in dataSourceConfig.
Parameters
name (Mandatory): The internal name of the data stream. Can be used to refer to this data stream in a tile's JSON instead of using the data stream's internal ID.
endpointPath (Mandatory): (string) The endpoint path, relative to the data source config's base URL, to be queried (supports mustache parameters).
httpMethod (Mandatory): (string) "get" or "post".
headers (Optional): (array of key, value pairs) Any additional headers for this request (supports mustache parameters in values only).
getArgs (Optional): (array of key, value pairs) Any additional query arguments (supports mustache parameters in values only).
postBody (Optional): (string) Only if httpMethod is "post"; a JSON string defining the body (supports mustache parameters).
idSeparator (Optional): (string) Controls how the replacement value sourceIds is generated.
pathToData (Optional): (string) Where in the returned payload the desired data is to be found.
expandInnerObjects (Optional): (boolean) Whether to expand inner objects and arrays in the desired data.
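As a sketch only (the endpoint and property names are hypothetical, and the mandatory matches block described in the note below is omitted), a scoped data stream's dataSourceConfig using these parameters and the sourceId replacement might look like:

    "dataSourceConfig": {
        "endpointPath": "/api/v1/hosts/{{sourceId}}/metrics?from={{timeframe.unixStart}}&to={{timeframe.unixEnd}}",
        "httpMethod": "get",
        "pathToData": "data.results",
        "expandInnerObjects": true
    },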
Note: Defining the matches parameter is mandatory.
With the matches parameter you define for which objects the data stream will be shown in SquaredUp. It works like this: When you configure a tile, you have to choose its scope. If this scope contains objects you specified here in the matches parameter, the data stream will be shown in SquaredUp under Data Streams. If the scope doesn't contain objects specified here, the data stream will be hidden. This keeps things clean and simple since you'll only see the data stream when it's relevant for the scope you chose. As a best practice you should limit the data stream to objects that make sense for the specific use case of this data stream.
Format for matches:
//If you want to specify only one value of an object property
"matches": {
    "ObjectProperty": {
        "type": "equals",
        "value": "ValueOfTheObjectProperty"
    }
},

//If you want to specify multiple values for an object property
"matches": {
    "ObjectProperty": {
        "type": "oneOf",
        "values": ["ValueOfTheObjectProperty1", "ValueOfTheObjectProperty2", "ValueOfTheObjectProperty3"]
    }
},
Example for limiting a data stream to objects:
If you are using multiple values for the object property, you can decide if you want the data stream to be visible for objects that match all of the criteria or at least one of the criteria.
Let's say you have two values you want objects to have in order for the data stream to be visible for them:
a SourceName property with the value AppDynamics (meaning objects that come from the AppDynamics data source)
a type property with the value app (meaning application objects)
If you want the data stream to be visible only for objects that match both of the criteria, your code would look like this:
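A minimal sketch of that matches block, assuming both properties must match exactly (the property keys sourceName and type follow the example above; use the property names exactly as they appear on your objects):

    "matches": {
        "sourceName": {
            "type": "equals",
            "value": "AppDynamics"
        },
        "type": {
            "type": "equals",
            "value": "app"
        }
    },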
Note: If you run into errors when configuring the matches parameter, check if you're dealing with a global entry point.
Global entry points can't use specific objects in the matches parameter. You can identify global entry points by their name: they have "Global", "No Scope" or "Unscoped" added to their name.
There are two possible options for the matches parameter for global entry points:
"matches": "none",
When creating a tile, the Data Stream will be shown as long as no scope is selected. As soon as a scope is selected, the Data Stream will be hidden.
"matches": "all",
When creating a tile, the Data Stream will be shown as soon as any scope is selected.
rowPath (Optional)
SquaredUp expects data in table form, and this is where you define how the table containing your returned data will be structured.
The rowPath (Path to data) tells SquaredUp which items you want to convert into rows.
Now it depends on what data you want to base your table on: do you want rows per object or per tag?
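For illustration, assume the API returned a payload of the following shape (a hypothetical example that would produce the tables below):

    {
      "results": [
        { "name": "object 1", "tags": ["tag 1", "tag 2", "tag 3"] },
        { "name": "object 2", "tags": ["tag 1", "tag 4"] }
      ]
    }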
If you want to see which objects have which tags, your rowPath would be results, and your table would look like this:

    name       tags
    object 1   tag 1, tag 2, tag 3
    object 2   tag 1, tag 4
If you want to turn each tag into a row and see which objects they are applied to, your rowPath would be results.tags, and your table would look like this:

    tags    name
    tag 1   object 1
    tag 1   object 2
    tag 2   object 1
    tag 3   object 1
    tag 4   object 2
As you can see in the example, each property gets turned into a column, and the items of the property you chose as the rowPath are turned into rows.
metadata (Optional, but recommended)
The metadata parameters are used to describe columns in order to tell SquaredUp what to do with them. You can do multiple things with the metadata parameters:
Specify how SquaredUp should interpret the columns you return and, to an extent, how their content is displayed. You do this by giving each column a shape.
The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape SquaredUp decides how to display this column, for example to display a URL as a clickable link.
Filter out or just hide columns. Only the columns you define in metadata will be returned in the results. This helps you to filter out columns you don't need. If you need the content of a column but don't want to display it, you can use the visible parameter.
Give columns a nicely readable display name.
Assign a specific role to columns.
The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization.
If you don't specify any metadata, all columns will be returned and SquaredUp will do its best to determine which columns should be used for which purpose. If you're returning pretty simple data, for example just a string and a number, this can work fine. But if you're returning two columns with numbers it gets trickier for SquaredUp to figure out which one is the value and which one is just an ID or some other number.
Parameters:
Before you start specifying metadata, leave it empty at first and retrieve all the raw data with your new data stream once.
To do this, finish creating your custom data stream without metadata and create a tile with this data stream. The Table visualization will show you all the raw data.
This gives you an overview of all columns and their content and helps you decide which columns you need and what their shapes and roles should be. It's also essential for getting the correct column name to reference in the name parameter.
Use this information to go back to the data stream configuration and specify the metadata.
name (Mandatory): Enter the name of the column you are referencing. To find the name of a column, get the data from this data stream once without any metadata (see the tip above). You'll see the column name when you hover over the column in the Table.
displayName (Optional): Give the column a user-friendly display name.
shape (Recommended): The shape you assign to a column tells SquaredUp what the column contains (for example, a number, a date, a currency, a URL, etc.). Based on the shape, SquaredUp decides how to display the column, for example displaying a URL as a clickable link. See the list of shapes below this table for the available shapes.
role (Recommended): The role you assign to a column tells SquaredUp the purpose of the column. For example, if you have two different columns that contain numbers, you need to assign the role value to the column that contains the actual value you want to use in your visualization. See the list of roles below this table for the available roles.
visible (Optional): true or false. Use this if you need a column's content but don't need to display the column itself. Example: Column A contains the full link to a ticket in your ticket system. Column B contains the ticket ID. You want to use the ticket ID as a label for the link, turning the long URL into a much nicer to read "Ticket 123". You therefore need the content of column B, to assign it as a label for column A; but since the URL is now displayed as the ticket ID, it would be redundant to still display column B, so you hide column B by setting visible to false.
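Putting these parameters together, a metadata block might look like this (a sketch only; the column names are hypothetical, and the shapes used are taken from the list below):

    "metadata": [
        { "name": "data.title", "displayName": "Post title", "shape": "string" },
        { "name": "data.views", "displayName": "Views", "shape": "number", "role": "value" },
        { "name": "data.internalId", "visible": false }
    ],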
There are many different shapes you can use for your columns and the list of possible shapes gets expanded constantly:
Basic types, like: boolean, date, number, string
Currency types that get displayed with two decimal places and their currency symbol (for example $23.45), like: currency (generic currency), eur, gbp, usd
Data types, like: bytes, kilobytes, megabytes
Time types, like: seconds, milliseconds, timespan
The status type : state
Utility types, like: customUnit, url (will be displayed as a link)
Tip:
Some shapes can be configured.
If a shape is configurable, you can edit how the shape displays data in SquaredUp.