Random Procrastination

Darren Gosbell


Setting a dynamic default slicer value in Power BI

Power BI does not currently support dynamically setting a default value for a slicer using an expression. One common example of where this sort of capability would be really useful is a dashboard that you want to default to showing the current day's data, while still letting the user select a custom date filter if they so desire.

I could go into my report and set a slicer to filter it for today's date of 15 May 2023, but when I open the report tomorrow that slicer will still have the hard-coded value of 15 May 2023. You could potentially create measures that use something like CALCULATE([Sales], 'Calendar'[Date] = TODAY()), but there are a number of problems with this. While it will automatically show the Sales amount for the value of TODAY(), on the Power BI Service "today" is based on UTC time, so depending on what timezone you are in the day can change part way through your working hours.

While there is currently no built-in way of configuring this within the slicer itself, there are workarounds, and I'm going to walk you through one approach that I've used in the past. This approach has a couple of moving parts. The first is that, as part of a nightly data load process, I update a number of columns in a shared "Calendar" table.

Implementation

In the example below I’ve added 3 indicator columns for IsToday, IsYesterday and IsCurrentMonth. This post was published on 15 May 2023 so for that date the IsToday column has a value of True.

NOTE: I’ve simulated this in a simple Power BI example using a calculated table, but you need to be very careful using a calculated table in a production scenario since the Power BI service runs in UTC time so depending on when your data transforms get run your indicator columns could be updated incorrectly.
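For reference, a minimal sketch of that kind of calculated table might look like the following (the date range here is arbitrary, and remember that TODAY() is evaluated in UTC on the Power BI service, which is exactly the caveat above):

Calendar =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2022, 1, 1 ), DATE ( 2024, 12, 31 ) ),
    "IsToday", [Date] = TODAY (),
    "IsYesterday", [Date] = TODAY () - 1,
    "IsCurrentMonth", YEAR ( [Date] ) = YEAR ( TODAY () ) && MONTH ( [Date] ) = MONTH ( TODAY () )
)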

Once I’ve built out the body of my report, adding visuals and slicers I create 4 bookmarks:

  1. Custom Range – this has no report level filters and has my date slicer set as visible
  2. Today – this has a report level filter for IsToday=True and sets the date slicer to hidden
  3. Yesterday – this has a report level filter for IsYesterday=True and sets the date slicer to hidden
  4. Current Month – this has a report level filter for IsCurrentMonth=True and sets the date slicer to hidden

Then I add 4 buttons to my report, one for each of the bookmarks above. As I click on each button it changes the filtering appropriately.

I save and publish my report with the Today bookmark selected. This means that each night, when my data load routine runs and my data model is refreshed, the IsToday column is updated. The next morning, when my users open the report, they see the data automatically filtered for the current date. And if they wish to view some other date, they have a set of handy shortcuts for common date filters, or they can click on the Custom Range option to set their own custom filter.

Limitations

Where this approach falls down a bit is when you have multiple pages in your report and you want the date filters to affect all the pages. For the indicator columns it's easy enough to set the filters linked to your bookmarks as report-level filters. And you can set up your "Custom Range" slicer as a synced slicer so that it affects multiple pages. The tricky bit is showing and hiding the slicer, since a bookmark can only show and hide a visual on the current page.

The approach I chose to take was to make the “Custom Range” bookmark have the “Current page” option set so that the user was always returned to the first page in the report if they selected that option. It’s not ideal, but otherwise you would need different “Custom Range” bookmarks per page and it just gets a bit messy.

Logging Parameters for a Paginated Report

If you look at the Power BI audit logs for paginated reports you can see which user ran which report and when, but you cannot see any parameter values. This is by design, since parameter values are considered "customer content" and no content like this is included in the Power BI logs.

But if your company has a requirement to log which parameter values were used when running a report then this is something you will need to build yourself.

One possible way of doing this is to add a dataset to your report which does an insert command or calls a stored procedure to insert a record into a custom audit table.

I did this using Azure SQL, creating the following table:

CREATE TABLE [dbo].[ReportExecutionLog](
	[ReportName] [nvarchar](255) NULL,
	[UserName] [nvarchar](255) NULL,
	[Parameters] [nvarchar](max) NULL,
	[DateTime] [datetime] NULL
)

And the following Stored Procedure:

CREATE PROC [dbo].[sprInsertLogRecord]( @reportName NVARCHAR(255), 
                                        @userName NVARCHAR(255), 
                                        @params NVARCHAR(MAX))
as
INSERT INTO [dbo].[ReportExecutionLog]
           ([ReportName]
           ,[UserName]
           ,[Parameters]
           ,[DateTime])
     VALUES
           (@reportName
           ,@userName
           ,@params
           ,getdate())

Then I added a connection to my Azure SQL database in Report Builder and created a new dataset called “Logging” and configured it to call this stored procedure.

Then in the parameters section I set it up to pass in the global ReportName and UserID values

Note: Unfortunately there is currently no way to get the ReportID or WorkspaceID, hopefully this gets added at some point in the future. For the time being this means that you need to make sure that any reports you want to add logging to must have unique names. You could possibly look at adding some unique report ID to your report name to ensure this.

In the screenshot above you can see that there is an expression for the parameter values that I want to capture; this calls a small custom function. The code in the expression is as follows, and I'm just passing in an array of the parameter names that I want to log:
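A minimal sketch of that expression looks like the following, where the parameter names in the array are placeholders for whatever parameters your own report has (the @reportName and @userName parameters of the stored procedure are mapped to the built-in =Globals!ReportName and =User!UserID values):

=Code.GetParameterValues(New String() {"StartDate", "EndDate", "Region"})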

The GetParameterValues function is as follows:

Public Function GetParameterValues( ByVal paramsToLog As Array) As String
	Dim params as String
	Dim i as Integer
	Dim pName as String

	' loop over the array of parameter names that were passed in
	For Each pName in paramsToLog
		Dim p as Parameter
		p = Report.Parameters(pName)
		Dim pVal as String
		' multi-value parameters are joined into a single comma-separated string
		If (p.Count > 1) Then
			pVal = Join(p.Value, ",")
		Else
			pVal = p.Value
		End If	

		' append "name=value" plus a newline for this parameter
		params = params & pName & "=" & pVal & vbCrLf	
	Next pName

	Return params
End Function

I entered this in the "Code" property for the report by clicking in the dark gray background area to expose the report-level properties.

Then, clicking on the … button next to the code value, I pasted the code into the "Custom code" area.

This then produces output like the following.

Note that in my function I am using a newline between each parameter, which SSMS does not show in its grid output, but you can see them if you build a report over this log table. If you want to use a different delimiter between the parameters you could simply replace the vbCrLf value in the function above with something else.

Setting MaxParallelismPerQuery on a Power BI Dataset using Tabular Editor

If you saw this announcement last month – Query parallelization helps to boost Power BI dataset performance in DirectQuery mode | Microsoft Power BI Blog | Microsoft Power BI – about the new MaxParallelismPerQuery setting and you were interested in testing it, but were not sure how to run the sample code, then read on.

The sample code in the blog post above is a full .NET program that you could compile and run from Visual Studio or from the command-line compiler, but that is something not all BI developers are comfortable doing. If you are able to use Tabular Editor then there is a much simpler way to change this setting.

  1. Launch Tabular Editor and connect to the XMLA endpoint for your workspace (so this requires a premium workspace).
  2. Select the dataset you wish to test.
  3. Paste the code below into the “C# Script” tab in TE2 or open a new C# Script document in TE3. The sample script sets the parallelism to 10, you can experiment with different values by changing the value in the last line of the script.
  4. Then click the run button.
  5. Finally click the save button to save this change back to Power BI
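The script itself is only a few lines long. A rough sketch of the idea is shown below; this is not the exact gist, and it assumes a Tabular Editor version where the compatibility level and the MaxParallelismPerQuery property are exposed (and writable) on the wrapper model – in older versions you may need to go via the underlying TOM objects instead.

// Raise the compatibility level if needed so that the new property is available
// (1569 is the level mentioned in the Microsoft announcement).
if (Model.Database.CompatibilityLevel < 1569)
    Model.Database.CompatibilityLevel = 1569;

// Set the degree of query parallelism to test - change this value to experiment.
Model.MaxParallelismPerQuery = 10;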

If you cannot see the code above you can download it directly from here: TabularEditorSetMaxParallelism.csx (github.com)

And once you have updated the compatibility setting of the database you can also change the Max Parallelism Per Query setting directly from the model explorer in Tabular Editor

DAX Studio 3.0.7 Released!

Today I am pleased to announce that the next update to DAX Studio is now available. It is a minor point release that includes one or two small updates and a number of fixes, including what is hopefully a definitive fix to the "xmlReader in use" errors some of you have been seeing in 3.0.6.

Check out the detailed release notes here: v3.0.7 Release | DAX Studio

DAX Studio 3.0.6 released!

I am pleased to announce that v3.0.6 of DAX Studio has just been released. You can download this version from https://daxstudio.org, and you can find more details on what's new in the blog post here: v3.0.6 Release | DAX Studio

Some of you may have already seen the video from SQLBI showing the new timeline feature in Server Timings – if not, you should definitely watch this as it has some great information on this new feature and how to use it.

Generating Measure descriptions with ChatGPT – part 2

In my previous post I had a short 40 line script for Tabular Editor (free | paid) which could generate descriptions for the measures in your tabular model. There were a number of interesting questions in the comments to that post which I thought would make a follow up post worthwhile.

The things I want to cover in this post are how to run the script using TE2, how to run it using TE3, and how to deal with rate limiting.

How to run the script using TE2

Below is a very brief tutorial on running a script using Tabular Editor 2

  1. If you launched Tabular Editor from the External Tools ribbon in Power BI desktop you can skip this step, otherwise click on this icon to connect to your data model.
  2. Click on the C# Script tab and paste in the script from my earlier post
  3. There are instructions in my earlier post for requesting your own API key from OpenAI; you need to paste the key into this line between the quotes (").
  4. Optional: if you are targeting users that speak a language other than English, you can update the question text on line 10 to your desired language (from what I understand ChatGPT understands about 100 different languages, so you might need to test whether this works in your language).
  5. Click the play button to run the script; this step could take a while depending on the number of measures and whether you hit throttling (more on this below).
  6. Once the script has run you can check some of the descriptions by clicking on a measure in the model explorer. At this point all the descriptions are still local and have not been saved back to your data model.
  7. The text generated by ChatGPT will appear in the Description property for the measure.
  8. If you are happy with the generated descriptions clicking the Save Changes button will save the new descriptions back to your data model.

How to run the script using TE3

The steps for Tabular Editor 3 are almost the same with the main difference being at step 2 because TE3 allows you to have multiple C# scripts open at the same time.

  1. If you launched Tabular Editor from the External Tools ribbon in Power BI desktop you can skip this step, otherwise click on this icon to connect to your data model.
  2. Click on the New C# Script button to create a C# script document and paste in the script from my earlier post
  3. There are instructions in my earlier post for requesting your own API key from OpenAI; you need to paste the key into this line between the quotes (").
  4. Optional: if you are targeting users that speak a language other than English, you can update the question text on line 10 to your desired language (from what I understand ChatGPT understands about 100 different languages, so you might need to test whether this works in your language).
  5. Click the play button to run the script; this step could take a while depending on the number of measures and whether you hit throttling (more on this below).
  6. Once the script has run you can check some of the descriptions by clicking on a measure in the model explorer. At this point all the descriptions are still local and have not been saved back to your data model.
  7. The text generated by ChatGPT will appear in the Description property for the measure.
  8. If you are happy with the generated descriptions clicking the Save Changes button will save the new descriptions back to your data model.

Dealing with Rate Limiting

If you have not worked with API calls, you may not have come across the term "rate limiting" before. Basically, it is a mechanism that APIs use to prevent users from monopolizing the resources of a service by limiting the number of calls that can be made within a given timeframe.

In the case of the OpenAI APIs, they document their rate limits here, and at the time of writing they only allow 20 calls per minute for a free account. Once you hit that limit the API will return a 429 error code, which is a common code meaning "Too many requests". There are 2 approaches that you can use to work around this.

  1. You can add pauses in the code to wait until the next minute, when you can then make another 20 calls (a rough sketch of this approach is shown below).
  2. You can upgrade to a paid account, which has a much higher limit.
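A rough sketch of option 1 in a Tabular Editor script is shown below. The OpenAI call itself is omitted; this only illustrates the pausing logic:

int callCount = 0;
foreach (var m in Model.AllMeasures)
{
    // After every 20 calls, wait a full minute so the next batch of calls
    // falls into a fresh rate-limit window.
    if (callCount > 0 && callCount % 20 == 0)
        System.Threading.Thread.Sleep(60 * 1000);

    // ... call the OpenAI API for measure m and set m.Description here ...

    callCount++;
}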

If you would like to see an example of an updated script which will skip measures that already have descriptions (so descriptions you have manually updated, or ones generated before an issue part way through a previous run, are left untouched), you can download it here:

Update Tabular Model Descriptions from ChatGPT with rate limiting logic (github.com)

A Final Warning

The Large Language Model (LLM) behind ChatGPT is an amazing piece of technology and it's only going to get better over time. But I've also seen it described as "a B grade intern who hallucinates occasionally". In its simplest form, all these AI models do is break a piece of text into a series of tokens and then predict which tokens are likely to come next based on a corpus of training material.

In my testing I’ve seen a number of examples where the prediction has missed some important detail. And in a couple of cases, I’ve even seen it include a modification to the original measure and then it has described the modified measure, not the original one!

So, while this is definitely a huge time saver, just be aware that it’s not perfect. Before publishing these descriptions into production, I would suggest doing a review of them to ensure that they make sense and include all the important details.

If you want to see all the descriptions at once you can use a tool like DAX Studio to run the following query to list them.

SELECT [Name], [Description], [Expression] FROM $SYSTEM.TMSCHEMA_MEASURES

Automatically generating measure descriptions for Power BI and Analysis Services with ChatGPT and Tabular Editor

Note: Make sure to go on and read part 2 after reading this which has some improvements to the script.

It seems like everyone is playing with ChatGPT at the moment, doing everything from getting it to write code for them to asking it to write odes to their favourite visuals. Writing DAX code is interesting from the perspective of someone building a data model. But one thing that has always been hard, from the perspective of a consumer of a data model, is understanding the logic behind those DAX calculations.

As you've probably already guessed from the title of this post, rather than generating DAX I wanted to see if we could go the other way and get ChatGPT to provide simple descriptions of what a DAX expression was doing.

To test this I did not want to use a basic measure, so I found a moderately complex expression that someone else had originally had ChatGPT generate for them, and asked it to explain the calculation. The first time I tried this it produced a very long description breaking down each of the function calls, but by revising the way I asked the question I eventually got to a point where it was giving me a nice concise explanation.

When I posted the following question to ChatGPT:

Explain the following calculation in a few sentences in simple business terms without using DAX function names:
CALCULATE (
    SELECTEDVALUE ( 'Suppliers'[CompanyName] ),
    TOPN (
        1,
        SUMMARIZE (
            'Products',
            'Suppliers'[CompanyName],
            "@StockValue", SUMX ( 'Products', 'Products'[UnitsInStock] * 'Products'[UnitPrice] )
        ),
        [@StockValue],
        DESC
    )
)

It came back with the following answer which I was pretty happy with:

This calculation is finding the company name of the supplier with the highest total value of stock for a given product. It does this by first multiplying the units in stock by the unit price for each product, then summarizing that total value by supplier. It then finds the top supplier by that total value and returns their company name.

This was great for a single measure, but not all of the consumers of your data model will have access to view the DAX expression. And even if they did it would be a pain to have to keep jumping out to an external website to understand what every measure is doing.

So this made me wonder if there might be some way of injecting these descriptions into the data model. And as it turns out, ChatGPT already has a REST API. All you need to do is sign up for an account at https://platform.openai.com (if you've been experimenting with ChatGPT you probably already have an account), generate an API key for your account, and then you can make requests of ChatGPT from another program.

View and Create API keys from the account link in the top right corner

Note: free accounts are limited to 20 calls per minute (see Rate Limits – OpenAI API ). So for large models you would either need to add logic to include a 1 minute delay every 20 measures or upgrade to a paid plan.

From there I set up the following Tabular Editor script, which will loop through all of the measures in your data model and update each description with the one generated by ChatGPT. All you need to do to run this is paste your own API key into the apiKey constant on line 8.

For illustration purposes I’ve also included the original DAX expression, but you don’t need to keep that if you don’t want to.

Note: the script has been tested in both TE2 (free) and TE3 (paid) and is available as a gist on github https://gist.github.com/dgosbell/f3253c7ec52efe441b80596ffddea07c
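If you cannot open the gist, the overall shape of the script is roughly as follows. This is a simplified sketch rather than the exact gist: it assumes the completions endpoint and text-davinci-003 model that were current when this post was written, and that the script host allows the System.Net.Http and Newtonsoft.Json references.

#r "System.Net.Http"
using System.Net.Http;
using System.Text;
using Newtonsoft.Json.Linq;

// paste your own OpenAI API key between the quotes below
var apiKey = "<your API key here>";
var uri = "https://api.openai.com/v1/completions";
var question = "Explain the following calculation in a few sentences in simple business terms without using DAX function names:\n\n";

using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + apiKey);

    foreach (var m in Model.AllMeasures)
    {
        // build the request body asking for a description of this measure's expression
        var body = new JObject(
            new JProperty("model", "text-davinci-003"),
            new JProperty("prompt", question + m.Expression),
            new JProperty("max_tokens", 256)
        ).ToString();

        var response = client.PostAsync(uri, new StringContent(body, Encoding.UTF8, "application/json")).Result;
        response.EnsureSuccessStatusCode();

        var json = JObject.Parse(response.Content.ReadAsStringAsync().Result);
        var description = json["choices"][0]["text"].ToString().Trim();

        // write the generated text back, keeping the original expression for reference
        m.Description = description + "\n\nExpression:\n" + m.Expression;
    }
}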

The updated script including logic for dealing with rate limiting is included in my following post here

Before running the script hovering over a measure in my data model only shows the measure name

the default tooltip showing just the measure name

After running the script, you get a much more informative tooltip

the expanded tooltip showing the new auto-populated description

Interestingly, the descriptions do seem to change a little bit from one run to the next, and in spite of asking it not to use DAX function names in the description, sometimes it still sneaks one in. The description below was particularly bad, where it just described the SUMX function.

But the advantage of this technique is that you can re-run it for single measures if you like or you could manually edit the description field if the description needed a small improvement.


Update: 16 Feb 2023 – added note about API rate limiting

Update: 17 Feb 2023 – See part-2 here for short tutorials on how to run the script and an updated version which deals with the API rate limiting

Power BI: Hacking a bar chart into a table

I was working on a report with a business user, and one of the outputs he wanted to see was a list of over 100 tasks that needed to be performed on specified groups of equipment.

One of the initial views they wanted to see was a list of these tasks with about 7-8 descriptive columns, so a table visual made the most sense. However, for each of those tasks there would be 1 or more pieces of equipment, and each piece of equipment could be in 1 of 4 states depending on the last time the task was performed.

The states that a piece of equipment could be in are:

  • Compliant (the task was performed before the due date)
  • Due Now (the due date has passed, but each task has a certain amount of tolerance and we are within that tolerance)
  • Overdue (the task is past the due date and tolerance)
  • Missing (cannot find any historic occurrence of this task)

The above information is just for some context, but the key takeaway is that we had a table and 4 percentages that added up to 100% across each row.

The problem was that the information was very text heavy and it was difficult to see which tasks were OK and which ones needed attention without reading each row carefully, one at a time. I initially tried adding some conditional formatting of the background colours and that helped a little bit, but it still was not clear.

What would have been really nice was to use a 100% stacked bar chart, but we had too many descriptive columns to use one of these visuals on its own and we had too many rows to be able to just position one next to the table.

If you are interested in seeing an implementation of each of the approaches outlined below a copy of the PBIX file I used for the images in this blog post will be linked at the end of this post

Approach 1: Using an SVG Measure

What would have been ideal would be to use a measure to generate an SVG image, and there are a number of blog posts available that show how you can generate SVG strings in DAX. Unfortunately, all images in tables and matrices are forced to be square, so you cannot specify a rectangle that is wider than it is high.

If you would like to see the limitation removed that forces images to have the same width and height please vote for this idea

I did do a quick test to confirm that this restriction was an issue and it produced output like the following which was no good as it forced each row to be too tall.

At this point I reached out to some friends to see if anyone else had some ideas for solving this issue or whether I would need to consider a different design for the report. This ended up in a great 3-way conversation between myself, Daniel Marsh-Patrick ( b | t ) and Kerry Kolosko ( b | t )

Approach 2: Using a Measure with extended unicode characters

After explaining the issue and what approaches I had already considered Daniel quickly came up with the idea of using extended unicode characters to approximate a bar chart. If you scroll down to the bottom of this page which shows some of the extended unicode characters you can see that there are some characters which are colored squares.

There are only 7 different colours (and a black and white square) but that is enough to build a simple bar chart.

Using the following measure we are able to generate a series of coloured squares to represent each of our percentages. (Note: you should see different coloured squares in the code below; if you don't, keep reading for an explanation of this.)

Unicode Chart = 
    VAR purple = "🟪" 
    VAR brown  = "🟫" 
    VAR green  = "🟩" 
    VAR red    = "🟥" 
    VAR orange = "🟧" 
    VAR yellow = "🟨" 
    VAR blue   = "🟦" 
    VAR white  = "⬜" 
    VAR black  = "⬛" 
    VAR Scale = 10
    VAR Compliant = COALESCE(MROUND([Compliant %], (1/Scale)) * Scale,0)
    VAR DueNow = COALESCE(MROUND([Due Now %], (1/Scale)) * Scale, 0)
    VAR Overdue = COALESCE(MROUND([Overdue %], (1/Scale)) * Scale, 0)
    VAR Missing = COALESCE(MROUND([Missing %], (1/Scale)) * Scale, 0)
    RETURN 
        IF (HASONEVALUE('Tasks'[Task]), 
                REPT(green, Compliant) & 
                REPT(white, DueNow) &
                REPT(yellow, Overdue) &
                REPT(red, Missing)
                , BLANK()
                )

I selected a scale value of 10, which builds a simplistic bar chart where each square represents 10%. This chart uses the following colour coding:

And this produces a result like the following, which is really cool since all of the logic is contained in a single measure:

Unicode bar chart

However, you will notice that there are a couple of issues with the bar charts above. This is because I have specifically chosen the amounts I used in the test data to highlight some of the problems with this sort of approach.

  • Task 1: appears to be 100% compliant, but is actually only 99%
  • Task 2: has 11 squares due to multiple categories being rounded up
  • Task 3: has only 9 squares due to multiple categories being rounded down
  • Task 4: has 12 squares due to multiple categories being rounded up

In my specific circumstance, since the goal is to get every task to a status of 100% compliant, the issue with Task 1 was actually the most critical. I did not want something to appear to be 100% compliant if it was not actually at 100%. While annoying from an aesthetics point of view, the issue of there not always being exactly 10 squares was not a deal breaker.

However, we did find one deal breaker with this approach: on some machines it appears as follows, with just a series of white squares on every row:

I think this is probably because Power BI uses the fonts from the client machines when rendering text. So maybe there were different versions of the standard fonts. But even on some machines supposedly running the identical version of Windows 10 we still saw instances of this issue. This is pretty much a deal breaker for this approach as there was no way to guarantee that the necessary extended characters would be available on all the client machines.

Approach 3: A creative use of conditional formatting

It was at this point that Kerry came up with the inspired suggestion of creating 10 columns and using conditional background colours to simulate the bar chart.

I started off by creating the first measure using the following expression.

Status 01 = 
var block = 1   // the number of the current block
var scale = 10  // controls the number of blocks
var comp =   MROUND([Compliant %],(1/ scale)) * scale
var dueNow =  comp + (MROUND([Due Now %],1/ scale) * scale)
var overdue =  dueNow + (MROUND([Overdue %],1/ scale) * scale)
var missing =  overdue + (MROUND([Missing %],1/ scale) * scale)
var result = SWITCH(TRUE(),
    comp >= block , 1,
    dueNow >= block , 2,
    overdue >= block , 3,
    missing >= block , 4,
    blank()
)
return result

I set up all the key components as variables, so I just had to copy and paste the code 9 more times, changing only the name and the value of the "block" variable. (You can see these 10 measures in the report linked at the bottom of this post.)

If you drop these 10 measures into a table you get output like the following:

By renaming each of the measures so they are just a single digit from 1 to 0 we can shrink down the table so that it easily fits on the page without horizontal scroll bars

In my first pass I started off creating rules for the background colour. This involved creating 4 rules, one for each colour and doing this 10 times (once for each column). You can see the rules for column 1 below.

After doing a couple of columns this got a bit tedious. The technique was working, but it was taking a lot of clicks per column which made me wonder if there was a better way. But you can see from the image below how the rules were working.

If you look at the options for the conditional formatting, instead of using a rules-based approach you can use a value coming from a column or measure. So I created 10 measures like the following, which return the background colour as a hex string instead of returning a numeric value between 1 and 4 for use in the rules.

Status Colour 01 = 
var block = 1
var scale = 10
var compliant =   MROUND([Compliant %],(1/ scale)) * scale
var dueNow =  compliant + (MROUND([Due Now %],1/ scale) * scale)
var overdue =  dueNow + (MROUND([Overdue %],1/ scale) * scale)
var missing =  overdue + (MROUND([Missing %],1/ scale) * scale)
var compliantColour = "#8BC34A"
var dueNowColour = "#CCCCCC"
var overdueColour = "#F69647"
var missingColour = "#FD625E"
var result = SWITCH(TRUE(),
    compliant >= block , compliantColour,
    dueNow >= block , dueNowColour,
    overdue >= block , overdueColour,
    missing >= block , missingColour,
    blank()
)
return result

This meant I could change the conditional formatting settings to look as follows which just required me to change the Format Style to “Field value” and then pick the relevant measure.

This was a much simpler approach which took much less time to configure and once I configured both the background and foreground colours it resulted in an output like the following.

The only remaining issue was the 1-0 column headings, but I “fixed” that by positioning a text box with a white background over the top of these.

What about other options like Charticulator or the HTML Content custom visual?

I’m sure there are many of you wondering if this approach of hacking a table was the best option and what other approaches might have worked. Below are some of the other options which I considered and subsequently chose not to pursue.

Create a Custom Visual – I have dabbled a bit with coding custom visuals and I could have potentially built my own, but it would have taken days rather than hours. And then there is the requirement to maintain and update the custom visual over time, which adds an extra burden that makes this approach unfeasible for visuals with limited re-use potential.

Charticulator – I have not used this very much so there may be an approach I missed (and I’m more than happy to be proven wrong if someone wants to have a go at implementing this). But from what I could see there is no way to force the individual text glyphs to be constrained to a given column size so if some of the description fields were longer than the available space they would overlap with the next “column” instead of wrapping like they do with a table.

HTML Content Visual – I actually got really close to the original concept for this report with the HTML Content Visual, but one key bit of functionality that I needed to support was the ability to drill through from the table to a detail page. Unfortunately the HTML Content Visual does not currently support any way of configuring drillthrough.

Download the sample file

If you are interested in playing with the example file used in this post you can get it from here.

Building a Bump Chart in Power BI using Deneb

Bump charts / Slope charts are an interesting way to show changes in rankings over time, but Power BI does not currently have a good built-in chart for this sort of visualization.

Example of a bump chart

I saw an interesting example of one of these charts recently where the circles in the bump chart also change size based on the relative value of the metric.

There is a video on the Curbal channel showing how you can get pretty close with a line chart, but the circles are all the same size.

Kerry Kolosko also has an example using Deneb on her site. While the circles are a constant size in her example, I was pretty sure that with a little tweaking I could make Deneb produce the sort of output I wanted.

I started using a copy of the data set from Curbal’s video. I used the existing [Rank bump chart] measure for the rankings and I created a simple Percent of Year measure using the following expression for the labels and sizing of the circles.

Percent of Year = DIVIDE([Sales], CALCULATE([Sales] , all(), values('Calendar'[Year])))

And after a bit of experimenting and reading through the Vega and Vega-Lite documentation to figure out how to specify the size encoding for the circles I ended up with the following result which I was pretty pleased with.

Final Bump Chart

Below is the Deneb specification for the above chart

{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "usermeta": {
    "deneb": {
      "build": "1.4.0.0",
      "metaVersion": 1,
      "provider": "vegaLite",
      "providerVersion": "5.4.0"
    },
    "interactivity": {
      "tooltip": true,
      "contextMenu": true,
      "selection": false,
      "highlight": false,
      "dataPointLimit": 50
    },
    "information": {
      "name": "Bump Chart",
      "description": "[No Description Provided]",
      "author": "Darren Gosbell",
      "uuid": "3416d242-19c2-4c88-b369-63d3f8ca302e",
      "generated": "2022-12-09T07:05:41.633Z"
    },
    "dataset": [
      {
        "key": "__0__",
        "name": "Year",
        "description": "This is the column for the X-Axis",
        "type": "numeric",
        "kind": "column"
      },
      {
        "key": "__1__",
        "name": "CategoryName",
        "description": "This is the column for the Y-Axis",
        "type": "text",
        "kind": "column"
      },
      {
        "key": "__2__",
        "name": "Rank bump chart",
        "description": "This is the ranking measure",
        "type": "numeric",
        "kind": "measure"
      },
      {
        "key": "__3__",
        "name": "Percent of Year",
        "description": "This is the value to display",
        "type": "numeric",
        "kind": "measure"
      }
    ]
  },
  "config": {
    "view": {"stroke": "transparent"},
    "font": "Segoe UI",
    "arc": {},
    "area": {
      "line": true,
      "opacity": 0.6
    },
    "bar": {},
    "line": {
      "strokeWidth": 3,
      "strokeCap": "round",
      "strokeJoin": "round"
    },
    "path": {},
    "point": {
      "filled": true,
      "size": 75
    },
    "rect": {},
    "shape": {},
    "symbol": {
      "strokeWidth": 1.5,
      "size": 50
    },
    "text": {
      "font": "Segoe UI",
      "fontSize": 12,
      "fill": "#666666"
    },
    "axis": {
      "ticks": false,
      "grid": false,
      "domain": false,
      "labelColor": "#666666",
      "labelFontSize": 12,
      "titleFont": "wf_standard-font, helvetica, arial, sans-serif",
      "titleColor": "#070f25",
      "titleFontSize": 16,
      "titleFontWeight": "normal"
    },
    "axisQuantitative": {
      "tickCount": 3,
      "grid": true,
      "gridColor": "#C8C8C8",
      "gridDash": [1, 5],
      "labelFlush": false
    },
    "axisX": {"labelPadding": 5},
    "axisY": {"labelPadding": 10},
    "header": {
      "titleFont": "wf_standard-font, helvetica, arial, sans-serif",
      "titleFontSize": 16,
      "titleColor": "#070f25",
      "labelFont": "Segoe UI",
      "labelFontSize": 13.333333333333332,
      "labelColor": "#666666"
    },
    "legend": {
      "titleFont": "Segoe UI",
      "titleFontWeight": "bold",
      "titleColor": "#666666",
      "labelFont": "Segoe UI",
      "labelFontSize": 13.333333333333332,
      "labelColor": "#666666",
      "symbolType": "circle",
      "symbolSize": 75
    }
  },
  "data": {"name": "dataset"},
  "encoding": {
    "x": {
      "field": "__0__",
      "type": "nominal"
    },
    "y": {
      "field": "__2__",
      "type": "nominal",
      "axis": null
    },
    "order": {
      "field": "__0__",
      "type": "nominal"
    }
  },
  "layer": [
    {
      "name": "Lines Layer",
      "mark": {
        "type": "line",
        "point": false,
        "interpolate": "monotone"
      },
      "encoding": {
        "color": {
          "field": "__1__",
          "legend": false,
          "scale": {
            "scheme": "pbiColorNominal"
          }
        }
      }
    },
    {
      "name": "Circle Layer",
      "mark": {
        "type": "circle",
        "tooltip": true,
        "opacity": 1
      },
      "encoding": {
        "color": {
          "field": "__1__",
          "legend": false
        },
        "size": {
          "field": "__3__",
          "type": "quantitative",
          "scale": {
            "range": [200, 2000]
          },
          "legend": null
        }
      }
    },
    {
      "mark": {
        "type": "text",
        "color": "white"
      },
      "encoding": {
        "text": {
          "field": "__3__",
          "format": "0%",
          "formatType": "pbiFormat"
        }
      }
    },
    {
      "transform": [
        {
          "window": [
            {"op": "rank", "as": "rank"}
          ],
          "sort": [
            {
              "field": "__0__",
              "order": "descending"
            }
          ]
        },
        {"filter": "datum.rank === 1"}
      ],
      "mark": {
        "type": "text",
        "align": "left",
        "baseline": "middle",
        "dx": 25,
        "fontWeight": "bold",
        "fontSize": 12
      },
      "encoding": {
        "text": {
          "field": "__1__",
          "type": "nominal"
        },
        "color": {
          "field": "__1__",
          "scale": {
            "scheme": "pbiColorNominal"
          },
          "legend": false
        }
      }
    },
    {
      "transform": [
        {
          "window": [
            {"op": "rank", "as": "rank"}
          ],
          "sort": [
            {
              "field": "__0__",
              "order": "ascending"
            }
          ]
        },
        {"filter": "datum.rank === 1"}
      ],
      "mark": {
        "type": "text",
        "align": "right",
        "baseline": "middle",
        "dx": -25,
        "fontWeight": "bold",
        "fontSize": 12
      },
      "encoding": {
        "text": {
          "field": "__1__",
          "type": "nominal"
        },
        "color": {
          "field": "__1__",
          "scale": {
            "scheme": "pbiColorNominal"
          },
          "legend": false
        }
      }
    }
  ]
}

If you prefer the look of straight lines between the data points, you can simply remove the "interpolate": "monotone" setting from the lines layer and you end up with a chart like the following one.
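With that change the mark definition for the lines layer simply becomes:

"mark": {
  "type": "line",
  "point": false
}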

Power BI – Using Field Parameters in Paginated Reports

Field Parameters are a relatively new feature in Power BI which is still in preview. They add a lot of flexibility to an interactive reporting solution, but can they be used in paginated reports?

The short answer is yes, but the experience is not as simple as it is in Power BI Desktop. The reason is that Power BI Desktop does not just pass field parameters as filters to other visuals; they actually change the way the underlying DAX queries are generated.

The following example shows an Adventure Works based report with a field parameter that allows the user to choose between showing either the Total Sales measure or the Total Quantity measure.

If we use the Performance Analyzer in Power BI Desktop to capture all the queries for this page we find 2 sets of queries.

The first query, for the slicer, looks as follows and just gets a list of values from our field parameter table:

// DAX Query
DEFINE
  VAR __DS0Core = 
    SUMMARIZE(
      VALUES('Measures Parameter'),
      'Measures Parameter'[Measures Parameter Fields],
      'Measures Parameter'[Measures Parameter Order],
      'Measures Parameter'[Measures Parameter]
    )

  VAR __DS0PrimaryWindowed = 
    TOPN(
      101,
      __DS0Core,
      'Measures Parameter'[Measures Parameter Order],
      1,
      'Measures Parameter'[Measures Parameter],
      1,
      'Measures Parameter'[Measures Parameter Fields],
      1
    )

EVALUATE
  __DS0PrimaryWindowed

ORDER BY
  'Measures Parameter'[Measures Parameter Order],
  'Measures Parameter'[Measures Parameter],
  'Measures Parameter'[Measures Parameter Fields]

To start integrating our Field Parameters into our paginated report we can create a new paginated report using Power BI Report Builder and we can copy and paste this query into a new dataset. I’ve called this dataset “MeasureParameter” in the screenshot below:

We can then create a new report Parameter linked to this dataset. I’ve called my parameter “MeasureName” and I’ve linked up the available values as follows:

Note that I'm using the field for the Value property, which returns the full DAX reference for the measure (e.g. 'Sales'[Total Sales]), and I'm using the display name of the measure for the Label field (e.g. "Total Sales").

This will generate a parameter like the following when the paginated report is run to allow the user to select which measure to use:

If we then look at the second set of queries in the Performance Analyzer, we can see that there are actually 2 queries:

  1. Gets the data for the chart – this query is dynamically generated
  2. Gets the name of the currently selected field parameter, which is used to populate the chart title and axis labels

We can ignore query 2, as we already have the information about the field parameters in our MeasureParameter dataset, but how do we dynamically generate our query in the same way that Power BI does?

The answer is to use expressions. Paginated reports are extremely powerful in this regard: almost all of the properties of the objects in a report can be set using expressions – even the query text, and that is what we are going to do in this case.

If we look at the text of the first query we captured from Performance Analyzer, it looks as follows:

You can see on lines 4 and 7 that the 'Sales'[Total Sales] measure is referenced (which I've highlighted in yellow), and on lines 7, 10 and 16 that it is given a name of [Total_Sales] in the output from the query. [Total_Sales] is just a column name in the results of the query, and we could leave it as-is since end users of our report will not see this reference, but I prefer to make it clear that this may not always be the Sales measure, so I renamed it to [Measure_Value].

Then I created a new dataset called "Dataset1", pasted the above static query in and clicked "OK". This saves our dataset and generates the field mappings for us. (You can manually create the field mappings from the result set to the paginated dataset, but I find this method of starting with a static query easier.)

Then to make this query dynamic I clicked the “fx” button next to the Query property to open the expression editor.

I then made the following changes to the query text:

  1. First, I started the expression with an = sign, this tells the report engine that this value needs to be evaluated as an expression. Then I added a double quote character as this is the start of a string expression.
  2. Then I added a closing double quote character at the end of the query.
  3. I then went through the rest of the query and doubled up any existing double quote characters to escape them.
  4. Finally, I inserted some concatenation expressions to add in the Value property from our MeasureName parameter which we added earlier using the Parameters!MeasureName.Value reference.
configuring the dynamic query
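The end result is an expression along the following lines. This is a simplified illustration rather than the exact query captured from Performance Analyzer (the 'Product'[Color] grouping column is an assumption based on the table shown below), but it shows the pattern: the whole query becomes a single string expression, embedded double quotes are doubled up, and the measure reference from the parameter is concatenated in:

="EVALUATE SUMMARIZECOLUMNS( 'Product'[Color], ""Measure_Value"", " & Parameters!MeasureName.Value & " )"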

So, we now have a dynamic query which will inject the measure to be used based on the parameter chosen by the user at run time.

To test this we can put a simple table on our report canvas and link up the Color and Measure value fields

Configuring the column references for the table visual

This gives us a table which looks like the following

The design time report layout

And when we run it and choose a measure from the parameter we get the data we expect, but we cannot easily tell from the report which measure was selected by the user, since the column header just says "ID Measure Value". So if this report was exported to a PDF or sent as part of an email subscription it might be hard to tell which measure was selected.

The basic report with a dynamic query

To fix this we can right click on the “ID Measure Value” header and turn this column header into an expression instead of being static text.

Editing the measure expression for the column header

Then we can use an expression like the following to use the label of the selected parameter value as column header

=Parameters!MeasureName.Label

If we also bold the column headers, now when we run our report, we get an experience almost identical to that in Power BI Desktop and our table now shows which measure was selected in the header for the measure column.

Final Report

This same technique of using an expression-based query could be adapted to work with field parameters that reference columns instead of measures. The expression for the query would just need to be adjusted slightly, but I will leave this as an exercise for the reader.
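As a rough starting point, if the field parameter held column references instead, the expression might look something like the following (assuming a hypothetical ColumnName report parameter whose Value holds a full column reference such as 'Product'[Color], and an existing [Total Sales] measure):

="EVALUATE SUMMARIZECOLUMNS( " & Parameters!ColumnName.Value & ", ""Measure_Value"", [Total Sales] )"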
