Darren Gosbell

Category: DAX

Generating Measure descriptions with ChatGPT – part 2

In my previous post I shared a short 40-line script for Tabular Editor (free | paid) which could generate descriptions for the measures in your tabular model. There were a number of interesting questions in the comments on that post which I thought would make a follow-up post worthwhile.

The things I want to cover in this post are: how to run the script using TE2, how to run the script using TE3, and how to deal with rate limiting.

How to run the script using TE2

Below is a very brief tutorial on running a script using Tabular Editor 2

  1. If you launched Tabular Editor from the External Tools ribbon in Power BI Desktop you can skip this step; otherwise click the connect icon in the toolbar to connect to your data model.
  2. Click on the C# Script tab and paste in the script from my earlier post.
  3. My earlier post has instructions for requesting your own API key from OpenAI; you need to paste the key between the quotes (") in the apiKey line near the top of the script.
  4. Optional: if you are targeting users that speak a language other than English you can update the question text on line 10 to your desired language (from what I understand ChatGPT supports around 100 different languages, so you may need to test whether this works in yours).
  5. Click the play button to run the script; this step could take a while depending on the number of measures and whether you hit throttling (more on this below).
  6. Once the script has run you can check some of the descriptions by clicking on a measure in the model explorer. At this point all the descriptions are still local and have not been saved back to your data model.
  7. The text generated by ChatGPT will appear in the Description property for the measure.
  8. If you are happy with the generated descriptions, clicking the Save Changes button will save them back to your data model.

How to run the script using TE3

The steps for Tabular Editor 3 are almost the same with the main difference being at step 2 because TE3 allows you to have multiple C# scripts open at the same time.

  1. If you launched Tabular Editor from the External Tools ribbon in Power BI Desktop you can skip this step; otherwise click the connect icon in the toolbar to connect to your data model.
  2. Click on the New C# Script button to create a C# script document and paste in the script from my earlier post.
  3. My earlier post has instructions for requesting your own API key from OpenAI; you need to paste the key between the quotes (") in the apiKey line near the top of the script.
  4. Optional: if you are targeting users that speak a language other than English you can update the question text on line 10 to your desired language (from what I understand ChatGPT supports around 100 different languages, so you may need to test whether this works in yours).
  5. Click the play button to run the script; this step could take a while depending on the number of measures and whether you hit throttling (more on this below).
  6. Once the script has run you can check some of the descriptions by clicking on a measure in the model explorer. At this point all the descriptions are still local and have not been saved back to your data model.
  7. The text generated by ChatGPT will appear in the Description property for the measure.
  8. If you are happy with the generated descriptions, clicking the Save Changes button will save them back to your data model.

Dealing with Rate Limiting

If you have not worked with API calls before you may not have come across the term "rate limiting". Basically, it is a mechanism that APIs use to prevent users from monopolizing the resources of a service by limiting the number of calls that can be made within a given timeframe.

In the case of the OpenAI APIs, they document their rate limits here, and at the time of writing they only allow 20 calls per minute for a free account. Once you hit that limit the API will return a 429 error code, which is a common code meaning "Too many requests". There are two approaches that you can use to work around this.

  1. You can add pauses in the code to wait until the next minute, when you can then make another 20 calls (see the sketch below).
  2. You can upgrade to a paid account which has a much higher limit.
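
The first approach is simpler than it sounds: the pause logic only needs to count calls and sleep once it reaches the limit. Here is a minimal sketch of the idea in Tabular Editor's C# scripting, assuming the documented 20 calls per minute for free accounts (the actual updated script linked below is more complete):

// Minimal sketch of approach 1: pause when we reach the free-tier limit
var callCount = 0;
var callsPerMinute = 20;

foreach (var m in Model.AllMeasures)
{
    if (callCount >= callsPerMinute)
    {
        // sleep until the next one-minute window opens up
        System.Threading.Thread.Sleep(60000);
        callCount = 0;
    }

    // ... make the OpenAI API call for measure m here ...
    callCount++;
}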

If you would like to see an example of an updated script with this rate limiting logic, which also skips measures that already have descriptions (handy if you have manually updated some, or if a previous run failed part way through), you can find it here:

Update Tabular Model Descriptions from ChatGPT with rate limiting logic (github.com)

A Final Warning

The Large Language Model (LLM) behind ChatGPT is an amazing piece of technology and it's only going to get better over time. But I've also seen it described as "a B grade intern who hallucinates occasionally". In its simplest form, all these AI models do is break a piece of text into a series of tokens and then predict what tokens are likely to come next based on a corpus of training material.

In my testing I've seen a number of examples where the prediction has missed some important detail. And in a couple of cases I've even seen it include a modification to the original measure and then describe the modified measure, not the original one!

So, while this is definitely a huge time saver, just be aware that it’s not perfect. Before publishing these descriptions into production, I would suggest doing a review of them to ensure that they make sense and include all the important details.

If you want to see all the descriptions at once you can use a tool like DAX Studio to run the following query to list them.

SELECT [Name], [Description], [Expression] FROM $SYSTEM.TMSCHEMA_MEASURES

Automatically generating measure descriptions for Power BI and Analysis Services with ChatGPT and Tabular Editor

Note: Make sure to go on and read part 2 after reading this, as it has some improvements to the script.

It seems like everyone is playing with ChatGPT at the moment, doing everything from getting it to write code for them to asking it to write odes to their favourite visuals. Writing DAX code is interesting from the perspective of someone building a data model. But one thing that has always been hard from the perspective of a consumer of a data model is understanding the logic behind those DAX calculations.

As you've probably already guessed from the title of this post, rather than generating DAX I wanted to see if we could go the other way and get ChatGPT to provide simple descriptions of what a DAX expression was doing.

To test this I did not want to use a basic measure, so I found a moderately complex expression that someone else had originally had ChatGPT generate for them, and I asked it to explain the calculation. The first time I tried this it produced a very long description breaking down each of the function calls, but by revising the way I asked the question I eventually got to a point where it was giving me a nice concise explanation.

When I posted the following question to ChatGPT:

Explain the following calculation in a few sentences in simple business terms without using DAX function names:
CALCULATE (
    SELECTEDVALUE ( 'Suppliers'[CompanyName] ),
    TOPN (
        1,
        SUMMARIZE (
            'Products',
            'Suppliers'[CompanyName],
            "@StockValue", SUMX ( 'Products', 'Products'[UnitsInStock] * 'Products'[UnitPrice] )
        ),
        [@StockValue],
        DESC
    )
)

It came back with the following answer which I was pretty happy with:

This calculation is finding the company name of the supplier with the highest total value of stock for a given product. It does this by first multiplying the units in stock by the unit price for each product, then summarizing that total value by supplier. It then finds the top supplier by that total value and returns their company name.

This was great for a single measure, but not all of the consumers of your data model will have access to view the DAX expressions. And even if they did, it would be a pain to have to keep jumping out to an external website to understand what every measure is doing.

So this made me wonder if there might be some way of injecting these descriptions into the data model. And as it turns out, ChatGPT already has a REST API. All you need to do is sign up for an account at https://platform.openai.com (if you've been experimenting with ChatGPT you probably already have one), generate an API key for your account, and then you can make requests of ChatGPT from another program.

View and Create API keys from the account link in the top right corner

Note: free accounts are limited to 20 calls per minute (see Rate Limits – OpenAI API). So for large models you would either need to add logic to include a 1-minute delay every 20 measures or upgrade to a paid plan.

From there I set up the following Tabular Editor script, which loops through all of the measures in your data model and updates each description with one generated by ChatGPT. All you need to do to run it is paste your own API key into the apiKey constant on line 8.

For illustration purposes I’ve also included the original DAX expression, but you don’t need to keep that if you don’t want to.


Note: the script has been tested in both TE2 (free) and TE3 (paid) and is available as a gist on GitHub: https://gist.github.com/dgosbell/f3253c7ec52efe441b80596ffddea07c
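
If you just want a feel for the shape of the script without opening the gist, the core of it looks something like the following. This is a condensed sketch, not the published gist verbatim (so line numbers mentioned above may not line up exactly); it assumes Newtonsoft.Json, which ships with Tabular Editor, and the text-davinci-003 completions endpoint that was current when this post was written:

#r "System.Net.Http"
using System.Net.Http;
using System.Text;
using Newtonsoft.Json.Linq;

// paste your own API key between the quotes below
var apiKey = "<YOUR API KEY HERE>";
var uri = "https://api.openai.com/v1/completions";
var question = "Explain the following calculation in a few sentences in simple business terms without using DAX function names:\n\n";

using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + apiKey);

    foreach (var m in Model.AllMeasures)
    {
        // build the request body for the completions endpoint
        var body = new JObject {
            { "model", "text-davinci-003" },
            { "prompt", question + m.Expression },
            { "temperature", 0.0 },
            { "max_tokens", 256 }
        }.ToString();

        var response = client.PostAsync(uri,
            new StringContent(body, Encoding.UTF8, "application/json")).Result;
        response.EnsureSuccessStatusCode();
        var json = JObject.Parse(response.Content.ReadAsStringAsync().Result);

        // write the generated text into the Description property,
        // keeping the original DAX expression underneath for reference
        var description = json["choices"][0]["text"].ToString().Trim();
        m.Description = description + "\n=====\n" + m.Expression;
    }
}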

The updated script, including logic for dealing with rate limiting, is in my following post here.

Before running the script hovering over a measure in my data model only shows the measure name

the default tooltip showing just the measure name

After running the script, you get a much more informative tooltip

the expanded tooltip showing the new auto-populated description

Interestingly, the descriptions do seem to change a little from one run to the next, and in spite of asking it not to use DAX function names in the description, sometimes it still sneaks one in. The description below was particularly bad – it just described the SUMX function.

But the advantage of this technique is that you can re-run it for single measures if you like or you could manually edit the description field if the description needed a small improvement.


Update: 16 Feb 2023 – added note about API rate limiting

Update: 17 Feb 2023 – See part-2 here for short tutorials on how to run the script and an updated version which deals with the API rate limiting

Power BI – Using Field Parameters in Paginated Reports

Field Parameters are a relatively new Power BI feature, still in preview, and they add a lot of flexibility to an interactive reporting solution – but can they be used in paginated reports?

The short answer is yes, but the experience is not as simple as it is in Power BI Desktop. The reason for this is that Power BI Desktop does not just pass field parameters as filters to other visuals; field parameters actually change the way the underlying DAX queries are generated.

The following example shows an Adventure Works based report with a field parameter that allows the user to choose between showing either the Total Sales measure or the Total Quantity measure.

If we use the Performance Analyzer in Power BI Desktop to capture all the queries for this page, we find two sets of queries.

The first one, for the slicer, looks as follows and just gets a list of values from our field parameter table:

// DAX Query
DEFINE
  VAR __DS0Core = 
    SUMMARIZE(
      VALUES('Measures Parameter'),
      'Measures Parameter'[Measures Parameter Fields],
      'Measures Parameter'[Measures Parameter Order],
      'Measures Parameter'[Measures Parameter]
    )

  VAR __DS0PrimaryWindowed = 
    TOPN(
      101,
      __DS0Core,
      'Measures Parameter'[Measures Parameter Order],
      1,
      'Measures Parameter'[Measures Parameter],
      1,
      'Measures Parameter'[Measures Parameter Fields],
      1
    )

EVALUATE
  __DS0PrimaryWindowed

ORDER BY
  'Measures Parameter'[Measures Parameter Order],
  'Measures Parameter'[Measures Parameter],
  'Measures Parameter'[Measures Parameter Fields]

To start integrating our Field Parameters into our paginated report we can create a new paginated report using Power BI Report Builder and we can copy and paste this query into a new dataset. I’ve called this dataset “MeasureParameter” in the screenshot below:

We can then create a new report Parameter linked to this dataset. I’ve called my parameter “MeasureName” and I’ve linked up the available values as follows:

Note that I'm using the field for the Value property which returns the full DAX reference for the measure (e.g. 'Sales'[Total Sales]) and I'm using the display name of the measure for the label field (e.g. "Total Sales").

This will generate a parameter like the following when the paginated report is run to allow the user to select which measure to use:

If we then look at the second set of queries in the Performance Analyzer, we can see that there are actually two queries:

  1. One gets the data for the chart; this query is dynamically generated.
  2. One gets the name of the currently selected field parameter, which is used to populate the chart title and axis labels.

We can ignore query 2, as we already have the information about the field parameters in our MeasureParameter dataset, but how do we dynamically generate our query in the same way that Power BI does?

The answer to this is expressions. Paginated reports are extremely powerful in this regard, and almost all of the properties of the objects in a report can be set using expressions – even the query text, and that is what we are going to do in this case.

If we look at the text of the first query we captured from Performance Analyzer, it looks as follows:
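
(The query below is a simplified illustration of what gets captured – the exact text will vary with your model, and 'Product'[Color] stands in for whatever grouping column the chart uses.)

// DAX Query
DEFINE
  VAR __DS0Core = 
    SUMMARIZECOLUMNS(
      'Product'[Color],
      "Total_Sales", 'Sales'[Total Sales]
    )

  VAR __DS0PrimaryWindowed = 
    TOPN(
      1001,
      __DS0Core,
      [Total_Sales],
      0,
      'Product'[Color],
      1
    )

EVALUATE
  __DS0PrimaryWindowed

ORDER BY
  [Total_Sales] DESC,
  'Product'[Color]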

You can see that the 'Sales'[Total Sales] measure is referenced directly in the query, and that it is given a name of [Total_Sales] in the query output. [Total_Sales] is just a column name in the results of the query and we could leave it, as end users of our report will not see this reference, but I prefer to make it clear that this may not always be the Sales measure, so I renamed it to [Measure_Value].

Then I created a new dataset called "Dataset1", pasted the above static query in and clicked "OK". This saves our dataset and generates the field mappings for us. (You can manually create the field mappings from the result set to the paginated dataset, but I find this method of starting with a static query easier.)

Then to make this query dynamic I clicked the “fx” button next to the Query property to open the expression editor.

I then made the following changes to the query text:

  1. First, I started the expression with an = sign; this tells the report engine that this value needs to be evaluated as an expression. Then I added a double quote character, as this is the start of a string expression.
  2. Then I added a closing double quote character at the end of the query.
  3. I then went through the rest of the query and doubled up any existing double quote characters to escape them.
  4. Finally, I inserted some concatenation expressions to add in the Value property from our MeasureName parameter, which we added earlier, using the Parameters!MeasureName.Value reference (see the sketch below).
configuring the dynamic query
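
Put together, the finished expression looks something like the sketch below ('Product'[Color] here is a stand-in for the grouping column used in the chart query; your query text will be longer, but the pattern of doubled-up quotes and concatenated parameter values is the same):

="// DAX Query
DEFINE
  VAR __DS0Core = 
    SUMMARIZECOLUMNS(
      'Product'[Color],
      ""Measure_Value"", " & Parameters!MeasureName.Value & "
    )

EVALUATE
  __DS0Core
"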

So, we now have a dynamic query which will inject the measure to be used based on the parameter chosen by the user at run time.

To test this we can put a simple table on our report canvas and link up the Color and Measure Value fields.

Configuring the column references for the table visual

This gives us a table which looks like the following

The design time report layout

And when we run it and choose a measure from the parameter we get the data we expect, but we cannot easily tell from the report which measure was selected by the user, since the column header just says "ID Measure Value". So if this report was exported to a PDF or sent as part of an email subscription it might be hard to tell which measure was selected.

The basic report with a dynamic query

To fix this we can right click on the “ID Measure Value” header and turn this column header into an expression instead of being static text.

Editing the measure expression for the column header

Then we can use an expression like the following to use the label of the selected parameter value as the column header:

=Parameters!MeasureName.Label

If we also bold the column headers, now when we run our report, we get an experience almost identical to that in Power BI Desktop and our table now shows which measure was selected in the header for the measure column.

Final Report

This same technique of using an expression-based query could be adapted to work with field parameters that reference columns instead of measures. The expression for the query would just need to be adjusted differently, but I will leave this as an exercise for the reader.

Learning to write queries in DAX – Demo Script from DataANZ 2021

If you were in my recent session at the DataANZ 2021 conference, I mentioned that I would share a copy of the queries I created during that session – so here they are!

In my demos I used the Adventure Works 2020 pbix file from here https://aka.ms/dax-docs-samples

And I added the following simple measure to the Sales table:

Total Sales = SUM( Sales[Sales Amount])

You can download the script of the demo queries from my OneDrive

DAX Studio and Tabular Editor – better together

I love the way Phil Seamark likes to push the art of the possible with the Microsoft BI platform. And I really liked his recent article on using VS Code to query a Power BI file and generate measures. But as I was reading that blog post I could not help but think that it was going to be outside the comfort zone of a lot of Power BI users.

I know a lot of BI developers that have strong skills in DAX, Power Query and SQL, but who don't often use C#, and many of them don't use VS Code and have not heard of NuGet. So this made me wonder if there was another way of doing the same thing with the tools that many BI developers already use, like DAX Studio and Tabular Editor.

I actually figured out a couple of approaches to achieving the same result as Phil. The first one uses the following two pieces of information.

  1. Tabular Editor has a brilliant feature called Advanced Scripting which lets you run little pieces of C# code and is an excellent way of automating changes in your models. In addition to the standard properties and methods, Daniel has built a series of helper methods, like AddMeasure, which has parameters for <Name>, <Expression> and <Folder>.
  2. As I showed in my previous post, where I generated part of a DAX query using another DAX query, it is possible with some careful string manipulation to use DAX to generate other code, such as the C# used in the Tabular Editor Advanced Scripting feature.

If we combine the two pieces of information above, we end up with the following query, which generates a Tabular Editor AddMeasure() call for each value in Product[Color].

EVALUATE 
ADDCOLUMNS(
VALUES('Product'[Color])
,"ScriptExpression"
,var _color = 'Product'[Color] 
var _name = "Sum of " & _color & " Sales Amount"
var _expression = "CALCULATE(SUM('Sales'[Sales Amount]) ,'Product'[Color] = \""" & _color & "\"")"                     
return "Model.Tables[""Sales""].AddMeasure( """ & _name & """, """ & _expression & """, ""AutoMeasures"");"
)

When you open DAX Studio from the External Tools menu and run this query, you get one generated line per color in the "ScriptExpression" column, which you can select and copy.
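
For example, for the Red color the ScriptExpression column contains the following generated call:

Model.Tables["Sales"].AddMeasure( "Sum of Red Sales Amount", "CALCULATE(SUM('Sales'[Sales Amount]) ,'Product'[Color] = \"Red\")", "AutoMeasures");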

Then you open Tabular Editor from the External Tools menu, click on the Advanced Scripting tab and paste in the output from the "ScriptExpression" column. Note this may include the "ScriptExpression" column header at the top, which you will need to delete.

(Note to self: I should add a "copy without headers" option to DAX Studio – there is an option for this, but it would be nice to add it to the right-click menu on the results.)

Then when you click Run on the advanced script you will see a folder with all your new measures appear. You can then check that the expressions have been entered correctly and click Save to make these appear back in Power BI Desktop.

Stay tuned for the next post in this series where I will show another technique for doing this.

The easy way to Generate a DAX query for all measures

I had an interesting request from a Data Scientist in our team recently. He'd been extracting some data from one of our tabular models; however, he was having some trouble getting his predictive model working reliably.

We already had a query in a form similar to the following, with a couple of group-by columns, one or two filters and a handful of measures:

EVALUATE
SUMMARIZECOLUMNS(
   'Table1'[Column1],
   'Table2'[Column2],
   FILTER(ALL('Table1'[Column3]), 'Table1'[Column3] = "Value"),
   "Measure1", [Measure1],
   "Measure2", [Measure2]
)

Up until this point we had been manually adding measures that we thought might influence the behaviour we were trying to predict, but this was a slow, trial-and-error based process. So the Data Scientist rang me and said "You know what? Why don't you just give me an extract with all the measures?". "You do realise we have over 1,000 measures," I said, "because we have lots of time intelligence variations like Current Month, Previous Month, Month over Month variance, etc." "That's fine," he replied, "I can always ignore any that I don't want or that are not significant – you can just do something like a SELECT * right?".

If you've ever written your own DAX queries you would know that you can run a query like the following to get all the columns in a single table:

EVALUATE 'Table'

But that will not get you any measures; you have to list out the measures manually, one by one. At this point I knew that I really did not want to spend hours hand-typing a query with over 1,000 measures, so I started thinking about what options I might have for generating this query.

I knew I could probably build some sort of foreach loop in PowerShell using AMO/TOM. Or I could maybe use the Advanced Scripting in Tabular Editor. But I also knew that I could easily get a list of all the visible measures by querying the $SYSTEM.TMSCHEMA_MEASURES or $SYSTEM.MDSCHEMA_MEASURES DMVs using DAX Studio.

After a bit of experimenting I ended up with the following expression, which builds a list of all the visible measures in the model in the "Name", [Measure] format that is needed for SUMMARIZECOLUMNS:

SELECT	'"' + [Name] + '", ' as [Caption], '[' +  [Name] + '],' as [Name]
FROM $SYSTEM.TMSCHEMA_MEASURES
WHERE NOT [IsHidden]
ORDER BY [Name]
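
The two columns returned by this query concatenate to give one ready-to-paste line per measure, looking something like this (the measure names here are just illustrative):

"Total Sales", [Total Sales],
"Total Sales PY", [Total Sales PY],
"Total Sales MoM %", [Total Sales MoM %],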

Then I was simply able to paste the output from this query in after the filters in our existing query and run it – job done.

Power BI: How to make the Gantt chart show events in progress

I had a colleague approach me at work with an interesting problem. He had a Power BI report using the Gantt chart custom visual; however, when he used a date slicer to select a date range it only showed events which started in that date range, and he wanted to see any events that were in progress during that date range.

I figured out a way to get this working and I thought it might be helpful to not just show the solution, but also to walk through the process I used to develop it. Let’s start by looking at an example of the issue he was dealing with.

Given the following data, what he wanted was to filter for dates between Mar-20 and Apr-10 so that the Gantt chart would show the section in yellow below:

But he was getting output like the following, which only shows the bottom 3 tasks from the image above – those with a start date between Mar-20 and Apr-10. It was not showing tasks which were already in progress, like the first two (Division… and Functional…).

To figure out what options we had to change this default behaviour, I turned on the Performance Analyzer in Power BI Desktop (I could also have used the All Queries trace in DAX Studio). This captured the following query for the Gantt visual:

// DAX Query
DEFINE VAR __DS0FilterTable = 
  FILTER(
    KEEPFILTERS(VALUES('Table1'[Start Date])),
    AND('Table1'[Start Date] >= DATE(2016, 3, 20), 'Table1'[Start Date] < DATE(2016, 4, 11))
  )

EVALUATE
  TOPN(
    1002,
    SUMMARIZECOLUMNS(
      'Table1'[Task Name],
      'Table1'[Task ID],
      'Table1'[Start Date],
      'Table1'[Resource],
      __DS0FilterTable,
      "MinEnd_Date", CALCULATE(MIN('Table1'[End Date]))
    ),
    'Table1'[Task ID],
    1,
    'Table1'[Task Name],
    1,
    'Table1'[Start Date],
    1,
    'Table1'[Resource],
    1
  )

ORDER BY
  'Table1'[Task ID], 'Table1'[Task Name], 'Table1'[Start Date], 'Table1'[Resource]

There are two important parts to notice from the above query.

The first is that the filter is explicitly filtering for start dates based on the values selected in the slicer. Obviously this is going to cause an issue, as events that are already in progress will have a start date before the earliest date in the slicer.

To fix this we need to create a separate date table that does not have a relationship to our main fact table. In the demo file I simply created a calculated table using Date Slicer = CALENDARAUTO() but you can use whatever method you like to create this table.

If we replace the 'Table1'[Start Date] field used in the slicer with our new 'Date Slicer'[Date] column, that "fixes" the issue of in-progress events with earlier start dates being filtered out – but now our slicer is not filtering the data at all. All is not lost though; we will fix that next.

The second interesting thing that I noticed from the captured query is that [Start Date] is used as a grouping column in the SUMMARIZECOLUMNS() function, but [End Date] gets the earliest end date via CALCULATE(MIN('Table1'[End Date])). What is happening is that the Gantt chart is creating an implied measure when we pass in the [End Date] column. So instead of letting the Gantt chart create an implied measure, we can create our own measure and use that instead.

Below is the measure I developed

Gantt End Date = 
VAR _maxDate =
    MAX ( 'Date Slicer'[Date] )
VAR _minDate =
    MIN ( 'Date Slicer'[Date] )
VAR _tasks = VALUES(Table1[Task ID])
RETURN
    MAXX (
        CALCULATETABLE (
            SUMMARIZE( Table1, Table1[Start Date], Table1[End Date] ),
            Table1[Start Date] <= _maxDate,
            Table1[End Date] >= _minDate,
            _tasks
        ),
        Table1[End Date]
    )

This gives us the following

If you look at the output of this measure in a table all it does is the following:

Note that I've forced the display of all rows by including a simple row count measure. This lets us see that [Gantt End Date] only returns values where the end date is after the start of the selected date range and the start date is before the end of the selected date range; otherwise it returns a blank, and SUMMARIZECOLUMNS does not return rows where all the measures return blank.

If you want to look at the Power BI file I used in the screenshots for this post you can download it from here

Do the ALL() and REMOVEFILTERS() functions always work correctly in DAX?

I was recently re-reading Matt Allington's post from back when the REMOVEFILTERS() function was first introduced into the DAX language, and I saw this question in the comments:

If a report is set up using the filter panel instead of slicers, will these filtering functions (ALL, REMOVEFILTERS, FILTER, ALLSELECTED, etc.) work as expected? For example if reporting percentages will the denominator calculate correctly?

John Thomas

Now it does not really matter whether filter conditions are set using slicers or the filter panel. At the end of the day they get injected into the DAX query for a visual in the same way. You can check this yourself by creating a test file with two pages containing the same visual, one with a slicer and the other with a page filter, then use the Performance Analyzer or the All Queries trace in DAX Studio to see the DAX query generated by both pages for that visual.

So the simple answer to this question is “yes” – if the filters work with slicers, they will work with the filter panel.

BUT, the astute among you may have noticed that I've qualified my answer by adding "if the filters work with slicers". Which might lead you to wonder: "Are there scenarios where the filter conditions don't work?". And when we are talking about filter modifiers like ALL() and REMOVEFILTERS() there are some scenarios where the results may be unexpected, due to the way the SUMMARIZECOLUMNS() function correlates filters from the same table together. If you want to find out more I suggest you read through this article by Greg Baldini over on antifound.com, which contains an in-depth analysis of this issue.

Building custom Data Bars in Power BI using SVG measures

So a while ago Power BI enabled the ability to display SVG images in table and matrix visuals. SVG is an XML-based language and is actually what the majority of Power BI visuals use to render their charts, so this technique works really well in Power BI and gives you a way of drawing custom elements in your reports without having to go down the path of building a full-blown custom visual. There have been some interesting examples of using this feature, such as the sparkline measures created by David Eldersveld (blog) and Reid Havens (blog), and even this funky elephant on hatfullofdata.blog.

However, recently a friend of mine wanted a way to build some simple custom data bars with dynamic coloring, so I pulled together an example which produces the following output:

Basically I'm using a text element to output the measure value and drawing a small rectangle under the text, calculating the length of the rectangle based on the percentage of the max value. There is also a conditional statement to make amounts of 50 or less appear in red.

The code to produce this is relatively simple and I’ve broken it down into a bunch of different variables to hopefully make it easier to understand.

DataBar = 
    var _barMaxValue = MAXX( ALL(Sales[Category]), CALCULATE(SUM(Sales[Amount])) )
    var _barValue    = SUM(Sales[Amount])
    var _svgWidth    = 200
    var _svgHeight   = 200
    var _barHeight   = 30
    var _barWidth    = INT( (_barValue / _barMaxValue) * _svgWidth )
    var _fill        = IF( _barValue > 50, "blue", "red")
    var _svg_font    = "normal 100px sans-serif"
    var _svg_start   = "data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 " & _svgWidth & " " & _svgHeight & "'>"
    var _svg_end     = "</svg>"
    var _svg_rect    = "<rect x='0' y='135' width='" & _barWidth & "' height='" & _barHeight & "' style='fill:" & _fill & ";fill-opacity:0.4' />"
    var _svg_text    = "<text x='5' y='120' width='" & _svgWidth & "' height='" & _svgHeight & "' style='font:" & _svg_font & "'>" & _barValue & "</text>"
return
    _svg_start &  _svg_rect & _svg_text  & _svg_end

The only "trick" to getting these SVG images to display correctly in the table and matrix visuals is to set their Data Category to ImageUrl. If you don't do this, the measure will just display the SVG as text (which can actually be useful for debugging more complex measures).

If you want to see a working example you can download an example pbix file from my OneDrive.

This was all relatively simple to do since I've worked with SVG before, so it was not too hard to pull together something simple like this. The biggest problem that I had, though, was that Power BI restricts ImageUrls to only display inside a square, whereas to build a nice custom data bar or sparkline using this technique you really want to work in a rectangular space that is 3-4 times wider than it is high.

So I've added an idea to ideas.powerbi.com requesting that they change this in Power BI. Please vote for it if you think this would be a good idea.