
Power BI: Hacking a bar chart into a table

I was working on a report with a business user, and one of the outputs they wanted to see was a list of over 100 tasks that needed to be performed on specified groups of equipment.

One of the initial views they wanted to see was a list of these tasks with about 7-8 descriptive columns, so a table visual made the most sense. However, each of those tasks applied to 1 or more pieces of equipment, and each piece of equipment could be in 1 of 4 states depending on the last time the task was performed.

The states that a piece of equipment could be in are:

  • Compliant (the task was performed before the due date)
  • Due Now (the due date has passed, but each task has a certain amount of tolerance and we are within that tolerance)
  • Overdue (the task is past the due date and tolerance)
  • Missing (cannot find any historic occurrence of this task)

The above information is just for context, but the key takeaway is that we had a table and 4 percentages that added up to 100% across each row.

The problem was that the information was very text-heavy, and it was difficult to see which tasks were OK and which ones needed attention without reading each row carefully one at a time. I initially tried adding some conditional formatting to the background colours, and that helped a little bit, but it still was not clear.

What would have been really nice was to use a 100% stacked bar chart, but we had too many descriptive columns to use one of these visuals on its own and we had too many rows to be able to just position one next to the table.

If you are interested in seeing an implementation of each of the approaches outlined below, a copy of the PBIX file I used for the images in this blog post is linked at the end of this post.

Approach 1: Using an SVG Measure

What would have been ideal was to use a measure to generate an SVG image, and there are a number of blog posts available that show how you can generate SVG strings in DAX. Unfortunately, all images in tables and matrices are forced to be square, so you cannot specify a rectangle that is wider than it is high.

If you would like to see the limitation that forces images to have the same width and height removed, please vote for this idea.

I did a quick test to confirm that this restriction was an issue, and it produced output like the following, which was no good as it forced each row to be too tall.
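
For reference, the kind of SVG measure those posts describe looks something like this (a minimal sketch, assuming a measure whose Data Category is set to Image URL; the bar colour and sizing here are my own illustrative choices):

SVG Bar = 
VAR pct = [Compliant %]
RETURN
    -- a data URI containing a simple single-bar SVG
    "data:image/svg+xml;utf8," &
    "<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 100 20'>" &
    "<rect x='0' y='0' width='" & FORMAT(pct * 100, "0") & "' height='20' fill='#8BC34A'/>" &
    "</svg>"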

At this point I reached out to some friends to see if anyone else had ideas for solving this issue, or whether I would need to consider a different design for the report. This ended up as a great 3-way conversation between myself, Daniel Marsh-Patrick (b | t) and Kerry Kolosko (b | t).

Approach 2: Using a Measure with extended Unicode characters

After explaining the issue and the approaches I had already considered, Daniel quickly came up with the idea of using extended Unicode characters to approximate a bar chart. If you scroll down to the bottom of this page, which shows some of the extended Unicode characters, you can see that there are some characters which are coloured squares.

There are only 7 different colours (and a black and white square) but that is enough to build a simple bar chart.

Using the following measure, we are able to generate a series of coloured squares to represent each of our percentages. (Note: you should see different coloured squares in the code below; if you don't, keep reading for an explanation.)

Unicode Chart = 
    -- the coloured square characters available in extended Unicode
    VAR purple = "🟪" 
    VAR brown  = "🟫" 
    VAR green  = "🟩" 
    VAR red    = "🟥" 
    VAR orange = "🟧" 
    VAR yellow = "🟨" 
    VAR blue   = "🟦" 
    VAR white  = "⬜" 
    VAR black  = "⬛" 
    -- the number of squares that represents 100%
    VAR Scale = 10
    -- round each percentage to a whole number of squares
    VAR Compliant = COALESCE(MROUND([Compliant %], (1/Scale)) * Scale, 0)
    VAR DueNow = COALESCE(MROUND([Due Now %], (1/Scale)) * Scale, 0)
    VAR Overdue = COALESCE(MROUND([Overdue %], (1/Scale)) * Scale, 0)
    VAR Missing = COALESCE(MROUND([Missing %], (1/Scale)) * Scale, 0)
    RETURN 
        IF (HASONEVALUE('Tasks'[Task]), 
                REPT(green, Compliant) & 
                REPT(white, DueNow) &
                REPT(yellow, Overdue) &
                REPT(red, Missing),
                BLANK()
        )

I selected a Scale value of 10, which builds a simplistic bar chart where each square represents 10%. This chart uses the following colour coding:

And this produces a result like the following, which is really cool since all of the logic is contained in a single measure:

Unicode bar chart

However, you will notice that there are a couple of issues with the bar charts above. This is because I specifically chose the amounts in the test data to highlight some of the problems with this sort of approach: because each percentage is rounded to a whole number of squares independently, a row is not guaranteed to contain exactly 10 squares (for example, 45% + 45% + 10% rounds to 5 + 5 + 1 = 11 squares).

  • Task 1: appears to be 100% compliant, but is actually only 99%
  • Task 2: has 11 squares due to multiple categories being rounded up
  • Task 3: has only 9 squares due to multiple categories being rounded down
  • Task 4: has 12 squares due to multiple categories being rounded up

In my specific circumstance, since the goal is to get every task to a status of 100% compliant, the issue with Task 1 was actually the most critical: I did not want something to appear to be 100% compliant if it was not actually at 100%. While annoying from an aesthetics point of view, the issue of there not always being exactly 10 squares was not a deal breaker.

However, we did find one deal breaker with this approach: on some machines it appears as follows, with just a series of white squares on every row:

I think this is probably because Power BI uses the fonts from the client machine when rendering text, so maybe there were different versions of the standard fonts involved. But even on some machines supposedly running the identical version of Windows 10 we still saw instances of this issue. This was pretty much a deal breaker for this approach, as there was no way to guarantee that the necessary extended characters would be available on all the client machines.

Approach 3: A creative use of conditional formatting

It was at this point that Kerry came up with the inspired suggestion of creating 10 columns and using conditional background colours to simulate the bar chart.

I started off by creating the first measure using the following expression.

Status 01 = 
var block = 1   // the number of the current block
var scale = 10  // controls the number of blocks
var comp    = MROUND([Compliant %], (1/scale)) * scale
var dueNow  = comp + (MROUND([Due Now %], 1/scale) * scale)
var overdue = dueNow + (MROUND([Overdue %], 1/scale) * scale)
var missing = overdue + (MROUND([Missing %], 1/scale) * scale)
var result = SWITCH(TRUE(),
    comp >= block, 1,
    dueNow >= block, 2,
    overdue >= block, 3,
    missing >= block, 4,
    BLANK()
)
return result

I set up all the key components as variables, so I just had to copy and paste the code 9 more times, changing only the name and the value of the "block" variable. (You can see these 10 measures in the report linked at the bottom of this post.)

If you drop these 10 measures into a table you get output like the following:

By renaming each of the measures so they are just a single digit from 1 to 0, we can shrink the table down so that it easily fits on the page without horizontal scroll bars.

In my first pass I started off creating rules for the background colour. This involved creating 4 rules, one for each colour and doing this 10 times (once for each column). You can see the rules for column 1 below.

After doing a couple of columns this got a bit tedious. The technique was working, but it was taking a lot of clicks per column which made me wonder if there was a better way. But you can see from the image below how the rules were working.

If you look at the options for conditional formatting, instead of using a rules-based approach you can use a value coming from a column or measure. So I created 10 measures like the following, which return the background colour as a hex string instead of a numeric value between 1 and 4 for use in rules.

Status Colour 01 = 
var block = 1
var scale = 10
var compliant = MROUND([Compliant %], (1/scale)) * scale
var dueNow    = compliant + (MROUND([Due Now %], 1/scale) * scale)
var overdue   = dueNow + (MROUND([Overdue %], 1/scale) * scale)
var missing   = overdue + (MROUND([Missing %], 1/scale) * scale)
var compliantColour = "#8BC34A"
var dueNowColour    = "#CCCCCC"
var overdueColour   = "#F69647"
var missingColour   = "#FD625E"
var result = SWITCH(TRUE(),
    compliant >= block, compliantColour,
    dueNow >= block, dueNowColour,
    overdue >= block, overdueColour,
    missing >= block, missingColour,
    BLANK()
)
return result

This meant I could change the conditional formatting settings to look as follows, which just required me to change the Format Style to "Field value" and then pick the relevant measure.

This was a much simpler approach which took much less time to configure, and once I had configured both the background and foreground colours it produced output like the following.

The only remaining issue was the 1-0 column headings, but I “fixed” that by positioning a text box with a white background over the top of these.

What about other options like Charticulator or the HTML Content custom visual?

I’m sure there are many of you wondering if this approach of hacking a table was the best option and what other approaches might have worked. Below are some of the other options which I considered and subsequently chose not to pursue.

Create a Custom Visual – I have dabbled a bit with coding custom visuals and could potentially have built my own, but it would have taken days and days, not hours. And then there is the requirement to maintain and update the custom visual over time, which adds an extra burden that makes this approach unfeasible for visuals with limited re-use potential.

Charticulator – I have not used this very much so there may be an approach I missed (and I’m more than happy to be proven wrong if someone wants to have a go at implementing this). But from what I could see there is no way to force the individual text glyphs to be constrained to a given column size so if some of the description fields were longer than the available space they would overlap with the next “column” instead of wrapping like they do with a table.

HTML Content Visual – I actually got really close to the original concept for this report with the HTML Content Visual, but one key bit of functionality that I needed to support was the ability to drill through from the table to a detail page. Unfortunately the HTML Content Visual does not currently support any way of configuring drillthrough.

Download the sample file

If you are interested in playing with the example file used in this post, you can get it from here.

Building a Bump Chart in Power BI using Deneb

Bump charts / Slope charts are an interesting way to show changes in rankings over time, but Power BI does not currently have a good built-in chart for this sort of visualization.

Example of a bump chart

I saw an interesting example of one of these charts recently where the circles in the bump chart also change size based on the relative value of the metric.

There is a video on the Curbal channel showing how you can get pretty close with a line chart, but the circles are all the same size.

Kerry Kolosko also has an example using Deneb on her site. While the circles are a constant size in her example, I was pretty sure that with a little tweaking I could make Deneb produce the sort of output I wanted.

I started with a copy of the data set from Curbal's video. I used the existing [Rank bump chart] measure for the rankings, and I created a simple Percent of Year measure using the following expression for the labels and the sizing of the circles.

Percent of Year = DIVIDE([Sales], CALCULATE([Sales], ALL(), VALUES('Calendar'[Year])))

And after a bit of experimenting and reading through the Vega and Vega-Lite documentation to figure out how to specify the size encoding for the circles I ended up with the following result which I was pretty pleased with.

Final Bump Chart

Below is the Deneb specification for the above chart:

{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "usermeta": {
    "deneb": {
      "build": "1.4.0.0",
      "metaVersion": 1,
      "provider": "vegaLite",
      "providerVersion": "5.4.0"
    },
    "interactivity": {
      "tooltip": true,
      "contextMenu": true,
      "selection": false,
      "highlight": false,
      "dataPointLimit": 50
    },
    "information": {
      "name": "Bump Chart",
      "description": "[No Description Provided]",
      "author": "Darren Gosbell",
      "uuid": "3416d242-19c2-4c88-b369-63d3f8ca302e",
      "generated": "2022-12-09T07:05:41.633Z"
    },
    "dataset": [
      {
        "key": "__0__",
        "name": "Year",
        "description": "This is the column for the X-Axis",
        "type": "numeric",
        "kind": "column"
      },
      {
        "key": "__1__",
        "name": "CategoryName",
        "description": "This is the column for the Y-Axis",
        "type": "text",
        "kind": "column"
      },
      {
        "key": "__2__",
        "name": "Rank bump chart",
        "description": "This is the ranking measure",
        "type": "numeric",
        "kind": "measure"
      },
      {
        "key": "__3__",
        "name": "Percent of Year",
        "description": "This is the value to display",
        "type": "numeric",
        "kind": "measure"
      }
    ]
  },
  "config": {
    "view": {"stroke": "transparent"},
    "font": "Segoe UI",
    "arc": {},
    "area": {
      "line": true,
      "opacity": 0.6
    },
    "bar": {},
    "line": {
      "strokeWidth": 3,
      "strokeCap": "round",
      "strokeJoin": "round"
    },
    "path": {},
    "point": {
      "filled": true,
      "size": 75
    },
    "rect": {},
    "shape": {},
    "symbol": {
      "strokeWidth": 1.5,
      "size": 50
    },
    "text": {
      "font": "Segoe UI",
      "fontSize": 12,
      "fill": "#666666"
    },
    "axis": {
      "ticks": false,
      "grid": false,
      "domain": false,
      "labelColor": "#666666",
      "labelFontSize": 12,
      "titleFont": "wf_standard-font, helvetica, arial, sans-serif",
      "titleColor": "#070f25",
      "titleFontSize": 16,
      "titleFontWeight": "normal"
    },
    "axisQuantitative": {
      "tickCount": 3,
      "grid": true,
      "gridColor": "#C8C8C8",
      "gridDash": [1, 5],
      "labelFlush": false
    },
    "axisX": {"labelPadding": 5},
    "axisY": {"labelPadding": 10},
    "header": {
      "titleFont": "wf_standard-font, helvetica, arial, sans-serif",
      "titleFontSize": 16,
      "titleColor": "#070f25",
      "labelFont": "Segoe UI",
      "labelFontSize": 13.333333333333332,
      "labelColor": "#666666"
    },
    "legend": {
      "titleFont": "Segoe UI",
      "titleFontWeight": "bold",
      "titleColor": "#666666",
      "labelFont": "Segoe UI",
      "labelFontSize": 13.333333333333332,
      "labelColor": "#666666",
      "symbolType": "circle",
      "symbolSize": 75
    }
  },
  "data": {"name": "dataset"},
  "encoding": {
    "x": {
      "field": "__0__",
      "type": "nominal"
    },
    "y": {
      "field": "__2__",
      "type": "nominal",
      "axis": null
    },
    "order": {
      "field": "__0__",
      "type": "nominal"
    }
  },
  "layer": [
    {
      "name": "Lines Layer",
      "mark": {
        "type": "line",
        "point": false,
        "interpolate": "monotone"
      },
      "encoding": {
        "color": {
          "field": "__1__",
          "legend": false,
          "scale": {
            "scheme": "pbiColorNominal"
          }
        }
      }
    },
    {
      "name": "Circle Layer",
      "mark": {
        "type": "circle",
        "tooltip": true,
        "opacity": 1
      },
      "encoding": {
        "color": {
          "field": "__1__",
          "legend": false
        },
        "size": {
          "field": "__3__",
          "type": "quantitative",
          "scale": {
            "range": [200, 2000]
          },
          "legend": null
        }
      }
    },
    {
      "mark": {
        "type": "text",
        "color": "white"
      },
      "encoding": {
        "text": {
          "field": "__3__",
          "format": "0%",
          "formatType": "pbiFormat"
        }
      }
    },
    {
      "transform": [
        {
          "window": [
            {"op": "rank", "as": "rank"}
          ],
          "sort": [
            {
              "field": "__0__",
              "order": "descending"
            }
          ]
        },
        {"filter": "datum.rank === 1"}
      ],
      "mark": {
        "type": "text",
        "align": "left",
        "baseline": "middle",
        "dx": 25,
        "fontWeight": "bold",
        "fontSize": 12
      },
      "encoding": {
        "text": {
          "field": "__1__",
          "type": "nominal"
        },
        "color": {
          "field": "__1__",
          "scale": {
            "scheme": "pbiColorNominal"
          },
          "legend": false
        }
      }
    },
    {
      "transform": [
        {
          "window": [
            {"op": "rank", "as": "rank"}
          ],
          "sort": [
            {
              "field": "__0__",
              "order": "ascending"
            }
          ]
        },
        {"filter": "datum.rank === 1"}
      ],
      "mark": {
        "type": "text",
        "align": "right",
        "baseline": "middle",
        "dx": -25,
        "fontWeight": "bold",
        "fontSize": 12
      },
      "encoding": {
        "text": {
          "field": "__1__",
          "type": "nominal"
        },
        "color": {
          "field": "__1__",
          "scale": {
            "scheme": "pbiColorNominal"
          },
          "legend": false
        }
      }
    }
  ]
}

If you prefer the look of straight lines between the data points, you can simply remove the "interpolate": "monotone" setting from the lines layer and you end up with a chart like the following one.

Power BI – Using Field Parameters in Paginated Reports

Field Parameters are a relatively new feature in Power BI which is still in preview. They add a lot of flexibility to an interactive reporting solution, but can they be used in paginated reports?

The short answer is yes, but the experience is not as simple as it is in Power BI Desktop. The reason for this is that Power BI Desktop does not just pass field parameters as filters to other visuals; they actually change the way the underlying DAX queries are generated.

The following example shows an Adventure Works based report with a field parameter that allows the user to choose between showing either the Total Sales measure or the Total Quantity measure.
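
As a quick refresher, when you create a field parameter Power BI Desktop generates a calculated table behind the scenes along the following lines (a sketch to show the shape only; I am assuming the parameter is named "Measures Parameter" to match the queries below, and the 'Sales' measure references are illustrative):

Measures Parameter = {
    ("Total Sales", NAMEOF('Sales'[Total Sales]), 0),
    ("Total Quantity", NAMEOF('Sales'[Total Quantity]), 1)
}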

If we use the Performance Analyzer in Power BI Desktop to capture all the queries for this page we find 2 sets of queries.

The first one for the slicer looks as follows and just gets a list of values from our Field parameter table:

// DAX Query
DEFINE
  VAR __DS0Core = 
    SUMMARIZE(
      VALUES('Measures Parameter'),
      'Measures Parameter'[Measures Parameter Fields],
      'Measures Parameter'[Measures Parameter Order],
      'Measures Parameter'[Measures Parameter]
    )

  VAR __DS0PrimaryWindowed = 
    TOPN(
      101,
      __DS0Core,
      'Measures Parameter'[Measures Parameter Order],
      1,
      'Measures Parameter'[Measures Parameter],
      1,
      'Measures Parameter'[Measures Parameter Fields],
      1
    )

EVALUATE
  __DS0PrimaryWindowed

ORDER BY
  'Measures Parameter'[Measures Parameter Order],
  'Measures Parameter'[Measures Parameter],
  'Measures Parameter'[Measures Parameter Fields]

To start integrating our Field Parameters into our paginated report we can create a new paginated report using Power BI Report Builder and we can copy and paste this query into a new dataset. I’ve called this dataset “MeasureParameter” in the screenshot below:

We can then create a new report Parameter linked to this dataset. I’ve called my parameter “MeasureName” and I’ve linked up the available values as follows:

Note that I'm using the Fields column for the Value property, which returns the full DAX reference for the measure (e.g. 'Sales'[Total Sales]), and I'm using the display name of the measure for the Label (e.g. "Total Sales").

This will generate a parameter like the following when the paginated report is run to allow the user to select which measure to use:

If we then look at the second set of queries in the Performance Analyzer, we can see that there are actually 2 queries:

  1. Gets the data for the chart and this query is dynamically generated
  2. Gets the name of the currently selected Field Parameter which is used to populate the chart title and axis labels

We can ignore query 2, as we already have the information about the field parameters in our MeasureParameter dataset, but how do we dynamically generate our query in the same way that Power BI does?

The answer to this is using expressions. Paginated Reports are extremely powerful in this regard: almost all of the properties of the objects in a report can be set using expressions – even the query text, and that is what we are going to do in this case.

If we look at the text of the first query we captured from Performance Analyzer, it looks as follows:

You can see on lines 4 and 7 that the 'Sales'[Total Sales] measure is referenced (which I've highlighted in yellow), and on lines 7, 10 and 16 that it is given a name of [Total_Sales] in the output from the query. [Total_Sales] is just a column name in the results of the query and we could leave it as-is, since end users of our report will not see this reference, but I prefer to make it clear that this may not always be the Sales measure, so I renamed it to [Measure_Value].
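
The screenshot of that query is not reproduced here, but after the rename my static query had roughly the following shape (a reconstruction under assumptions – the actual captured query is longer and includes windowing logic, and the 'Product'[Color] grouping column is a guess based on the fields used later in this post):

// DAX Query
DEFINE
  VAR __DS0Core = 
    SUMMARIZECOLUMNS(
      'Product'[Color],
      "Measure_Value", 'Sales'[Total Sales]
    )

EVALUATE
  __DS0Core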

Then I created a new dataset called "Dataset1", pasted the above static query in and clicked "OK". This saves our dataset and generates the field mappings for us. (You can manually create the field mappings from the result set to the paginated dataset, but I find this method of starting with a static query easier.)

Then to make this query dynamic I clicked the “fx” button next to the Query property to open the expression editor.

I then made the following changes to the query text:

  1. First, I started the expression with an = sign; this tells the report engine that the value needs to be evaluated as an expression. Then I added a double quote character, as this is the start of a string expression.
  2. Then I added a closing double quote character at the end of the query.
  3. I then went through the rest of the query and doubled up any existing double quote characters to escape them.
  4. Finally, I inserted some concatenation expressions to add in the Value property from our MeasureName parameter which we added earlier using the Parameters!MeasureName.Value reference.
Configuring the dynamic query
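
Putting those four steps together, the finished expression looks something along these lines (a simplified sketch based on the query shape above, not the exact expression from the report – note the doubled-up quotes inside the string and the concatenated parameter value):

="// DAX Query
DEFINE
  VAR __DS0Core = 
    SUMMARIZECOLUMNS(
      'Product'[Color],
      ""Measure_Value"", " & Parameters!MeasureName.Value & "
    )

EVALUATE
  __DS0Core"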

So, we now have a dynamic query which will inject the measure to be used based on the parameter chosen by the user at run time.

To test this, we can put a simple table on our report canvas and link up the Color and Measure_Value fields.

Configuring the column references for the table visual

This gives us a table which looks like the following

The design time report layout

And when we run it and choose a measure from the parameter we get the data we expect, but we cannot easily tell from the report which measure was selected by the user, since the column header just says "ID Measure Value". So if this report was exported to a PDF or sent as part of an email subscription it might be hard to tell which measure was selected.

The basic report with a dynamic query

To fix this we can right-click on the "ID Measure Value" header and turn this column header into an expression instead of static text.

Editing the measure expression for the column header

Then we can use an expression like the following to use the label of the selected parameter value as the column header:

=Parameters!MeasureName.Label

If we also bold the column headers, then when we run our report we get an experience almost identical to that in Power BI Desktop, and our table now shows which measure was selected in the header of the measure column.

Final Report

This same technique of using an expression-based query could be adapted to work with field parameters that reference columns instead of measures. The expression for the query would just need to be adjusted accordingly, but I will leave this as an exercise for the reader.

Power BI Report Server – fixing “Can’t upload this report”

A little while ago at my previous job we had a new user report an issue where they could not upload a Power BI Report to our Report Server and I found a simple fix which I thought was worth sharing.

In this case they were getting the error that the report was created with a recent version of Power BI Desktop that is not yet supported by this server.

So, how did this happen…

When you are running Power BI Report Server it is really important that your report authors use the Report Server version of Power BI Desktop that matches the release of your Server.

If you are not already aware, there are actually 3 different distributions of Power BI Desktop:

  • The Windows Store version
  • The Standalone installer version
  • The Power BI Report Server version

If you are running an on-prem Power BI Report Server, it is vital that you use the Report Server version of Power BI Desktop. Failing to do so can mean that you may spend hours building a report that will not run on the server.

If you are not sure where to get this, the Report Server portal has a download link in the top right corner (provided you are running one of the supported releases of Report Server; once the server goes out of support, the downloads for Desktop are removed).

On my machine I have both the Microsoft Store version of desktop and the Report Server version. You can tell the difference because the Report Server version has a little black “RS” on the logo and the application title always includes the release month.

What had happened with this user was that someone had sent them a pbix file to work on. Being new to the organization, they just downloaded the Microsoft Store version of Power BI Desktop, did their updates to the file and then tried to upload it to Power BI Report Server, which is when they got this error.

The problem is that now the file will not open in the Report Server version of Power BI Desktop since the file format has been updated to work with the newer version of Desktop from the Store.

Fix 1 – Manually rebuild the report

If your report is small and you realise this early, maybe just re-creating the report is an option, but often you do not realise the mistake until you have invested a significant amount of time in your report.

Fix 2 – Copy/Paste

One way to “fix” this is to run both the “Store” version of Desktop and the “Report Server” version of Desktop. Then open the file in the “Store” version of Desktop and manually copy the Power Query text from the Advanced editor, then copy measures and any calculated columns and tables, then copy visuals from one report to the other. As long as you have not used any new functionality that does not exist in Report Server this can work fine and is less work than a full manual re-build.

Fix 3 – Create a template file

This is the simplest option if it works.

All I did was open the report in the "Store" version, do a "save as" and change the type to a Power BI Template file. Then the user was able to open the resulting pbit file in the "Report Server" version of Desktop, refresh the data and upload it to Report Server.

This probably only works if you have not used functionality that only exists in the cloud service, but if you are using the latest release of Report Server the number of things in this category should be relatively small. If you can identify any items like this, you may be able to remove them from your report before creating the template file.

DAX Studio v3.0.1 Released

I’ve just pushed out an update to DAX Studio with a number of fixes and a few small updates. You can download it now from https://daxstudio.org

Updates

  • Added query information button to traces
    (This captures the actual query text and any parameter values plus the ActivityID)
  • Updated Query History icons
  • #949 Exposed Theme/Mode setting in Options
  • Added MPARAMETER keyword to syntax highlighting

Fixes

  • Fixed #891 Error exporting to static Excel file
  • Fixed #889 crash when dragging trace windows
  • Fixed #886 incorrect link to tutorial in Getting Started dialog
  • Fixed #894 Define and Expand sometimes producing incorrect output
  • Fixed #924 username blank in All Queries trace
  • Fixed #930 increased default code completion window width
  • Fixed #931 automatically select all text in find/replace dialog
  • Fixed #906 improved waterfall layout in Server Timings
  • Fixed setting focus in the editor when changing tabs
  • Fixed file icon registration
  • Improved error handling around shutdown operations
  • Added extra hotkey validation rules
  • Fixed #916 added retry logic to all clipboard operations
  • Improved Power BI filename detection for non-English cultures
  • Removed unused dependencies and reduced installer/portable file size

Power BI Report Server – an unexpected error occurred (Nov 2021)

Earlier this week we started getting reports of the following error on our production Power BI Report Server (PBIRS) instance for any reports that were using a live connection back to one of our Analysis Services models.

All it basically said was “an unexpected error occurred. Please try again later”. And if you looked at the details it showed the message above saying that “We could not connect to the Analysis Services server”.

Some other fun facts about this issue are:

  • It was not affecting all users (at least in our environment)
  • There had been no recent updates or configuration changes on PBIRS, SQL or SSAS
  • The same issue appeared to be affecting our test environment but not our development environment.
  • It was happening for both pbix and paginated reports
  • It was affecting reports that sourced data from SQL Server as well as SSAS
  • When running a profiler trace against SSAS listening for the session events I could see some sessions with usernames and others where the user was listed as “ANONYMOUS USER”

Putting all of the above evidence together pointed to some sort of Kerberos issue. In our environment our SQL and SSAS instances are on different servers to our PBIRS instance, so we had to configure Kerberos with constrained delegation to allow the user credentials to flow from client > PBIRS > SSAS.

Kerberos can be a little fiddly to set up initially, but once it's configured I've never had an issue with it until this week. When we initially configured Kerberos I had distilled the generic instructions from the Microsoft documentation down into a series of specific steps for our production environment, listing the specific setspn commands that needed to be run and how the constrained delegation settings needed to be configured, and saved these in a document. This made it very easy to cross-check that all these configurations were still correct.
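
For anyone who has not been through that exercise, the setspn entries in such a document are of roughly the following shape (illustrative only – the service classes are the standard ones for Report Server and SSAS, but the host names and service accounts here are made up):

REM register the Report Server SPN against its service account
setspn -S HTTP/pbirs.contoso.com CONTOSO\svc-pbirs

REM register the Analysis Services SPN against its service account
setspn -S MSOLAPSvc.3/ssas.contoso.com CONTOSO\svc-ssas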

At this point we knew that the Kerberos configuration appeared to be unchanged, but we still had an issue where it worked for some users and not for others.

So the next thing I did was to get some of our users to run the klist command from a command prompt. This command lists all the Kerberos tickets that are cached on the current machine. This led us to a pattern: everyone that was getting the error had obtained their Kerberos tickets from a specific list of domain controllers, while the people that did not have the issue were getting theirs from a different list of domain controllers. Our IT admins then noticed that all the failing servers were in a group which had had security patches applied earlier that day.

The specific patch that had been applied was KB5007206, and after raising a support ticket with Microsoft they were able to point us to the following hotfix (KB5008602) for this update, which fixed an issue with Kerberos authentication using constrained delegation and protocol transition (which is exactly how Kerberos needs to be configured when using PBIRS, or SSRS, or SSRS in SharePoint integrated mode for people with on-prem SharePoint environments).

Hopefully this post will help anyone else that runs into this issue.

DAX Studio 2.17.2 released

DAX Studio 2.17.2 has just been released, and the following is a rundown of the new features and fixes that you will find in this release.

Apologies to anyone who attempted to download the 2.17.0 release – there was an update to the code signing tool and it failed to correctly sign the installer. The automated build has now been fixed so that it correctly signs the installer again.

New Features

Added Parameter support to Query Builder filters

The Query Builder now supports parameters. Typing an @ as the first character in a query builder filter value will switch the value into parameter mode, and the following text will be treated as the parameter name. (Typing @ a second time, or clicking on the purple @ button, will toggle the filter back to the standard value mode.)

When you run the Query Builder with a filter that has a parameter, you will be prompted for the parameter value just as if you had typed out a query that included a parameter reference. This allows you to build a query with parameters that can be pasted into a paginated report or any other tool that supports DAX parameters.
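
For example, running a query of the following shape (a sketch – the table, column, measure and parameter names are hypothetical) will pop up a prompt asking for a value for ColorParam:

EVALUATE
CALCULATETABLE(
    SUMMARIZECOLUMNS(
        'Product'[Color],
        "Total Sales", [Total Sales]
    ),
    'Product'[Color] = @ColorParam
)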

Running Query Builder with a filter parameter

Removing <pii> tags from xmSQL in Server Timings to improve readability

Power BI and AzureAS have added <pii> tags around some of the text in the query events that DAX Studio captures as part of the Server Timings. These tags allow Microsoft to strip out information that could potentially be personally identifiable when writing them to centralized logs. But because DAX Studio does not send any of this information off your machine and because this information also includes important contextual information for troubleshooting and performance tuning we are just removing these tags to reduce the noise and improve readability.
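
To illustrate, an xmSQL fragment that previously rendered like the first line below is now shown as the second (a hypothetical fragment, not an actual captured event):

SELECT <pii>'Product'[Color]</pii> FROM <pii>'Product'</pii>

SELECT 'Product'[Color] FROM 'Product'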

Fixing accuracy of Server Timings for queries with Batch events

Prior to this update, Storage Engine batch event timings were not being added to the SE duration figures. For a lot of models this difference is insignificant, but for large queries against large models it can result in an incorrect inflation of the FE duration. Keep an eye on the posts from Marco and Alberto at sqlbi.com, as they are working on an in-depth video to explain this improvement to the Server Timings.

Query Builder queries are now captured in the query history

Prior to this update, the Query History pane was only capturing the text of queries that had been typed into the query editor. Now the Query History pane captures any query that was executed, regardless of whether it came from the edit window or from the Query Builder.

There is now an icon next to each query history row indicating where it was captured from, and if you double-click on a query that was originally run in the Query Builder it will be restored back into the Query Builder window.

The /f parameter can now open vpax files

Now when you run DAX Studio from the command prompt the /f parameter will open either .dax or .vpax files.
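
For example (the file path is just an illustration, and this assumes DAX Studio's install folder is on your path):

daxstudio.exe /f "C:\temp\MyModel.vpax"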

Added extra information to the error message when the XMLA endpoint is not correctly enabled

If the Power BI XMLA endpoint is not properly enabled the Microsoft ADOMD provider returns a very generic error message making it hard to know what has gone wrong. DAX Studio now adds some extra text after the Microsoft error if you are attempting to connect to an XMLA endpoint and hit this generic WebException error to suggest you check the XMLA endpoint settings in your tenant.

Improved caching of measure expressions

Prior to this update, features that required access to the text of multiple measure expressions, such as the "Define All Measures" feature, would request the expression text one measure at a time; now all of the measure expressions are retrieved in a single request.

Fixes

  • Fixed #703 preventing a crash when entering a proxy password
  • Fixed #715 added DOUBLE data type to intellisense and syntax highlighting
  • Fixed #740 export to Excel failed on some machines
  • Fixed #755 saving parameters to XML was not correctly encoding special characters
  • Fixed #750 would crash on startup if the WMI information on the machine was corrupt
  • Fixed download link in Help – About screen

DAX Studio v2.16.2 Released

DAX Studio v2.16.2 has just been released and is available at daxstudio.org. It includes the following fixes and improvements. The first one being particularly important:

Traces fail to start with a timeout error

Fixed issue #664 – A small refactoring of the trace engine for some future work had accidentally resulted in the engine not being able to detect whether a trace had been started, resulting in timeout errors. This appears to affect most scenarios apart from one or two edge cases; unfortunately, it was one of those edge cases that was used when testing that the previous code change had not broken anything. The 2.16.2 code base has been tested against all the different engine types: PowerPivot, Power BI Desktop, Power BI Desktop as an External Tool, a local SSAS instance, a remote SSAS instance, Power BI Premium XMLA endpoints and Azure Analysis Services.

Showing View Metrics Last Refreshed and Analysis Date in the local timezone

Fixed issue #663 – Previously the Last Data Refresh and Date of Analysis were both displayed in UTC in the Summary tab of the View Metrics pane; they are now shown in the local timezone. Internally the dates are still stored in the vpax file in UTC, so that if you send the vpax file to someone in a different timezone the times will still be interpreted consistently.

When saving a file with the Query Builder open custom measure names were not saved

Fixed issue #659 – If you saved a file with the Query Builder open which had custom measures that were created within the Query Builder, then the names of those custom measures were not saved properly prior to this fix.

Debug commas not handling lines with indented comments correctly

Fixed issue #663 – The new debug commas preview feature was not correctly positioning the commas if one of the lines of the query had an indented comment. This issue, and a few others around different comment placements, have been fixed.

View Metrics – Last Refreshed date was incorrectly adjusting from UTC time

Fixed issue #670 – The Last Refreshed Datetime in the View Metrics summary page was incorrectly getting the timezone offset applied twice when calculating the UTC time. This resulted in the time being moved too far forward if your local timezone is behind UTC or too far back if your local timezone is ahead of UTC.

DAX Studio v2.16.1 Released

This update to DAX Studio includes all the new features introduced in 2.16.0, plus the following bug fixes:

  • #657 Unable to open a new query tab either with a new connection or with a copied connection
  • #658 passing the database name on the command line
  • #647 hidden objects not hidden correctly when show hidden objects option was turned off (this was partially fixed in 2.16.0 but should work in all scenarios now)
