In my previous post I shared a short 40-line script for Tabular Editor (free | paid) which could generate descriptions for the measures in your tabular model. There were a number of interesting questions in the comments on that post which I thought would make a follow-up post worthwhile.
The things I want to cover in this post are:
- Running the script in Tabular Editor 2 (free)
- Running the script in Tabular Editor 3 (paid)
- Dealing with Rate Limiting
- A Final Warning
How to run the script using TE2
Below is a very brief tutorial on running the script using Tabular Editor 2.
- If you launched Tabular Editor from the External Tools ribbon in Power BI Desktop you can skip this step; otherwise, click on this icon to connect to your data model.
- Click on the C# Script tab and paste in the script from my earlier post (a simplified sketch of what the script does is shown after these steps)
- There are instructions in my earlier post for requesting your own API key from OpenAI. You need to paste the key into this line, between the quotes (").
- Optional: if you are targeting users who speak a language other than English, you can update the question text on line 10 to your desired language (from what I understand, ChatGPT supports around 100 different languages, so you may need to test whether this works in yours)
- Click the play button to run the script. This step could take a while depending on the number of measures and whether you hit throttling (more on this below)
- Once the script has run you can check some of the descriptions by clicking on a measure in the model explorer. At this point all the descriptions are still local and have not been saved back to your data model.
- The text generated by ChatGPT will appear in the Description property for the measure.
- If you are happy with the generated descriptions, clicking the Save Changes button will save them back to your data model.
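If you are curious about what the script is actually doing when you hit play, the sketch below shows the core loop: it walks over every measure in the model, sends the measure's DAX expression to the OpenAI completions endpoint, and writes the response into the Description property. Treat this as a simplified illustration only; the model name and JSON handling here are just for readability, and the full script in my earlier post (which also deals with errors) is the one you should actually run.

var apiKey = "<your OpenAI API key>";   // paste your key here
var uri = "https://api.openai.com/v1/completions";
var question = "Explain this DAX expression in a few sentences:\n\n";   // the question text you can translate

using (var client = new System.Net.Http.HttpClient())
{
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + apiKey);

    foreach (var m in Model.AllMeasures)
    {
        // build a simple completions request containing the measure's DAX expression
        var payload = new Newtonsoft.Json.Linq.JObject();
        payload["model"] = "text-davinci-003";
        payload["prompt"] = question + m.Expression;
        payload["max_tokens"] = 256;

        var res = client.PostAsync(uri, new System.Net.Http.StringContent(payload.ToString(), System.Text.Encoding.UTF8, "application/json"));
        res.Wait();
        var result = res.Result.Content.ReadAsStringAsync().Result;

        // pull the generated text out of the response and store it locally in the Description property
        m.Description = ((string)Newtonsoft.Json.Linq.JObject.Parse(result)["choices"][0]["text"]).Trim();
    }
}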
How to run the script using TE3
The steps for Tabular Editor 3 are almost the same, with the main difference being at step 2, because TE3 allows you to have multiple C# scripts open at the same time.
- If you launched Tabular Editor from the External Tools ribbon in Power BI Desktop you can skip this step; otherwise, click on this icon to connect to your data model.
- Click on the New C# Script button to create a C# script document and paste in the script from my earlier post
- There are instructions in my earlier post for requesting your own API key from OpenAI. You need to paste the key into this line, between the quotes (").
- Optional: if you are targeting users who speak a language other than English, you can update the question text on line 10 to your desired language (from what I understand, ChatGPT supports around 100 different languages, so you may need to test whether this works in yours)
- Click the play button to run the script. This step could take a while depending on the number of measures and whether you hit throttling (more on this below)
- Once the script has run you can check some of the descriptions by clicking on a measure in the model explorer. At this point all the descriptions are still local and have not been saved back to your data model.
- The text generated by ChatGPT will appear in the Description property for the measure.
- If you are happy with the generated descriptions, clicking the Save Changes button will save them back to your data model.
Dealing with Rate Limiting
If you have not worked with API calls before, you may not have come across the term "rate limiting". Basically, it is a mechanism that APIs use to prevent users from monopolizing the resources of a service by limiting the number of calls that can be made within a given timeframe.
In the case of the OpenAI APIs, the rate limits are documented here, and at the time of writing a free account only allows 20 calls per minute. Once you hit that limit the API returns a 429 error code, which is the standard HTTP status meaning "Too Many Requests". There are two approaches you can use to work around this:
- You can add pauses in the code to wait until the next minute, when you can make another 20 calls.
- You can upgrade to a paid account, which has a much higher limit.
If you would like to see an example of an updated script with rate-limiting logic which also skips measures that already have descriptions (useful if you've manually updated some, or if a previous run failed part way through), you can find it here:
Update Tabular Model Descriptions from ChatGPT with rate limiting logic (github.com)
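The core of the rate-limiting change is very simple: keep a count of the API calls you have made and, once you have used up your per-minute allowance, sleep until the window resets, skipping any measure that already has a description. Roughly speaking, it looks like the sketch below (the variable names match the ones used in the script, but treat this as an illustration rather than the exact code from the gist):

int apiLimit = 20;        // calls allowed per minute on your account tier
int oneMinute = 60000;    // milliseconds
int callCount = 0;

foreach (var m in Model.AllMeasures)
{
    // skip measures that already have a description (manually written or from a previous run)
    if (!string.IsNullOrEmpty(m.Description)) continue;

    // once this minute's allowance is used up, wait for the rate limit window to reset
    if (callCount > 0 && callCount % apiLimit == 0)
    {
        System.Threading.Thread.Sleep(oneMinute);
    }
    callCount++;

    // ... call the OpenAI API and set m.Description as in the main script ...
}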
A Final Warning
The Large Language Model (LLM) behind ChatGPT is an amazing piece of technology, and it's only going to get better over time. But I've also seen it described as "a B-grade intern who hallucinates occasionally". In its simplest form, all these AI models do is break a piece of text into a series of tokens and then predict which tokens are likely to come next based on a corpus of training material.
In my testing I've seen a number of examples where the prediction has missed some important detail. And in a couple of cases, I've even seen it introduce a modification to the original measure and then describe the modified measure, not the original one!
So, while this is definitely a huge time saver, just be aware that it’s not perfect. Before publishing these descriptions into production, I would suggest doing a review of them to ensure that they make sense and include all the important details.
If you want to see all the descriptions at once, you can use a tool like DAX Studio to run the following query to list them:
SELECT [Name], [Description], [Expression] FROM $SYSTEM.TMSCHEMA_MEASURES
Hi Darren,
I’m trying the script with rate limiting logic.
While testing this, I run into the 429 error from the very first request.
I'm the only one with a token in our organization.
Any suggestions?
Darren,
You can disregard my previous post.
Apparently I already had a free account on ChatGPT linked to my phone.
When using that one it works, amazing.
I did get a 429 error after documenting 474 of the 565 measures.
But it gives a good idea of the possibilities.
Filip
Hi all, sorry, I still run into the 429 error when using the rate limiting logic.
I don't get why.
Any help?
Thank you!
The 429 status is being returned from ChatGPT so you might need to ask them about this. I have had other reports from people suggesting that sometimes the rate limiting seems to be applied too soon, so extending the waiting time in the script a bit more might help.
Do you have a version of the script that works with Azure OpenAI Service?
Not yet. I’m waiting to get access to Azure OpenAI, but it might take a few weeks.
Just checking if you were able to get an Azure OpenAI script for this. I love the idea but I need to use Azure OpenAI for security reasons.
Yes, it only took a few minor changes. I have a script here that works with the Azure OpenAI endpoint: https://gist.github.com/dgosbell/66e3050f8f41e4474c10230a48657fc9
Really useful article Darren. Thanks!
We struggled with 429 (Too Many Requests) errors on a free account so switched to paid and still struggled with that error. It seems like the guidance from OpenAI is to catch a 429 error and do retries with exponential backoff. Here's a lazy implementation of that (single retry) which seemed to be enough for 630 measures.
Task<HttpResponseMessage> res = client.PostAsync(uri, new StringContent(body, Encoding.UTF8, "application/json"));
res.Wait();
HttpResponseMessage msg = res.Result;
var result = res.Result.Content.ReadAsStringAsync().Result;
if (((int)msg.StatusCode) == 429)
{
    //wait two minutes and retry once
    System.Threading.Thread.Sleep(oneMinute * 2);
    res = client.PostAsync(uri, new StringContent(body, Encoding.UTF8, "application/json"));
    res.Wait();
    msg = res.Result;
    result = res.Result.Content.ReadAsStringAsync().Result;
}
I also switched from sleeping after 20 calls to sleeping after every API call which might have helped too. The *2 was just to add some wiggle room as it seems like their throttling mechanism is a bit overzealous:
System.Threading.Thread.Sleep((oneMinute / apiLimit) * 2);
Thanks for letting me know about this Greg. There have been some other reports where people have said that they are still getting throttled in spite of using the v2 script. I also now have a version of the script using Azure OpenAI so it would be interesting to see if that is not as overzealous on the throttling.
Hi,
Has anyone had any recent success running the code?
Using the code provided, I've tried different models and edited the waiting time, calls and tokens, but I still get 429.
Any suggestions? 🙂
Hi Alex, a 429 error means you are hitting the rate limit, which is controlled on the server side. If you are using a free account with OpenAI, they have reduced the number of calls allowed per minute from 20 down to 3 since this article was first written. So you will most likely need to update the apiLimit variable on line 18 to match the limit for your OpenAI account.
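For example, assuming the declaration near the top of the script still looks like it does in the original post (your copy may differ slightly), the change would be something like:

int apiLimit = 3;    // free OpenAI accounts currently allow only 3 requests per minute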
Hi Darren,
I am trying to run this script but I am getting a 404 (not found) error.
I have a ChatGPT paid subscription and the authentication is working, but each run just returns a 404 error.
Is it possible the API has changed? I have otherwise no idea what the error could be.
Cheers for any help,
Ben