Do you ever hit performance barriers when building reports? Do your visualisations feel sluggish while you work on them?
It is very likely that you are running into bottlenecks that make it hard for the Power BI engine to process your measures and data.
Below are the most common bottlenecks that slow down your data model and, as a result, expose you to negative feedback from the business. Business users are time-sensitive: if a visualisation takes a long time to load, they are not happy.
1. Too many visuals on one report page
Bottlenecks:
- Dashboards with 20–30 visuals, each running their own query
- Using too many slicers, especially when they cross-filter each other
- Tables or matrices with thousands of rows rendered dynamically
Solution:
- Reduce visuals
- Group visualisations, for example, combine a pair of column charts into a single column chart and switch between them with a filter.
- Use bookmarks to toggle between visuals instead of showing all
- Use custom visuals sparingly — they are often slower than native ones
For example, the Power BI report page below has too many slicers, so it is slow:

In this case, it is best to reduce the number of slicers to significantly speed up the report.

2. Excessive Measures and Calculated Columns
Bottlenecks:
- Hundreds of unused or redundant measures
- Too many calculated columns that could instead be created on the backend side, for example in SQL Server or even in Power Query
- Measures depending on complex DAX chains (measure inside a measure inside a measure…)
Solution:
- Audit and remove unused measures.
- For hundreds or thousands of measures, it is worth using external tools such as MeasureKiller.
- Use measure reference sparingly; avoid circular dependencies.
- Consider measure branching. This technique divides a complex measure into smaller base measures and then combines them in the measure you display. For example, if your measure calculates the share of sales in total sales, do not write it as one monolithic expression; split it into smaller parts, reference them in the final measure, and the Power BI engine will reuse those parts and execute the measure significantly faster.
For example, the following DAX measure is very complex and debugging it is tedious:
Measure =
IF(
    COUNTROWS( VALUES( DIM_RESTAURANT[Country_Code] ) ) > 1,
    CALCULATE(
        SUM( FCT_SALES[Amount_EUR] ),
        FILTER(
            ALL( DIM_CALENDAR ),
            DIM_CALENDAR[YearMonth] >= "2024-01" && DIM_CALENDAR[YearMonth] <= "2024-09"
        )
    ),
    CALCULATE(
        SUM( FCT_SALES[Amount_LCY] ),
        FILTER(
            ALL( DIM_CALENDAR ),
            DIM_CALENDAR[YearMonth] >= "2024-01" && DIM_CALENDAR[YearMonth] <= "2024-09"
        )
    )
)
With measure branching, the same logic becomes easier to read, debug, and reuse:
MeasureUsingMeasureBranching =
VAR RangeStart = "2024-01" -- for example, this comes from a slicer
VAR RangeEnd   = "2024-09" -- for example, this comes from a slicer
RETURN
IF(
    [CountCountries] > 1, -- the COUNTROWS logic above is moved into a reusable measure
    CALCULATE(
        [SalesEUR],
        FILTER(
            ALL( DIM_CALENDAR ),
            DIM_CALENDAR[YearMonth] >= RangeStart && DIM_CALENDAR[YearMonth] <= RangeEnd
        )
    ),
    CALCULATE(
        [SalesLCY],
        FILTER(
            ALL( DIM_CALENDAR ),
            DIM_CALENDAR[YearMonth] >= RangeStart && DIM_CALENDAR[YearMonth] <= RangeEnd
        )
    )
)
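The branched version references a few reusable base measures. A minimal sketch of what they might look like, following the column names from the example above (adapt them to your own model):
CountCountries = COUNTROWS( VALUES( DIM_RESTAURANT[Country_Code] ) )
SalesEUR = SUM( FCT_SALES[Amount_EUR] )
SalesLCY = SUM( FCT_SALES[Amount_LCY] )
Besides being easier to debug, these small building blocks can be reused across many measures and visuals, which lets the engine reuse its caches instead of recomputing the same expression repeatedly.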
3. Poor Data Model Design
Bottlenecks:
- Using snowflake schema (dimension tables linked to other dimension tables)
- Multiple inactive relationships handled with USERELATIONSHIP in measures (see the sketch below)
- Bidirectional filters between tables causing circular dependencies or complex filter propagation
Solution:
- Use star schema: fact table in the center, dimensions around
- Avoid bidirectional filters unless absolutely necessary
- Use surrogate keys and integer joins for better performance
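For reference, the inactive-relationship pattern mentioned in the bottlenecks above usually looks like the sketch below; every such measure forces the engine to switch relationships at query time, so use it deliberately (the table and column names here are only illustrative):
SalesByShipDate =
CALCULATE(
    [SalesEUR],
    USERELATIONSHIP( FCT_SALES[Ship_Date], DIM_CALENDAR[Date] )
)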
4. Incorrect data types in fact tables
Bottlenecks:
- A column used in relationships is repeated on every row of the fact table, so it must have an appropriate data type; a text key is far more expensive than an integer key.
Solution:
Avoid building keys by concatenating text columns with numbers, or text columns with other text columns. Instead, try to obtain an integer surrogate key. If you do not have access to a data warehouse, you can even create one in Power Query by adding an index column, which will significantly speed up your data model.
For example:

This screenshot shows Store_ID, a key column used in relationships. It is stored as text, so it is fairly slow; change its data type to a whole number to avoid poor report performance.
You can also add an index column in Power Query to speed up report performance:

5. High Cardinality Columns and Relationships
Bottlenecks:
- Using columns like Email, UserID, or full timestamps in the model
- Columns with hundreds of thousands of unique values
- Many-to-many relationships on high-cardinality fields
Solution:
- Remove columns not used in reporting
- Split datetime columns into Date and Time where needed
- Replace high-cardinality columns with categorized mappings (e.g., map zip codes to region)
Many-to-many relationships are among the most common bottlenecks; they are very slow and rarely a good idea. Many blogs, training courses and even boot camps recommend materialising a bridge table between the tables in such situations, but the result still behaves like a many-to-many relationship, only with a bridge table in the middle, and it is still slow and suboptimal. The solution below is expert knowledge: when we connect to a live model, we are NOT able to create a physical bridge table between tables, so measures are the only tool at our disposal.
Let’s assume that we have the following data model.

The fct_sales table has daily granularity per product, while the fct_budget table has monthly granularity per product. In this situation, a many-to-many relationship is created between the tables:

When building the report, we do not want to rely on such a many-to-many relationship, so our solution to this problem is the following workaround:
WE DO NOT CREATE A BRIDGE TABLE, WE DO NOT CREATE MANY-TO-MANY RELATIONSHIPS
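Instead, the budget can be brought into the sales context virtually, inside a measure. One common way to do this is TREATAS; a minimal sketch, assuming fct_budget holds a monthly Amount per Product_ID and YearMonth and that a DIM_PRODUCT dimension exists (adjust the names to your own model):
BudgetForSalesContext =
CALCULATE(
    SUM( fct_budget[Amount] ),
    TREATAS( VALUES( DIM_PRODUCT[Product_ID] ), fct_budget[Product_ID] ),
    TREATAS( VALUES( DIM_CALENDAR[YearMonth] ), fct_budget[YearMonth] )
)
The measure transfers the product and month filters from the dimensions to fct_budget at query time, so no physical bridge table and no many-to-many relationship is needed.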

6. Unoptimized Power Query Steps
Bottlenecks:
- Using Table.AddColumn or Table.SelectRows in row-based operations
- Performing merges or joins after load, not at the source
- Not using staging queries (queries referencing other queries)
Solution:
- Fold transformations to SQL or source system (check “View Native Query”)
- Move logic upstream if source is a database (e.g., let SQL do grouping/joins)
- Use buffer functions (Table.Buffer) only when absolutely necessary
7. Too Much Data (Volume Bottleneck)
Bottlenecks:
- Loading raw data with millions of rows (e.g., logs, transactions)
- Pulling more columns than needed
- No data reduction before loading (e.g., years of history no one uses)
Solution:
- Use views or stored procedures to pre-aggregate or filter at source
- Only import columns used in visuals, filters, or measures
- Use aggregated tables alongside detailed tables (composite models)
DAX MEASURES
And most importantly, a separate point outside of the top 7: the same DAX measure can be written in ten different ways, and not every way is optimal.
Bottlenecks:
- Using RELATEDTABLE repeatedly in measures
- Iterators like SUMX, AVERAGEX, RANKX over large tables
- Nested IF, SWITCH, or VAR logic without simplification
Solution:
- Push logic to backend side where possible
- Use relationships and implicit filters instead of FILTER(ALL(…)) (see the second example below)
- Create columns in the source that will eliminate the need for iterating measures (those with X at the end of their name).
For example, create a column in the source that already holds net sales. If your measure looks like this, it is time for a change:
SlowMeasure =
SUMX(
    FCT_SALES,
    FCT_SALES[Sales] - FCT_SALES[TaxAmount] - FCT_SALES[ExcludedSales]
)
FastMeasure =
SUM( FCT_SALES[SalesActual] ) -- all the steps from SUMX are moved to the source, for example to SQL Server
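The same idea applies to the FILTER(ALL(…)) advice above: where a simple column predicate is enough, let CALCULATE apply it directly instead of iterating a whole dimension table. A minimal sketch (the Year column is only illustrative); note that the two forms are not fully identical, because ALL(DIM_CALENDAR) clears every calendar filter while the predicate only overrides Year:
-- Slower: FILTER(ALL(...)) iterates the entire DIM_CALENDAR table
Sales2024Slow =
CALCULATE(
    [SalesEUR],
    FILTER( ALL( DIM_CALENDAR ), DIM_CALENDAR[Year] = 2024 )
)
-- Faster: a simple column predicate that the engine resolves efficiently
Sales2024Fast =
CALCULATE(
    [SalesEUR],
    DIM_CALENDAR[Year] = 2024
)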
That’s a lot. Do I really have to remember everything every time, and are there really no tools to help me with this? Time is money.
Of course there are. There are even great free and secure tools that automatically analyse your data model, check it against best practices, and highlight unused columns and other issues that slow it down.
One of our favourite tools is MeasureKiller. It is a free external tool that connects to the report and runs from Power BI:

Link to MeasureKiller: Measure Killer | Brunner BI
After opening the report and launching MeasureKiller, the following window appears:

Please click on “Run” and magic happens:

What can you see in the attached screenshot?
- We have as much as 24% of unused columns. You might say that it’s only 4.3 MB, but think about it: if you have a huge report and MeasureKiller shows you that even 10% of it is unused, simply removing it will improve the performance of your model.
- All columns highlighted in red indicate that they are not in use.

But perhaps the most important feature is the best practice analyser. With a single click, I can see that the sample data model is terrible. The incorrect data type mentioned above is marked as a very important error. Some errors are less important, but the fact that they occur is worrying.
Summary
This article presents the most important bottlenecks that cause performance issues in Power BI reports, together with proposed solutions, so that readers can optimise their reports and significantly reduce the time needed to render them.
It also presents an external tool which, as the icing on the cake, highlights the most important bottlenecks and prompts reflection on whether our data model is optimal or whether we should sit down and improve it.