Introduction
The September 2025 Fabric update brings a wide range of enhancements: Oracle and Google BigQuery mirroring that speeds the path to analytics, Dataflows Gen2 improvements that make ingestion noticeably cheaper, and much more. In this article, we unpack the features we're most excited about in the latest Fabric and Power BI updates and why they matter.
Fabric updates
Mirroring for Oracle and Google BigQuery
Mirroring continues to expand and now supports two new sources: Oracle and Google BigQuery!
Mirroring in Microsoft Fabric lets you replicate data from external systems into OneLake with low latency. Once mirrored, the data lands in open Delta tables and is immediately available across Fabric experiences for data engineering, data science, and more. With the September release, Mirroring now supports Oracle and Google BigQuery in public preview. We’re excited about this since these are among the most common enterprise systems sitting outside the Microsoft stack and bringing them into OneLake in near real-time removes a lot of plumbing and accelerates time-to-insight.
Regarding cost, Microsoft prices Oracle and BigQuery mirroring with a usage-aligned model: replica storage is free up to a capacity-based allowance of 1 TB of mirroring storage per capacity unit (e.g., an F64 capacity gives you 64 TB). Most ongoing spend comes from accessing the mirrored Delta tables: requests to OneLake consume capacity, and compute for querying via SQL or Spark is billed at the normal rates.
Dataflows Gen2 are now cheaper
For Dataflows Gen2, we're most excited about the new two-tier pricing model that Microsoft introduced. According to Microsoft, it makes evaluation both cheaper and more predictable: 12 CU for the first 10 minutes (about 25% lower than before) and 1.5 CU after 10 minutes (about 90% lower than before). And we're already seeing a tangible impact on the CU consumption of Dataflows Gen2 at our clients!
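To make the two-tier model concrete, here is an illustrative calculation, assuming the published figures are consumption rates (CU-seconds per second of run time): a hypothetical 30-minute refresh would consume

```latex
\underbrace{10 \times 60 \times 12}_{\text{first 10 minutes}}
+ \underbrace{20 \times 60 \times 1.5}_{\text{remaining 20 minutes}}
= 7{,}200 + 1{,}800
= 9{,}000 \ \text{CU-seconds}
= 2.5 \ \text{CU-hours}
```

Note that the long tail of the run is billed at the much lower rate, which is why long-running dataflows benefit the most from the change.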
Complementing the cost drop is the new Modern Query Evaluation Engine, which is claimed to improve the performance of dataflow runs significantly. Although connector coverage is still limited, it already includes SharePoint, which is typically the source we rely on when we want to ingest an Excel file.
Finally, on the modeling side, newly supported schema mapping allows authors to choose an output schema, closing a gap where tables previously landed in the default schema.
Copy job activity (preview): movement and transformations in one place
You can now run a Copy job inside a Fabric Data Factory pipeline, so data movement sits in the same place as your transformations, notifications, and error handling. This also means that you can reuse full or incremental/CDC patterns and open a per-activity monitoring link to see progress and diagnose failures without leaving the pipeline.
Practically, this reduces context switching and makes orchestration cleaner: movement, transformations, and alerting live in one place that you can schedule, govern, and troubleshoot. It's great for day-to-day ingestion patterns, where the activity-level monitor shortens time-to-fix when a run fails mid-pipeline.
Apache Airflow Job (preview): code-first orchestration inside Fabric
Managed Airflow (preview) brings Apache Airflow as a hosted service inside Fabric ("Apache Airflow Job"), so you can author, schedule, and monitor Python DAGs alongside Data Factory pipelines and notebooks. It includes a built-in DAG editor, scheduling and monitoring, retries, sensors, branching, a preinstalled operator to trigger Fabric items (e.g., run a Fabric notebook from a DAG), and Git integration for CI/CD.
This is a great choice when code-first workflows are preferred, such as for feature engineering, model training, batch inference, or dbt-style patterns, and you want a managed runtime with Fabric-native hooks (notebooks/pipelines as tasks).
Power BI updates
Calendar-based time intelligence (preview)
You can now define custom calendars (e.g., fiscal, 4-5-4, week-based) in the model and use native DAX time intelligence, such as TOTALMTD and SAMEPERIODLASTYEAR, plus new week functions like TOTALWTD and PREVIOUSWEEK, directly against that calendar. Because the functions follow your calendar's rules, you no longer need helper/offset columns or long CALCULATE/FILTER patterns to "force" fiscal or retail periods, which should greatly reduce the complicated date logic used to account for these calendars.
This results in cleaner semantic models, fewer custom workarounds, and true alignment to business time (retail weeks, shifted fiscal years).
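As a sketch of what this enables, consider the measures below. The table and measure names are our own illustrations, and we assume the new week functions mirror the signatures of their month/quarter counterparts; with a week-based calendar defined in the model, these measures would follow that calendar's week boundaries rather than standard dates.

```dax
-- Illustrative only: [Total Sales] and 'Date'[Date] are assumed names.
-- Week-to-date total, following the model-defined (e.g., 4-5-4) calendar
Sales WTD = TOTALWTD ( [Total Sales], 'Date'[Date] )

-- Comparison against the previous week of that same calendar
Sales Prev Week = CALCULATE ( [Total Sales], PREVIOUSWEEK ( 'Date'[Date] ) )
```

The point is that no offset columns or hand-rolled FILTER logic appear anywhere: the calendar definition in the model carries the business rules.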
Download of XMLA-altered models
Until now, if a semantic model was modified via the XMLA endpoint, you couldn't download its PBIX from the Power BI Service. If no PBIX was available, all further edits had to be done through XMLA instead of Power BI Desktop, which was inconvenient for self-service teams who find development easier in Desktop. That limitation is now gone!
One caveat is that not all models are eligible yet: models with incremental refresh partitions still can't be downloaded at this time, though Microsoft promises to address this in a future update.
Data-only vs schema-only refresh in Desktop
You can now choose Refresh data only (no schema sync) or Sync schema only (no data refresh) in Power BI Desktop. This is especially useful when source tables/views evolve, but you don’t want immediate changes pulled into the model.
This gives model authors tighter change control, helps keep production models stable during routine data updates, and avoids unintended schema drift that can break measures or relationships. One thing to watch out for is that deferring schema sync can hide real upstream changes that may cause failures later. The feature is also particularly useful in a development environment when working with large tables: the schema can be updated without having to pull a large amount of data locally.
DAX User-Defined Functions
One of the most interesting recent additions, with the potential to reshape how BI teams design semantic models, is User-Defined Functions (UDFs). We're so excited about this feature that we dedicated a whole insight to it, which you can find here.
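As a brief taste (the dedicated insight goes deeper), here is a minimal sketch of the preview's FUNCTION syntax; the function name, parameters, and measure names are our own illustrations:

```dax
-- Defined once on the model, reusable from any measure (names are illustrative)
FUNCTION SafePct = ( numerator, denominator ) =>
    DIVIDE ( numerator, denominator, 0 )
```

A measure can then simply call it, e.g. Margin % = SafePct ( [Profit], [Revenue] ), so logic like divide-by-zero handling lives in one place instead of being copy-pasted across dozens of measures.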
Conclusion
We're excited to start using these features with our customers and will keep monitoring how they evolve. If you have any questions, please feel free to reach out to any of our team members; they'd be happy to help you!