10 September 2025

Metadata Frameworks in Microsoft Fabric: Logging with Eventhouse (Part 2)


Following up on the previous post about YAML-based metadata frameworks, let’s talk about logging – the part of the framework that often stays invisible until something fails. In Part 1, YAML helped us replace config tables with cleaner, version-controlled definitions. Now, logging ensures we have the visibility to understand what’s really happening inside our Fabric pipelines.

Because without proper logs, troubleshooting a failed run is a bit like fixing a car in the dark – you know something broke, but you have no idea where the problem is.


Why Eventhouse for Logging?

When we switched to YAML for configurations, we moved away from storing pipeline metadata in the warehouse or SQL config tables. For logging, we deliberately chose Eventhouse and its KQL database.

Why? Because KQL is purpose-built for this kind of workload:

  • Optimised for ingestion → it handles high-volume, append-only data (a natural fit for logs).
  • Efficient querying → KQL is designed for quickly scanning large datasets, filtering, and aggregating by time.
  • Real-time analytics → logs can drive actions (e.g., triggering Fabric Activator events).
  • Time-series support → perfect for tracking job runs and spotting patterns.

This is exactly how Azure Monitor and Log Analytics work under the hood, so using KQL in Fabric for logging isn’t reinventing the wheel; it’s adopting a proven pattern within the Fabric ecosystem.
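To make that concrete, here is a minimal sketch of the kind of time-bucketed query this enables, read back into Spark via the Kusto connector. The Eventhouse URI, database name, and PipelineLogs table (with its Status/StartTime/PipelineName columns) are placeholders for illustration, not names taken from the framework:

```python
# Sketch: query Eventhouse logs from a Fabric notebook, where `spark` and
# `notebookutils` are predefined. All names below are hypothetical.
KUSTO_URI = "https://<your-eventhouse>.kusto.fabric.microsoft.com"
DATABASE = "LoggingDB"  # hypothetical KQL database

# KQL shines at exactly this: time filtering plus aggregation over append-only logs.
query = """
PipelineLogs
| where StartTime > ago(24h)
| summarize Runs = count(), Failures = countif(Status == 'Failed')
    by bin(StartTime, 1h), PipelineName
| order by StartTime desc
"""

df = (spark.read
        .format("com.microsoft.kusto.spark.synapse.datasource")
        .option("kustoCluster", KUSTO_URI)
        .option("kustoDatabase", DATABASE)
        .option("kustoQuery", query)
        .option("accessToken", notebookutils.credentials.getToken(KUSTO_URI))
        .load())
```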

That said, the Eventhouse approach is not without limits. On lower Fabric SKUs, we’ve seen throttling under moderate to high concurrency. Retry mechanisms (like exponential backoff) can help, but they’re still workarounds rather than a complete fix.
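As an illustration, a backoff helper of that kind could look like the sketch below; the retry count and delays are hypothetical, not tuned values from the framework:

```python
import random
import time

def write_with_backoff(write_fn, max_retries=5, base_delay_s=2.0):
    """Retry a throttled Eventhouse write with exponential backoff plus jitter.
    Retry counts and delays here are illustrative only."""
    for attempt in range(max_retries):
        try:
            return write_fn()
        except Exception:  # ideally narrow this to the connector's throttling error
            if attempt == max_retries - 1:
                raise
            # Delay doubles each attempt; jitter avoids synchronised retries.
            time.sleep(base_delay_s * (2 ** attempt) + random.uniform(0, 1))

# Usage: write_with_backoff(lambda: df.write.format(...).save())
```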


Why Not Warehouse or Lakehouse?

While logs could technically be stored in a Warehouse or Lakehouse, in practice:

  • Fabric Warehouse write support from Spark is limited today (there is no native connector; only pyodbc or pipeline workarounds).
  • Lakehouse tables can receive logs directly from notebooks, but pipelines don’t support writing to Lakehouse tables, which breaks consistency across the framework.
  • Concurrency → Eventhouse is simply better at handling multiple simultaneous writes compared to Warehouse or Lakehouse.

Even if the logging volume is “not that high,” KQL’s concurrency handling and simple write API make it the most practical choice.


How Logging Works in the Framework

We capture logs at two different levels: ingestion pipeline execution details and overall orchestration.

  1. Pipeline logging

After each ingestion pipeline finishes, a generic pipeline step calls a KQL activity that logs:

  • Source table name
  • Destination object/location
  • Rows ingested
  • Start/end timestamps
  • Duration, status, and any custom metrics
  2. Notebook logging (orchestration)

A single orchestration notebook reads/parses the YAML, builds the DAG, and executes tasks.
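The post doesn't show the executor itself, but one way to run such a DAG from a single Fabric notebook is notebookutils.notebook.runMultiple; the sketch below assumes hypothetical task names, with each activity pointing at the wrapper notebook described next:

```python
# Hypothetical DAG assembled from the parsed YAML; the task names and the
# wrapper notebook name ("nb_task_wrapper") are placeholders.
dag = {
    "activities": [
        {"name": "ingest_customers", "path": "nb_task_wrapper",
         "args": {"task": "ingest_customers"}, "dependencies": []},
        {"name": "ingest_orders", "path": "nb_task_wrapper",
         "args": {"task": "ingest_orders"}, "dependencies": []},
        {"name": "build_dim_customer", "path": "nb_task_wrapper",
         "args": {"task": "build_dim_customer"},
         "dependencies": ["ingest_customers", "ingest_orders"]},
    ]
}

# Executes the activities respecting the declared dependencies.
results = notebookutils.notebook.runMultiple(dag)
```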

Instead of adding logging code inside every task notebook, we use a notebook wrapper.

  • The wrapper logs start/end and status for each task.
  • It safely catches errors thrown anywhere inside the called notebook (any cell).
  • It records the process name, execution status (started/succeeded/failed), start/end timestamps, and the error message (when applicable).
  • Because logging is centralised, no changes are required inside individual task notebooks, and failure info is consistent (which also helps with resume-from-failure scenarios).

The wrapper writes each log entry to Eventhouse using the Kusto Spark Connector.
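The original snippet isn't reproduced here; the sketch below shows what such a wrapper and write can look like, assuming a Fabric notebook (where spark and notebookutils are predefined) and placeholder cluster, database, and table names:

```python
from datetime import datetime, timezone

# Placeholders: substitute your Eventhouse query URI, KQL database, and log table.
KUSTO_URI = "https://<your-eventhouse>.kusto.fabric.microsoft.com"
DATABASE = "LoggingDB"        # hypothetical
TABLE = "OrchestrationLogs"   # hypothetical

def log_task(process, status, started, ended=None, error=None):
    """Append a single log row to Eventhouse via the Kusto Spark connector."""
    schema = ("ProcessName string, Status string, StartTime timestamp, "
              "EndTime timestamp, ErrorMessage string")
    df = spark.createDataFrame([(process, status, started, ended, error)], schema)
    (df.write
       .format("com.microsoft.kusto.spark.synapse.datasource")
       .option("kustoCluster", KUSTO_URI)
       .option("kustoDatabase", DATABASE)
       .option("kustoTable", TABLE)
       .option("accessToken", notebookutils.credentials.getToken(KUSTO_URI))
       .mode("append")
       .save())

def run_task(task_name, timeout_s=3600):
    """Wrapper: log the start, run the task notebook, log the outcome."""
    started = datetime.now(timezone.utc)
    log_task(task_name, "started", started)
    try:
        notebookutils.notebook.run(task_name, timeout_s)
        log_task(task_name, "succeeded", started, ended=datetime.now(timezone.utc))
    except Exception as exc:  # catches failures from any cell of the called notebook
        log_task(task_name, "failed", started, ended=datetime.now(timezone.utc),
                 error=str(exc))
        raise
```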

This gives us granular ingestion logs from pipelines and consistent, centralised task logs from the orchestration layer — all in Eventhouse for analysis and troubleshooting.


Wrapping Up

Logging isn’t just a side feature; it’s the backbone of reliable pipelines. By choosing Eventhouse, we align with Fabric’s strengths — specifically, KQL for ingestion, analysis, and time-series queries — while maintaining consistency in both pipeline and notebook logging.

Other engines can also be considered depending on needs:

  • Lakehouse / Warehouse → Both rely on Delta tables and struggle with high-concurrency writes, although they may fit audit-style or low-frequency logging.
  • SQL Database (currently in preview) → Supports structured relational inserts, but CU consumption makes it less attractive for high-volume operational logging, especially in lower SKUs.

For most scenarios, Eventhouse remains the natural fit: scalable ingestion, real-time visibility, and built-in time-series analysis — exactly what operational logging needs.


In the next part of this series, we’ll look at how DevOps pipelines were set up for YAML deployment in Fabric, covering version control, environment promotion, and approval workflows.

Author

Rui Francisco Gonçalves


Senior Specialist
