Building a Diagnostic AI Agent on Microsoft Fabric: A Structured, Tool-Driven Implementation


In the first part of this series, we introduced the idea of moving beyond dashboards to build diagnostic AI agents capable of uncovering the why behind business performance shifts. That article focused on architectural principles and the role of AWS Strands in enabling controlled agentic behavior. 

In this follow-up, we take a more detailed look at how we applied the same diagnostic approach inside a Microsoft Fabric environment. The objective remained unchanged: create an agent that can produce consistent, evidence-backed, and domain-aware explanations for revenue fluctuations—without relying on manual analysis.

What follows is a technical walkthrough of the system we built, the constraints that shaped it, and the design decisions required to ensure reliability, correctness, and explainability in production-like workloads. 

Why Fabric Required a Different Approach 

Our previous example used an Azure-based data warehouse. However, many enterprises today rely heavily on Microsoft Fabric and Power BI as their analytical backbone. This was the case for the client in this scenario: a multi-location services business with detailed operational data stored in a Fabric Lakehouse. 

They had dashboards that clearly showed where revenue declined, but lacked the analytical bandwidth to investigate why. Analysts were constrained by time and volume, and business teams needed explanations, not charts. 

To address this gap, we connected our diagnostic agent directly to the SQL Analytics endpoint of the Fabric Lakehouse, instead of querying the Power BI semantic model. This let the agent work with raw, granular data while still respecting business logic defined in Power BI measures. 

The agent’s job was straightforward: ingest data from Fabric, analyze patterns, correlate signals, and identify plausible drivers behind revenue movements—autonomously and with auditability. 

The Data Landscape: Four Core Tables 

The client’s data model included four domain-critical tables in the Fabric Lakehouse: 

1. Branches 

A dimensional table listing each business location, associated region, and geographic attributes. 

2. Calendar 

A date table with fiscal periods, holidays, and standard temporal attributes. 

3. Weather Trends Forecast 

Daily temperature, Cooling Degree Days (CDD), and Heating Degree Days (HDD) by postal code. 

4. Booking Summary 

Operational booking metrics by branch, service line, and date—including TotalCalls and booking labels. 

These datasets form the minimum context required for the agent to move beyond pattern recognition and into causal inference heuristics—weather, time, location, and operational behavior. 

Tooling the Agent for Reliable, Controlled Execution 

In line with the design philosophy from Part 1, the agent was not allowed to infer internal schema or execute arbitrary metadata queries. Instead, it was given a controlled, well-defined set of tools that enforce correctness and constrain uncertainty. 

The tooling fell into three groups: SQL tools, data-science tools, and PII-handling tools.

SQL Tools: Eliminating Schema Guesswork 

1. get_table_schema(table_name) 

This tool retrieves column names and data types for any table in the Fabric Lakehouse. Purpose: prevent hallucinated columns and guarantee that SQL queries are grounded in actual metadata. 

2. run_sql(query) 

Executes agent-generated SQL queries with secret-protected database credentials. Purpose: isolate credentials and ensure that only validated SQL passes through the execution layer. 

These SQL tools create a guardrail system in which every query is observable, reproducible, and traceable. 
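The guardrail idea can be sketched as two small helpers: a metadata-query builder behind get_table_schema, and a validator that gates everything passed to run_sql. The function names mirror the article, but the specific checks shown here are our own assumptions, not the production implementation.

```python
import re

# Only plain identifiers may name a table; this blocks injection attempts.
_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")
# Objects and verbs the agent's own SQL must never touch.
_FORBIDDEN = ("information_schema", "sys.", "insert ", "update ",
              "delete ", "drop ", "alter ")

def schema_query(table_name: str) -> str:
    """Build the metadata lookup behind get_table_schema.

    The tool itself is allowed to read INFORMATION_SCHEMA;
    the agent's generated queries are not (see validate_sql)."""
    if not _IDENTIFIER.match(table_name):
        raise ValueError(f"invalid table name: {table_name!r}")
    return (
        "SELECT COLUMN_NAME, DATA_TYPE "
        "FROM INFORMATION_SCHEMA.COLUMNS "
        f"WHERE TABLE_NAME = '{table_name}'"
    )

def validate_sql(query: str) -> str:
    """Gate agent-generated SQL before run_sql executes it."""
    q = query.strip().lower()
    if not q.startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    if any(token in q for token in _FORBIDDEN):
        raise ValueError("query touches a forbidden object or verb")
    return query
```

Because every query flows through these two choke points, each one can be logged verbatim, which is what makes the runs observable and reproducible.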

Data Science Tools: Turning Raw Data Into Diagnostics 

Once a query returns data, analysis shifts into Python-based tools tailored for statistical inference. 

1. analyze_correlation 

Performs Pearson correlation to identify features most strongly associated with a target variable. Example: Assessing whether revenue decreases align with temperature anomalies or holiday periods. 

2. summarize_column 

Computes mean, min, max, variance, and skewness—useful for spotting branches with episodic spikes. 

3. measure_variation 

Calculates mean and standard deviation, highlighting operational volatility by branch or service line. 

4. binned_statistic_tool 

Generates temperature-based distribution bins (e.g., 40–50°F) to quantify demand under different weather conditions. 

Each of these tools gives the agent the ability to contextualize anomalies rather than simply report them. 
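To make two of these tools concrete, here is a minimal stdlib sketch of analyze_correlation (Pearson) and binned_statistic_tool; a production version would more likely lean on pandas or scipy, so treat the signatures as illustrative assumptions.

```python
from math import sqrt
from collections import defaultdict

def analyze_correlation(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def binned_statistic_tool(temps, values, width=10):
    """Mean of `values` per fixed-width temperature bin (e.g. 40-50F)."""
    bins = defaultdict(list)
    for t, v in zip(temps, values):
        lo = int(t // width) * width
        bins[(lo, lo + width)].append(v)
    return {b: sum(vs) / len(vs) for b, vs in sorted(bins.items())}
```

For example, correlating daily bookable calls against CDD would surface whether cooling demand tracks revenue, while the binned view shows how bookings behave in each temperature band.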

PII Tool: Enforcing Data Governance 

mask_pii 

Uses Azure Language PII detection to sanitize sensitive data. 

This step ensures the agent can work freely with real datasets without violating compliance boundaries. 
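The masking step itself is simple once the detection service has returned entity spans. In production those spans come from Azure AI Language's PII detection (each recognized entity carries an offset and length); the helper below only shows the redaction half, and its name is our own.

```python
def mask_spans(text: str, spans: list[tuple[int, int]], mask: str = "*") -> str:
    """Replace each (offset, length) span in `text` with mask characters.

    `spans` would normally be derived from Azure AI Language's PII
    detection results; here they are supplied directly for illustration."""
    out = list(text)
    for offset, length in spans:
        out[offset:offset + length] = mask * length
    return "".join(out)
```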

A Modular System Prompt for Operational Consistency 

Large system prompts often become brittle over time. To avoid this, we broke the prompt into modular components: 

  • Core System Prompt 
  • Business Rules (markdown) 
  • Report Template (markdown) 

The core prompt defines the agent's identity (a data analyst), its operational constraints, required behaviors, and plan-before-action logic; the business rules and report template are loaded from their markdown files and appended at run time.
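Assembling the prompt from its components can be as simple as the sketch below. The file names follow the article (business_rules.md, report_template.md); the section delimiters are our own convention, not the production format.

```python
from pathlib import Path

def build_system_prompt(core: str, rules_path: str, template_path: str) -> str:
    """Join the core prompt with the business-rules and report-template
    markdown files into one system prompt."""
    rules = Path(rules_path).read_text(encoding="utf-8")
    template = Path(template_path).read_text(encoding="utf-8")
    return "\n\n".join([
        core,
        "## Business Rules\n" + rules,
        "## Report Template\n" + template,
    ])
```

Keeping the pieces in separate files means analysts can revise business rules without touching the core prompt, which is what keeps a large prompt from becoming brittle.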

Key Operational Rules Included: 

1. Execution plan first: The agent must produce a 3–8 step plan before any action. This makes reasoning transparent and tool calls predictable. 

2. No schema inference: The agent is explicitly forbidden from exploring INFORMATION_SCHEMA or similar constructs. This eliminates unpredictable SQL behaviors. 

3. Evidence and confidence required: Every conclusion must include evidence and an explicit confidence score—critical for business trust. 

4. Mandatory markdown reporting format: Final outputs follow a fixed report structure, improving readability and auditability across different queries. 

Business Rules: Translating Power BI Logic Into Natural Language 

Because the agent connects directly to SQL tables rather than Power BI measures, we explicitly defined business logic within business_rules.md.

Example: Power BI DAX measure for Bookable Calls

Bookable_Calls =
CALCULATE(
    SUM(Bookings[TotalCalls]),
    Bookings[Label] IN {"Positive", "Negative"}
)

Translated to natural language: 

  • “Bookable calls is the sum of TotalCalls where Label is either ‘Positive’ or ‘Negative’.” 

This explicitly encodes business semantics, ensuring the agent interprets the data the same way analysts do. 
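Given that rule, the SQL the agent emits is straightforward. The snippet below checks a plausible translation against a tiny in-memory stand-in for the bookings table (table and column names follow the article; the sample rows are invented):

```python
import sqlite3

# In-memory stand-in for the Booking Summary data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Bookings (TotalCalls INTEGER, Label TEXT)")
con.executemany(
    "INSERT INTO Bookings VALUES (?, ?)",
    [(10, "Positive"), (5, "Negative"), (7, "Spam")],
)

# The query the agent would generate from the natural-language rule:
bookable = con.execute(
    "SELECT SUM(TotalCalls) FROM Bookings "
    "WHERE Label IN ('Positive', 'Negative')"
).fetchone()[0]
# The 'Spam' row is excluded, matching the DAX measure's filter.
```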

Report Template: Enforcing a Standardized Output Format

The report_template.md file defines a consistent way to structure the agent's conclusion: 

  • Query 
  • Analysis Summary 
  • Key Findings 
  • Supporting Evidence 
  • Interpretation 
  • Conclusion 
  • Confidence Level 
  • Recommendations 

Regardless of whether the agent is analyzing weather anomalies or geographic volatility, the output remains structured and inspection-friendly. 

This template ensures downstream stakeholders receive uniform, high-quality reports—an essential trait when scaling diagnostic agents across multiple teams. 
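A minimal report_template.md consistent with the section list above might look like the following; the placeholder syntax is our assumption, not the client's actual file.

```markdown
# Diagnostic Report

## Query
{original user question}

## Analysis Summary
{two to three sentence overview}

## Key Findings
- {finding, with the metric that supports it}

## Supporting Evidence
- {query result or statistic backing each finding}

## Interpretation
{what the evidence suggests and what it rules out}

## Conclusion
{most plausible driver of the change}

## Confidence Level
{High | Medium | Low, with rationale}

## Recommendations
- {suggested next action}
```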

What We Observed From the Agent 

1. Strong Contextual Understanding 

The agent demonstrated an ability to incorporate domain semantics—for instance, interpreting HDD (Heating Degree Days) and CDD (Cooling Degree Days) without additional training. 
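For readers unfamiliar with the terms: degree days measure how far a day's mean temperature falls from a base temperature, conventionally 65°F in the US (a client may use a different base). A one-line definition:

```python
BASE_F = 65.0  # conventional US base temperature; assumption for this sketch

def degree_days(mean_temp_f: float) -> tuple[float, float]:
    """Return (HDD, CDD) for a day given its mean temperature in F.

    HDD accumulates when it is colder than the base (heating demand);
    CDD accumulates when it is warmer (cooling demand)."""
    hdd = max(0.0, BASE_F - mean_temp_f)
    cdd = max(0.0, mean_temp_f - BASE_F)
    return hdd, cdd
```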

2. Consistent, Reproducible Output 

Because the tools, rules, and templates are tightly controlled, reports were uniform across a wide range of user queries. 

3. Accurate Logical Confinement 

The combination of schema tools + rule-based guidance prevented common LLM failure patterns, such as: 

  • guessing column names 
  • inventing joins 
  • applying incorrect business definitions 

The result is a system that behaves deterministically within the allowed space, while still benefiting from the agent's analytical reasoning capabilities. 

Conclusion 

By connecting directly to Microsoft Fabric’s SQL Analytics endpoint and combining it with a disciplined toolset, modular prompts, and explicit business logic, we created a diagnostic AI agent capable of delivering consistent, evidence-backed insights at scale. 

The design choices—restricted metadata access, schema-controlled SQL execution, structured analytical tooling, and standardized reporting—allowed the agent to operate reliably in a real enterprise data environment. 

This implementation demonstrates that diagnostic AI agents can move beyond conceptual promise. With the right guardrails, tooling, and domain grounding, they can become dependable partners in uncovering the why behind business performance shifts, even in data-dense environments like Microsoft Fabric. 

Work With CloudIQ 

If your organization is exploring diagnostic AI, agentic workflows, or enterprise-grade analytics modernization, CloudIQ can help you design, build, and operationalize solutions that deliver real, measurable impact. 

 
Bring us your most complex data challenges—and challenge us to solve them. 
