{"id":3079,"date":"2025-08-09T22:30:37","date_gmt":"2025-08-09T22:30:37","guid":{"rendered":"https:\/\/violethoward.com\/new\/from-terabytes-to-insights-real-world-ai-obervability-architecture\/"},"modified":"2025-08-09T22:30:37","modified_gmt":"2025-08-09T22:30:37","slug":"from-terabytes-to-insights-real-world-ai-obervability-architecture","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/from-terabytes-to-insights-real-world-ai-obervability-architecture\/","title":{"rendered":"From terabytes to insights: Real-world AI obervability architecture"},"content":{"rendered":" \r\n
Consider maintaining and developing an e-commerce platform that processes millions of transactions every minute, generating vast amounts of telemetry data, including metrics, logs and traces, across multiple microservices. When critical incidents occur, on-call engineers face the daunting task of sifting through an ocean of data to surface the relevant signals and insights: the equivalent of searching for a needle in a haystack.

This makes observability a source of frustration rather than insight. To alleviate this major pain point, I started exploring a solution that uses the Model Context Protocol (MCP) to add context and draw inferences from logs and distributed traces. In this article, I'll outline my experience building an AI-powered observability platform, explain the system architecture and share actionable insights learned along the way.

Why is observability challenging?

In modern software systems, observability is not a luxury; it's a basic necessity. The ability to measure and understand system behavior is foundational to reliability, performance and user trust. As the saying goes, "What you cannot measure, you cannot improve."

Yet achieving observability in today's cloud-native, microservice-based architectures is more difficult than ever. A single user request may traverse dozens of microservices, each emitting logs, metrics and traces. The result is an abundance of telemetry data:

• Tens of terabytes of logs per day
• Tens of millions of metric data points and pre-aggregates
• Millions of distributed traces
• Thousands of correlation IDs generated every minute

The challenge is not only the data volume, but the data fragmentation. According to New Relic's 2023 Observability Forecast Report, 50% of organizations report siloed telemetry data, with only 33% achieving a unified view across metrics, logs and traces.

Logs tell one part of the story, metrics another, traces yet another. Without a consistent thread of context, engineers are forced into manual correlation, relying on intuition, tribal knowledge and tedious detective work during incidents.

Because of this complexity, I started to wonder: How can AI help us get past fragmented data and offer comprehensive, useful insights? Specifically, can we make telemetry data intrinsically more meaningful and accessible for both humans and machines using a structured protocol such as MCP? This project's foundation was shaped by that central question.

Understanding MCP: A data pipeline perspective

Anthropic defines MCP as an open standard that allows developers to create a secure, two-way connection between data sources and AI tools. This structured data pipeline includes:

• Contextual ETL for AI: Standardizing context extraction from multiple data sources.
• Structured query interface: Allowing AI queries to access data layers that are transparent and easily understandable.
• Semantic data enrichment: Embedding meaningful context directly into telemetry signals.

This has the potential to shift platform observability away from reactive problem solving and toward proactive insights.
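To make the idea of semantic data enrichment concrete, here is a minimal illustration of what a context-enriched log record could look like once standardized metadata is embedded at emission time. The shape and field values below are my own simplified sketch, not part of the MCP specification or the platform's actual schema:

# Illustrative only: a log record carrying a standardized context block that
# downstream layers (the MCP server and the AI engine) can index and query.
enriched_log = {
    "timestamp": "2025-05-02T14:31:07.120000",
    "level": "ERROR",
    "message": "Payment authorization failed",
    "service": "checkout",
    "context": {
        "request_id": "req-9f3c2a1b",   # joins this log with traces and metrics
        "order_id": "order-1c77d0aa",
        "user_id": "user-184520",
        "payment_method": "credit_card",
        "service_version": "v1.0.0",
    },
}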

System architecture and data flow

Before diving into the implementation details, let's walk through the system architecture.

        \"\"
        Architecture diagram for the MCP-based AI observability system<\/em><\/figcaption><\/figure>\n\n\n\n

In the first layer, we generate context-enriched telemetry data by embedding standardized metadata into the telemetry signals: distributed traces, logs and metrics. In the second layer, the enriched data is fed into the MCP server, which indexes it, adds structure and exposes the context-enriched data to clients through APIs. Finally, the AI-driven analysis engine consumes the structured, enriched telemetry for anomaly detection, correlation and root-cause analysis to troubleshoot application issues.

This layered design ensures that AI and engineering teams receive context-driven, actionable insights from telemetry data.

Implementation deep dive: A three-layer system

Let's explore the actual implementation of our MCP-powered observability platform, focusing on the data flows and transformations at each step.

Layer 1: Context-enriched data generation

First, we need to ensure our telemetry data contains enough context for meaningful analysis. The core insight is that data correlation needs to happen at creation time, not analysis time.

import json
import logging
import uuid

from opentelemetry import trace

# Assumes OpenTelemetry tracing and standard logging are configured elsewhere
tracer = trace.get_tracer(__name__)
logger = logging.getLogger(__name__)


def process_checkout(user_id, cart_items, payment_method):
    """Simulate a checkout process with context-enriched telemetry."""

    # Generate correlation IDs
    order_id = f"order-{uuid.uuid4().hex[:8]}"
    request_id = f"req-{uuid.uuid4().hex[:8]}"

    # Initialize the context dictionary that will be applied to every signal
    context = {
        "user_id": user_id,
        "order_id": order_id,
        "request_id": request_id,
        "cart_item_count": len(cart_items),
        "payment_method": payment_method,
        "service_name": "checkout",
        "service_version": "v1.0.0",
    }

    # Start an OTel trace that carries the same context as span attributes
    with tracer.start_as_current_span(
        "process_checkout",
        attributes={k: str(v) for k, v in context.items()},
    ) as checkout_span:

        # Logging using the same context
        logger.info("Starting checkout process", extra={"context": json.dumps(context)})

        # Context propagation into the child span
        with tracer.start_as_current_span("process_payment"):
            # Process payment logic...
            logger.info("Payment processed", extra={"context": json.dumps(context)})

Code 1. Context enrichment for logs and traces

This approach ensures that every telemetry signal (logs, metrics, traces) contains the same core contextual data, solving the correlation problem at the source.
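Code 1 covers logs and traces; metrics can carry the same context as attributes. Below is a small sketch of how the identical correlation keys could be attached to an OpenTelemetry metric data point. The instrument name and the record_checkout_latency helper are illustrative additions, not part of the original platform code:

from opentelemetry import metrics

meter = metrics.get_meter(__name__)
checkout_latency = meter.create_histogram("checkout.latency", unit="ms")

def record_checkout_latency(duration_ms: float, context: dict) -> None:
    """Record a latency data point tagged with the same correlation context
    used for logs and traces, so all three signals join on request_id."""
    checkout_latency.record(
        duration_ms,
        attributes={
            "request_id": context["request_id"],
            "order_id": context["order_id"],
            "service_name": context["service_name"],
        },
    )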

Layer 2: Data access through the MCP server

Next, I built an MCP server that transforms raw telemetry into a queryable API. The core data operations here involve the following:

1. Indexing: Creating efficient lookups across contextual fields
2. Filtering: Selecting relevant subsets of telemetry data
3. Aggregation: Computing statistical measures across time windows
@app.post("/mcp/logs", response_model=List[Log])
def query_logs(query: LogQuery):
    """Query logs with specific filters."""
    results = LOG_DB.copy()

    # Apply contextual filters
    if query.request_id:
        results = [log for log in results if log["context"].get("request_id") == query.request_id]

    if query.user_id:
        results = [log for log in results if log["context"].get("user_id") == query.user_id]

    # Apply time-based filters
    if query.time_range:
        start_time = datetime.fromisoformat(query.time_range["start"])
        end_time = datetime.fromisoformat(query.time_range["end"])
        results = [log for log in results
                   if start_time <= datetime.fromisoformat(log["timestamp"]) <= end_time]

    # Sort by timestamp, newest first
    results = sorted(results, key=lambda x: x["timestamp"], reverse=True)

    return results[:query.limit] if query.limit else results

Code 2. Data transformation using the MCP server
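Code 2 presupposes a FastAPI application, an in-memory LOG_DB store and Pydantic models for Log and LogQuery. A minimal sketch of what those supporting definitions (and the imports Code 2 relies on) might look like, with field names inferred from the handler rather than taken from the original code, is:

from datetime import datetime
from typing import Dict, List, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
LOG_DB: List[dict] = []  # in-memory store; a production system would use a log backend

class Log(BaseModel):
    timestamp: str            # ISO-8601 string, as parsed in query_logs
    level: str
    message: str
    service: str
    context: Dict[str, str]   # the standardized context block from Layer 1

class LogQuery(BaseModel):
    request_id: Optional[str] = None
    user_id: Optional[str] = None
    time_range: Optional[Dict[str, str]] = None  # {"start": ..., "end": ...}
    limit: Optional[int] = 100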

This layer transforms our telemetry from an unstructured data lake into a structured, query-optimized interface that an AI system can efficiently navigate.
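As a usage illustration, an analysis client (or the AI engine itself) could retrieve every log tied to a single request in one call. The host, port and identifier below are placeholders; only the endpoint shape follows Code 2:

import requests

# Hypothetical client call against the MCP server defined in Code 2.
response = requests.post(
    "http://localhost:8000/mcp/logs",
    json={
        "request_id": "req-9f3c2a1b",
        "time_range": {"start": "2025-05-02T14:00:00", "end": "2025-05-02T14:30:00"},
        "limit": 50,
    },
)
logs = response.json()  # context-enriched log records, newest first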

Layer 3: AI-driven analysis engine

The final layer is an AI component that consumes data through the MCP interface, performing:

1. Multi-dimensional analysis: Correlating signals across logs, metrics and traces.
2. Anomaly detection: Identifying statistical deviations from normal patterns.
3. Root cause determination: Using contextual clues to isolate likely sources of issues.
import statistics
from datetime import datetime, timedelta


def analyze_incident(self, request_id=None, user_id=None, timeframe_minutes=30):
    """Analyze telemetry data to determine root cause and recommendations."""

    # Define analysis time window
    end_time = datetime.now()
    start_time = end_time - timedelta(minutes=timeframe_minutes)
    time_range = {"start": start_time.isoformat(), "end": end_time.isoformat()}

    # Fetch relevant telemetry based on context
    logs = self.fetch_logs(request_id=request_id, user_id=user_id, time_range=time_range)

    # Extract services mentioned in logs for targeted metric analysis
    services = set(log.get("service", "unknown") for log in logs)

    # Get metrics for those services
    metrics_by_service = {}
    for service in services:
        for metric_name in ["latency", "error_rate", "throughput"]:
            metric_data = self.fetch_metrics(service, metric_name, time_range)

            # Calculate statistical properties
            values = [point["value"] for point in metric_data["data_points"]]
            metrics_by_service[f"{service}.{metric_name}"] = {
                "mean": statistics.mean(values) if values else 0,
                "median": statistics.median(values) if values else 0,
                "stdev": statistics.stdev(values) if len(values) > 1 else 0,
                "min": min(values) if values else 0,
                "max": max(values) if values else 0,
            }

    # Identify anomalies using z-score
    anomalies = []
    for metric_name, stats in metrics_by_service.items():
        if stats["stdev"] > 0:  # Avoid division by zero
            z_score = (stats["max"] - stats["mean"]) / stats["stdev"]
            if z_score > 2:  # More than 2 standard deviations
                anomalies.append({
                    "metric": metric_name,
                    "z_score": z_score,
                    "severity": "high" if z_score > 3 else "medium",
                })

    # ai_summary and ai_recommendation are produced by an LLM inference step
    # that is not shown in this snippet
    return {
        "summary": ai_summary,
        "anomalies": anomalies,
        "impacted_services": list(services),
        "recommendation": ai_recommendation,
    }

Code 3. Incident analysis, anomaly detection and inferencing method
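Code 3 returns ai_summary and ai_recommendation without showing how they are generated. One plausible way to produce them, sketched below under the assumption that an LLM client such as Anthropic's Python SDK is available (the prompt wording and model name are placeholders, not the platform's actual implementation), is to hand the correlated evidence to a model:

import json
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_incident(logs, anomalies, services) -> str:
    """Turn correlated telemetry evidence into a plain-language incident summary
    with a likely root cause and recommended next steps."""
    prompt = (
        "You are an SRE assistant. Given the telemetry below, summarize the incident, "
        "identify the most likely root cause and recommend next steps.\n"
        f"Impacted services: {sorted(services)}\n"
        f"Anomalies: {json.dumps(anomalies, indent=2)}\n"
        f"Recent logs: {json.dumps(logs[:20], indent=2)}"
    )
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text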

Impact of MCP-enhanced observability

Integrating MCP with observability platforms could improve the management and comprehension of complex telemetry data. The potential benefits include: