In this guide, we dive into the practical implementation of an AI-powered conference assistant called ConferencePulse. Built entirely with .NET's composable AI stack, the app showcases how to seamlessly integrate live polls, real-time Q&A, automated insights, and session summaries into a Blazor Server experience. Below are seven key questions that break down the architecture, components, and workflow of ConferencePulse.
What is ConferencePulse and what does it do?
ConferencePulse is an interactive Blazor Server application designed for live conference sessions. Attendees join by scanning a QR code, then participate in polls and submit questions to the presenter. The app leverages AI to generate polls based on session content, answer audience queries using a retrieval-augmented generation (RAG) pipeline, surface real-time insights from engagement data, and produce a comprehensive session summary when the presenter ends the session. The entire experience is grounded in a knowledge base built from the session's GitHub repository, Microsoft Learn documentation, and wiki content. By automating content preparation and AI integration, ConferencePulse transforms passive slide decks into dynamic, interactive sessions that keep attendees engaged and provide presenters with valuable feedback.

How does the app handle live polls and Q&A using AI?
Live polls are generated automatically by an AI model that analyzes the session's knowledge base. The presenter simply sets a topic, and ConferencePulse creates relevant poll questions. Attendees vote, and results update in real time on the Blazor interface. For Q&A, audience questions are processed through a RAG pipeline that retrieves relevant context from the knowledge base—including session notes, Microsoft Learn docs, and GitHub wiki pages—then uses an IChatClient abstraction to generate accurate answers. This ensures responses are grounded in verified content rather than relying solely on the model's static knowledge. The combination of live polls and AI-mediated Q&A creates a two-way dialogue between presenter and audience, making sessions more participatory and data-driven.
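As a sketch of how poll generation over the IChatClient abstraction might look, consider the following helper. The record type, method name, and prompt wording are illustrative assumptions, not ConferencePulse's actual code:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;

// Hypothetical helper that asks any IChatClient-backed model to draft poll
// questions grounded in knowledge-base excerpts for the presenter's topic.
public record PollRequest(string Topic, string KnowledgeBaseExcerpt);

public static class PollGenerator
{
    public static async Task<string> DraftPollAsync(IChatClient chatClient, PollRequest request)
    {
        var messages = new List<ChatMessage>
        {
            new(ChatRole.System,
                "You create multiple-choice poll questions for a live conference session. " +
                "Base every question strictly on the provided session content."),
            new(ChatRole.User,
                $"Topic: {request.Topic}\n\nSession content:\n{request.KnowledgeBaseExcerpt}\n\n" +
                "Write three poll questions, each with four answer options.")
        };

        // The same call works whether the client wraps Azure OpenAI, OpenAI,
        // or a local Ollama model.
        ChatResponse response = await chatClient.GetResponseAsync(messages);
        return response.Text;
    }
}
```

Because only IChatClient is referenced, the poll-generation logic never needs to know which provider is configured.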
What .NET building blocks were used to create ConferencePulse?
The application relies on a set of composable libraries from the Microsoft.Extensions.* ecosystem:
- Microsoft.Extensions.AI – Provides a unified IChatClient interface for AI providers like OpenAI, Azure OpenAI, Ollama, and Foundry Local.
- Microsoft.Extensions.DataIngestion – Manages the pipeline that downloads, processes, and indexes content from GitHub repos into a vector store.
- Microsoft.Extensions.VectorData – Abstracts vector database operations (used with Qdrant) for similarity search.
- Model Context Protocol (MCP) – Standardizes tool definitions and client-server communication for AI agents.
- Microsoft Agent Framework – Orchestrates multiple AI agents that work concurrently to analyze polls and questions and to generate summaries.
- .NET Aspire – Handles cloud orchestration, connecting services like Qdrant, PostgreSQL, and Azure OpenAI.
These components work together seamlessly, allowing developers to swap AI providers, databases, or ingestion sources without rewriting core logic.
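To make the MCP piece concrete, here is a minimal sketch of a tool definition using the official ModelContextProtocol C# SDK. The tool name, description, and returned data are invented for illustration:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

// Methods tagged this way are discovered and exposed as MCP tools that an
// AI agent can invoke over the protocol.
[McpServerToolType]
public static class SessionTools
{
    [McpServerTool, Description("Returns the current vote counts for the active poll.")]
    public static string GetPollResults()
        => """{"optionA": 42, "optionB": 17}"""; // placeholder data for the sketch
}

// Registration (e.g., in Program.cs):
//   builder.Services.AddMcpServer()
//       .WithStdioServerTransport()
//       .WithToolsFromAssembly();
```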
How does the RAG pipeline work for answering audience questions?
The RAG (Retrieval-Augmented Generation) pipeline in ConferencePulse operates in two phases: ingestion and query. During ingestion, the app downloads markdown files from a GitHub repository, runs them through a processing pipeline that splits content into chunks, generates embeddings, and stores both embeddings and metadata in a Qdrant vector database managed by Microsoft.Extensions.VectorData. When an attendee submits a question, the pipeline first converts the question into an embedding vector using the same model. A similarity search in Qdrant retrieves the most relevant chunks from the knowledge base. These chunks are then injected into the prompt sent to the AI model via IChatClient. The model crafts an answer that cites the retrieved content, ensuring accuracy and reducing hallucinations. This approach also allows the knowledge base to be updated without retraining the model.
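The final prompt-assembly step of the query phase can be sketched as follows, assuming the similarity search has already returned the top-matching chunks. The method name and prompt wording are illustrative:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;

public static class QaPipeline
{
    public static async Task<string> AnswerAsync(
        IChatClient chatClient, string question, IReadOnlyList<string> retrievedChunks)
    {
        // Inject the retrieved chunks into the prompt so the model answers
        // from session content instead of its static training data.
        string context = string.Join("\n---\n", retrievedChunks);

        var messages = new List<ChatMessage>
        {
            new(ChatRole.System,
                "Answer the attendee's question using only the context below. " +
                "If the context does not contain the answer, say so.\n\n" + context),
            new(ChatRole.User, question)
        };

        ChatResponse response = await chatClient.GetResponseAsync(messages);
        return response.Text;
    }
}
```

Constraining the model to the injected context is what keeps answers grounded and makes knowledge-base updates take effect immediately.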

How does the app automate session preparation from a GitHub repo?
ConferencePulse automates the tedious task of manual content preparation. The presenter simply points the app to a GitHub repository URL. The ingestion pipeline, built with Microsoft.Extensions.DataIngestion, clones or fetches markdown files from the repo. It then cleans and splits the documents into manageable chunks, generates embeddings using an AI model, and stores them along with metadata (such as file path and headings) in the vector database. This process runs as a background job with status updates visible in the UI. Once complete, the app has a searchable knowledge base that grounds all subsequent AI features—poll generation, Q&A, and insights. The same pipeline can be re-run to update content, making it ideal for sessions that evolve iteratively.
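As a minimal sketch of the chunking step, here is a fixed-size splitter with overlap. Microsoft.Extensions.DataIngestion ships more sophisticated, structure-aware chunkers, so this only illustrates the idea:

```csharp
using System;
using System.Collections.Generic;

// Naive fixed-size chunker with overlap, so that context spanning a chunk
// boundary still appears intact in at least one chunk.
public static class MarkdownChunker
{
    public static List<string> Chunk(string text, int maxChars = 1000, int overlap = 200)
    {
        if (overlap >= maxChars)
            throw new ArgumentException("Overlap must be smaller than the chunk size.");

        var chunks = new List<string>();
        int start = 0;
        while (start < text.Length)
        {
            int length = Math.Min(maxChars, text.Length - start);
            chunks.Add(text.Substring(start, length));
            if (start + length >= text.Length) break;
            start += maxChars - overlap; // step forward, keeping an overlap window
        }
        return chunks;
    }
}
```

Each chunk would then be embedded and stored in Qdrant alongside metadata such as file path and headings.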
How does Microsoft.Extensions.AI simplify provider integration?
Microsoft.Extensions.AI introduces a single IChatClient interface that abstracts away the differences between AI service providers. Instead of writing separate code for OpenAI, Azure OpenAI, or local models, developers call the same methods for sending prompts and receiving responses. ConferencePulse uses this abstraction to switch between cloud-based Azure OpenAI and local Ollama models without changing any business logic. The library also supports composable middleware for cross-cutting concerns such as caching, telemetry, and automatic tool invocation. Additionally, it integrates with .NET's dependency injection, allowing developers to configure the AI client in Program.cs with a simple AddChatClient extension method. This composability means teams can prototype with free local models and later scale to production-grade services with minimal friction.
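A Program.cs registration might look like the following sketch. The endpoint, API key, and deployment name are placeholders, and extension names can vary slightly across package versions:

```csharp
using System;
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Register an IChatClient backed by Azure OpenAI.
builder.Services.AddChatClient(_ =>
    new AzureOpenAIClient(
            new Uri("https://example.openai.azure.com"),
            new ApiKeyCredential("<key>"))
        .GetChatClient("gpt-4o-mini")
        .AsIChatClient());

// Swapping to a local model is a registration change only; code that
// injects IChatClient is untouched, e.g. (Ollama, illustrative):
// builder.Services.AddChatClient(
//     new OllamaChatClient(new Uri("http://localhost:11434"), "llama3"));
```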
What is the role of .NET Aspire in deploying the app?
.NET Aspire serves as the orchestration layer for ConferencePulse, managing the runtime dependencies and cloud resources needed by the application. In the project structure, the ConferenceAssistant.AppHost project defines a distributed application model that includes Qdrant (vector database), PostgreSQL (relational store), Azure OpenAI (AI service), and the Blazor frontend. Aspire automatically configures connection strings, manages service discovery, and sets up health checks. Developers can run the entire stack locally using Docker containers, then deploy to Azure with minimal changes. The Aspire dashboard provides real-time metrics and logs for every component, making debugging and monitoring straightforward. This drastically reduces the complexity of running the multiple interdependent services that modern AI-powered applications like ConferencePulse depend on.
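An AppHost along these lines could define the distributed application model as follows. Resource names and the project reference are placeholders, and the sketch assumes the Aspire Qdrant and PostgreSQL hosting integrations are installed:

```csharp
// ConferenceAssistant.AppHost sketch of the distributed application model.
var builder = DistributedApplication.CreateBuilder(args);

var qdrant = builder.AddQdrant("vectordb");          // vector database container
var db = builder.AddPostgres("postgres")             // relational store container
                .AddDatabase("conferencedb");
var openai = builder.AddConnectionString("openai");  // external Azure OpenAI endpoint

builder.AddProject<Projects.ConferenceAssistant_Web>("frontend")
    .WithReference(qdrant)
    .WithReference(db)
    .WithReference(openai);

builder.Build().Run();
```

Aspire injects the connection details for each referenced resource into the frontend's configuration, which is how service discovery stays consistent between local Docker runs and Azure deployments.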