AI Tools · ≈ 12 min read

Build a Market Research Agent in Microsoft Foundry: Step-by-Step for Growth Teams

Build a multi-agent market research system in Microsoft Foundry. Automate competitor tracking, trend analysis, and opportunity scoring.

yfxmarketer

December 27, 2025

Market research eats 10-15 hours per week for most growth teams. You scrape competitor sites, monitor industry news, analyze trends, and synthesize findings into reports nobody reads. A market research agent running in Microsoft Foundry automates 80% of this work.

This guide walks you through building a multi-agent research system. You get automated competitor monitoring, trend analysis grounded in your internal data, and opportunity scoring based on your specific criteria. The output feeds directly into your planning workflows.

TL;DR

Microsoft Foundry lets you build AI agents that automate market research. This guide covers building three specialized agents (competitor tracker, trend analyst, opportunity scorer), connecting them in a workflow, and grounding responses in your company data. Total build time: 2-3 hours. Ongoing time saved: 8-12 hours weekly.

Key Takeaways

  • A single research agent handles basic queries; multi-agent workflows handle complex analysis
  • Ground your agents in SharePoint docs to incorporate internal context and company strategy
  • The competitor tracker agent monitors pricing, features, and positioning changes weekly
  • Trend analysis agents pull from external sources and compare against your product roadmap
  • Opportunity scoring uses your custom criteria, not generic frameworks
  • Publish the finished system to Teams so your whole team queries it directly
  • Evaluations catch when agent quality drops before it affects your decisions

Why Build a Market Research Agent?

Market research agents eliminate repetitive intelligence gathering. You define the research parameters once. The agent executes the same methodology consistently, at scale, without the cognitive drain of manual research.

The real value comes from grounding agents in your internal data. Generic research tools produce generic insights. An agent connected to your product roadmap, sales data, and strategic docs produces insights specific to your situation.

Growth teams using research agents report these outcomes:

  • Weekly competitor briefings generated automatically
  • Trend reports that reference internal strategy docs
  • Opportunity scores calibrated to company priorities
  • 8-12 hours reclaimed from manual research tasks

Action item: List three recurring research tasks your team performs weekly. Estimate hours spent on each. These become your first automation targets.

What You Need Before Starting

Microsoft Foundry requires an Azure subscription with AI services enabled. Access the platform at ai.azure.com. Your Azure admin provisions the project and grants your team access.

Prepare these assets before building:

  • Competitor list with URLs and key tracking parameters
  • Internal docs you want agents to reference (strategy decks, roadmaps, positioning docs)
  • SharePoint site or folder containing these materials
  • Criteria for scoring opportunities (market size thresholds, strategic fit factors)

The build process takes 2-3 hours for a basic system. Add another 1-2 hours for testing and refinement. Plan for a half-day focused session.

Action item: Upload your competitor list, latest strategy deck, and product roadmap to a dedicated SharePoint folder. Note the folder path for later.

How Do You Build the Competitor Tracker Agent?

The competitor tracker monitors specific companies for changes in pricing, features, messaging, and positioning. Start in Foundry at ai.azure.com and create a new agent from the Agents section.

Step 1: Configure the Base Agent

Select GPT-5 Chat as your model. This provides the reasoning capability needed for competitive analysis. Name the agent “Competitor Tracker” and add these instructions:

“You are a competitive intelligence analyst for [your company]. Monitor competitors for changes in pricing, product features, messaging, and market positioning. Compare findings against our current strategy. Flag significant changes that require response.”

Step 2: Connect Your Knowledge Base

Add your SharePoint folder as a knowledge source. The agent now references your strategy docs, positioning materials, and product roadmap when analyzing competitors. This grounds every analysis in your specific context.

Click Knowledge in the agent configuration. Select your SharePoint site and folder. The connection takes 2-3 minutes to index.

Step 3: Add Web Search Capability

Enable web search tools so the agent pulls current competitor information. Without this, the agent only knows what’s in your SharePoint docs. With it, the agent compares real-time competitor data against your internal strategy.

Test the agent with a prompt: “What pricing changes have [competitor name] made in the last 30 days? How does this affect our positioning?”

Action item: Build and test your competitor tracker agent. Run three test queries covering pricing, features, and messaging for your top competitor.

How Do You Build the Trend Analyst Agent?

The trend analyst identifies market shifts, emerging technologies, and changing customer preferences. This agent combines external research with your internal strategic context.

Step 1: Configure with Trend-Specific Instructions

Create a new agent named “Trend Analyst” with these instructions:

“You are a market trend analyst for [your company]. Identify emerging trends in [your industry]. Assess each trend’s relevance to our product roadmap and strategic priorities. Quantify market size and timing where possible. Recommend specific actions based on trend trajectory.”

Step 2: Ground in Product Roadmap

Connect the same SharePoint knowledge base. The trend analyst needs access to your product roadmap to assess trend relevance. Without this grounding, you get generic trend reports. With it, you get actionable recommendations.

Step 3: Test with Specific Trend Queries

Run test prompts that combine external trends with internal context:

  • “What AI trends should we consider for our 2026 roadmap?”
  • “How is [specific technology] adoption affecting our target market?”
  • “Which emerging competitors are gaining traction in [segment]?”

Review outputs for specificity. The agent should reference your roadmap items and strategic priorities, not generic recommendations.

Action item: Build your trend analyst agent. Test with three queries specific to your industry vertical and product category.

How Do You Build the Opportunity Scorer Agent?

The opportunity scorer evaluates market opportunities against your custom criteria. This agent filters the noise and surfaces opportunities worth pursuing.

Step 1: Define Your Scoring Criteria

Before building, document your opportunity evaluation criteria. Common factors include:

  • Market size (minimum threshold)
  • Strategic alignment (fit with current capabilities)
  • Competitive intensity (number and strength of incumbents)
  • Time to revenue (months to first dollar)
  • Resource requirements (team and budget needed)

Write these as explicit criteria the agent applies. Vague instructions produce vague scores.
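One way to make the criteria explicit enough for an agent to apply is to write them down as a weighted rubric before pasting them into the agent's instructions. The sketch below expresses that rubric in plain Python so you can sanity-check the math outside the agent; the factor names, weights, and thresholds are illustrative, not Foundry API calls.

```python
# Illustrative weighted rubric -- replace weights and factors with your own.
# Each factor is rated 1-10 (higher is better; invert factors like
# competitive intensity before rating, so 10 = least crowded market).
WEIGHTS = {
    "market_size": 0.30,
    "strategic_alignment": 0.25,
    "competitive_intensity": 0.15,
    "time_to_revenue": 0.15,
    "resource_requirements": 0.15,
}

def score_opportunity(scores: dict[str, int]) -> tuple[float, str]:
    """Combine per-factor 1-10 scores into a weighted total and a recommendation."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("scores must cover exactly the defined factors")
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    if total > 7:
        rec = "Pursue"
    elif total < 4:
        rec = "Pass"
    else:
        rec = "Monitor"
    return round(total, 2), rec
```

Writing the rubric this precisely first makes the agent instructions in the next step easy: you paste the same factors, weights, and cutoffs verbatim.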

Step 2: Configure with Scoring Instructions

Create the agent with detailed scoring methodology:

“You are an opportunity analyst for [your company]. Evaluate market opportunities using these criteria: [list your criteria with thresholds]. Score each opportunity 1-10 on each factor. Provide an overall recommendation: Pursue, Monitor, or Pass. Justify each score with specific evidence.”

Step 3: Connect to Financial Context

If available, add data sources containing your financial targets, resource constraints, and strategic priorities. The more context the agent has, the more relevant its scoring becomes.

Action item: Document your opportunity scoring criteria with specific thresholds. Build the scorer agent and test with one real opportunity you’re currently evaluating.

How Do You Connect Agents in a Multi-Agent Workflow?

Single agents answer single questions. Multi-agent workflows execute complex research processes automatically. Connect your three agents in a workflow that produces complete research briefings.

Step 1: Create the Workflow

In Foundry, go to Workflows and create a new Sequential workflow. Name it “Market Research Pipeline.” The sequential topology runs agents in order, passing context between them.

Step 2: Add Your Agents to the Flow

Add nodes in this order:

  1. Intent Classifier (built-in): Routes incoming queries to the right starting point
  2. Competitor Tracker: Gathers competitive context
  3. Trend Analyst: Identifies relevant market shifts
  4. Opportunity Scorer: Evaluates the synthesized findings

Step 3: Configure Handoffs

Each agent passes its output to the next. The competitor tracker’s findings inform the trend analysis. The trend analysis provides context for opportunity scoring. The final output synthesizes all three perspectives.
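Foundry's workflow builder wires these handoffs for you, but the underlying pattern is simple enough to sketch in a few lines of Python: each agent receives the original query plus everything produced so far. The `Agent` type here is a hypothetical stand-in for invoking a deployed agent, used only to show the accumulation of context.

```python
from typing import Callable

# Hypothetical stand-in: in Foundry, each step would invoke a deployed
# agent and return its text output.
Agent = Callable[[str], str]

def run_pipeline(query: str, agents: list[tuple[str, Agent]]) -> dict[str, str]:
    """Run agents in order, feeding each one the query plus all prior findings."""
    findings: dict[str, str] = {}
    context = query
    for name, agent in agents:
        output = agent(context)
        findings[name] = output
        # Append this agent's output so downstream agents see it as context.
        context += f"\n\n## {name} findings\n{output}"
    return findings
```

The key property is that the opportunity scorer, last in line, sees both the competitor findings and the trend analysis, which is what makes its scores grounded rather than generic.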

Add a condition after the opportunity scorer: if the score exceeds 7, flag for immediate review; if it falls below 4, archive without notification; scores from 4 to 7 route to a weekly digest.
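You configure this branching in the workflow builder itself, but it helps to pin down the exact boundary behavior first. A minimal sketch of the routing rule (branch names are illustrative):

```python
def route(score: float) -> str:
    """Map the opportunity scorer's output to a workflow branch."""
    if score > 7:
        return "immediate_review"   # flag for same-day attention
    if score < 4:
        return "archive"            # drop without notification
    return "weekly_digest"          # 4 through 7 inclusive
```

Note that 4 and 7 themselves land in the weekly digest; decide deliberately where your boundaries fall before encoding them in the workflow condition.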

Step 4: Test the Complete Workflow

Run the workflow with a research prompt: “Evaluate the opportunity for [your company] to expand into [new market segment].”

The output should include competitor dynamics, relevant trends, and a scored recommendation with supporting evidence.

Action item: Connect your three agents in a sequential workflow. Run an end-to-end test with a real expansion opportunity you’re considering.

How Do You Evaluate Agent Quality?

Agent outputs degrade over time. Models update. Market conditions change. Your strategy evolves. Evaluations catch quality drops before they affect decisions.

Step 1: Create Evaluation Datasets

Build test cases representing expected agent behavior. For the competitor tracker, create cases like:

  • “Identify [competitor] pricing changes” with expected output format
  • “Compare [competitor] features against our roadmap” with quality criteria
  • “Assess [competitor] positioning shift” with accuracy benchmarks

Foundry supports synthetic data generation. Use it to scale your test cases beyond manual creation.

Step 2: Run Regular Evaluations

Schedule weekly evaluations against your test cases. Foundry scores outputs on AI quality (relevance, accuracy) and safety (appropriate responses, no hallucinations).

Watch for score drops below your threshold. A 10% quality decline indicates instruction drift or model changes requiring attention.
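The "10% decline" rule is easy to automate outside Foundry as well, for example in a script that reads exported evaluation scores. A minimal sketch, assuming you track a baseline score per agent:

```python
def quality_drop(baseline: float, current: float, tolerance: float = 0.10) -> bool:
    """Flag when the current evaluation score falls more than `tolerance`
    (as a fraction of baseline) below the baseline score."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return (baseline - current) / baseline > tolerance
```

Set the baseline from your first evaluation run, then compare each weekly run against it rather than against the previous week, so slow drift over several weeks still trips the flag.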

Step 3: Iterate Based on Results

Low scores reveal specific improvement areas. Update agent instructions to address gaps. Add knowledge sources to fill information holes. Refine scoring criteria for edge cases.

Action item: Create five test cases for each agent covering expected use patterns. Run your first evaluation and establish baseline quality scores.

How Do You Deploy to Your Team?

A research agent only delivers value when your team uses it. Foundry publishes agents directly to Microsoft Teams and Copilot Chat.

Step 1: Publish from the Workflow

Click Publish from your market research workflow. Select Microsoft Teams as the destination. Submit for admin approval.

Step 2: Coordinate with Your Microsoft 365 Admin

Your admin reviews the agent and approves for distribution. Provide context on the agent’s purpose and expected users. Most approvals complete within 24-48 hours.

Step 3: Onboard Your Team

Once approved, the agent appears in the Teams Agent Store. Team members pin it for quick access. Create a one-page guide covering:

  • What questions the agent answers well
  • How to phrase queries for best results
  • What the agent cannot do (limitations)
  • Who to contact for issues

Step 4: Monitor Adoption

Track usage through the Foundry Operate dashboard. Watch for patterns in queries. Low usage signals onboarding gaps or capability mismatches. High usage of specific query types reveals additional automation opportunities.

Action item: Submit your workflow for Teams publishing. Prepare the one-page user guide while awaiting approval.

What Do Ongoing Operations Look Like?

Research agents require ongoing attention. Markets change. Competitors shift. Your strategy evolves. The agent system needs updates to stay relevant.

Weekly operations take 30-60 minutes:

  • Review evaluation scores for quality drops
  • Check usage patterns for new query types
  • Update competitor lists as the landscape changes
  • Refresh knowledge base connections when docs update

Monthly operations take 2-3 hours:

  • Audit agent instructions against current strategy
  • Add new data sources as they become available
  • Expand to new research domains based on team requests
  • Fine-tune scoring criteria based on outcome tracking

The Operate tab in Foundry provides fleet-wide visibility. Monitor success rates, token costs, and error rates across all agents. Set alerts for thresholds that matter: error rates above 5%, quality scores below 7, cost spikes above budget.
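The same thresholds can be checked programmatically against exported metrics if you want alerts outside the Operate dashboard. A sketch under stated assumptions: the metric names and budget parameter are hypothetical, chosen to mirror the thresholds above.

```python
# Thresholds from the text: error rate above 5%, quality score below 7,
# cost above budget. Metric field names are illustrative.
THRESHOLDS = {
    "error_rate_max": 0.05,
    "quality_min": 7.0,
}

def check_alerts(metrics: dict, weekly_budget_usd: float) -> list[str]:
    """Return the list of threshold breaches for one agent's weekly metrics."""
    alerts = []
    if metrics["error_rate"] > THRESHOLDS["error_rate_max"]:
        alerts.append("error rate above 5%")
    if metrics["quality_score"] < THRESHOLDS["quality_min"]:
        alerts.append("quality score below 7")
    if metrics["token_cost_usd"] > weekly_budget_usd:
        alerts.append("cost above weekly budget")
    return alerts
```

An empty list means the agent is healthy for the week; anything else goes into your weekly operations review.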

Action item: Block 30 minutes weekly and 2 hours monthly for agent operations. Add these to your calendar now.

What Results Should You Expect?

Teams running market research agents report consistent outcomes after 4-6 weeks of operation:

Time savings range from 8 to 12 hours weekly. The biggest gains come from automated competitor monitoring and trend synthesis. Manual research shifts to validation and strategic interpretation.

Research quality improves through consistency. Agents apply the same methodology every time. No more variation based on who does the research or how rushed they are.

Response speed increases dramatically. Queries that took hours now complete in minutes. Leadership questions get same-day answers instead of week-long research projects.

The compound effect matters most. Freed hours go toward strategic work. Better research informs better decisions. Faster responses create competitive advantage. These benefits compound over quarters.

Action item: Establish baseline metrics before launch. Track weekly hours on research tasks, query response time, and research output volume. Compare after 30 days of agent operation.

Final Takeaways

Market research agents automate 80% of recurring intelligence work. Build three specialized agents: competitor tracker, trend analyst, and opportunity scorer.

Ground every agent in your internal data. SharePoint connections transform generic insights into company-specific recommendations.

Multi-agent workflows handle complex analysis. Sequential pipelines pass context between agents for synthesized outputs.

Deploy to Teams for team-wide access. The best agent delivers zero value if nobody uses it.

Plan for ongoing operations. Weekly evaluations and monthly refinements keep agent quality high as conditions change.

yfxmarketer

AI Growth Operator

Writing about AI marketing, growth, and the systems behind successful campaigns.