The Easy Guide To Building AI Agents in n8n for Finance
Every month-end, the same story plays out: spreadsheets flying through email, version control nightmares, and analysts burning midnight oil to tie out the balance sheet. By the time we’ve wrangled the data, we’re too exhausted to analyze it, let alone make it actionable.
But here’s the shift that’s quietly changing everything: AI Agents.
AI Agents aren’t just another automation trend or flashy tech demo. They’re the next evolution of finance workflows: automations that don’t just follow rules, but actually reason through the steps like a junior analyst would. Instead of telling the system, “Do this, then that,” you can tell it:
“Pull the latest actuals, check for major variances, and draft commentary for anything over 10%.”
And it’ll go do it, without you touching a single cell.
Now, most people think building AI Agents requires a developer army or enterprise-level software. In reality, there’s no extensive coding required to build AI agents in n8n.
n8n is a no-code/low-code automation platform that acts like an orchestration layer for all your finance tools: Excel, Power BI, SharePoint, Outlook, GPT models, APIs, you name it. And now, with its AI Agent nodes, you can build intelligent automations that think, learn, and adapt, without writing Python scripts or begging IT for support.
In this guide, I’m going to show you exactly how to do it. This is a hands-on tutorial designed to help you build your first AI agent in n8n. Not in theory. Not with Wall Street trading bots. We’re going to build real AI agents for corporate finance—the kind that help you close faster, forecast smarter, and reclaim your nights and weekends.
Here’s what we’ll cover:
- What AI Agents actually are (and how they differ from traditional automations).
- How to build your first one in n8n, step by step, with no extensive coding required.
- Real-world builds and case studies:
  - A Variance Commentary Agent that writes insights directly from your data.
  - A Month-End Close Assistant that tracks files and chases submissions.
  - A Forecast Refresh Agent that keeps your rolling forecast current and flags anomalies.
  - A Budget Approval Agent that reviews submissions for compliance.
- And how to scale these into a finance automation ecosystem that runs itself.
By the end, you’ll see exactly how to go from spreadsheet chaos to an autonomous finance workflow that works 24/7, so you don’t have to.
What Exactly Is an AI Agent?
Let’s clear something up right away — an AI Agent isn’t some futuristic robot CFO coming to take your job. It’s a smart automation that can reason through a workflow instead of just following rigid, pre-programmed steps.

If you’ve ever built a macro, Power Automate flow, or n8n workflow, you’ve already built rules-based automation:
“When a file lands here → clean the data → send a report.”
That works great… until something changes — a new file name, a slightly different data structure, or a missing field. Traditional automations break because they don’t know why they’re doing something; they just follow instructions.
AI Agents are different. They don’t just execute steps — they decide which steps to take based on goals, context, and memory. Think of them as digital analysts that can reason, summarize, and troubleshoot.
How AI Agents Work
AI Agents are made up of four key components. If you understand these, you can design almost anything:
- Goal (Intent): What you want it to achieve — for example, “Summarize department-level variances between actuals and forecast.”
- Tools: The apps or APIs it can use — Excel files, databases, email, Power BI datasets, even GPT models. In n8n, these “tools” are the different nodes you connect.
- Memory (Context): Agents remember prior runs, past conversations, or previously computed results. This is how your month-end commentary agent can recall last month’s trends or the CFO’s usual phrasing.
- Decision Logic (Reasoning): This is the magic — instead of rigid rules, the agent figures out what to do based on the current situation. Example: “If a variance exceeds 10%, summarize it. Otherwise, skip it.”
All four of these come together inside n8n’s AI Agent node, which sits on top of your existing automations like a brain. You can still connect the same Excel, Power BI, or email steps — you’re just giving them a layer of intelligence.
Why This Matters for Corporate Finance
Finance teams live in a world of exceptions.
Every dataset is slightly wrong, every report needs context, and every executive wants “one more version” of the numbers.
Traditional automation can move the data, but it can’t explain why things changed or what to do next. AI Agents can.
Here are a few real examples of how this shifts your day-to-day:
| Pain Point | Traditional Automation | AI Agent Approach |
|---|---|---|
| Month-end close reminders | Send a generic email to everyone | Ask who’s missing submissions, pull list from SharePoint, and craft personalized follow-ups |
| Forecast updates | Refresh numbers and stop | Analyze deltas, flag large variances, and summarize key trends |
| CFO decks | Copy-paste charts manually | Draft narrative commentary aligned to the visuals |
| Data validation | Hard-coded logic only | Identify anomalies even when they don’t fit simple rules |
In other words, AI Agents bring judgment to your automations.
When You Should (and Shouldn’t) Use AI Agents
You don’t need an AI Agent for everything. If the process is predictable — like copying files or refreshing Power BI datasets — keep it simple with standard automations.
But the moment your workflow involves interpretation, decision-making, or context — that’s where agents shine.
✅ Perfect Use Cases for Finance:
- Writing variance explanations from data
- Cleaning messy Excel files with dynamic logic
- Identifying anomalies in spending or headcount
- Coordinating status updates during close
- Drafting management commentary or KPI summaries tailored to specific audiences or reporting needs
Because agents can match the style, tone, and requirements of different stakeholders, the same data can produce content that’s relevant and on-brand for each reader.
🚫 Bad Use Cases:
- Anything requiring precise, unambiguous calculations (use formulas instead)
- Processes with zero tolerance for creative interpretation (regulatory filings, journal entries)
- Workflows that don’t change month to month
The Sweet Spot: Blending Automation + Intelligence
The real power comes when you combine traditional workflow automation with AI reasoning.
Let the bots do the repetitive stuff, and let the agent handle the interpretation.
Here’s a simple example:
- n8n pulls the actuals file from SharePoint.
- Power Query or Python node cleans it.
- AI Agent node reviews changes and writes a variance summary.
- Email node sends the result to your controller.
That’s not just automation — that’s autonomous finance in motion.
Getting Started Building AI Agents With n8n
Alright — now that you know what an AI Agent is, let’s get our hands dirty.
If you’ve never used n8n before, think of it like Power Automate and Zapier had a baby that grew up to be an engineer.
It’s visual, drag-and-drop simple, but with the power to handle complex, data-heavy workflows that finance teams live in.
And best of all, it’s built for people like us — analysts who want control and visibility without needing to write 500 lines of Python.
Step 1: Set Up Your n8n Environment
You’ve got three main options depending on your setup and IT restrictions:
n8n Cloud (Easiest)
- Go to n8n.io and start a free account.
- Perfect if you just want to build prototypes or personal workflows.
- Comes pre-secured and maintained — no need to mess with servers.
Self-Hosted (Best for Corporate IT)
- Install n8n on a company VM or server using Docker.
- You control all data flows, which is important for finance teams handling sensitive P&L data.
- Full documentation here: docs.n8n.io/hosting.
- Great for scaling and connecting internal systems like SAP, NetSuite, or Workday.
Local Install (Testing)
- Run n8n locally (e.g., npx n8n or Docker on your laptop).
- Works fine for experimenting or testing agent logic before you deploy.
Regardless of setup, your first step after installation is to connect your data tools and credentials — Excel, SharePoint, Outlook, Teams, Power BI, or SQL databases. Getting those connections (and any shared reference data) in place early keeps your agent workflows easier to build, share, and maintain.
Step 2: Connect Your Finance Tech Stack
Most finance automations live across five core tools. Here’s how to wire them into n8n:
| Tool | What It Does | How to Connect |
|---|---|---|
| Excel / OneDrive | Source of truth for actuals, forecasts, and templates | Use the “Microsoft Excel” or “Microsoft OneDrive” node. You’ll sign in using OAuth. |
| SharePoint / Teams | File storage, workflow triggers, notifications | Use “Microsoft Graph API” nodes or “HTTP Request” node with webhooks. |
| Power BI | Reporting, visuals, and DAX logic | Use “HTTP Request” node to call the Power BI REST API for dataset refreshes. |
| Email (Outlook / Gmail) | Alerts, notifications, commentary submission | Use “Microsoft Outlook” or “Email Send” node. |
| ERP / Database | Source data (e.g., NetSuite, Workday, SAP) | Use “MySQL,” “PostgreSQL,” or “HTTP Request” nodes to query or update data. |
Once these are connected, you can move data between systems seamlessly — the agent becomes the orchestrator, not just another tool.
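For example, the Power BI row above usually translates into an HTTP Request node configured roughly like this. This is a minimal sketch: the workspace and dataset IDs are placeholders, and authentication runs through an OAuth2 / Azure AD credential you set up in n8n.
Method: POST
URL: https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/datasets/{datasetId}/refreshes
Body (JSON): { "notifyOption": "NoNotification" }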
Step 3: Understand the AI Agent Node
Now the fun part. The AI Agent Node in n8n is where intelligence meets workflow.
It’s built on LangChain, a framework for reasoning and tool-use. That means your agent can not only think about a task, but also decide which tools to use along the way.

Here’s what makes it powerful: a handful of settings (model choice, temperature, tools, memory) let you fine-tune the agent’s behavior, keep outputs consistent, and control API costs.
| Setting | What It Does | Finance Example |
|---|---|---|
| System Message | Defines your agent’s personality and goals | “You are a finance automation assistant. Your job is to summarize key variances between actuals and forecast.” |
| Tools | Lets the agent access other nodes or APIs | Fetch data from Excel, summarize using GPT, and send email commentary. |
| Memory | Keeps track of previous inputs/outputs | Remembers department comments from last month. |
| Temperature | Controls creativity level | Keep it low (0–0.3) for consistent financial reporting. |
Be sure to review the details of each setting to ensure your AI agent behaves as intended and meets your specific finance workflow needs.
Essentially, you’re designing an intelligent process owner that can follow goals, use your connected data, and output polished commentary — just like a human analyst.
Step 4: Use Supporting Nodes for Real Workflows
AI Agents in n8n are powerful, but they don’t work alone. You’ll typically combine the AI Agent Node with these supporting pieces:
- Trigger Nodes — e.g., “When file uploaded to SharePoint” or “Every Friday at 8 AM.”
- Excel Nodes — read and write data from your working files.
- HTTP Request Nodes — connect to APIs like Power BI, ERP, or market data feeds.
- If/Else Nodes — add logic (e.g., “If variance > 10%, then summarize”).
- Code Nodes — for quick calculations or data formatting using JavaScript.
- Email/Teams Nodes — send outputs, alerts, and summaries automatically.
Once you’ve built a few, you’ll notice the pattern: 👉 The AI Agent sits in the middle — reading data from upstream nodes, reasoning through it, and sending results downstream.
You can inspect each step in n8n using inline logs and visual workflow tools to monitor agent behavior, identify regressions, and ensure quality throughout your workflow.
Step 5: Create a Test Workflow
Let’s build a simple sanity-check to make sure everything works:
- Add a Manual Trigger node (for testing).
- Add an AI Agent Node and set this system message:
“You are a finance assistant. Summarize the following data table into 2–3 key points.”
- Connect a Spreadsheet Node that reads a small Excel file (like a department P&L).
- Map that data into the Input field of the AI Agent Node.
- Add an Email Node to send the AI’s summary to yourself.
Click Execute Workflow — if the AI returns a clean, human-like summary of your Excel data with clear, actionable takeaways, congrats: you’ve just built your first AI-driven finance automation.
Step 6: Add Security and Governance
Before scaling up, make sure your setup meets finance standards:
- ✅ Use service accounts or app registrations (not personal logins).
- 🔐 Store credentials in n8n’s built-in vault.
- 🧾 Log every run (n8n does this automatically).
- ⚠️ Add review checkpoints for anything that hits executives (variance commentary, forecasts).
You’re not just automating — you’re delegating judgment. Governance keeps that safe.
Step-by-Step Walkthrough: Building Your First Finance AI Agent
Alright, time to build something that actually saves you hours.
We’re going to walk through, step-by-step, how to build an AI Agent that automatically writes variance commentary between your actuals and forecast.
Step 1: Define the Goal
Before you drag a single node, define your objective clearly.
Goal: Generate professional, accurate variance commentary for each department based on actual vs. forecast data.
Here’s the workflow logic in plain English:
- When the latest actuals file is uploaded to SharePoint,
- Read the “Actuals” and “Forecast” tabs from Excel,
- Calculate % variance for each department,
- Use an AI Agent to summarize the top drivers,
- Email the commentary to the FP&A lead.
This clarity is key. AI Agents don’t need a 50-step rulebook — they just need a well-defined goal.
Step 2: Create Your Workflow Trigger
In n8n, every workflow starts with a trigger node.
- Option 1 (SharePoint trigger): Use the “HTTP Trigger” node connected to a Power Automate flow that fires when a file is added to your SharePoint folder.
- Option 2 (Scheduled trigger): Use the “Cron” node to run this daily or weekly (e.g., every Monday at 7 AM).
Once the trigger fires, the rest of the workflow runs automatically.
For our guide, let’s use a Cron trigger to keep it simple.

💡 Tip: Always build your first few agents on a schedule. It keeps testing predictable.
Step 3: Add Your Data Sources
Next, connect your actuals and forecast data.
- Add an Excel (Microsoft OneDrive) node.
- Choose “Read Spreadsheet from File.”
- Point to your actuals file (e.g., Finance/MonthEnd/Actuals.xlsx).
- Select the worksheet (e.g., “Dept_Actuals”).
- Add another Excel Node for your forecast file.
- Same setup, but with the “Forecast” file.
- Use a consistent schema — Department, Account, and Amount columns.
- Add a Code Node (optional) to calculate variances; a minimal sketch follows after this list.
- 💡 If you don’t code, don’t worry — you can also use n8n’s “Spreadsheet Formula” node for this.
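If you do add the Code Node, here’s a minimal sketch of the variance calculation. It assumes both Excel nodes return their full row sets as arrays (matching the Department, Account, and Amount schema above); adjust the field names to your actual files.
// Join actuals to forecast by Department + Account and compute the % variance
const actuals = items[0].json;  // rows from the actuals Excel node
const forecast = items[1].json; // rows from the forecast Excel node

const results = actuals.map(a => {
  const f = forecast.find(x => x.Department === a.Department && x.Account === a.Account) || { Amount: 0 };
  const variance = a.Amount - f.Amount;
  const variancePct = f.Amount ? (variance / f.Amount) * 100 : null; // avoid divide-by-zero
  return { json: { Department: a.Department, Account: a.Account, Actual: a.Amount, Forecast: f.Amount, Variance: variance, VariancePct: variancePct } };
});

return results;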
Step 4: Add the AI Agent Node
Now it’s time to make the automation smart.
- Add an AI Agent Node.
- Under “System Message,” paste something like:
- “You are a corporate finance analyst. Based on the input data, identify and summarize key variances between actuals and forecast for each department. Keep explanations concise, factual, and in a professional tone.”
- Set your AI Model (visit OpenAI Platform or Google AI Studio for an API key).
- Under Tools / Input, map in the variance data from the prior node.
- Set Temperature to 0.2 (low creativity = consistent output).
- Choose “Return AI Output” so you can pass it to the next node.
🧠 Pro tip: You can also connect external APIs as tools. For example, a “Company Policy GPT” to ensure commentary matches internal phrasing or reporting guidelines.
Step 5: Format and Deliver the Output
Now we’ll send that commentary somewhere useful.
- Add a Markdown Node (or Code Node) to format the AI output into a clean email summary; a minimal sketch follows after this list.
- Add an Email Node (Outlook or Gmail).
- Subject: “📊 Month-End Variance Summary – {{ new Date().toLocaleDateString() }}”
- Body: Use the formatted text above.
- Recipients: Your finance leadership or whoever reviews commentary.
- Optional: Add a Teams Node or Slack Node to post the same summary in your channel.
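If you go the Code Node route for formatting, here’s a minimal sketch. It assumes the AI Agent’s text arrives in an output field; adjust the field name to whatever your AI node actually returns.
// Wrap the AI commentary in a simple HTML email body
const commentary = items[0].json.output || ''; // field name depends on your AI node's output
const today = new Date().toLocaleDateString();

const html = `
  <h2>Month-End Variance Summary – ${today}</h2>
  <p>${commentary.replace(/\n/g, '<br>')}</p>
  <p><em>AI-generated draft – please review before distribution.</em></p>
`;

return [{ json: { subject: `📊 Month-End Variance Summary – ${today}`, html } }];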

Step 6: Test and Trace
Now click Execute Workflow.
n8n will walk through each node and show you real-time logs:
- ✅ Excel nodes: data loaded successfully.
- ✅ Code node: variances calculated.
- ✅ AI Agent: commentary generated.
- ✅ Email: summary sent.
If something breaks, click into the node and view the “Execution Data.” n8n’s trace view makes debugging easy — you can even re-run specific nodes.
🔍 Tip: Keep a small test dataset (2–3 departments) while you fine-tune prompts. Then expand to full reports.
Step 7: Add Guardrails and Governance
Before putting it into production, let’s add a few checks:
- Add an If Node to stop the process if any data source is empty.
- Add a Validation Node (or a code check) to flag unusually high variances (see the sketch below).
- Store all outputs in a “Variance Logs” folder so you can audit AI-generated commentary.
- Include a line in your email footer:
- “AI-generated draft – please review before distribution.”
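Here’s a minimal sketch of that validation check as a Code Node. The 50% threshold and the VariancePct field are illustrative; tune both to your business.
// Stop on empty data and flag extreme variances for human review
const rows = items.map(i => i.json);

if (rows.length === 0) {
  throw new Error('No variance data found. Stopping before any commentary is generated.');
}

const suspicious = rows.filter(r => Math.abs(r.VariancePct) > 50); // illustrative threshold

return [{ json: { rowCount: rows.length, suspiciousCount: suspicious.length, suspicious } }];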
Case Study 1: The Month-End Close Assistant
If you’ve ever spent the first three days of close refreshing your inbox and wondering,
“Where the hell is Operations’ P&L?” you’ll appreciate this one.
The Month-End Close Assistant is one of the most practical AI Agent use cases. It automates the most painful parts of close — tracking deliverables, chasing stragglers, and summarizing progress — by scanning your shared folders or databases for missing submissions, so your team can focus on the numbers, not the nagging.
The Problem
Every finance team has a version of this chaos:
- 15+ contributors, all sending different templates.
- Shared folders that look like a file-naming crime scene.
- Status emails that get buried in reply-all threads.
- Last-minute scrambles to figure out who’s missing what.
None of it’s “hard” work — it’s just coordination hell.
And because it’s manual, it’s also where the most time gets wasted.
So let’s fix that.
The Goal
Build an AI Agent that monitors a shared folder, tracks who’s submitted their close files, sends personalized reminders, and provides a daily summary of what’s done, what’s missing, and what’s overdue.
No more “Hey, just checking in on your variance file…”
Your agent does that automatically — politely and on schedule.
How It Works
Here’s what the Month-End Close Assistant will do:
- Check your Shared Drive or SharePoint folder for submissions.
- Compare the uploaded files to your expected department list.
- Identify who’s missing their files.
- Generate a human-sounding reminder email or Teams message.
- Send a daily status summary to your inbox.
- Store the progress log for audit and transparency.
Step 1: Set Up Your Trigger
Start with a Cron Node that runs once a day at 8 AM during close week.
- Name it “Daily Close Check.”
- You can also add a manual trigger for on-demand runs (handy for testing).
💡 Pro tip: You can later swap this for a SharePoint “File Added” trigger to make it real-time.
Step 2: Pull File Data
Add a Microsoft OneDrive or SharePoint Node to list files in your month-end folder.
Example:
Path: /Finance/MonthEnd/2025-10/
Filter: *.xlsx
This returns a JSON array with filenames, timestamps, and owners.
Step 3: Compare Against Expected Submissions
You’ll need a list of departments or contributors.
Easiest way? Store it in a small reference Excel or Google Sheet like this:
| Department | Owner | Expected File Name |
|---|---|---|
| Sales | Sarah Kim | Sales_Variance.xlsx |
| Marketing | Dan Patel | MKT_Variance.xlsx |
| Ops | Rachel Li | OPS_Variance.xlsx |
| HR | Mike Ortiz | HR_Variance.xlsx |
Add an Excel Node to read that list.
Then, add a Code Node to compare it with your OneDrive file list:
// Assumes both lists arrive as full arrays: item 0 = the Excel reference list, item 1 = the OneDrive file list
const expected = items[0].json; // expected submissions from the reference sheet
const uploaded = items[1].json; // files found in the month-end folder

// A department is "missing" when no uploaded filename contains its expected file name.
// Note: Expected_File_Name must match the column header in your reference sheet (use underscores in the header, or change this key).
const missing = expected.filter(e =>
  !uploaded.some(u => u.name.toLowerCase().includes(e.Expected_File_Name.toLowerCase()))
);

return [{ json: { missing, uploadedCount: uploaded.length, total: expected.length } }];
If the “missing” array isn’t empty, we’ll handle that next.
Step 4: Draft Personalized Reminders
Here’s where your AI Agent Node steps in.
- System Message:
- “You are a finance operations assistant. Write short, polite reminder messages for team members who haven’t uploaded their month-end files yet. Include the file name, due date, and a friendly tone.”
- Input: The “missing” list from your previous node.
- Output example: “Hi Sarah, quick reminder that Sales_Variance.xlsx is still outstanding for this month’s close. Could you upload it to the shared folder by end of day? Thanks so much!”
Now you’ve got personalized nudges that don’t sound robotic or passive-aggressive (a rare gift in finance).
Step 5: Send the Reminders
Add an Email Node or Microsoft Teams Node right after the AI Agent.
- Use the Owner column as the recipient.
- Subject line: “🔔 Month-End File Reminder – [Department Name]”
- Body: The personalized text from the AI output (see the mapping sketch below).
Optional: Add a delay node to stagger sends and avoid spamming servers.
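If your AI Agent returns one drafted message per missing department, a small Code Node can reshape that into one email item per recipient. This sketch assumes each incoming item carries Owner (an email address), Department, and a message field; all three names are illustrative.
// Produce one outgoing email per missing submission
return items.map(i => ({
  json: {
    to: i.json.Owner,
    subject: `🔔 Month-End File Reminder – ${i.json.Department}`,
    text: i.json.message,
  },
}));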
Step 6: Generate a Daily Summary
Now we’ll have the agent summarize overall progress.
Add another AI Agent Node with this system message:
“Summarize today’s month-end close status. Mention how many files have been submitted, who’s missing, and overall progress in a friendly, professional tone.”
Input the “missing” list and “uploadedCount” metrics.
Then, add an Email Node to send the summary to yourself and your controller.
Example output:
🧾 Month-End Close Update – October 7
12 of 15 department files have been submitted.
Still outstanding: Marketing, HR, and Operations.
Reminders sent this morning.
At this pace, we’re on track to finalize all submissions by tomorrow.
— Your Month-End Close Assistant
Step 7: Add an Audit Log
To keep things traceable, add a Google Sheet or Database Node to store:
- Date and time of each run
- Who was reminded
- When they submitted afterward
You now have a built-in compliance record — a bonus your controller will love.
Step 8: Automate the Cycle
Once it’s tested:
- Schedule it to run automatically every morning during close week.
- Or trigger it every time a new file hits your SharePoint folder.
- Combine it with your Variance Commentary Agent (from the walkthrough above) so once everyone submits, the commentary builds itself.
💡 Imagine this: your Month-End Close Assistant checks submissions at 8 AM, sends reminders, and your Variance Commentary Agent kicks in at 3 PM once all data is in.
By 5 PM, your CFO has a clean deck with commentary — and no one’s pulling an all-nighter.
Results From the Field
A mid-size SaaS company I coached implemented this exact workflow in under a week.
Before automation, their FP&A lead spent two full days per close just tracking down submissions and emailing reminders.
After deploying the AI-powered close assistant:
- Reminder emails went out automatically.
- Submission rates improved 25%.
- Close cycle time dropped from 6 days to 4.
- Morale skyrocketed (because nobody likes being the “bad cop”).
Their favorite part?
They named their agent “Janet the Close Cop.”
Now Janet handles the follow-ups — politely, promptly, and without caffeine.
Case Study 2: The Forecast Refresh Agent
If month-end close is the grind, mid-month reforecasting is the marathon — constant updates, last-minute assumptions, and “just one more version before the exec review.”
Every FP&A pro knows this pain:
you finally finalize the forecast, and the next day Sales drops a new pipeline update, Ops changes headcount plans, and someone from HR casually mentions a new hire class that wasn’t in the model.
So, what happens?
You spend the next 48 hours chasing data, updating Excel models, and hoping nothing breaks before the CFO review.
That’s where your Forecast Refresh Agent comes in.
This AI Agent runs quietly in the background — checking for new data, updating drivers, and generating summaries for you.
It doesn’t replace you.
It just makes sure the boring, repeatable parts of forecasting happen automatically — so you can focus on the insights.
The Goal
Build an AI Agent that refreshes your forecast when new data is available, compares it to your prior forecast, flags anomalies, and generates a one-page update summary for stakeholders.
Basically: an automated FP&A analyst that handles your rolling forecast maintenance.
How It Works
- Detects when new source data (sales, expenses, headcount, etc.) lands in SharePoint, your ERP, or SQL.
- Pulls that data into Excel or a staging table.
- Recalculates key forecast drivers and KPIs.
- Compares new projections against the last forecast.
- Summarizes deltas, trends, and risks using AI.
- Sends out a quick update to the team — without waiting for someone to ask.
Step 1: Set Up the Trigger
You’ve got options for when the forecast should refresh:
- Event-based: New data lands in your “Forecast Inputs” folder → trigger workflow.
- Time-based: Run every Monday morning at 7 AM.
- Manual override: Button click in Teams or an email command like “/refresh forecast.”
For now, we’ll keep it simple with a Cron Node to schedule a weekly refresh.
💡 Later, you can swap in a SharePoint trigger that runs every time a new dataset is uploaded.
Step 2: Connect to Your Data Sources
Add nodes to pull from your main data systems:
| Data Source | Node Type | Example Use |
|---|---|---|
| ERP or SQL Database | “MySQL” / “PostgreSQL” Node | Pull actuals for current month |
| Sales CRM | “HTTP Request” Node | Get pipeline totals and bookings |
| HR System | “Google Sheets” or “Excel” Node | Import new headcount plan |
| Expense Tracker | “HTTP Request” Node | Pull current spend YTD |
| Forecast File | “Excel (OneDrive)” Node | Load the current working forecast |
Each node brings in one piece of the puzzle. You can keep this modular — just update the data nodes as your systems evolve.
Step 3: Merge and Calculate Updated Forecasts
Use a Code Node or Spreadsheet Formula Node to apply your forecast logic.
Example (simplified):
// Assumes each upstream node delivers its full dataset as an array (items[0] = CRM, items[1] = ERP, items[2] = HR)
const sales = items[0].json; // from CRM
const expenses = items[1].json; // from ERP
const headcount = items[2].json; // from HR

const forecast = sales.map((s, i) => ({
  Month: s.Month,
  Revenue: s.Bookings * 0.9, // 10% falloff assumption
  OpEx: expenses[i]?.Amount || 0,
  Headcount: headcount[i]?.Count || 0,
  Margin: ((s.Bookings * 0.9) - (expenses[i]?.Amount || 0)) / (s.Bookings * 0.9),
}));

return forecast.map(f => ({ json: f }));
That’s your “refreshed” forecast data — dynamic, consistent, and updated without you touching Excel.
Step 4: Add the AI Agent for Analysis
Here’s where the Forecast Refresh Agent earns its title.
Add an AI Agent Node with a clear system message, like:
“You are an FP&A analyst. Analyze the updated forecast data and summarize major changes vs. the prior version. Highlight significant variances, explain possible drivers, and flag anomalies or risks. Write in a concise, executive-friendly tone.”
Then, feed two data inputs into the AI Agent:
- The new forecast from the previous node
- The prior forecast (pulled from your archive or versioned Excel file)
Example output:
Revenue is projected to increase 6.2% vs. prior forecast, driven primarily by higher Q4 pipeline conversion in EMEA.
Marketing and G&A expenses remain flat, while headcount is expected to rise modestly (+3 FTEs) due to new hiring plans in R&D.
Overall operating margin improves by 80 bps, mainly from stronger topline growth.
That’s commentary you can drop directly into your CFO deck.
Step 5: Flag Anomalies Automatically
You can use the same AI Agent (or a second one) to identify anomalies:
“Identify any forecast changes that look unrealistic based on historical patterns or business context.”
Or use a Function Node to define quantitative flags:
// Keep only rows whose change vs. the prior forecast exceeds 15% in either direction
return items.filter(i => Math.abs(i.json.ChangePct) > 15);
The agent then highlights which departments or metrics need review — saving hours of manual Excel scanning.
Step 6: Deliver the Update
Now that the analysis is done, let’s make sure people actually see it.
- Add a Markdown Node to format the summary.
- Add a Teams Node or Email Node.
- Subject: “📈 Forecast Update: Key Changes vs. Prior Version”
- Body: The formatted summary.
- Recipients: CFO, FP&A team, department heads.
- Optional: Attach the refreshed Excel file or upload it to your Power BI workspace via API.
Step 7: Add Context Memory (Optional but Powerful)
Want your agent to get smarter over time?
Add a Memory Node or database connection to store:
- Prior commentary text
- Key assumptions
- Past forecast errors
Next time it runs, the AI Agent can reference those to write more consistent and contextual summaries. This feedback loop allows the AI agent to improve its performance and generate more accurate, context-aware outputs over time:
“Revenue growth remains above trend (+6%) following last month’s revised forecast, confirming that pipeline quality improvements are holding.”
Now you’ve got continuity and institutional memory — something even human teams struggle to maintain.
Step 8: Monitor and Iterate
Once deployed, keep an eye on:
- Accuracy of the data merges
- Consistency of AI commentary
- Runtime logs in n8n (to ensure nothing silently fails)
You can even add a Slack alert for failed runs, so your agent tells you when it’s having a bad day.
Real-World Example
A Fortune 500 manufacturing client built a version of this workflow with n8n + GPT-4.
Before automation:
- They ran 3 separate reforecast cycles a month.
- Each one took 2–3 analysts roughly 12 hours to compile, validate, and comment.
After deploying their Forecast Refresh Agent:
- Each cycle ran in under 2 hours total.
- FP&A commentary drafts were generated automatically.
- Analysts shifted from “Excel janitors” to “scenario modelers.”
Their CFO said it best:
“We didn’t replace analysts. We gave them an intern who never sleeps.”
Case Study 3: The Budget Approval Workflow Agent
If you’ve ever survived a corporate budgeting cycle, you know the hardest part isn’t the math — it’s the waiting.
Waiting for department heads to upload their numbers.
Waiting for leadership to review them.
Waiting for the inevitable “Can we stretch this target by 5%?” email.
Budgeting isn’t a single event — it’s a coordination marathon. And in most companies, that marathon happens in Excel and email chains.
That’s where your next AI Agent comes in: the Budget Approval Workflow Agent — an automation that tracks submissions, validates data, enforces rules, and routes exceptions automatically so approvals actually move.
The Goal
Build an AI Agent that reviews department budget submissions, checks them against predefined rules (like spend limits or headcount caps), flags exceptions, and routes each submission to the right approver automatically.
Think of it as your digital finance coordinator — part policy enforcer, part workflow manager, part therapist for impatient CFOs.
The Workflow at a Glance
Here’s what your Budget Approval Agent will do:
- Detect when a new budget file is submitted (e.g., to SharePoint).
- Validate the data for errors or rule violations.
- Check compliance with corporate budget guidelines.
- Generate an AI summary of key highlights and risks.
- Route compliant files to approvers automatically via Teams or email.
- Flag exceptions and track approval status.
Step 1: Set Up the Trigger
Use a SharePoint Trigger Node (or “When file is added” flow) to start the process every time a department uploads their budget file.
If your IT security is strict, you can also use:
- A Manual Trigger during pilot testing.
- Or a Cron Node that runs nightly to process new files.
💡 Naming convention tip: Enforce standardized file names like Budget_DeptName_2026.xlsx so your automation doesn’t chase mystery uploads called “FinalBudget(3)_REALLYFINAL.xlsx”.
Step 2: Ingest and Parse the Data
Add an Excel Node to read the uploaded file.
Select the “Budget” tab or the relevant sheet.
Typical columns might include:
| Department | Account | Budget | Limit | Headcount | Notes |
|---|---|---|---|---|---|
If you want to keep things dynamic, you can add an If Node to check for different sheet names and route accordingly.
Step 3: Validate and Enforce Rules
Next, we’ll use a Code Node to validate data against your rules file (which could live in another Excel sheet or database).
Example JavaScript logic:
const rows = items[0].json; // budget rows from the Excel node (assumes the full sheet arrives as one array)

// Example policy rules; in practice these could live in a separate rules sheet or database table
const rules = [
  { account: "Travel", maxIncreasePct: 10 },
  { account: "Headcount", maxTotal: 150 },
];

const violations = rows.filter(r => {
  const rule = rules.find(x => r.Account === x.account); // match the row to its policy rule
  if (!rule) return false; // no rule for this account, nothing to enforce
  // ChangePct is assumed to be calculated upstream (e.g., vs. prior-year budget)
  if (rule.maxIncreasePct && r.ChangePct > rule.maxIncreasePct) return true;
  if (rule.maxTotal && r.Headcount > rule.maxTotal) return true;
  return false;
});

return [{ json: { violations, compliant: rows.length - violations.length } }];
This filters out anything over budget or violating your spending policies.
Step 4: Use AI to Summarize and Explain
Here’s where the AI Agent brings clarity.
Add an AI Agent Node and give it this system message:
“You are a finance approval assistant. Summarize the department’s budget submission and highlight any compliance issues or risks. Mention major increases, explain potential drivers, and suggest questions for the approver.”
Feed it:
- The raw budget data
- Any validation results (especially violations)
Example output:
Marketing’s 2026 budget increased 18% vs. prior year, primarily in event spending and digital campaigns.
Headcount remains flat at 24 FTEs.
Note: Travel expenses exceed policy limits by 6%.
Recommended action: Request justification for marketing event ROI before approval.
Now you’ve got a crisp, CFO-ready summary — no manual review required.
Step 5: Route Approvals Automatically
Next, we’ll use the results to determine the approval path.
- Add an If Node:
- If no violations → route to VP for approval.
- If violations exist → route to CFO for escalation.
- Add a Teams Node or Email Node for each route.
- Subject: “🚦 Budget Submission – [Department Name] ([Compliant/Exception])”
- Body: Use the AI-generated summary.
- Attach the budget file.
- Include quick buttons or links to approve/reject.
- Optional: Add an Approval Tracker (Google Sheet or DB) that logs:
- Submission time
- Approver name
- Status (Pending / Approved / Rejected)
- Comments
Step 6: Automate Reminders and Escalations
Because approvals always get stuck somewhere.
Add a Wait Node for 48 hours.
Then use another AI Agent Node to draft follow-up messages:
“Hi Alex, just checking in on the HR budget approval. It’s been pending since Tuesday. Please review and approve if ready.”
If no response after another day → escalate to the next level approver automatically.
Now your budget workflow manages itself like a well-trained PM.
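One way to implement that escalation check is a Code Node that reads the approval tracker and keeps only items still pending after 48 hours. The Status and SubmittedAt column names are illustrative; match them to your tracker.
// Find approvals still pending after 48 hours
const cutoff = Date.now() - 48 * 60 * 60 * 1000;

const overdue = items.filter(i =>
  i.json.Status === 'Pending' && new Date(i.json.SubmittedAt).getTime() < cutoff
);

// Return the overdue items for escalation, or a simple flag when everything is on track
return overdue.length ? overdue : [{ json: { overdue: false } }];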
Step 7: Generate a Daily Summary
End each day with a digest email to finance leadership.
Add an AI Agent Node with this prompt:
“Summarize current budget submission and approval status across departments. Include number of submissions, approvals, pending items, and any critical exceptions.”
Output Example:
💰 **Budget Approval Update – October 7**
- 9 of 12 departments submitted their budgets.
- 6 approved, 2 pending VP sign-off, 1 escalated to CFO for policy exceptions.
- Major variances: Ops +12% due to capex, Marketing +18% from events.
- On track to complete all approvals by end of week.
Delivered automatically every morning to your CFO’s inbox or Teams channel.
Step 8: Add Governance and Audit Logging
Finance approvals require traceability — so make sure to:
- Log every approval/rejection in a Google Sheet or SQL table.
- Include timestamps and approver names.
- Retain AI summaries for transparency (auditors love documentation).
- Lock access to the workflow and credentials.
🧾 Pro tip: If you’re in a regulated environment, include a “Human-in-the-loop” checkpoint where final approvals still require manual confirmation before posting to ERP.
Real-World Impact
One of my corporate clients (a $2B manufacturing firm) used to run their budget cycle entirely in email and Excel.
The finance team manually chased 30+ department heads and compiled approvals in a massive tracker.
After implementing this AI-powered workflow:
- Budget approval time dropped by 40%.
- CFO reviews came pre-summarized with risks and exceptions.
- Approvers spent minutes, not hours, reviewing data.
- The finance team renamed the agent “Nancy the Negotiator” — because she always followed up (nicely, but relentlessly).
Design Patterns and Advanced Techniques
By now, you’ve built four powerful standalone automations:
- A Variance Commentary Agent that explains the “why.”
- A Month-End Close Assistant that keeps everyone on track.
- A Forecast Refresh Agent that keeps projections current.
- A Budget Approval Agent that enforces rules and routes approvals.
Each one saves you time. But when you connect them? You stop automating tasks — and start automating entire workflows.
These connected workflows are multi-agent systems: multiple AI agents, each with a specialized role, coordinating and communicating to handle complex finance processes.
Welcome to the world of multi-agent finance systems — where your automations talk to each other, share context, and make your finance function run like a well-oiled machine.
The Big Idea: From Silos to Systems
Most finance automation fails because it’s fragmented:
a Power Automate flow here, an Excel macro there, and an intern’s Python script somewhere in the middle.
The result? You save time individually but still waste time overall — fixing broken handoffs and reconciling disconnected systems.
In n8n, you can fix that by designing agentic ecosystems instead of isolated bots.
Think of each agent like a member of your finance team:
| Agent | Role | Key Tools | Primary Output |
|---|---|---|---|
| Close Assistant | Operations Coordinator | SharePoint, Teams, Email | Submission tracking, status summaries |
| Variance Agent | Financial Analyst | Excel, GPT, Outlook | Commentary and explanations |
| Budget Agent | Policy Enforcer | Excel, Teams, DB | Validations, approvals, escalations |
| Forecast Agent | FP&A Partner | ERP, Power BI, AI | Forecast updates and insights |
When connected, they behave like a cross-functional team.
Each agent has its own job, but they share data, memory, and timing — creating a continuous finance cycle that runs mostly on autopilot.
Design Pattern #1: The Controller Agent
In complex workflows, you don’t want every agent acting independently.
You need a “controller” — a master agent that coordinates timing and task flow.
Example:
- Controller checks that all departments have uploaded files (Close Assistant).
- Once everything is in, it triggers the Variance Agent to generate commentary.
- If variances exceed 15%, it calls the Forecast Agent for updates.
- At the end, it compiles everything into one consolidated report.
In n8n, this controller is just another workflow — using Execute Workflow Nodes to run the others in sequence.
🧩 Pattern: Orchestrator → Specialist Agents → Output.
Just like a finance manager delegating work to analysts.
Design Pattern #2: Multi-Agent Collaboration
Sometimes, two agents need to share information dynamically.
Example: Your Budget Agent identifies departments overspending. It sends that data to the Variance Agent, which explains the impact on forecasted margins. The Forecast Agent then updates next month’s projection accordingly.
You can make this happen in n8n using:
- Webhooks → trigger one agent from another.
- HTTP Request Nodes → pass data between workflows.
- Shared Database Tables → act as memory storage.
Sharing resources such as a common data schema or reference table keeps handoffs between agents clean and cuts duplicate processing.
Now your agents talk to each other through structured data, just like real colleagues.
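As a minimal sketch of that handoff: assuming the Variance Agent workflow starts with a Webhook trigger, the Budget Agent’s HTTP Request node can POST a small JSON payload like this (field names are illustrative).
{
  "source": "budget-approval-agent",
  "period": "2025-10",
  "overspendingDepartments": [
    { "department": "Marketing", "changePct": 18 },
    { "department": "Ops", "changePct": 12 }
  ]
}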
Design Pattern #3: The Gatekeeper
Sometimes your AI agent shouldn’t run until a condition is met — for example, all files are uploaded, or all approvals are in.
In that case, build a Gatekeeper Pattern:
- Add an If Node to check status (e.g., “all submissions complete”).
- Only then execute the next workflow (like the Variance Commentary Agent).
- If not ready, the gatekeeper sends a polite “still waiting” summary to the team.
This prevents half-baked reports and “Version 7_Final_Final” disasters.
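Here’s a minimal sketch of that readiness check as a Code Node, reusing the missing / uploadedCount / total output from the Close Assistant comparison earlier:
// Only signal "ready" when every expected file has been uploaded
const { missing, uploadedCount, total } = items[0].json;

return [{ json: {
  ready: missing.length === 0,
  status: `${uploadedCount} of ${total} submissions received`,
  missing,
} }];
An If Node on the ready flag then decides whether to execute the next workflow or send the “still waiting” summary.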
Design Pattern #4: The Fallback Agent
Even the best AI gets confused sometimes.
Maybe it doesn’t understand a new cost center, or the data formatting changes.
That’s where the Fallback Agent comes in.
Set up two layers of protection:
- Error Handling Node: catches failed runs and logs them.
- Secondary AI Agent: writes a simple note like
- “Unable to generate commentary for Department X — data appears incomplete.”
It’s far better to send an honest placeholder than a wrong analysis.
And it builds trust with your stakeholders — they’ll see that your automation is smart and responsible.
Design Pattern #5: Context Memory
Right now, your agents work in isolation each month.
But what if they could remember past data, commentary tone, or recurring issues?
That’s where memory comes in — a game-changer for long-term workflows.
You can store memory in:
- A database (e.g., Postgres or Airtable)
- A Google Sheet
- Or even a text file in SharePoint
Each time the agent runs, it retrieves past context:
“Last month, Marketing was over budget due to events. This month, they’re back on target.”
You can implement this in n8n using the Read Database Node before the AI Agent step, and the Write Node afterward.
This makes your agents sound consistent — like the same analyst wrote every report.
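Here’s a minimal sketch of the merge step that sits between the read and the AI Agent. It assumes the database node returns prior rows with month and commentary columns, and that the current variance data arrives as a second input; adjust both to your schema.
// Combine last month's commentary with this month's data into one prompt context
const history = items[0].json; // rows returned by the Read Database node
const current = items[1].json; // this month's variance data

const priorContext = history.map(h => `- ${h.month}: ${h.commentary}`).join('\n');

return [{ json: {
  context: `Prior commentary:\n${priorContext}`,
  data: current,
} }];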
Design Pattern #6: The Review Loop
In finance, you always need a human checkpoint.
So instead of fully automating approvals, create a review loop pattern:
- AI Agent generates output.
- n8n emails the draft to a reviewer (controller, manager, CFO).
- Reviewer approves via a button or Teams reply.
- Workflow continues only after approval.
You can capture approvals using:
- Webhook Triggers (from Teams or email links).
- Form nodes (using tools like Typeform or internal portals).
This pattern keeps accountability high — the AI works fast, but the humans still decide.
Case Study: The “Finance Brain” System
A mid-market SaaS company I worked with built an entire “Finance Brain” in n8n using this architecture:
- Close Assistant tracked files and sent reminders.
- Variance Agent summarized results once everything was in.
- Forecast Agent updated rolling forecasts weekly.
- Controller Agent compiled the outputs and emailed a single PDF report to executives every Friday.
The result?
Their FP&A cycle time dropped by 60%.
Executives got fresh insights weekly instead of monthly.
And their team went from firefighting to forecasting.
Testing, Monitoring, and Improving Over Time
If you’ve ever managed an intern, you know the first week is magic — they’re eager, fast, and overconfident.
Then you check their work, and half of it’s brilliant… and half of it’s “what were they thinking?”
Your AI agents are the same.
They’ll do exactly what you tell them — until they don’t.
And in finance, even one rogue output can turn into a late-night fire drill.
That’s why you need to treat your agents like employees: train them, monitor them, and review their performance regularly.
Let’s talk about how to test, track, and tune your AI automations so they stay sharp over time.
Step 1: Build a Testing Sandbox
Never test new agents on live data — trust me, that’s how bad Slack screenshots happen.
Instead, set up a sandbox workflow inside n8n:
- Use sample Excel or CSV data with realistic (but fake) numbers.
- Create a “Test” SharePoint or OneDrive folder that mirrors your production one.
- Replace email nodes with logging nodes or a test mailbox.
- Add a “Run Mode” variable (TEST or PROD) to easily toggle environments.
💡 Pro tip: I always suffix my test workflows with _DEV. It’s a cheap insurance policy against sending 100 test emails to your CFO.
Once you’re confident the workflow behaves correctly — and that the AI output makes sense — then promote it to production.
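One lightweight way to wire up that Run Mode toggle is a Code Node early in the workflow. This sketch assumes a prior Set node added a runMode field to the data (the field name is illustrative):
// Suppress real outputs unless we're explicitly in PROD
const runMode = items[0].json.runMode || 'TEST';

if (runMode !== 'PROD') {
  return [{ json: { skipped: true, reason: `Run mode is ${runMode}; outputs suppressed` } }];
}

return items;
Downstream, an If Node on the skipped flag routes TEST runs to a logging step instead of the real email nodes.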
Step 2: Use n8n’s Built-In Execution Logs
n8n gives you incredibly detailed execution logs for every run.
Each node shows:
- Input data
- Output data
- Runtime
- Errors (if any)
Review these after your first few runs. You’ll quickly spot:
- Broken node connections
- Unexpected data types
- AI hallucinations (yes, they happen)
You can also export these logs for documentation — helpful for audits or when you’re proving ROI.
Step 3: Add Self-Checking Logic
Don’t just assume your agent got it right — make it check its own work.
Here are three ways to build guardrails directly into your workflows:
1. Validation Nodes
Add a “Check for Nulls or Outliers” Code Node before sending any data to AI.
// Guard clause: halt the run when the upstream node returned nothing usable
if (!items.length || !items[0].json || items[0].json.length === 0) {
  throw new Error("No data found. Stopping run.");
}
return items; // otherwise pass the data through untouched
2. Double-Verification
Have your AI Agent output both:
- The answer, and
- Its reasoning summary.
Then check if both align logically.
If the variance explanation doesn’t match the data, flag it.
3. Fallback Logic
If the AI Agent fails, route to a fallback:
“AI unable to generate commentary for Department X — data appears incomplete.”
That one line of transparency turns a potential failure into a professional safeguard.
Step 4: Track Performance Metrics
Every month, give your AI agents a performance review.
I’m not kidding — make it a KPI just like any other process.
Here’s what to measure:
| Metric | Why It Matters | Target |
|---|---|---|
| Execution Success Rate | Reliability of workflow | >95% |
| Average Runtime | Measures efficiency and API health | Steady or improving |
| Error Count / Week | Tracks stability | Decreasing |
| AI Accuracy | How often the commentary or output makes sense | 90%+ validated |
| Manual Interventions | When humans had to fix or override results | Trending down |
Log these in a shared tracker or Power BI dashboard.
That’s how you prove your automation is creating real, measurable impact — not just “feeling faster.”
Step 5: Add Notifications for Failures
Even the best automation fails occasionally — and the worst failure is the one you don’t know about.
Use n8n’s Error Workflow Trigger:
- When any workflow fails, it launches a backup workflow.
- That backup sends you a message in Teams or Slack with:
- The workflow name
- Error message
- Timestamp
- Link to re-run
Example:
🚨 Workflow Failed: Month-End Close Assistant
Error: File not found at path /Finance/MonthEnd/2025-10
Occurred: Oct 7, 2025 – 8:12 AM
👉 [View Execution Log in n8n]
This turns firefighting into calm problem-solving — no more mystery outages.
Step 6: Monitor Outputs for Drift
AI drift happens when your model’s tone or reasoning subtly shifts over time — maybe after a model update or new data source.
Here’s how to stay ahead:
- Save each AI output to a “history” folder or database.
- Once a month, compare samples side-by-side.
- Watch for changes in phrasing, accuracy, or tone.
If it starts writing in a weird style (“Variance is unacceptably slaying expectations 😎”), it’s time to retrain or tweak the prompt.
💬 Pro tip: Keep a “master prompt” document for each agent. It’s your control sample when results start drifting.
Step 7: Include a Human-in-the-Loop Step
Even in a mostly autonomous finance function, certain workflows should always include a human checkpoint — especially anything executive-facing or compliance-heavy.
Insert a review step before sending outputs to leadership:
- AI Agent drafts the summary or commentary.
- n8n sends it to an FP&A reviewer (you, your controller, etc.).
- Reviewer approves, rejects, or edits directly from Teams or a form.
- Once approved, the workflow continues.
This keeps trust high and ensures the AI never sends out something half-baked.
Step 8: Iterate With Feedback Loops
Just like analysts learn from feedback, your agents should too.
After each reporting cycle, ask yourself (or your team):
- Which steps still required manual fixes?
- Which outputs needed edits?
- Which prompts could be more specific?
Then update your workflow or system message accordingly.
Example:
If the Variance Agent keeps writing “Revenue increased significantly” with no number, tweak the prompt:
“Always include % changes and 1–2 quantified explanations for each variance.”
Small refinements like that compound fast — and your agents get noticeably “smarter” each month.
Step 9: Version Control Your Agents
Treat your workflows like software:
- Export each version after a big change.
- Add a short note: “v3.1 – Added Teams notifications, fixed null variance bug.”
- Store them in GitHub, OneDrive, or an internal archive.
That way, if an update breaks something (or if GPT suddenly changes tone), you can roll back in seconds.
Common Mistakes and How to Avoid Them
Let’s be real: most finance automation projects fail for the same reason diets do — people go too big, too fast, with zero plan to maintain it.
They try to automate everything at once, chase shiny tools, or hand the process to IT and hope for magic.
But AI Agents in n8n aren’t some plug-and-play miracle. They’re powerful, flexible, and—when built right—life-changing for finance teams. When built wrong, they just become another “cool demo” that no one trusts.
So let’s look at the biggest mistakes I see finance pros make (and how to avoid them) before you scale your agent ecosystem.
Mistake #1: Automating Chaos
If your current process is messy, automating it won’t fix it—it’ll just make the mess run faster.
I see this constantly:
“Let’s automate our close!”
(Translation: “Let’s hard-code our broken file structure into an n8n workflow.”)
Before you build an agent, map your process manually:
- What’s the trigger?
- Where does the data come from?
- Who approves it?
- What causes delays or confusion?
Then fix the friction first.
Automation should amplify efficiency, not enshrine dysfunction.
✅ Do this instead: Use your first workflow as a process cleanup exercise. By the time you automate it, it should already be lean and logical.
Mistake #2: Forgetting to Define the Goal
AI Agents are not mind readers (yet). If you don’t clearly define what “done” looks like, you’ll get vague, inconsistent results.
Example bad prompt:
“Summarize this data.”
You’ll get random paragraphs of fluff.
Better prompt:
“Summarize the top 3 variances >10% between actual and forecast, include % changes and 1-sentence explanations for each department.”
See the difference?
AI is powerful, but it’s not psychic. Clear direction = consistent results.
✅ Do this instead: Write your system message like you’re onboarding a new analyst on day one. The more context you give, the better the agent performs.
Mistake #3: Using AI for Everything
Not everything needs to be “smart.”
Sometimes you just need a clean VLOOKUP.
I’ve seen teams overcomplicate simple automations by adding AI where deterministic logic works better. For instance:
- Validating numeric data? Use rules.
- Moving files or renaming tabs? Use n8n’s standard nodes.
- Summarizing or interpreting? That’s where AI shines.
✅ Do this instead: Use AI where judgment is required, not where precision is.
If the output needs to be exact, keep it rule-based.
Mistake #4: Ignoring Governance
If your agents are touching financial data, you must treat them like part of your control environment.
Common fails:
- Using personal logins for workflow connections
- Storing API keys in plaintext
- No audit logs
- No oversight on outputs
All it takes is one wrong file or unsecured token to set off a compliance nightmare.
✅ Do this instead:
- Store credentials in n8n’s Credential Vault.
- Use service accounts (not personal ones).
- Log all outputs and approvals.
- Add a “Human-in-the-Loop” step for sensitive workflows.
Remember: the point is automation, not abdication.
Mistake #5: Not Testing Edge Cases
The happy path is easy.
The broken path is where things explode.
You have to test for:
- Missing data
- Renamed files
- Empty columns
- Bad dates
- Zero actuals
- Variances over 1000%
Because guess what? Those will happen every month.
✅ Do this instead:
Before going live, stress-test your workflow with bad data.
If your agent survives that, it’ll survive anything.
Mistake #6: Overcomplicating the Workflow
Finance teams love complexity.
(We can’t help it—it’s how we prove we’re smart.)
But in automation, complexity kills maintainability.
When your workflow looks like a plate of spaghetti, you’ll spend more time debugging than saving.
✅ Do this instead:
- Build one workflow per objective (close, forecast, budget).
- Chain them together using Execute Workflow Nodes.
- Document your logic in plain English inside a “Notes” node.
- Keep it so simple that if you got hit by a bus, someone else could take over tomorrow.
Mistake #7: Skipping the “Explain It Like I’m Five” Test
If your CFO or team can’t explain what your automation does in one sentence, it’s too complex or too opaque.
“It’s a neural network that ingests real-time variances via vector memory…”
No. Try again.
“It reads the variance report, summarizes key points, and emails them to leadership.”
That’s clarity.
✅ Do this instead:
When your automation is finished, explain it to a non-technical coworker.
If they understand it immediately, it’s ready.
If not, simplify until they do.
Mistake #8: Treating Automation as a “Project” Instead of a Culture
Many teams build one flashy automation, present it in a meeting, then move on.
Six months later, no one maintains it, it breaks, and everyone says,
“See, automation doesn’t work here.”
That’s not a failure of AI.
That’s a failure of ownership.
✅ Do this instead:
- Assign workflow owners (just like process owners).
- Include automation metrics in team OKRs.
- Celebrate each automation that saves time.
- Encourage peer learning—host a 15-minute “Automation Friday” where someone demos a new n8n build.
Automation isn’t a project. It’s a mindset shift.
Mistake #9: Not Measuring ROI
If you can’t show how much time or effort your automations save, leadership will always see them as “nice to have.”
✅ Do this instead:
Track these three numbers:
- Hours saved per month (time previously spent manually)
- Errors reduced (manual reporting mistakes, version issues)
- Cycle time improvements (days faster to close/report)
Then quantify it:
“We saved 40 hours/month — that’s half an FTE or ~$30k annually.”
That’s when automation stops being a cool toy and becomes a strategic lever.
