March 7, 2026 · 16 min read

Where Do We Actually Start?

What to do, in what order, how it works technically, and what good looks like.

This comes up in almost every customer conversation right now.

“We have all this unstructured information. Emails, Slack, meetings. And I want to store it somewhere so it doesn’t get lost. But then I ask myself: how do I actually make this useful?”

And a colleague, same week:

“Claude Code is cool, but I don’t really know what it should help me with.”

I hear this constantly. “We know we should use AI. Where do we actually start?”

Not “should we.” That’s settled. The question is how. And most advice out there doesn’t actually answer it. “Give everyone a license. Run a workshop. Appoint a champion.” Cool. Then what?

This is what I actually tell people. Not abstract principles. What to do, in what order, how it works technically, and what good looks like.


The Short Version

You should stop doing knowledge work. Manage it instead.

You shouldn’t write that email. You should review the draft AI wrote. You shouldn’t research that topic. You should review AI’s research. You shouldn’t build that report. You should specify what you need and review what comes back.

Your value is taste, judgment, relationships, decisions. Not the typing.

This is the practical version. Not where AI is going. What to do Monday.

There’s a staircase with four steps. Each one builds on the last. The order matters. Not just because each step generates the data the next step needs. But because each step builds the judgment. You learn to review AI-drafted meeting notes before you review AI-drafted client proposals. You learn to evaluate AI search results before you trust AI-prepared decisions. Skip a step and you’re managing something you don’t understand well enough to catch when it’s wrong.


Step 1: Automate Pattern Work

Start with the knowledge work nobody wants to do but everyone does daily. The stuff that follows a pattern.

Meeting Intelligence

This is the single highest-consensus first move across every practitioner I’ve read and talked to.

Your project lead finishes a 45-minute client call. Before they’ve made coffee, everyone on the team has a summary with the three things the client wants changed, who’s responsible for each, and the deadline they mentioned. Nobody has to write it up. Nobody has to ask “what happened in that call?”

The way it works: audio gets transcribed (speech-to-text), then an LLM processes the transcript to extract structure: decisions, action items, sentiment, topics. Some tools (Granola) let you add your own notes during the meeting and merge them with the AI summary. The best ones learn your team’s format over time.
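The extraction stage can be sketched in a few lines. This is an illustration, not any vendor's implementation: the prompt wording, the JSON keys, and the canned reply are all invented for the example; in production the reply would come from a real model call on the real transcript.

```python
import json

def build_extraction_prompt(transcript: str) -> str:
    """Second stage of the pipeline: ask an LLM to pull structure
    out of a raw speech-to-text transcript."""
    return (
        "Extract from this meeting transcript, as JSON with keys "
        "'decisions', 'action_items' (each item: 'task', 'owner', 'due'), "
        "and 'topics':\n\n" + transcript
    )

def parse_summary(llm_reply: str) -> dict:
    """The prompt instructs the model to reply with JSON, so parsing is trivial."""
    return json.loads(llm_reply)

# A canned reply stands in for the model call, so the shape of the
# result is visible without an API key.
fake_reply = json.dumps({
    "decisions": ["Ship v2 behind a feature flag"],
    "action_items": [{"task": "Update pricing page", "owner": "Dana", "due": "Friday"}],
    "topics": ["pricing", "launch timeline"],
})
summary = parse_summary(fake_reply)
print(summary["action_items"][0]["owner"])  # → Dana
```

The structured output is what makes the rest of the staircase possible: action items can be routed, summaries can be indexed, and the transcript itself goes into the knowledge base.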

Tools: Otter.ai, Granola, Fireflies, Google Meet built-in, Microsoft Teams Copilot. The space is mature.

What you don’t expect: transcripts become searchable institutional knowledge. “What did the client say about pricing in the March call?” You can answer that now. This feeds directly into Step 2. You’re building the knowledge base without even trying.

When nobody writes meeting notes anymore, people notice on day one.

Email and Communication Drafting

Every Friday, your project status email is drafted automatically from Jira ticket status, this week’s meeting summaries, and the client’s last email. You spend 3 minutes reviewing it instead of 30 minutes writing it. The email is better than what you’d have written, because it didn’t forget to mention the thing from Tuesday’s call.

This works for any recurring communication: status updates, project reports, client check-ins. AI drafts from your data. You review, edit, send.

How to start: give Claude or Gemini the last three emails you sent to a client plus this week’s context. Ask it to draft the next one. Once you see the quality, formalize it. Build templates. Or go straight to automation: n8n can pull context from Jira, email, and meeting summaries, assemble it, and generate the draft on a schedule.

The mental shift is important here. You’re not asking AI a question. You’re giving it data and a template and getting back a draft. From chatbot to managed drafting.
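The shift is visible in code: you're not writing a question, you're assembling context into a single drafting instruction. A minimal sketch, with entirely made-up sample data; a scheduler like n8n would do the same assembly from live sources.

```python
def assemble_draft_prompt(past_emails, ticket_updates, meeting_notes):
    """Pack this week's context plus past emails (for tone) into one
    drafting instruction. The model sees data and a task, not a question."""
    context = "\n".join(
        ["## Previous emails (match this tone):", *past_emails,
         "## Ticket updates this week:", *ticket_updates,
         "## Meeting notes:", *meeting_notes]
    )
    return ("Draft the next weekly status email to the client. "
            "Use only the context below; flag anything unclear.\n\n" + context)

# Hypothetical sample data standing in for Jira, email, and Step 1 summaries.
prompt = assemble_draft_prompt(
    past_emails=["Hi Alex, quick update on the rollout ..."],
    ticket_updates=["PROJ-42 moved to Done", "PROJ-51 blocked on client input"],
    meeting_notes=["Tuesday call: client wants the demo moved to the 14th"],
)
print(prompt.splitlines()[0])
```

Send `prompt` to whichever model you use, review the draft, hit send. The assembly step is the part worth automating on a schedule.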

Report Generation

Your Monday morning sales pipeline review arrives in your inbox, pre-built. Deals by stage, movement since last week, deals at risk based on last-contact-date, recommended follow-ups. You spend 10 minutes reviewing it instead of 45 minutes pulling it together from HubSpot.

A script or workflow pulls data from your tools (CRM, project management, ticketing), formats it, and runs it through an LLM to generate the narrative sections. The best version: the AI doesn’t just summarize data, it flags anomalies. “Deal X hasn’t had activity in 3 weeks” or “support tickets for Product Y increased 40% this month.”

This can be as simple as a recurring Claude or Gemini prompt with pasted data, or as sophisticated as a custom pipeline using an API. Tools like n8n can automate the data collection. The point is: reports that follow a pattern should not be written by humans.
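The "flags anomalies" part often doesn't need a model at all. The thresholds below (three weeks of inactivity, a 40% ticket increase) are the ones from the examples above, wired up as plain rules; the data shapes are invented for illustration, and an LLM's only job is writing the narrative around the flags.

```python
from datetime import date, timedelta

def flag_anomalies(deals, tickets_this_month, tickets_last_month, today):
    """Simple, explainable rules; no ML required for the detection itself."""
    flags = []
    for deal in deals:
        if today - deal["last_activity"] > timedelta(weeks=3):
            flags.append(f"Deal {deal['name']} has had no activity in 3+ weeks")
    for product, count in tickets_this_month.items():
        prev = tickets_last_month.get(product, 0)
        if prev and count / prev >= 1.4:
            pct = round((count / prev - 1) * 100)
            flags.append(f"Support tickets for {product} increased {pct}% this month")
    return flags

# Hypothetical CRM and helpdesk extracts.
today = date(2026, 3, 2)
flags = flag_anomalies(
    deals=[{"name": "Acme renewal", "last_activity": date(2026, 2, 1)}],
    tickets_this_month={"Product Y": 42},
    tickets_last_month={"Product Y": 30},
    today=today,
)
print(flags)
```

Explicit thresholds also keep the report auditable: when someone asks why a deal was flagged, the answer is a rule, not a model's hunch.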

Classification and Routing

A customer emails support. AI reads the email, classifies it (billing issue, technical bug, feature request), assesses urgency (payment blocked = high, nice-to-have question = low), and routes it to the right team member. The team member opens their queue and sees a pre-sorted, prioritized list with AI-suggested responses for the simple ones.

Under the hood: fine-tuned classification models or prompted LLMs. Feed it examples of past tickets and their categories. The model learns the patterns. For routing, map categories to team members or queues. Most helpdesk platforms have AI classification built in now. Jira has automation rules you can extend with AI. If your support inbox is just Gmail, Google Apps Script or n8n can watch incoming mail, classify it with an LLM, and route it to the right person or label automatically.
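The classify-then-route step is two small pieces: a prompt that constrains the model to a fixed label format, and a lookup from label to queue. Everything here is hypothetical (categories, queue names, the `category|urgency` reply format); real helpdesk integrations differ in the plumbing, not the shape.

```python
# Fixed label set: constraining the model's output makes routing a lookup.
CATEGORIES = ["billing issue", "technical bug", "feature request"]
ROUTES = {
    "billing issue": "finance-queue",
    "technical bug": "engineering-queue",
    "feature request": "product-queue",
}

def classification_prompt(email_body: str) -> str:
    return (f"Classify this support email as exactly one of {CATEGORIES}, "
            f"and its urgency as 'high' or 'low'. Reply only as "
            f"'category|urgency'.\n\n" + email_body)

def route(llm_reply: str) -> tuple[str, str]:
    """Map the model's constrained reply to a queue and a priority."""
    category, urgency = (part.strip() for part in llm_reply.split("|"))
    return ROUTES[category], urgency

# Pretend the model replied to a "payment blocked" email:
queue, urgency = route("billing issue|high")
print(queue)  # → finance-queue
```

The design choice to note: the narrower the output format you demand, the less review the routing step needs, because a malformed reply fails loudly instead of routing silently to the wrong place.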

Why Step 1 Goes First

Low stakes. A slightly off meeting summary costs nothing.

High visibility. Everyone sits in meetings. Everyone hates writing notes. When the AI summary is good, people notice. It builds confidence for everything that comes after.

It generates data. Transcripts, email logs, classified tickets. All of this feeds Steps 2 and 3.

One thing to watch for. A Berkeley Haas study tracked 200 employees over eight months after they got AI tools. They didn’t work less. They worked more. Product managers started writing code. Researchers took on engineering tasks. 83% said AI actually increased their workload.

The promise of AI is “do less of the boring stuff.” The reality, when it’s unmanaged, is “do more of everything.” Faster output raises expectations. Higher expectations expand scope. Expanded scope creates new work. It’s a cycle.

That’s not an argument against Step 1. It’s an argument for Step 1 being managed. When you automate meeting notes, decide in advance what happens with the freed time. The meeting summary saves 30 minutes. Those 30 minutes should go somewhere intentional, not get absorbed by scope creep.


Step 2: Make Knowledge Accessible

This is the exact problem my customer described. Knowledge scattered across tools nobody searches and channels nobody reads.

Every company has this: critical knowledge trapped in email threads, Slack channels, meeting recordings, shared drives, wikis nobody updates.

Someone asks “how did we handle this kind of request before?” and the answer is “ask Sarah, she’s been here five years.” When Sarah is on vacation, the knowledge is gone.

This is the unstructured data problem. And it’s where AI genuinely changes things.

Instead of searching for a file, you ask a question.

  • “What did we discuss with Customer X last quarter?” Actual answer, with source links to the specific meetings and emails.
  • “What are the open action items from the last project review?” Structured list, pulled from meeting summaries and Slack threads.
  • “How did we handle the GDPR compliance issue for the Y project?” Precedent, with the decision and the reasoning behind it.

A new employee can query the company’s collective knowledge on day one. Picture this: your account manager is about to call a client. They ask the system: “Give me a briefing on Client X.” They get:

  • Last 3 meeting summaries (from Step 1’s transcriptions)
  • Open action items and their status
  • Recent support tickets and their resolution
  • The last proposal that was sent
  • Any mentions of the client in Slack from the past month

Assembled in 30 seconds. Without this, the account manager would’ve spent 20 minutes digging through email, CRM, and Slack. Or just winged it.

How does the system pull that briefing together in 30 seconds? This is where it gets a bit technical, but it matters for understanding what’s possible and what the limitations are.

Traditional search is keyword-based. You search “pricing” and get every document containing the word “pricing.” If the doc you want uses the word “cost model” instead, you don’t find it.

Semantic search works differently. Your documents get converted into embeddings: mathematical representations of meaning, not just words. When you search “pricing,” the system also finds documents about “cost model,” “rate card,” “billing structure.” Because they mean similar things.

This is done with vector databases. Your documents get chunked into passages, each passage gets embedded (turned into a vector), and stored in a database optimized for similarity search. When you query, your question also gets embedded, and the system finds the passages most similar in meaning.

RAG (Retrieval-Augmented Generation) is the pattern that ties it together. When you ask a question:

  1. Your question gets embedded
  2. The system retrieves the most relevant passages from your knowledge base
  3. Those passages get fed to an LLM as context
  4. The LLM generates an answer grounded in your actual data

The key word is “grounded.” The AI isn’t making things up. It’s answering based on your documents, and it can cite which documents it used.
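The whole retrieve-then-ground loop fits in a page with a deliberately crude stand-in for the embedding model: here, hand-picked synonyms share a dimension, which is just enough to show why a "pricing" question finds a "cost model" document. A real pipeline would call an embedding API and a vector database instead; the mechanics (embed, rank by cosine similarity, feed the top passages to the model with citations) are the same.

```python
import math

# Toy embedding: synonyms land on the same dimension, so "pricing" and
# "cost" end up near each other. A real embedding model learns this.
SYNONYMS = {
    "pricing": 0, "cost": 0, "rate": 0, "billing": 0,
    "gdpr": 1, "compliance": 1, "privacy": 1,
    "renewal": 2, "contract": 2,
}

def embed(text: str) -> list[float]:
    vec = [0.0] * 3
    for word in text.lower().split():
        word = word.strip(".,?!")
        if word in SYNONYMS:
            vec[SYNONYMS[word]] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question, passages, k=2):
    """Rank passages by similarity of meaning, not keyword overlap."""
    q = embed(question)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

def grounded_prompt(question, passages):
    """Step 3 of the RAG loop: retrieved passages become the LLM's context."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (f"Answer using only these passages, citing them by number:\n"
            f"{context}\n\nQuestion: {question}")

passages = [
    "The cost model for Client X was revised in March.",
    "GDPR review for project Y closed with no findings.",
    "Renewal meeting with Acme scheduled for Q3.",
]
top = retrieve("what did we decide about pricing?", passages)
prompt = grounded_prompt("what did we decide about pricing?", top)
print(top[0])
```

Note that the question never contains the word "cost," yet the cost-model passage ranks first. That's the difference between this and keyword search, and the numbered citations in the prompt are what let the final answer say which documents it used.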

The tooling here is mature and the space is crowded. Every major cloud provider has a solution: Google Vertex AI Search, Microsoft 365 Copilot, AWS Kendra. Langdock (Berlin-based, GDPR-compliant, hosted in Europe, $15M ARR) is strong for European companies with data sovereignty requirements. You can also build a solid RAG pipeline with n8n and a vector database. RAG is a solved pattern. The question isn’t whether the technology works. It’s whether you feed it the right data.

If you want full control: LangChain or LlamaIndex as the orchestration layer, Pinecone, Weaviate, or Chroma as the vector store, OpenAI or Anthropic embeddings. This path makes sense if you have specific requirements or a technical team that wants ownership.

What Most People Get Wrong

“We need to clean up our wiki first.” No. Semantic search works on messy data. Point it at your existing tools: email, Slack, Drive, Confluence. Let people query it. The value is immediate. Don’t let perfect be the enemy of good.

Dumping everything in. The opposite mistake. Not everything should go into the knowledge base. Relevance filtering matters. Only retrieve what’s relevant for the specific question, not everything ever written. I wrote about this in depth in On Context: more context isn’t always better context. What matters is curating the right context, not pouring in everything.

Ignoring decision reasoning. Foundation Capital argues that the real value isn’t just making documents searchable. It’s capturing the reasoning behind decisions. When a VP approves a discount, the CRM logs the number. The business rationale, why this customer, why this size, disappears into forgotten Slack threads.

Start capturing decision reasoning alongside decisions. Even informally. A sentence in the CRM note. A short “rationale” field. It compounds, and it’s exactly what AI needs to make good decisions on your behalf later (Step 3).

Why This Is Step 2

It depends on Step 1. The meeting transcripts, the email drafts, the classified tickets. That’s the raw material that makes the knowledge base valuable. Without Step 1, your knowledge base is mostly a fancier search over the same stale wiki.


Step 3: Prepare Decisions

Steps 1 and 2 are about AI answering your questions. Step 3 is about AI telling you what to look at before you ask.

Your head of customer success opens their dashboard Monday morning. Instead of a list of accounts, they see:

  • “3 accounts have had no contact in 30+ days. Based on past patterns, accounts with this inactivity level churn at 2x the base rate.”
  • “Support ticket volume for Product Y increased 40% this month. The primary issue is [X]. Here’s a draft FAQ update.”
  • “Customer Z mentioned competitor evaluation in last week’s call. Relevant: they’re up for renewal in 6 weeks.”

None of this required a question. The system assembled it from meeting transcripts (Step 1), the knowledge base (Step 2), CRM data, and support tickets.

This is what proactive AI intelligence looks like across different functions:

Support and customer intelligence. Which support requests are increasing? Which customers haven’t been contacted in too long? Which deals in the pipeline are at risk based on engagement patterns? The AI doesn’t wait for you to ask. It surfaces these patterns proactively.

Pre-meeting briefings. Before a client meeting, the system automatically assembles: recent interactions (from Step 2’s knowledge base), open issues, contract status, relevant market news about the client. Your account manager walks in prepared without 20 minutes of manual prep.

Process and bottleneck analysis. Where are the bottlenecks? Which projects consistently miss deadlines? Which types of requests take longest to resolve? What’s the average time between “customer asks for X” and “customer gets X”?

Marketing and sales personalization. Dynamic content that adapts to specific segments. Outbound emails that reference the prospect’s actual situation. Campaign copy that adjusts tone and emphasis based on what’s worked for similar audiences.

How It Connects

This is where things connect across systems. CRM, support tickets, meeting transcripts, email, project management tools. APIs or integration platforms (n8n, custom pipelines) handle the plumbing.

The system tracks ticket volumes, deal progression, customer engagement. This doesn't need fancy ML. Often, simple rules plus an LLM to interpret the results work well. "If no contact in 30 days AND renewal within 90 days, flag."
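That rule, written out as code (the dates and the account record are invented for the example):

```python
from datetime import date, timedelta

def at_risk(account: dict, today: date) -> bool:
    """If no contact in 30 days AND renewal within 90 days, flag."""
    no_contact = today - account["last_contact"] > timedelta(days=30)
    renewal_soon = account["renewal"] - today <= timedelta(days=90)
    return no_contact and renewal_soon

today = date(2026, 3, 2)
acct = {"name": "Acme", "last_contact": date(2026, 1, 15), "renewal": date(2026, 5, 1)}
print(at_risk(acct, today))  # → True (46 days quiet, renewal in 60)
```

Run a handful of rules like this nightly over the CRM export, hand the flags to an LLM to write the Monday summary, and you have the dashboard from the scenario above.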

Daily or weekly summaries. Proactive, not reactive. Push to Slack, email, or dashboard.

Agentic workflows. More advanced: AI agents that don’t just analyze but take preliminary action. Draft the follow-up email for the at-risk account. Create the Jira ticket for the recurring bug. Prepare the agenda for the next client meeting based on open items. You review and approve. The tools to build this exist today (n8n, LangChain, Claude’s tool use, custom pipelines). What’s still catching up is adoption. Most companies haven’t built these yet, but the components are all there.
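The review-and-approve gate is the important design choice, and it's tiny. A sketch under loose assumptions: the `approve` callback stands in for whatever your review surface is (a Slack button, a dashboard queue), and the finding and draft action are hypothetical.

```python
def agent_step(finding: str, draft_action: str, approve) -> str:
    """The agent prepares the action; a human gate decides whether it ships.
    Nothing executes unreviewed."""
    if approve(finding, draft_action):
        return f"EXECUTED: {draft_action}"
    return f"HELD FOR EDIT: {draft_action}"

result = agent_step(
    finding="Account Acme inactive 46 days, renewal in 60",
    draft_action="Send follow-up email draft to account owner",
    approve=lambda finding, action: True,  # stand-in for a real approval click
)
print(result)  # → EXECUTED: Send follow-up email draft to account owner
```

Whether the gate is a lambda or a Slack workflow, the structure is the same: the AI does the preparation, the human keeps the decision.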

Honest Limitations

This is where most of the long-term value sits. But the tooling here is thinner than Steps 1 and 2.

Meeting transcription is a solved problem. Enterprise semantic search is mature. Proactive AI intelligence across multiple systems, pulling patterns from your CRM, support desk, meeting history, and project tools simultaneously? Still early.

Which is why getting Steps 1 and 2 right matters so much. They build the data foundation that makes Step 3 possible. Without searchable meeting transcripts and a unified knowledge base, Step 3 has nothing to work with.

The companies seeing the most value here built their own pipelines. It’s not off-the-shelf yet. But the components are all available, and if your team has any engineering capacity, this is where it should go.


Step 4: Managed Workforce

The end state.

This is where the thesis lands: you direct AI work and review output. Your day is taste, judgment, relationships, decisions.

Email drafts prepared from context, you review and send. Research completed with sources cited, you evaluate and decide. Reports built from live data, you verify and distribute. Client proposals drafted from past proposals and current requirements, you refine and personalize.

Not for one task. For most of your knowledge work.

Shopify put AI competency in performance reviews. CEO Tobi Lutke: “Reflexive AI usage is now a baseline expectation.” Employees must justify why a task can’t be done by AI before requesting headcount. Revenue growing 30% year-over-year.

The critical detail: Shopify didn’t just hand people tools. They changed how work is evaluated. AI proficiency became part of how you’re assessed. That’s organizational design, not tool deployment.

The Honest Caveat

“Manage, not do” requires knowing what you’re managing. A study from January 2026 found that users without AI assistance actually scored higher on tasks than those relying on AI, across all experience levels. Over-reliance degrades the skills needed to evaluate AI output.

This matters. The danger is creating “prompt managers” who can’t tell good output from confident-sounding garbage. You need enough domain expertise to manage well. The ideal is T-shaped: broad AI orchestration skills plus deep domain knowledge in what you’re managing.

Step 4 is a destination, not a starting point. The staircase builds both the infrastructure and the organizational muscle to get here.

The biggest constraint on AI adoption isn’t technology. It’s leadership, organizational design, and the willingness to change how work gets evaluated.


What Makes This Work

The staircase doesn’t stand on its own. It needs a few things underneath it.

Someone Who Owns This

Not IT. Not the “innovation team.” Someone who does the actual work and can show what good looks like.

Two or three people who are genuinely curious, use AI daily, and can translate between “what the tool can do” and “what our team needs.” They test workflows, capture what works, and distribute it across the org.

Their job: pick up what the frontline discovers, formalize it, and make it repeatable. Not a committee. Not a strategy deck. People who ship.

Clear Data and Privacy Guidelines

Where does company data go? What’s okay to put into AI tools? What about customer data?

In the absence of clear guidelines, the default is “don’t.” And that’s exactly what happens. Or worse: people use AI secretly with no guardrails at all.

You don’t need a fifty-page policy. You need practical rules:

  • Approved tools: which AI tools are sanctioned, with what data?
  • Data boundaries: what’s okay to input (internal notes, drafts) vs. what’s not (customer PII, financial data)?
  • Enterprise tiers: are you on plans that contractually prevent training on your data? (Consumer ChatGPT ≠ ChatGPT Enterprise)
  • European context: GDPR is real. If you’re in the EU, data sovereignty matters. This is why tools like Langdock (EU-hosted) exist.

Write it on one page. Make it findable. Update it quarterly.

AI Literacy Beyond “How to Prompt”

Most people’s mental model of AI is still “chatbot.” You open a window. You ask a question. You get an answer. That’s maybe 10% of the value.

Think of it as a ladder:

  • Level 1, Chatbot: “I ask it things sometimes.” ~10% of the value.
  • Level 2, Assistant: “It helps me with specific tasks.” ~30%.
  • Level 3, Workflow partner: “It’s part of how I do my job.” ~60%.
  • Level 4, Managed workforce: “I direct AI work and review output.” ~90%.

A survey of 5,000 workers found that 40% of executives save eight or more hours per week with AI. Two-thirds of non-management staff? Less than two hours. Or nothing. Same tools. Same AI. Different mental model.

Executives intuitively operate at Level 3-4. They use AI to prepare decisions, draft communications, assemble context. Staff stays at Level 1-2. Ask a question, get an answer, go back to doing things the old way.

Part of this is the tooling itself. There’s a difference between a chat window and an agentic assistant. Chat windows are Level 1: you ask, it answers. Agentic assistants are Level 3-4: you give them a task and they execute it. “Draft this week’s status update from Jira and the meeting notes.” “Research this client and prepare a briefing.” “Review this proposal and flag issues.” They do multi-step work autonomously. You review the output.

The most interesting tools here are universal assistants like Claude Code or Cowork, or the open-source equivalents (OpenCode, OpenWork). These aren’t features bolted onto one product. They’re general-purpose agents that work across your tools, files, and systems. Copilot helps you inside Microsoft. Claude Code helps you with whatever you’re working on.

The colleague who said “Claude Code is cool, but I don’t know what it should help me with” has a Level 4 tool and is using it at Level 1. The ladder isn’t just a mindset shift. It’s knowing what your tools can actually do.

Literacy isn’t prompt engineering courses. It’s the mental model shift from “tool I use” to “workforce I manage.” And the best way to teach it is to show it. Pick a real workflow, set it up at Level 3-4, demonstrate the before/after. That does more than any training deck.

Quality Review Habits

“Manage, not do” means AI does the work and you review it. That requires knowing what to check.

The operating mode is “trust but verify.” Not “assume it’s wrong” (that kills the efficiency gain) and not “assume it’s right” (that’s how you ship hallucinated data to a client).

Practical approach:

  • Spot-check facts and numbers. LLMs confidently fabricate specifics. Check dates, figures, names.
  • Read for reasoning, not just conclusions. If you asked for an analysis, check whether the logic holds, not just whether the summary sounds right.
  • Know your AI’s weak spots. Every model has domains where it’s strong and domains where it’s confidently wrong. Learn yours through use.
  • Don’t review everything equally. Internal status update? Quick skim. Client-facing proposal? Line by line.

The Checklist

Where are you right now?

  • Every knowledge worker has access to an AI tool they can delegate work to, not just a chat window (Claude, Gemini, Copilot)
  • Meetings are transcribed and summarized automatically
  • There are clear, practical guidelines for AI use with company and customer data
  • At least one recurring report or communication is AI-drafted and human-reviewed
  • At least one workflow is AI-augmented end-to-end
  • Internal knowledge (meetings, email, docs) is semantically searchable
  • Someone owns AI adoption who uses AI daily and does the actual work
  • AI competency is part of how people are evaluated

If you’re missing items in the first half, that’s your starting point. Before strategy, before the staircase, before everything else. Those are table stakes.

If you’ve got the first half and you’re missing the second, you’re ready for Steps 2-4.


Where Do We Start?

Back to the customer question.

Start with the checklist. Then start climbing.

Here’s your first move. This week: turn on meeting transcription for one recurring meeting. After the meeting, send the AI summary to the attendees. Don’t announce it as a pilot. Just do it. See what people say.

Then pick one recurring communication. A status update, a weekly report, a client check-in. Give AI the context, review the draft, send it. See how much time comes back. Then use AI to prepare the agenda for your next meeting from last meeting’s action items.

That’s Step 1. Use that credibility, and that data, to build toward Steps 2, 3, and 4.

The tools are here. They work. The gap isn’t technology. It’s understanding what AI is actually for. Not a chatbot you ask questions. Not a tool that helps you work faster.

A workforce you manage.