
bencoding

    • About
    • Publications
  • Microsoft Agent Framework: Designing Human-in-the-Loop Agents That Enterprises Can Actually Trust

    Apr 22, 2026 · Agents, AI
  • Demo-Grade vs Ship-Grade: The Most Expensive Confusion in AI

    A great demo is a dopamine hit with a budget. It is the moment when a messy idea turns into something you can click, react to, and show your board with confidence. And in 2026, with copilots, agents, and “vibe-coded” prototypes, the demo is getting easier to manufacture than ever.… Read more ⇢

  • Long Conversations Break Agents Before They Break Models

    There is a mistake I see over and over in LLM projects. Teams assume that once they pick a model with a large context window, memory is basically solved. They think the hard part is buying enough room. It usually is not. The hard part is deciding what deserves to… Read more ⇢

  • Ship Like a Creator: What MrBeast’s Production Memo Teaches Modern Product Teams

    Most product teams still build like they are producing a film, even as the playbook has shifted under their feet. This is why the internal creator-style operating guidance in How-To-Succeed-At-MrBeast-Production.pdf is so useful: it reads less like entertainment advice and more like a blueprint for how modern product teams should… Read more ⇢

  • Context Window Compaction in Mastra: How to Keep Agents Sharp as Conversations Grow

    Some teams think about long context the wrong way. They treat the context window like a storage upgrade. Bigger model. Bigger window. Bigger bill. Problem solved. That is not how this works in production. As conversations get longer, the real challenge is not whether the model can technically accept more… Read more ⇢

  • How to Ship Microsoft Agent Framework Skills from a CMS Instead of the File System

    Most teams start with Microsoft Agent Framework skills on disk because that is the default mental model the framework encourages today. In .NET, FileAgentSkillsProvider is explicitly documented as an AIContextProvider that discovers skills from filesystem directories and follows a progressive disclosure pattern: advertise the skill, load the full SKILL.md only when needed, then read supporting resources… Read more ⇢

  • How to Read Microsoft Agent Framework Skills from a Database Instead of the File System

    Most teams start with file-based skills because that is the built-in model in Microsoft Agent Framework today. In .NET, the built-in FileAgentSkillsProvider discovers SKILL.md files from directories, advertises skill names and descriptions in the prompt, returns the full skill body through load_skill, and reads supporting files through read_skill_resource. That model is clean, portable, and easy to… Read more ⇢

  • Task Context vs Shared Context: The Mental Model That Makes AI Product Teams Actually Scale

    AI did not introduce the need for context. It exposed how little of it most teams have. Right now, “context engineering” is having its moment. People talk about RAG, long context windows, tool calling, and standards like the Model Context Protocol. Those are real advances. But they… Read more ⇢

  • Your Agent Does Not Need More Prompt. It Needs Memory.

    Most teams try to fix weak agents by rewriting prompts. That is usually the wrong move. The real issue is that the agent has no durable memory model. It can answer the current turn, but it cannot reliably carry forward user preferences, prior decisions, task context, or the small facts… Read more ⇢

  • Defining Product Value: Stop Treating Products Like Projects

    Most leadership teams say they are building “product value,” then immediately measure it like a finance exercise. ROI. LTV. Payback period. Those numbers matter, but they are also the fastest way to undervalue the most important products you will ever build. The uncomfortable truth is that many of the products… Read more ⇢


About BENCODING

Writing on enterprise AI for CTOs, operators, and builders: the challenges, the foundations, and where the field is heading.

Written by Ben Bahrenburg. For the full bio and an AI chat grounded in my writing, visit bahrenburgs.com.

Elsewhere: LinkedIn · GitHub · bahrenburgs.com · RSS
