Most teams start with Microsoft Agent Framework skills on disk because that is the default mental model the framework encourages today. In .NET, FileAgentSkillsProvider is explicitly documented as an AIContextProvider that discovers skills from filesystem directories and follows a progressive disclosure pattern: advertise the skill, load the full SKILL.md only when needed, then read supporting resources on demand. That is a strong default, but it is still a file-system model, and Microsoft’s current .NET package is also still documented as prerelease. (Microsoft Learn)
That approach works well when skills are developer-owned assets that change with source code. It starts to break down when skills become operational artifacts. The minute you need legal review, policy updates, regional variations, tenant-specific skills, staged rollout, or emergency rollback without waiting for an application deploy, your storage model is no longer a technical detail. It becomes part of your operating model. Ben Bahrenburg’s March 30 article made that case for moving skills into a database. I think the next step for many enterprise teams is even more pragmatic: move skills into a CMS built for managed content operations, and let your C# agent runtime consume that content in a governed way.
I would implement this with Payload CMS as the skill control plane and a custom C# AIContextProvider as the runtime integration layer. Payload gives you structured collections, generated APIs, drafts, versions, access control, hooks, and an admin UI. Microsoft Agent Framework gives you the runtime seam through AIContextProvider and function tools created with AIFunctionFactory.Create. Put those together and you get a clean split: content teams manage skills as governed content, while the .NET application remains the execution boundary. (Payload)
Why a CMS is a better fit than a database for many enterprises
A database solves persistence. A CMS solves management.
That distinction matters. A raw database-backed solution usually forces you to build your own editorial workflow, role-based access patterns, audit history, publishing controls, and release process. Payload already gives you much of that shape. Collections automatically generate Local API, REST API, and GraphQL API endpoints. Drafts are built on top of versions, which means editors can change content without publishing it immediately. Access control can be scoped by operation, role, or document criteria. Hooks let you integrate with outside systems when content changes. (Payload)
From a management perspective, this is the real reason to do it. Skills are increasingly not just prompts. They are encoded operating policy. In a regulated environment, the question is not only “can the agent load the skill?” The question is “who changed it, who approved it, when does it go live, which tenant sees it, and how do we roll it back?” Payload’s drafts, versions, and access control line up much more naturally with that set of questions than a homegrown admin screen over a SQL table. (Payload)
There is also a release-management advantage. File-based skills tie skill rollout to application rollout. A CMS-based model decouples the two. Your engineering team can ship the runtime once, while product operations, risk, or domain experts can release updated skills on their own cadence through a controlled publishing process. That is a much better fit when the logic inside a skill changes more frequently than the code that executes it.
The architecture I recommend
I would keep the same progressive disclosure model Microsoft already documents for file-based skills. Do not dump full skill bodies into the system prompt. Advertise a small set of visible skills, let the agent call load_skill when it needs the full body, and let it call read_skill_resource only when it needs supporting material. That preserves token efficiency and keeps the interaction model aligned with the framework. (Microsoft Learn)
The shift is simply where the content comes from. Payload becomes the source of truth for:
- skill metadata
- skill body content
- supporting resources
- publish state
- version history
- tenant and environment targeting
- editorial ownership and approval
Your C# layer does three jobs:
- Query Payload for the set of skills the current run is allowed to see
- Inject that advertisement into the agent through AIContextProvider
- Expose load_skill and read_skill_resource as function tools backed by Payload APIs
That model stays true to the framework. AIContextProvider is documented as a lifecycle extension point that can provide additional context and function tools during invocation, and Microsoft documents C# function tools through AIFunctionFactory.Create. (Microsoft Learn)
Modeling skills in Payload
The mistake I would avoid is treating a skill as one giant blob of markdown with no structure. That puts you back in document chaos. Instead, define a first-class collection for skills.
A practical agent-skills collection in Payload should include fields like:
- name
- slug
- description
- instructionBody
- tenantScope
- environmentScope
- isEnabled
- effectiveFrom
- effectiveTo
- resourceManifest
- toolPolicy
- owner
- riskTier
I would enable versions and drafts on this collection so edits can be staged and published deliberately. That gives you native revision history and draft publishing without inventing a separate release system. (Payload)
A simple Payload collection config might look like this:
```typescript
import type { CollectionConfig } from 'payload'

export const AgentSkills: CollectionConfig = {
  slug: 'agent-skills',
  admin: {
    useAsTitle: 'name',
  },
  versions: {
    drafts: {
      autosave: true,
    },
  },
  access: {
    read: ({ req: { user } }) => Boolean(user),
    create: ({ req: { user } }) => user?.role === 'skill-admin',
    update: ({ req: { user } }) =>
      user?.role === 'skill-editor' || user?.role === 'skill-admin',
    delete: ({ req: { user } }) => user?.role === 'skill-admin',
  },
  fields: [
    { name: 'name', type: 'text', required: true },
    { name: 'slug', type: 'text', required: true, unique: true },
    { name: 'description', type: 'textarea', required: true },
    {
      name: 'instructionBody',
      type: 'code',
      admin: { language: 'markdown' },
      required: true,
    },
    {
      name: 'tenantScope',
      type: 'array',
      fields: [{ name: 'tenantId', type: 'text', required: true }],
    },
    {
      name: 'environmentScope',
      type: 'select',
      options: ['dev', 'test', 'prod'],
      hasMany: true,
      required: true,
    },
    { name: 'isEnabled', type: 'checkbox', defaultValue: true },
    { name: 'effectiveFrom', type: 'date' },
    { name: 'effectiveTo', type: 'date' },
    {
      name: 'resources',
      type: 'array',
      fields: [
        { name: 'name', type: 'text', required: true },
        { name: 'body', type: 'textarea', required: true },
        { name: 'contentType', type: 'text' },
      ],
    },
    {
      name: 'riskTier',
      type: 'select',
      options: ['low', 'medium', 'high'],
      required: true,
    },
  ],
}
```
That schema is code-defined, which is important. You still want developers to control the shape of skills. What you are delegating is the management of skill content, not the integrity of the platform.
The C# side: consume Payload through a repository
Payload collections automatically expose APIs, so your .NET runtime does not need direct database access. It can consume Payload through REST or GraphQL. In most enterprise environments I would choose REST for simplicity and cacheability, and I would keep the C# code behind a repository interface so the runtime does not care whether the backing system is Payload today or something else later. (Payload)
```csharp
public sealed record AdvertisedSkill(
    string Name,
    string Description,
    string Slug,
    string Version);

public sealed record SkillDocument(
    string Name,
    string Slug,
    string Description,
    string InstructionBody,
    string Version,
    bool IsEnabled,
    DateTimeOffset? EffectiveFrom,
    DateTimeOffset? EffectiveTo,
    IReadOnlyList<SkillResource> Resources);

public sealed record SkillResource(
    string Name,
    string Body,
    string ContentType);

public interface ISkillRepository
{
    Task<IReadOnlyList<AdvertisedSkill>> ListAdvertisedSkillsAsync(
        string tenantId, string environment, CancellationToken ct);

    Task<SkillDocument?> GetSkillAsync(
        string slug, string tenantId, string environment, CancellationToken ct);

    Task<SkillResource?> GetSkillResourceAsync(
        string slug, string resourceName, string tenantId, string environment, CancellationToken ct);
}
```
And a Payload-backed implementation:
```csharp
public sealed class PayloadSkillRepository : ISkillRepository
{
    private readonly HttpClient _httpClient;

    public PayloadSkillRepository(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<IReadOnlyList<AdvertisedSkill>> ListAdvertisedSkillsAsync(
        string tenantId, string environment, CancellationToken ct)
    {
        var url = "/api/agent-skills"
            + "?where[isEnabled][equals]=true"
            + $"&where[environmentScope][in]={environment}"
            + "&depth=0&limit=100";

        using var response = await _httpClient.GetAsync(url, ct);
        response.EnsureSuccessStatusCode();

        var payload = await response.Content.ReadFromJsonAsync<PayloadListResponse>(cancellationToken: ct);

        return payload?.Docs
            .Where(d => IsVisibleToTenant(d, tenantId))
            .Select(d => new AdvertisedSkill(d.Name, d.Description, d.Slug, d.UpdatedAt))
            .ToList() ?? [];
    }

    public async Task<SkillDocument?> GetSkillAsync(
        string slug, string tenantId, string environment, CancellationToken ct)
    {
        var url = "/api/agent-skills"
            + $"?where[slug][equals]={Uri.EscapeDataString(slug)}"
            + "&where[isEnabled][equals]=true"
            + $"&where[environmentScope][in]={environment}"
            + "&draft=false&depth=1&limit=1";

        using var response = await _httpClient.GetAsync(url, ct);
        response.EnsureSuccessStatusCode();

        var payload = await response.Content.ReadFromJsonAsync<PayloadListResponse>(cancellationToken: ct);
        var doc = payload?.Docs.FirstOrDefault();
        if (doc is null || !IsVisibleToTenant(doc, tenantId)) return null;

        return new SkillDocument(
            doc.Name,
            doc.Slug,
            doc.Description,
            doc.InstructionBody,
            doc.UpdatedAt,
            doc.IsEnabled,
            doc.EffectiveFrom,
            doc.EffectiveTo,
            doc.Resources.Select(r => new SkillResource(r.Name, r.Body, r.ContentType)).ToList());
    }

    public async Task<SkillResource?> GetSkillResourceAsync(
        string slug, string resourceName, string tenantId, string environment, CancellationToken ct)
    {
        var skill = await GetSkillAsync(slug, tenantId, environment, ct);
        return skill?.Resources.FirstOrDefault(r =>
            string.Equals(r.Name, resourceName, StringComparison.OrdinalIgnoreCase));
    }

    private static bool IsVisibleToTenant(PayloadSkillDoc doc, string tenantId) =>
        doc.TenantScope.Count == 0 || doc.TenantScope.Any(t => t.TenantId == tenantId);
}
```
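The `where[...]` strings the repository concatenates follow Payload's qs-style REST query format. As a sanity check on that shape, here is a small TypeScript sketch that builds the same advertisement query; the field names assume the agent-skills collection defined earlier, and the base URL in the usage comment is hypothetical.

```typescript
// Builds the qs-style filter string Payload's REST API accepts for the
// agent-skills collection. Mirrors the query the C# repository assembles.
function buildSkillListQuery(environment: string): string {
  const parts = [
    'where[isEnabled][equals]=true',
    `where[environmentScope][in]=${encodeURIComponent(environment)}`,
    'depth=0', // no relationship population needed for the advertisement pass
    'limit=100',
  ]
  return `/api/agent-skills?${parts.join('&')}`
}

// Usage (hypothetical base URL):
// const res = await fetch(`https://cms.example.com${buildSkillListQuery('prod')}`)
```

Keeping the query construction in one place, whichever side builds it, makes it easier to evolve the filter (for example, adding effectiveFrom/effectiveTo windows) without hunting through string concatenations.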
Wiring it into Microsoft Agent Framework
The runtime pattern is the same as the file-based provider. The difference is that your context comes from Payload at invocation time instead of from disk. Microsoft documents AIContextProvider as participating in the invocation lifecycle, and function tools can be added using AIFunctionFactory.Create. (Microsoft Learn)
```csharp
using System.ComponentModel;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

public sealed class PayloadSkillsProvider : AIContextProvider
{
    private readonly ISkillRepository _repository;
    private readonly string _environment;

    public PayloadSkillsProvider(ISkillRepository repository, string environment)
        : base(null, null)
    {
        _repository = repository;
        _environment = environment;
    }

    protected override async ValueTask<AIContext> ProvideAIContextAsync(
        InvokingContext context, CancellationToken cancellationToken = default)
    {
        var tenantId = ResolveTenant(context);
        var skills = await _repository.ListAdvertisedSkillsAsync(
            tenantId, _environment, cancellationToken);

        var advertised = string.Join("\n", skills.Select(s =>
            $"""
            <skill>
              <name>{s.Slug}</name>
              <description>{s.Description}</description>
            </skill>
            """));

        return new AIContext
        {
            Instructions =
                $"""
                You have access to the following skills:
                {advertised}
                When a task clearly matches one of these skills, call load_skill with the exact skill slug.
                Only call read_skill_resource when additional reference material is needed.
                """,
            Tools =
            [
                AIFunctionFactory.Create(LoadSkillAsync),
                AIFunctionFactory.Create(ReadSkillResourceAsync)
            ]
        };
    }

    [Description("Load the full instructions for a skill.")]
    public async Task<string> LoadSkillAsync(
        [Description("Exact skill slug")] string skillSlug,
        CancellationToken cancellationToken = default)
    {
        var tenantId = TenantContext.CurrentTenantId;
        var skill = await _repository.GetSkillAsync(skillSlug, tenantId, _environment, cancellationToken);
        return skill?.InstructionBody ?? $"Skill '{skillSlug}' was not found.";
    }

    [Description("Read a named resource for a skill.")]
    public async Task<string> ReadSkillResourceAsync(
        [Description("Exact skill slug")] string skillSlug,
        [Description("Resource name")] string resourceName,
        CancellationToken cancellationToken = default)
    {
        var tenantId = TenantContext.CurrentTenantId;
        var resource = await _repository.GetSkillResourceAsync(
            skillSlug, resourceName, tenantId, _environment, cancellationToken);
        return resource?.Body ?? $"Resource '{resourceName}' was not found for skill '{skillSlug}'.";
    }

    private static string ResolveTenant(InvokingContext context) =>
        TenantContext.CurrentTenantId;
}
```
And then:
```csharp
AIAgent agent = chatClient.AsAIAgent(new ChatClientAgentOptions
{
    Name = "OperationsAgent",
    ChatOptions = new() { Instructions = "You are a controlled enterprise assistant." },
    AIContextProviders =
    [
        new PayloadSkillsProvider(skillRepository, environment: "prod")
    ]
});
```
Release management is where this architecture really pays off
This is the part engineering teams often undersell. Payload drafts let you separate authoring from publishing. Versions give you history. Access control lets you separate editors from approvers. Hooks let you notify downstream systems when a skill changes. Live Preview and autosave can even be used to give a controlled editorial experience when you want reviewers to see how a skill reads before it is published. (Payload)
That means your release flow can look like this:
- A domain expert edits a skill as a draft
- A risk or operations lead reviews the draft
- The draft is published to a target environment
- A Payload hook notifies the .NET runtime to clear cache or refresh a snapshot
- New agent runs see the updated skill without redeploying the application
That is dramatically better than “open a pull request to change markdown in the repo” when the real owners of the content are not developers.
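The notification step in that flow maps to a Payload afterChange hook on the agent-skills collection. Below is a minimal TypeScript sketch with the hook body written as a plain function so it is self-contained; in a real config it would be registered under hooks.afterChange. The event shape, RUNTIME_REFRESH_URL, and the refresh endpoint are assumptions for illustration, not Payload or Agent Framework APIs.

```typescript
const RUNTIME_REFRESH_URL = 'http://localhost:5000/internal/skills/refresh' // hypothetical endpoint

type SkillDoc = { slug: string; updatedAt: string }

// slug + updatedAt identify exactly which cached skill snapshot the runtime
// should evict. (Event shape is an assumption for this sketch.)
function buildSkillChangeEvent(doc: SkillDoc, operation: 'create' | 'update') {
  return { skillSlug: doc.slug, version: doc.updatedAt, operation }
}

// Hook body: POST the change event to the .NET runtime so it refreshes its
// skill cache before the next agent run.
async function notifySkillChanged(doc: SkillDoc, operation: 'create' | 'update'): Promise<void> {
  await fetch(RUNTIME_REFRESH_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildSkillChangeEvent(doc, operation)),
  })
}
```

If the runtime caches skill lists per tenant, the event gives it enough to evict a single entry rather than flushing everything.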
The risks to plan for
The first risk is governance drift. Because it becomes easier to edit skills, it also becomes easier to introduce bad instructions. That means you need stronger approval discipline, not weaker discipline. Payload access control should be used to separate who can author from who can publish. (Payload)
The second risk is runtime inconsistency. If you advertise one version of a skill and load another because content changed mid-run, the agent can operate against a moving target. My recommendation is to include a version or updated timestamp in the advertised metadata and pin the chosen version in session state for the rest of the run or session.
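A sketch of that pinning idea, in TypeScript for brevity; the names here (SessionSkillPins and friends) are illustrative, not framework APIs, and the same pattern translates directly to C# session state.

```typescript
type Advertised = { slug: string; version: string }

// Pins the version of each skill seen at advertisement time, so a mid-run
// publish in the CMS cannot swap instructions under the agent.
class SessionSkillPins {
  private pins = new Map<string, string>()

  // Record versions when skills are advertised at the start of the run.
  // First advertisement wins; later re-advertisements do not move the pin.
  advertise(skills: Advertised[]): void {
    for (const s of skills) {
      if (!this.pins.has(s.slug)) this.pins.set(s.slug, s.version)
    }
  }

  // When load_skill runs, request exactly this pinned version from the CMS.
  pinnedVersion(slug: string): string | undefined {
    return this.pins.get(slug)
  }
}
```

Serving a pinned version does require that the loader can fetch a specific revision, which Payload's version history supports; the pin is simply the key into it.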
The third risk is unsafe action execution. If a skill can lead to external side effects, do not rely on content governance alone. Microsoft documents human-in-the-loop approval for function tools, and that should be used for high-risk actions such as sending emails, updating records, or triggering downstream workflows. (Microsoft Learn)
My recommendation as a technical architect
Use the file system when skills are developer assets. Use a CMS when skills are managed business instructions. That is the decision line.
If your organization wants product managers, policy owners, legal reviewers, or operations leaders to participate in the lifecycle of skills, then skills are no longer just files. They are governed content. Payload gives you the management plane. Microsoft Agent Framework gives you the execution seam. C# remains the runtime backbone that enforces visibility, tooling, session behavior, and safety.
That is the architecture I would put into production.