---
title: "Medisolv CLI Creator"
type: "Skill"
slug: "cli-creator"
icon: "terminal"
category: "Platform"
tags: ["Meta", "CLI", "Python", "Click", "Testing", "Developer Tools", "Platform", "Augment Code"]
installs: "0"
author: "Medisolv Platform Team"
authorInitial: "M"
lastUpdated: "2026-04-06"
popularity: "5.0/5"
reviewCount: "New"
platformTags: ["v1.0", "Internal"]
installLabel: "SKILL.md"
securityBadges:
  - label: "Data Handling"
    status: "No PHI — generates code and docs only"
  - label: "Compliance"
    status: "Internal Developer Use"
---

# Medisolv CLI Creator

Gives any AI agent the complete knowledge needed to scaffold, document, and test a production-ready command-line tool that works well alongside Medisolv's Skills and MCPs. Install this skill so your agent can design agent-native CLIs using practical conventions inspired by CLI-Anything, without confusing a standalone CLI with a Skill or an MCP server.

## When to Use

Use this skill when you need to:

- Scaffold a brand-new CLI tool from scratch with a clean package layout and command surface
- Design a command-line tool that works well for both humans and AI agents
- Wrap an internal API, desktop workflow, or repetitive engineering task behind a clean CLI
- Add `--json` output, inspection commands, and predictable exit behavior so AI tools can script it safely
- Create a real install/test/documentation workflow instead of a one-off script with no packaging or verification
- Verify that an existing CLI follows Medisolv conventions for installation docs, usage examples, and agent-friendly design
- Onboard a new engineer who needs a practical recipe for creating agent-friendly CLIs without reverse-engineering other projects

## Tool Definition

```json
{
  "name": "cli_creator",
  "description": "You are a Medisolv platform engineer who specializes in creating agent-native CLI tools. When asked to create or scaffold a CLI, always follow the Medisolv conventions in this skill. Generate the CLI package, tests, README, and any companion skill guidance with no critical sections skipped.",
  "capabilities": [
    "scaffold a new CLI package and command surface",
    "design subcommands, REPL behavior, and JSON output for agent use",
    "generate pyproject.toml, source files, tests, and README",
    "define installation, usage, and AI-tool guidance sections",
    "verify an existing CLI against Medisolv and CLI-Anything-inspired conventions"
  ],
  "constraints": [
    "Always decide whether the user needs a new Skill, a new MCP, a new CLI, or a combination before scaffolding",
    "Always provide machine-readable output mode, preferably via a global --json flag",
    "Always include list/info/status style commands before mutation-heavy commands so agents can inspect state safely",
    "Always prefer wrapping the real backend or real API instead of reimplementing complex behavior with toy substitutes",
    "Always include runnable tests that verify meaningful behavior, not just import success",
    "Always make installation and usage copy-pasteable for engineers using terminals, Auggie, and Augment Code",
    "Always fail with clear, actionable error messages and non-zero exit codes on errors"
  ]
}
```

## Workflow

Follow this lifecycle for every new CLI tool:

1. **Capture intent** — clarify what the CLI should automate, who will use it, and whether it should be standalone, agent-facing, or both
2. **Choose the resource shape** — confirm whether the request needs only a CLI, or also a Skill and/or MCP alongside it
3. **Design the command surface** — define command groups, required inputs, output modes, and whether interactive REPL mode is useful
4. **Scaffold the package** — create the CLI source, packaging metadata, README, and tests
5. **Document the CLI** — add a project `README.md` with installation, usage, JSON output, and troubleshooting guidance
6. **Test** — verify `--help`, representative commands, JSON output, and any real backend integration
7. **Iterate** — tighten naming, errors, docs, and examples until the CLI is reliable for both humans and agents

---

## First Decision: Does the User Need a Skill, MCP, or CLI?

Before writing anything, classify the request:

| If the user needs... | Build... | Why |
| --- | --- | --- |
| Reusable instructions for an AI tool | **Skill** | Skills teach the AI how to work |
| Runtime access to live APIs or data | **MCP** | MCP servers expose tools/resources/prompts to the AI |
| A terminal command engineers can install and run directly | **CLI** | CLIs work for humans, scripts, and AI agents |
| A terminal tool plus AI guidance for using it well | **CLI + Skill** | The CLI executes; the Skill teaches the AI how to use it |

> **Rule of thumb:** if the output should be executable from a shell prompt like `my-tool report generate`, you're building a CLI. If the AI also needs reusable instructions for how to use that CLI well, add a companion Skill under `skills/<slug>/SKILL.md`.

## What Makes a CLI Agent-Native?

An agent-native CLI is not just a script with arguments. It is a tool that an AI can reliably discover, invoke, inspect, and recover from when something fails.

Core traits:

- **Structured output** — supports `--json` or another machine-readable mode
- **Inspectable state** — has `list`, `info`, `status`, or `show` commands so the AI can look before changing anything
- **Clear errors** — returns non-zero on failure and prints actionable messages
- **Composable commands** — commands do one logical thing and can be chained by scripts or agents
- **Stable naming** — subcommands and flags are predictable, explicit, and documented
- **Real backend integration** — talks to the real API, service, or application whenever that is the actual requirement
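
From the agent's side, these traits combine into a simple consumption pattern: check the exit code first, then parse stdout. A minimal sketch, where the `python -c` call stands in for a hypothetical invocation like `my-tool --json project info`:

```python
import json
import subprocess
import sys

# Run the CLI in machine-readable mode. The inline `python -c` payload is a
# stand-in for a real agent-native CLI invocation.
result = subprocess.run(
    [sys.executable, "-c", "import json; print(json.dumps({'status': 'ready'}))"],
    capture_output=True,
    text=True,
)

# Exit code before parsing: a failed command's stdout is not trustworthy JSON.
if result.returncode != 0:
    raise SystemExit(f"CLI failed: {result.stderr.strip()}")

payload = json.loads(result.stdout)
print(payload["status"])
```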

## Recommended Technology Stack

For most Medisolv CLIs, start with this default stack unless the target ecosystem strongly requires something else:

| Layer | Tool | Why |
| --- | --- | --- |
| Language | Python 3.11+ | Fast to build, readable, easy packaging |
| CLI framework | `click` | Mature, explicit subcommands, proven in agent-facing CLIs |
| Packaging | `pyproject.toml` + `uv` | Reproducible installs and simple local workflows |
| Output | `json` stdlib | Easy machine-readable mode |
| Testing | `pytest` | Good unit + subprocess testing support |
| Process integration | `subprocess` / `httpx` | For real backend invocation and API access |

Use richer frameworks only when they solve a real problem. Do not add complexity just to make the CLI look modern.

## File Layout

Separate the **CLI implementation** from any optional companion AI guidance:

```text
your-cli-project/
  pyproject.toml
  README.md
  .env.example              # if the CLI needs config
  src/
    your_cli/
      __init__.py
      __main__.py
      cli.py                # Click entry point
      core/
        __init__.py
        ...                 # business logic modules
      utils/
        __init__.py
        output.py           # JSON/human output helpers
        backend.py          # subprocess/API wrapper if needed
  tests/
    test_core.py
    test_cli.py

ai-discovery-portal/
  skills/
    your-cli-skill/
      SKILL.md              # optional companion skill for AI usage guidance
```
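
A minimal `pyproject.toml` for this layout might look like the following; the project name, script name, and build backend are placeholders to adapt, and your backend may need extra package-discovery configuration for the `src/` layout:

```toml
[project]
name = "your-cli"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["click>=8.1"]

[project.scripts]
your-cli = "your_cli.cli:cli"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```

The `[project.scripts]` entry maps the installed command name to the Click group in `cli.py`, so `your-cli project info` works after installation.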

If the CLI also needs an AI-installable skill, keep that skill separate from the CLI codebase and use it to teach the AI how to invoke the CLI safely and effectively.

## Command Surface Design

Design commands around logical nouns and verbs:

```text
my-tool project init
my-tool project info
my-tool report generate
my-tool report list
my-tool auth status
```

Good patterns:

- Top-level group = domain (`project`, `report`, `auth`, `export`)
- Subcommand = action (`list`, `get`, `create`, `delete`, `run`)
- Global flags = cross-cutting behavior (`--json`, `--config`, `--verbose`)

Bad patterns:

- 30 unrelated top-level commands with no grouping
- Hidden behavior that changes based on the current directory, with no `status` command to explain it
- JSON output on some commands but not others
- Mutating commands with no way to inspect the current state first

## CLI Entry Point Template

Use a single output helper so all commands behave consistently:

```python
import json
import click


def emit(payload: object, as_json: bool) -> None:
    """Single output path so every command behaves consistently."""
    if as_json:
        click.echo(json.dumps(payload, indent=2))
    elif isinstance(payload, str):
        click.echo(payload)
    else:
        # Fall back to pretty-printed JSON for structured payloads even in
        # human mode; swap in richer formatting here if the CLI needs it.
        click.echo(json.dumps(payload, indent=2))


@click.group()
@click.option("--json", "as_json", is_flag=True, help="Emit machine-readable JSON output.")
@click.pass_context
def cli(ctx: click.Context, as_json: bool) -> None:
    ctx.ensure_object(dict)
    ctx.obj["as_json"] = as_json


@cli.group()
def project() -> None:
    """Project-related commands."""


@project.command("info")
@click.pass_context
def project_info(ctx: click.Context) -> None:
    emit({"name": "example", "status": "ready"}, ctx.obj["as_json"])
```

### Output rules

- If `--json` is passed, output valid JSON and nothing extra
- If the command fails, exit non-zero and print a message the user or AI can act on
- Do not mix banners, progress text, and JSON in the same stdout stream
- Put logs or debug chatter on stderr when needed
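
These failure rules can be sketched with stdlib primitives alone. The `load_config` helper and the `my-tool project init` hint below are hypothetical, but the stdout/stderr split and non-zero exits are the point:

```python
import json
import sys


def fail(message: str, code: int = 1) -> None:
    # Keep stdout clean for JSON; errors go to stderr with a non-zero exit
    # so both humans and agents can detect and act on the failure.
    print(f"error: {message}", file=sys.stderr)
    raise SystemExit(code)


def load_config(path: str) -> dict:
    try:
        with open(path) as fh:
            return json.load(fh)
    except FileNotFoundError:
        fail(f"config not found at {path}; run 'my-tool project init' first")
    except json.JSONDecodeError as exc:
        fail(f"config at {path} is not valid JSON: {exc}")
```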

## Real Backend Rule

Borrow the strongest rule from CLI-Anything: **wrap the real thing whenever possible**.

Examples:

- If the CLI is for an HTTP service, call the real API
- If the CLI is for a desktop renderer/exporter, invoke the real backend tool or application
- If the CLI manages local files, write the actual files and verify them

Avoid fake substitutes when they change the meaning of the tool. A CLI that only pretends to do the real work is hard for engineers to trust and impossible for agents to reason about safely.
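
A thin wrapper over the real backend might look like this sketch. `run_backend` is a hypothetical helper, shown invoking the Python interpreter itself as a stand-in for the real binary:

```python
import subprocess
import sys


def run_backend(args: list[str], timeout: float = 60.0) -> str:
    # Invoke the real tool, surface its own stderr on failure, and never
    # swallow a non-zero exit code.
    result = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
    if result.returncode != 0:
        raise RuntimeError(
            f"{args[0]} exited {result.returncode}: {result.stderr.strip()}"
        )
    return result.stdout


# Stand-in for a real backend call; `sys.executable` is always available.
print(run_backend([sys.executable, "--version"]).strip())
```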

## Testing Requirements

Every CLI should have at least three layers of validation:

### 1. Unit tests

Test parsing, transformation, validation, and helper functions in isolation.

### 2. CLI subprocess tests

Run the installed or module-invoked CLI as a real command:

```bash
uv run python -m your_cli --help
uv run python -m your_cli --json project info
```

Verify:

- exit code is `0`
- stdout parses as JSON when `--json` is used
- expected fields or output text are present

### 3. Real workflow or backend tests

For any CLI that claims to generate files, call APIs, or control software, verify the real result:

- output file exists and has expected format/content
- API response is parsed correctly
- subprocess/backend call succeeds and the result is inspected, not just assumed
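
For example, a file-generating workflow is verified by opening the artifact rather than trusting the exit code. The write step below simulates what the real CLI would produce; the checks afterward are the part that matters:

```python
import json
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    report = Path(tmp) / "report.json"
    # Stand-in for the CLI's own write step.
    report.write_text(json.dumps({"rows": 3, "status": "complete"}))

    # Verify the real result: the file exists, parses, and has the fields
    # the workflow promised.
    data = json.loads(report.read_text())
    assert data["status"] == "complete"
    print("report verified:", data["rows"], "rows")
```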

> **Do not stop at `--help` tests.** The whole point is confidence that the tool actually performs useful work.

## README.md Requirements for the CLI Project

The CLI project's own `README.md` should include:

1. What the CLI does in plain English
2. Prerequisites (Python version, accounts, tokens, external software)
3. Installation steps
4. Common command examples
5. JSON output example
6. Testing instructions
7. Troubleshooting notes

## Optional Companion Skill

If the CLI is mostly for direct human terminal use, the project `README.md` may be enough.

If an AI agent will need repeatable guidance, create a companion skill under `skills/<slug>/SKILL.md` with:

1. **`## When to Use`** — when the AI should reach for this CLI
2. **`## Installation`** — how to install or invoke the CLI
3. **`## Tool Definition`** or command overview — what the CLI does and what patterns it supports
4. **Usage examples** — representative commands for common workflows
5. **Best practices** — JSON mode, inspection-first workflows, and common pitfalls

If the CLI requires environment variables, document them in the CLI project's `README.md` and `.env.example`, and mirror only the usage guidance that an AI truly needs in any companion skill.
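
A `.env.example` for such a CLI might contain placeholder entries like these (the variable names are illustrative, not a Medisolv standard):

```text
# Copy to .env and fill in real values; never commit actual tokens.
MYTOOL_API_URL=https://reporting.internal.example.com
MYTOOL_API_TOKEN=replace-with-your-token
```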

## Auggie and Augment Guidance

When the CLI is relevant to AI-assisted workflows, document the difference clearly:

- **Auggie** — explain whether the user just runs the CLI from a terminal, or whether Auggie should call it as part of a broader workflow
- **Augment Code** — explain how to install the tool locally and ensure it is on `PATH` so generated terminal commands succeed

Do not invent a fake integration story. If the CLI is just a terminal tool, say that plainly.

## Checklist Before Committing

- [ ] Confirmed the request truly needed a CLI and not a Skill or MCP alone
- [ ] Chose a slug and command name that are stable, explicit, and human-readable
- [ ] Added `--json` or another consistent machine-readable output mode
- [ ] Added inspection commands such as `list`, `info`, or `status`
- [ ] Wrote installation instructions that can be copy-pasted exactly
- [ ] Added real tests for representative CLI workflows
- [ ] Verified meaningful behavior, not just import success
- [ ] Added a companion `skills/<slug>/SKILL.md` only if the AI needs reusable guidance for the CLI
- [ ] Added Auggie/Augment usage notes when they genuinely apply
- [ ] Documented required env vars or external software dependencies

## Common Mistakes

| Mistake | Fix |
| --- | --- |
| Confusing the CLI with a companion Skill | The CLI is the executable tool; the Skill is optional AI guidance about how to use it |
| No `--json` support | Add a global machine-readable output mode so agents can parse results safely |
| Commands only mutate state | Add `list`, `info`, `show`, or `status` commands first |
| CLI prints logs and JSON to stdout together | Keep JSON clean on stdout; send extra logs to stderr |
| README explains the code but not how to install it | Add exact install, verify, and usage commands |
| Only testing helper functions | Add subprocess tests that run the real CLI command |
| Faking backend behavior with toy logic | Invoke the real API, binary, or workflow whenever that is the actual user need |
| Using an unstable or overly clever command structure | Group commands by domain and keep verbs predictable |

## Example Prompts

> "Create a new CLI called `medisolv-release-helper` that compares branches and outputs JSON for automation. If it needs AI guidance, add a companion skill too."

> "I have a Python script that talks to our internal reporting API. Turn it into a proper installable CLI with `report list`, `report get`, and `--json`, plus tests and docs."

> "Design a CLI for a repetitive QA workflow. I need subcommands, a README, and advice on whether it should also ship with a companion skill."

> "Audit this existing CLI and tell me what is missing to make it agent-friendly for Augment Code users."