Quickstart

From signup to a trustworthy first run in under 10 minutes

zTester works best when you give it three things up front: a reachable environment, 2-5 user journeys, and working auth for protected pages.

Dashboard path

Before you run discovery

The project dashboard shows a Discovery Gate panel. If it is blocked, fix the listed journey or auth items first. This prevents zTester from generating a large suite of shallow or unauthenticated tests.

Environment

Add a staging or preview URL that zTester can reach from the public internet.

Journeys

Provide 2-5 numbered flows, using arrows between each user action and its expected outcome.
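For example, flows like these (the same sample journeys used in the Autopilot request later on this page):

```text
1. Customer selects tickets -> adds to cart -> checks out -> sees confirmation
2. Admin signs in -> reviews booking -> updates status -> customer sees update
```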

Auth

Use form login, OAuth, or session cookies. Magic-link apps should use cookies or Playwright storageState.

Validation

Generated tests start as drafts and are promoted only after validation evidence exists.

1. Create your account

Sign in with a magic link or GitHub. Self-serve — open to any email.

2. Connect a repo or app URL

GitHub and Bitbucket discovery reads your source to find real routes and workflows. Just a URL is enough to start.

3. Add user journeys

Write at least 2 numbered business flows. This unlocks deep multi-page tests instead of shallow page checks.

4. Configure auth and run discovery

Use the auth wizard for form login, cookies, OAuth, or Playwright storageState. The readiness panel shows blockers before discovery starts.


Autopilot API path

From another app or an AI agent, call Autopilot once. zTester checks readiness, asks for missing inputs, starts discovery when ready, validates generated drafts, then loops through repair or focused regeneration until it reaches the target pass rate or returns a blocker.

Start Autopilot

curl -X POST https://ztester.zavecoder.com/api/v1/projects/$PROJECT_ID/autopilot \
  -H "Authorization: Bearer $ZTESTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "baseUrl": "https://staging.example.com",
    "targetPassRate": 80,
    "maxIterations": 3,
    "journeys": [
      "Customer selects tickets -> adds to cart -> checks out -> sees confirmation",
      "Admin signs in -> reviews booking -> updates status -> customer sees update"
    ],
    "authStrategy": {
      "type": "cookies",
      "cookies": [
        { "name": "sb-access-token", "value": "...", "domain": ".example.com" },
        { "name": "sb-refresh-token", "value": "...", "domain": ".example.com" }
      ],
      "verificationUrl": "/dashboard"
    }
  }'

If zTester needs more input

{
  "status": "needs_input",
  "sessionId": "7d4a...",
  "missing": [
    {
      "type": "journeys",
      "message": "Autopilot needs at least 2 numbered user journeys to generate deep tests.",
      "fix": "Provide 2-5 critical business flows with steps and expected outcomes."
    },
    {
      "type": "auth",
      "message": "Autopilot needs working auth or an explicit authNotRequired flag.",
      "fix": "Provide form_login, oauth_provider, cookies, or Playwright storageState-derived cookies."
    }
  ],
  "resumeUrl": "/api/v1/autopilot/7d4a.../inputs"
}

Send the missing fields to the returned resumeUrl. For public apps, pass "authNotRequired": true.
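A sketch of the payload to POST back to resumeUrl for a public app, resolving both missing items at once. journeys and authNotRequired are the fields named above; treat the overall shape as illustrative rather than a guaranteed schema:

```json
{
  "journeys": [
    "Customer selects tickets -> adds to cart -> checks out -> sees confirmation",
    "Admin signs in -> reviews booking -> updates status -> customer sees update"
  ],
  "authNotRequired": true
}
```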

Poll progress

curl https://ztester.zavecoder.com/api/v1/autopilot/$SESSION_ID \
  -H "Authorization: Bearer $ZTESTER_API_KEY"

Poll until status is completed, blocked, or failed.
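The terminal-state check itself is simple. A minimal shell sketch, branching on an inline sample payload instead of a live curl response (the sessionId value is illustrative; the status values match the docs above):

```shell
# Sample polled response; in a real script this comes from the curl above.
response='{"status":"completed","sessionId":"7d4a..."}'

# Extract the "status" field with POSIX sed (use jq in real scripts).
status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([a-z_]*\)".*/\1/p')

case "$status" in
  completed)      echo "suite ready" ;;
  blocked|failed) echo "stop and inspect blockers" ;;
  *)              echo "keep polling" ;;
esac
```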

CLI path

Five commands after you have your API key. Add user journeys in Project Settings before discovery if you want deep multi-page tests.

1. npx @ztester/cli login --key <API_KEY>

Save your API key (get it from Settings → API Keys)

2. npx @ztester/cli init --name "My App" --url https://staging.myapp.com

Bootstrap project + environment, write ztester.config.json

3. npx @ztester/cli auth --type form_login --login-url /sign-in --username test@myapp.com --password secret --verification-url /dashboard

Configure auth so tests can reach protected pages. Skip if your app is public.

4. npx @ztester/cli link-repo --repo owner/repo

(Optional) Link a GitHub repo — enables source-based discovery and auto-run on every PR.

5. npx @ztester/cli autopilot --target 85 --ci

Run the full quality loop: discover → validate → run → improve until 85% pass rate. Exits 0 on success, 1 if blocked.

6. npx @ztester/cli run --ci

Or run the test suite directly — exits 1 on failures, perfect for CI pipelines.

7. npx @ztester/cli resume <runId>

If a run stalls (the runner went silent), resume from where it stopped — only tests with no result yet are re-run.

Auth is required for any app with a login screen

Without it, protected tests will fail with an auth redirect. Use the strategy that matches the real login flow:

# Email + password login

ztester auth --type form_login --login-url /sign-in \
  --username test@myapp.com --password secret

# Magic link / Supabase / Clerk — paste session cookies or import Playwright storageState

ztester auth --type cookies \
  --cookies '[{"name":"sb-access-token","value":"eyJ...","domain":".myapp.com"}]'

# OAuth (Google / GitHub / Microsoft)

ztester auth --type oauth_provider --login-url /sign-in
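If you capture auth with Playwright's context.storageState({ path: 'state.json' }), the cookies array in that file is what the cookie strategy above expects. A trimmed sample of the Playwright format (token and domain values are illustrative):

```json
{
  "cookies": [
    {
      "name": "sb-access-token",
      "value": "eyJ...",
      "domain": ".myapp.com",
      "path": "/",
      "expires": 1767225600,
      "httpOnly": true,
      "secure": true,
      "sameSite": "Lax"
    }
  ],
  "origins": []
}
```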

Add user journeys to unlock deep multi-page tests

Without journeys, zTester only generates shallow per-page tests. Go to Project Settings → User Journeys and write 2–5 numbered flows like:

1. Customer signs up → creates project → runs discovery → sees tests
2. Admin creates invoice → adds items → sends to customer → invoice marked Sent

CI environment variables: set ZTESTER_API_KEY, ZTESTER_PROJECT_ID, and ZTESTER_ENVIRONMENT_ID — no config file needed.

MCP / AI orchestration path

Wire zTester directly into Claude Desktop or Cursor. Tell your AI assistant to set up projects, run discovery, and report failures — no manual steps.

Add to claude_desktop_config.json or .cursor/mcp.json:

{
  "mcpServers": {
    "ztester": {
      "command": "npx",
      "args": ["-y", "@ztester/mcp"],
      "env": { "ZTESTER_API_KEY": "your_key" }
    }
  }
}

Full setup prompt (Claude does everything):

"Bootstrap a zTester project for https://staging.myapp.com named 'My App'. Configure form_login auth with login URL /sign-in, username test@myapp.com, password secret123. Then trigger discovery and wait for it to complete. Finally run all tests and give me a summary of what passed and what failed."

Claude chains: bootstrap_project → list_environments → update_environment → trigger_discovery → get_discovery_results → run_tests → get_failures

Autonomous loop prompt (Autopilot via MCP):

"Start Autopilot for project <ID> with a target of 85% pass rate and max 3 iterations. Poll every 20 seconds. When it completes tell me the final pass rate and any tests that are still failing. If blocked, tell me what the blockers are."

Claude chains: start_autopilot → polls get_autopilot_status until isTerminal: true → get_failures

Self-healing loop (read actionRequired and act):

Every run response includes a structured diagnosis + actionRequired block when something needs fixing. AI agents read it and call the suggested endpoint directly — no prose parsing needed.

Example: AskVAVA's cookies expire → the pre-flight probe returns 422 in ~4s with diagnosis: "auth_failure" and an actionRequired.fix.cliCommand. The agent sees it, swaps the auth strategy via PATCH /environments/{id}, and retries — fully unattended.

Diagnoses: auth_failure · run_stalled (use resume_run) · high_timeout_rate · high_failure_rate
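A minimal shell sketch of the agent side, branching on an inline sample response. Only diagnosis and actionRequired.fix.cliCommand are fields named above; everything else about the payload shape is an assumption:

```shell
# Illustrative 422 pre-flight response; only "diagnosis" and
# "actionRequired.fix.cliCommand" are documented fields.
response='{"diagnosis":"auth_failure","actionRequired":{"fix":{"cliCommand":"ztester auth --type cookies --cookies [...]"}}}'

# Extract both fields with POSIX sed (use jq in real scripts).
diagnosis=$(printf '%s' "$response" | sed -n 's/.*"diagnosis":"\([a-z_]*\)".*/\1/p')
fix=$(printf '%s' "$response" | sed -n 's/.*"cliCommand":"\([^"]*\)".*/\1/p')

if [ "$diagnosis" = "auth_failure" ]; then
  echo "suggested fix: $fix"   # an agent would run this, then retry the run
fi
```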

All 20 tools:

bootstrap_projectupdate_environmenttrigger_discoveryget_discovery_resultsrun_testsget_run_statusget_failuresget_recent_runslist_projectslist_environmentslist_test_casesapprove_test_casessubmit_feedbacktrigger_reruncancel_runmanage_failure_webhooksstart_autopilotget_autopilot_statusresume_runlink_repo

CI / CD integration

Copy-paste templates for GitHub Actions and GitLab CI.

GitHub Actions (.github/workflows/ztester.yml)

- uses: actions/setup-node@v4
  with: { node-version: '20' }
- run: npx --yes @ztester/cli run --ci
  env:
    ZTESTER_API_KEY: ${{ secrets.ZTESTER_API_KEY }}
    ZTESTER_PROJECT_ID: ${{ vars.ZTESTER_PROJECT_ID }}
    ZTESTER_ENVIRONMENT_ID: ${{ vars.ZTESTER_ENVIRONMENT_ID }}

Full templates (GitHub Actions + GitLab CI): docs/ci-templates/
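For comparison, a minimal GitLab CI sketch under the same assumptions — the three ZTESTER_* values configured as CI/CD variables; the job name and image are placeholders:

```yaml
ztester:
  image: node:20
  script:
    - npx --yes @ztester/cli run --ci
```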

What "ready" looks like

  • At least one environment has a verified auth check or is explicitly public.
  • Generated tests stay in draft until validation says they deserve promotion.
  • Release readiness is visible per project before you rely on the suite in CI.
  • Public status and runner health are exposed before you trust production automation.

Need deeper detail?