zTester API Documentation

REST API for integrating zTester with your CI/CD pipeline and external tools

Quick Start

1. Get your API key

Navigate to Settings → API Keys and create a new API key.

2. Make your first request

bash
curl https://ztester.zavecoder.com/api/v1/projects \
  -H "Authorization: Bearer zt_your_api_key_here"

Authentication

All API requests require authentication using an API key. Include your API key in the Authorization header:

bash
Authorization: Bearer zt_your_api_key_here

⚠️ Keep your API keys secure. Never commit them to version control or share them publicly.
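To reduce the chance of an accidental leak, read the key from an environment variable and sanity-check its shape before calling the API. A minimal sketch (the check only verifies the documented zt_ prefix; it does not validate the key against the server):

```shell
# Sketch: validate the key's shape before making requests.
# Reading it from an env var keeps the secret out of version control.
valid_key() {
  case "$1" in
    zt_*) return 0 ;;   # looks like a zTester key
    *)    return 1 ;;   # anything else is rejected
  esac
}

valid_key "zt_your_api_key_here" && echo "looks valid"
valid_key "hardcoded-secret"     || echo "rejected"
```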

Base URL

text
https://ztester.zavecoder.com/api/v1

Integration Guide

Get started with zTester in minutes. Automatically generate comprehensive E2E tests and integrate with your CI/CD pipeline.

How zTester Works

1

Automatic Test Generation

Connect your GitHub/Bitbucket repository or provide your application URL. zTester intelligently analyzes your application and automatically generates comprehensive multi-step workflow tests (typically 5-8 steps per test covering real user journeys).

2

Parallel Test Execution

Execute hundreds of tests in minutes using our high-performance test runner. Tests run in parallel batches with automatic authentication handling and real-time progress updates.

3

Self-Healing Tests

When UI changes break selectors, our feedback system auto-detects failures, suggests fixes, and verifies corrections automatically. Tests adapt to your evolving application.

4

CI/CD Integration

Seamlessly integrate with GitHub Actions, GitLab CI, Jenkins, or any CI/CD tool. Run tests on every pull request and get pass/fail reports in minutes.

💡 Zero Manual Test Writing: Unlike traditional E2E tools, zTester eliminates the need to manually write and maintain test scripts. Our AI-powered generation creates production-ready tests automatically, saving your team hundreds of hours.

Typical Workflow

text
1. SETUP (One-time)
   ├─ Create project
   ├─ Configure environment & authentication
   └─ Link GitHub/Bitbucket repository (optional)

2. GENERATE TESTS
   ├─ Trigger test discovery via API
   ├─ zTester analyzes your application
   └─ Returns 40-100+ ready-to-run tests in 2-5 minutes

3. EXECUTE TESTS
   ├─ Run tests via batch execution API
   ├─ Monitor progress in real-time
   └─ Get detailed pass/fail results

4. CONTINUOUS IMPROVEMENT
   ├─ Tests auto-adapt to UI changes
   ├─ Track flaky tests and pass rates
   └─ Re-generate tests when code changes significantly

1. Authentication Setup

Create API Key

Navigate to Settings → API Keys and create a new API key with appropriate scopes:

Available Scopes
  • read - View projects, tests, and results
  • write_tests - Create and update test cases
  • run_tests - Execute tests
  • generate - Trigger test discovery/generation
  • admin - Full access (create projects, manage settings)

💡 Recommended: For CI/CD pipelines, use a dedicated API key with generate and run_tests scopes.

2. Project & Environment Setup

Create Project

bash
curl -X POST https://ztester.zavecoder.com/api/v1/projects \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My App",
    "description": "E2E tests for production app"
  }'

Create Environment with Auth

Define authentication strategies for your test environments:

bash
curl -X POST https://ztester.zavecoder.com/api/v1/environments \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "proj-abc",
    "name": "Staging",
    "type": "staging",
    "baseUrl": "https://staging.myapp.com",
    "authStrategy": {
      "type": "form_login",
      "loginUrl": "/login",
      "credentials": {
        "usernameSelector": "#email",
        "passwordSelector": "#password",
        "submitSelector": "button[type=\"submit\"]",
        "username": "test@example.com",
        "password": "test123"
      }
    }
  }'
Supported Auth Types
  • none - Public pages
  • form_login - Standard email/password login
  • cookies - Pre-authenticated session cookies
  • bearer_token - JWT or API token in headers
  • basic_auth - HTTP Basic Authentication

🔍 Detailed Auth Error Messages: When authentication fails, zTester provides specific, actionable error messages at each step:

  • Missing config: Lists which fields are missing (loginUrl, username, password)
  • Login page unreachable: Shows URL and error details
  • Form not found: Shows the selector attempted and current page URL
  • Credentials fill failed: Shows both email and password selectors
  • Submit failed: Shows submit button selector
  • Success indicator timeout: Shows expected selector and current URL, suggests invalid credentials
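For illustration, a failed form_login might come back with context like the following. This payload is a sketch, not a guaranteed schema; the field names are assumptions based on the error categories listed above:

```json
{
  "status": "auth_failed",
  "authError": {
    "step": "form_not_found",
    "selector": "#email",
    "pageUrl": "https://staging.myapp.com/login",
    "message": "Login form not found: selector \"#email\" did not match any element"
  }
}
```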

3. Automated Test Generation

zTester automatically generates comprehensive multi-step workflow tests (5-8+ steps) that exercise real user journeys and business logic, not just basic UI checks.

Repository-Based Generation (Recommended)

Link your GitHub/Bitbucket repository for the highest quality test generation:

bash
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/discover-tests \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "git",
    "repoUrl": "https://github.com/myorg/myapp.git",
    "branch": "main",
    "sourcePaths": ["src/app", "src/pages", "src/components"],
    "environmentId": "env-xyz"
  }'
What Gets Generated
  • End-to-End Workflows — Complete user journeys from start to finish (CRUD, multi-step forms)
  • Navigation Completeness — Every route is checked: loads without 404, no “not found”, no raw undefined or [object Object] leaking to the DOM
  • Data Rendering — Table/list pages verify the container and that at least one data row is present (catches empty-table bugs)
  • Form Validation — Happy-path submit + companion empty-submit test to confirm required-field validation fires
  • Search Behaviour — Types a query and asserts results appear; separate test clears the query and confirms the full list restores without a crash
  • State Persistence — For settings/config/profile pages: fill → save → reload → assert value still present
  • Interactive Elements — Buttons, modals, dialogs, dropdowns (only action-verb buttons: Add, Create, Edit, Delete)

URL-Based Generation (Alternative)

Without repository access, generate tests by crawling your live application:

bash
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/discover-tests \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "crawl",
    "baseUrl": "https://staging.myapp.com",
    "environmentId": "env-xyz",
    "maxDepth": 3,
    "maxPages": 80
  }'

✅ Best Practice: Repository-based generation produces higher quality tests and is 10x faster. Typically generates 40-100 production-ready tests in 2-5 minutes.

4. Adding Custom Test Cases

In addition to automatic generation, you can create custom test cases via the API:

bash
curl -X POST https://ztester.zavecoder.com/api/v1/test-cases \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "proj-abc",
    "name": "Complete checkout with coupon",
    "description": "Verify discount calculation and order creation",
    "purpose": "Critical path: checkout with promo code",
    "tags": ["checkout", "payment", "critical"],
    "steps": [
      {
        "action": "navigate",
        "target": "/products",
        "description": "Go to products page"
      },
      {
        "action": "click",
        "target": ".product:first-child button:has-text(\"Add to Cart\")",
        "description": "Add first product to cart"
      },
      {
        "action": "click",
        "target": "[aria-label=\"Cart\"]",
        "description": "Open cart"
      },
      {
        "action": "click",
        "target": "button:has-text(\"Checkout\")",
        "description": "Proceed to checkout"
      },
      {
        "action": "fill",
        "target": "#coupon-code",
        "value": "SAVE20",
        "description": "Enter coupon code"
      },
      {
        "action": "click",
        "target": "button:has-text(\"Apply\")",
        "description": "Apply coupon"
      },
      {
        "action": "assert",
        "target": ".discount-amount",
        "value": "contains:$",
        "description": "Verify discount applied"
      },
      {
        "action": "click",
        "target": "button:has-text(\"Complete Order\")",
        "description": "Submit order"
      },
      {
        "action": "wait",
        "target": ".order-confirmation",
        "description": "Wait for confirmation"
      }
    ],
    "expectedOutcomes": [
      "Discount is calculated correctly",
      "Order is created in database",
      "Confirmation page shows order number"
    ]
  }'
Supported Step Actions
  • navigate - Go to URL — target = full URL or path
  • click - Click element — target = CSS selector (supports :has-text())
  • fill - Fill an input — target = selector, value = text to enter
  • assertVisible - Assert element is visible — target = selector
  • assertText - Assert element contains text — target = selector, value = expected text
  • assertNotText - Assert page does not contain text — catches 404 pages, React render bugs (undefined, [object Object])
  • assertValue - Assert an input's current value — target = input selector, value = expected text (useful after reload for state persistence checks)
  • assertUrl - Assert current URL contains a path segment — value = expected substring
  • assertCount - Assert at least N elements exist — target = selector, value = minimum count
  • reload - Reload the current page and wait for it to settle — used in state persistence tests
  • wait - Wait a fixed duration — value = milliseconds (e.g. "1500")
  • waitForSelector - Wait until element appears — target = selector
  • select - Select a dropdown option — target = selector, value = option value or label
  • hover - Hover over element — target = selector
  • press - Press keyboard key — value = key name (e.g. "Enter", "Escape")
  • clickIfExists - Click element only if it exists — won't fail if absent (for optional UI states)
  • screenshot - Capture a screenshot — value = filename (optional)
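For instance, the state-persistence pattern (fill → save → reload → assert) combines fill, reload, and assertValue steps. The selectors below are illustrative:

```json
[
  { "action": "fill", "target": "#display-name", "value": "Ada", "description": "Change the setting" },
  { "action": "click", "target": "button:has-text(\"Save\")", "description": "Save changes" },
  { "action": "reload", "description": "Reload the page" },
  { "action": "assertValue", "target": "#display-name", "value": "Ada", "description": "Value survived the reload" }
]
```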

5. Test Execution

Single Test Execution

bash
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "testCaseId": "test-123",
    "environmentId": "env-xyz"
  }'

Batch Execution (Recommended for CI/CD)

Run multiple tests in parallel (up to 200 tests, batched in groups of 15):

bash
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs/execute-batch \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "proj-abc",
    "environmentId": "env-xyz",
    "testCaseIds": ["test-1", "test-2", "test-3"],
    "tags": ["critical"]
  }'
Response
json
{
  "batchRunId": "run-456",
  "status": "running",
  "totalTests": 42,
  "estimatedDurationMs": 180000
}

Check Batch Status

bash
curl https://ztester.zavecoder.com/api/v1/test-runs/run-456 \
  -H "Authorization: Bearer zt_your_key"

💡 CI/CD Integration: Use batch execution to run all tests tagged as critical on every deployment. Tests run in parallel and complete in ~3-5 minutes for 50 tests.
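If jq isn't available on your CI image, the batchRunId can be pulled out of the batch response with sed. The sample response below mirrors the shape shown above:

```shell
# Extract batchRunId from a batch-execution response without jq.
RESPONSE='{"batchRunId":"run-456","status":"running","totalTests":42,"estimatedDurationMs":180000}'
BATCH_RUN_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"batchRunId" *: *"\([^"]*\)".*/\1/p')
echo "$BATCH_RUN_ID"
```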

6. Test Feedback & Auto-Improvement

When tests fail due to UI changes (new selectors, button text changes), you can provide feedback and zTester will automatically fix and re-verify the tests:

bash
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs/feedback \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "runId": "run-789",
    "testCaseId": "test-123",
    "stepNumber": 3,
    "issue": "selector_not_found",
    "details": "Button text changed from \"Submit\" to \"Save Changes\"",
    "suggestedFix": {
      "newSelector": "button:has-text(\"Save Changes\")"
    },
    "autoApply": true,
    "verifyFix": true
  }'
Feedback Response
json
{
  "feedbackId": "fb-101",
  "fixApplied": true,
  "verificationRunId": "run-790",
  "verificationStatus": "passed",
  "message": "Test updated and verified successfully"
}

✅ Self-Healing Tests: With autoApply: true and verifyFix: true, tests automatically adapt to UI changes and verify the fix works before saving.

7. Analytics & Insights

Flaky Test Detection

Identify tests with inconsistent pass/fail patterns:

bash
curl https://ztester.zavecoder.com/api/v1/projects/proj-abc/insights \
  -H "Authorization: Bearer zt_your_key"
json
{
  "flakyTests": [
    {
      "testCaseId": "test-555",
      "testName": "Login workflow",
      "flakeRate": 0.23,
      "totalRuns": 87,
      "failures": 20,
      "commonErrors": ["Timeout waiting for dashboard"]
    }
  ],
  "summary": {
    "totalFlakyTests": 3,
    "highestFlakeRate": 0.23
  }
}

Incremental Discovery Check

Before re-running discovery, check if incremental mode is possible (10x faster):

bash
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/incremental-analyze \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "repoPath": "/tmp/repo-clone",
    "sourcePaths": ["src/app", "src/components"]
  }'

8. Complete CI/CD Workflow Example

Typical GitHub Actions workflow for automated testing on every PR:

yaml
name: E2E Tests
on: [pull_request]

jobs:
  e2e:
    runs-on: ubuntu-latest
    env:
      ZTESTER_API_KEY: ${{ secrets.ZTESTER_API_KEY }}
      PROJECT_ID: proj-abc
    steps:
      - name: Generate Tests (if source code changed)
        run: |
          curl -X POST https://ztester.zavecoder.com/api/v1/projects/${PROJECT_ID}/discover-tests \
            -H "Authorization: Bearer ${ZTESTER_API_KEY}" \
            -H "Content-Type: application/json" \
            -d '{"source": "git", "branch": "${{ github.head_ref }}", "environmentId": "env-staging"}'

      - name: Run Critical Path Tests
        id: run_tests
        run: |
          RESPONSE=$(curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs/execute-batch \
            -H "Authorization: Bearer ${ZTESTER_API_KEY}" \
            -H "Content-Type: application/json" \
            -d '{"projectId": "proj-abc", "environmentId": "env-staging", "tags": ["critical"]}')

          BATCH_RUN_ID=$(echo "$RESPONSE" | jq -r '.batchRunId')
          echo "batch_run_id=$BATCH_RUN_ID" >> "$GITHUB_OUTPUT"

      - name: Wait for Results
        run: |
          BATCH_RUN_ID=${{ steps.run_tests.outputs.batch_run_id }}

          for i in {1..60}; do
            RESPONSE=$(curl -s https://ztester.zavecoder.com/api/v1/test-runs/$BATCH_RUN_ID \
              -H "Authorization: Bearer ${ZTESTER_API_KEY}")

            STATUS=$(echo "$RESPONSE" | jq -r '.status')

            if [ "$STATUS" = "completed" ]; then
              PASSED=$(echo "$RESPONSE" | jq -r '.passedCount')
              TOTAL=$(echo "$RESPONSE" | jq -r '.totalTests')
              echo "Tests completed: $PASSED/$TOTAL passed"

              if [ "$PASSED" != "$TOTAL" ]; then
                exit 1
              fi
              exit 0
            fi

            sleep 10
          done

          echo "Timeout waiting for tests"
          exit 1

9. Additional Features

Webhook Notifications

Receive real-time notifications when test runs complete (configure in dashboard):

json
{
  "event": "test_run.completed",
  "runId": "run-123",
  "projectId": "proj-abc",
  "status": "passed",
  "duration_ms": 45000,
  "passedCount": 42,
  "failedCount": 0
}

GitHub/Bitbucket Integration

Link repositories to enable automatic test regeneration on code changes. Configure via dashboard or API:

bash
curl -X POST https://ztester.zavecoder.com/api/v1/github/link-repo \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "proj-abc",
    "repoFullName": "myorg/myapp",
    "installationId": 12345,
    "defaultBranch": "main"
  }'

10. Best Practices & Rate Limits

Best Practices

  • Use git-based discovery for faster, higher-quality test generation
  • Tag tests as critical, smoke, or regression to run different suites
  • Use batch execution for parallel test runs (10x faster than sequential)
  • Enable auto-feedback to keep tests updated as UI evolves
  • Monitor flaky tests and fix root causes (timing issues, test data dependencies)
  • Use separate environments for staging vs production testing
  • Set up webhooks for real-time CI/CD notifications

Rate Limits

  • API requests: 1000/hour per API key
  • Test discovery: 10 concurrent jobs per project
  • Batch execution: Up to 200 tests per batch
  • Parallel batches: 6 concurrent batches (15 tests each)

⚠️ Important: For large-scale testing (> 500 tests/day), contact support for enterprise limits.
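For scripts that issue many sequential requests, a simple way to stay under the 1000 requests/hour cap is to pace your calls. The arithmetic sketch below computes the minimum average delay between requests:

```shell
# 1000 requests/hour means at most one request every 3.6 seconds on average.
LIMIT=1000
DELAY=$(awk -v l="$LIMIT" 'BEGIN { printf "%.1f", 3600 / l }')
echo "sleep $DELAY seconds between requests"
# In a loop:  curl ... ; sleep "$DELAY"
```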

Endpoints

Verified Routes (auto-populated): On first source analysis discovery, zTester auto-populates verified_routes by mapping entity slugs to their discovered URL paths (e.g., customers → /dashboard/customers).

  • Review and correct routes via GET /projects?id=ID — check the verified_routes field
  • Update routes via PATCH /projects?id=ID — pass your corrected verified_routes object
  • Subsequent discoveries use your verified routes instead of guessing from source code

Auth Discovery (auto-populated): Source analysis automatically detects your app's authentication setup from the codebase — login page URL, auth library (NextAuth, Supabase, Clerk, etc.), form field selectors, and OAuth providers.

  • After discovery, check GET /projects?id=ID — the discovered_auth field shows what was detected
  • The environment's auth_strategy is auto-populated with the login URL and selectors
  • Add test credentials via PATCH /environments?id=ID to complete the auth setup
  • Subsequent discoveries use the configured auth automatically for login
json
// Example discovered_auth (auto-populated on project)
{
  "loginUrl": "/sign-in",
  "authLibrary": "supabase-auth",
  "authType": "email_password",
  "formSelectors": {
    "emailSelector": "input[type=\"email\"]",
    "passwordSelector": "input[type=\"password\"]",
    "submitSelector": "button[type=\"submit\"]"
  },
  "successRedirect": "/dashboard",
  "providers": ["credentials", "google"]
}

// Complete auth setup by adding credentials to environment:
// PATCH /environments?id=env-123
{
  "authStrategy": {
    "type": "email_password",
    "loginUrl": "/sign-in",
    "credentials": {
      "email": "test@example.com",
      "password": "testpassword123"
    }
  }
}

Environments

Manage test environments (staging, production, etc.)

Test Cases

Create and manage test cases

Test Runs

Execute tests and retrieve results

Three Ways to Execute Tests:

  • Single Test: Use testCaseId in request body → Returns 200 OK with immediate results
  • Multiple Specific Tests: Use testCaseIds array in request body → Returns 202 Accepted with async execution
  • All Project Tests: Use projectId in request body → Returns 202 Accepted with async execution

Note: environmentId is optional for all three methods — uses the project's default environment if not provided.

Async Parallel Execution: When using projectId or testCaseIds, the API returns 202 Accepted immediately and runs tests in the background. Poll the poll_url to check progress and get results.

  • Tests are split into batches of 10 for execution
  • Up to 4 batches run in parallel (4 concurrent browser sessions)
  • Each batch authenticates once and shares the session across all 10 tests
  • Max 500 tests per request — automatically chunked into 200-test groups for runner submission
  • Poll GET /api/v1/test-runs?id=BATCH_ID until status is passed, failed, or error

Example: 300 tests → Split into 2 runner chunks (200 + 100) → Each chunk split into batches of 10 → Up to 4 batches run in parallel. Each batch authenticates once, so ~30 logins total instead of 300. This is ~10x faster than sequential execution.
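The chunking arithmetic in that example can be sketched directly, using the constants from the description above (200 tests per runner chunk, batches of 10, one login per batch):

```shell
TOTAL=300
CHUNK=200    # max tests per runner chunk
BATCH=10     # tests per batch (one login per batch)
CHUNKS=$(( (TOTAL + CHUNK - 1) / CHUNK ))   # ceiling division: 2 runner chunks
BATCHES=$(( (TOTAL + BATCH - 1) / BATCH ))  # 30 batches
echo "$CHUNKS runner chunks, $BATCHES batches (~$BATCHES logins instead of $TOTAL)"
```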

🛡️ Graceful Auth Failure: If authentication fails or is not configured, zTester intelligently handles the situation:

  • Public tests continue: Tests that don't require authentication (e.g., landing page, pricing page) run normally
  • Protected tests are skipped: Tests with URLs matching /admin, /dashboard, /portal, /staff, etc. are marked as skipped
  • Clear error messages: Skipped tests include detailed error context in failureDetails with skipReason: "auth_required_but_unavailable"
  • No batch failure: Auth failure doesn't fail the entire batch — only auth-required tests are skipped

Example: 50 tests, no auth configured → 30 public tests run successfully, 20 protected tests skipped with clear message: "Test skipped: No authentication configured for environment \"Staging\". Configure auth in the environment settings to test protected pages."
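The skip decision can be pictured as a prefix match on the URL path. The prefixes below are the examples given above; the real matching logic inside zTester may differ:

```shell
# Sketch: classify a route as auth-required by path prefix.
is_protected() {
  case "$1" in
    /admin*|/dashboard*|/portal*|/staff*) return 0 ;;
    *) return 1 ;;
  esac
}

is_protected "/dashboard/customers" && echo "skipped: auth_required_but_unavailable"
is_protected "/pricing"             || echo "runs normally"
```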

Runner Streaming (Direct)

Real-time test execution via Server-Sent Events (SSE)

Direct Runner Access: For external integrations and custom frontends, you can call the runner directly for real-time streaming of test execution events. This bypasses the web API proxy and streams events directly from the test runner.

  • Endpoint: https://ztrunner.zavecoder.com/execute-stream
  • Protocol: Server-Sent Events (SSE) - keep-alive connection with real-time updates
  • Authentication: Same API keys as the main API (starts with zt_)
  • Use case: Building custom test dashboards, embedding test execution in your app

⚠️ Important Notes:

  • Use HTTPS: HTTP redirects to HTTPS and changes POST to GET (will fail)
  • Keep connection open: The stream stays open during test execution (can take minutes)
  • Heartbeats: Server sends : heartbeat\n\n every 15 seconds to keep connection alive
  • Self-healing: You may receive healing events if selectors fail and get auto-fixed
  • Error handling: Stream ends with either complete or error event
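The stream can be consumed with a plain while-read loop. The sample events piped in below are illustrative (the event names are assumptions; see the full streaming docs for the real ones) — in practice you would pipe `curl -N https://ztrunner.zavecoder.com/execute-stream ...` into the loop instead:

```shell
# Parse SSE lines: skip ": heartbeat" comments, pair each "event:" line
# with the "data:" line that follows it.
printf 'event: step\ndata: {"step":1,"status":"passed"}\n\n: heartbeat\n\nevent: complete\ndata: {"status":"passed"}\n\n' |
while IFS= read -r line; do
  case "$line" in
    ": heartbeat") ;;                         # keep-alive, ignore
    event:*) event=${line#event: } ;;
    data:*)  echo "$event -> ${line#data: }" ;;
  esac
done
```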

Available Actions:

navigate, click, fill, type, select, hover, wait, waitForSelector, assertVisible, assertText, assertUrl, press, screenshot

Full documentation: GitHub - RUNNER_STREAMING_API.md

Test Feedback

Submit feedback on test results to improve test quality over time

Feedback loop: After a test run, submit feedback on individual test results. zTester uses this to:

  • Auto-fix selectors — submit a selector_fix with a suggested selector, and zTester updates the test case + triggers a verification re-run automatically
  • Auto-fix action types — submit an action_fix to change a step's action (e.g., fill → select for dropdowns)
  • Adjust confidence — tests marked correct get boosted, false_positive gets lowered
  • Track flaky tests — tests marked flaky are auto-retried (up to 3 attempts) in future runs
  • Improve discovery — accumulated feedback enriches future discovery prompts (e.g., prefer text-based selectors over nth-of-type)

Verdict Types

  • selector_fix: Selector is wrong or fragile. Auto-action: updates test case + triggers verification re-run
  • action_fix: Wrong action type (e.g., fill instead of select). Auto-action: updates step action type in test case
  • false_positive: Test fails but app works fine. Auto-action: lowers confidence score by 0.1
  • flaky: Passes sometimes, fails other times. Auto-action: increments flaky count, auto-retried in future runs
  • correct: Test result is accurate. Auto-action: boosts confidence score by 0.05
  • not_applicable: Test doesn't apply to this app. Auto-action: lowers confidence score by 0.1

GitHub Repository Linking

Connect GitHub repos to projects for automatic source analysis

Prerequisite: Install the zTester GitHub App on your GitHub account first. The App grants access to your repositories. Then use these endpoints to link repos to projects.

Bitbucket Repository Linking

Connect Bitbucket repos to projects for automatic source analysis

Prerequisite: Connect your Bitbucket workspace via OAuth first (Settings → Integrations → Connect Bitbucket). Then use these endpoints to link repos to projects.

Auto-Discovery

Automatically crawl your app and generate test cases

Three discovery modes:

  • Crawl (default) — browses your live app to find interactive elements (forms, buttons, tables). Generates surface-level smoke tests with real selectors.
  • Git — analyzes your source code to extract entities, CRUD workflows, form fields, and API endpoints. Generates deeper workflow tests (e.g. create → edit → delete flows).
  • Hybrid (git + environment) — combines source analysis with live DOM crawling. The runner clones your repo to understand your app structure, then visits entity pages in a real browser to capture actual DOM selectors. Produces the highest quality tests: workflow-level coverage with real selectors.

Hybrid mode is triggered automatically when you use "source": "git" with an environmentId or appUrl that points to a live (non-localhost) URL. No extra parameters needed.

GitHub/Bitbucket integration: If your project has a linked repository, git source analysis works with just {"source": "git"} — the repo URL, branch, and access token are resolved automatically.

Polling for status: The POST response returns a discoveryId. To check progress, poll GET /projects/{projectId}/discoveries/{discoveryId} (not /discover-tests). The discovery endpoints are listed below.

🔐 Auto-Detected Login Selectors: During discovery, zTester automatically detects login forms and extracts exact selectors for:

  • Email/username input — checks type="email", name="email", autocomplete attributes
  • Password input — finds input[type="password"]
  • Submit button — matches button[type="submit"] or text-based selectors
  • Form type — distinguishes between email_password and username_password

Use these selectors to configure your environment's authStrategy — no manual selector hunting required! Check the detectedLoginSelectors field in the discovery response.

Immediately executable: Auto-discovered tests are saved with status: "active" and are ready to run right away — no manual review step required. You can run them via the test-runs endpoint as soon as discovery completes.

Hybrid Discovery (Source + DOM Crawl)

How zTester combines code analysis with live browser crawling for the best results

Hybrid discovery is the recommended mode for generating high-quality tests. It runs automatically when you use "source": "git" with a live appUrl or environmentId.

How it works:

  1. Clone & analyze source code — extracts routes, business entities (Customer, Invoice, etc.), CRUD operations, form fields, and API endpoints from your codebase.
  2. Synthesize workflows — builds multi-step business workflows (e.g., "Create Customer → fill form → submit → verify in list") from the extracted entity model.
  3. Crawl entity pages — launches Playwright, authenticates (if auth provided), visits each entity's pages (list, create, edit), and extracts real DOM elements: forms, inputs, buttons, tables with their actual CSS selectors.
  4. Merge selectors — replaces guessed selectors from source analysis with real DOM selectors. Matches form field names to actual inputs, identifies create/edit/delete buttons, and maps table structures.
  5. Generate tests — sends enriched context (workflows + real selectors) to AI, producing tests that use actual selectors from your live app.

Why hybrid produces better tests:

  • Crawl only: real selectors, but no workflow understanding. Result: surface-level smoke tests (page loads, basic clicks)
  • Git only: understands workflows, but selectors are guessed and may not match the DOM. Result: deep workflow tests with fragile selectors
  • Hybrid: understands workflows and captures real selectors. Result: deep workflow tests with verified real selectors

Trigger hybrid mode:

bash
# Option 1: With environment (resolves URL + auth automatically)
curl -X POST ".../discover-tests" -d '{"source": "git", "environmentId": "env-uuid"}'

# Option 2: With explicit URL + auth
curl -X POST ".../discover-tests" -d '{
  "source": "git",
  "appUrl": "https://your-app.com",
  "auth": {"type": "email_password", "credentials": {"email": "...", "password": "..."}}
}'

# Option 3: With ActionExplorer for deep workflow tests
curl -X POST ".../discover-tests" -d '{
  "source": "git",
  "environmentId": "env-uuid",
  "actionExplorer": {"enabled": true}
}'

# Git-only mode (no DOM crawl) — omit appUrl and environmentId
curl -X POST ".../discover-tests" -d '{"source": "git"}'

ActionExplorer (Deep Workflow Tests)

Generate 5-8 step business workflow tests that click, fill, submit, and verify

ActionExplorer goes beyond basic discovery by actually interacting with your live application. It clicks action buttons (Add, Create, Edit, Delete), observes DOM mutations (dialogs, toasts, table changes), fills forms with contextual test data, submits, and generates meaningful assertions.

How it works:

  1. Navigates to each discovered page
  2. Finds action buttons (verbs like "Add", "Create", "Edit", "Delete")
  3. Clicks each button and observes DOM changes (new dialog? page navigation? toast?)
  4. If a form appears, fills fields using inference (input type, name, placeholder patterns)
  5. Submits the form and observes the result (success toast, new table row, URL change)
  6. Generates a multi-step test with meaningful assertions

Assertion types (replaces "element exists"):

  • Post-action state: "Success toast appeared after save". Detected via MutationObserver spotting a new toast/alert element
  • Data persistence: "New role appears in roles table". Detected when the table row count increases after submit
  • Validation: "Required field error on empty submit". Detected when an error message element appears

Test data safety (optional):

Since ActionExplorer creates real data in your app, you can configure snapshot/restore endpoints on your environment to automatically roll back changes after discovery:

bash
# Configure safety endpoints on your environment
curl -X PATCH ".../environments?id=env-uuid" \
  -H "Authorization: Bearer zt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "testSnapshotUrl": "https://your-api.com/test/snapshot",
    "testRestoreUrl": "https://your-api.com/test/restore"
  }'

# Your snapshot endpoint should:
# POST /test/snapshot → return { "snapshotId": "snap-123" }
# POST /test/restore  → accept { "snapshotId": "snap-123" }
# POST /test/reset    → reset to known state (simpler alternative)

Deduplication (Re-Analysis)

How zTester handles repeated source analysis runs

Both crawl and git source analysis modes generate a content hash for each test based on project, type, and normalized name. Re-running discovery updates existing tests instead of creating duplicates.

  • Same test regenerated: updated in place (steps, confidence refreshed)
  • New route/form/page discovered: new test case created
  • Route removed or page no longer accessible: old test left untouched (the user may have edited it)
  • User edited an auto-generated test name: edited test kept, new version also created
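The dedup key can be pictured as a hash over (project, type, normalized name). The normalization and hash below (lowercase, spaces to dashes, CRC via cksum) are illustrative stand-ins for zTester's internal algorithm:

```shell
normalize()  { printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'; }
test_hash()  { printf '%s|%s|%s' "$1" "$2" "$(normalize "$3")" | cksum | cut -d' ' -f1; }

# Re-generated test with the same (case-insensitive) name hashes identically
# -> updated in place rather than duplicated
test_hash proj-abc workflow "Create Customer"
test_hash proj-abc workflow "create customer"
# A new name hashes differently -> new test case created
test_hash proj-abc workflow "Delete Customer"
```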

GitHub Integration - Auto Re-Analysis

Automatically re-analyze source code on every push

If your project is connected to a GitHub repository with auto_analyze_on_push enabled, zTester automatically:

  1. Detects pushes to the default branch via GitHub webhook
  2. Clones the repo using the GitHub App installation token (works for private repos)
  3. Runs source code analysis (routes, forms, API endpoints, components)
  4. Upserts test cases — new tests are created, existing tests are updated

This means your test suite stays in sync with your codebase automatically. No manual API calls needed after the initial GitHub App setup.

Setup: Install the zTester GitHub App, link a repository to your project, and enable "Auto-analyze on push" in project settings.

Private Repository Support

Git source analysis works with private repos

With GitHub App or Bitbucket OAuth (recommended): If your project has a linked GitHub or Bitbucket repository, the access token is fetched automatically. Just send {"source": "git"} and it works — even for private repos. zTester checks GitHub first, then Bitbucket.

Manual token: Pass a git.token in the API request. The correct auth format is used automatically based on the provider:

| Provider | Token Type |
| --- | --- |
| GitHub | ghp_* (Personal Access Token) or fine-grained token |
| Bitbucket | App password or repository access token |
| GitLab | Personal access token or project token |

Manually provided tokens are only used for the shallow clone and are never stored. GitHub App and Bitbucket OAuth tokens are auto-refreshed and cached securely.

| Scenario | Result |
| --- | --- |
| {"source":"git"} + linked repo (GitHub or Bitbucket) | Auto-fetches URL, branch, and token |
| {"source":"git","git":{"url":"..."}} + linked repo | Uses your URL, auto-fetches token |
| {"source":"git","git":{"url":"...","token":"..."}} | Uses both provided values |
| {"source":"git"} + no linked repo | Returns 400 with GitHub/Bitbucket connect URLs |

API Key Management

Create and manage API keys programmatically

Session Auth Required: These endpoints use your browser session cookie for authentication, not API keys. They are intended for use from the zTester dashboard or programmatic session-based access. You cannot use an API key to manage other API keys.

Available Scopes

| Scope | Permissions |
| --- | --- |
| read | List projects, environments, test cases, test runs, discoveries |
| run_tests | Execute test runs (single or batch) |
| write_tests | Create, update, and delete test cases |
| generate | Trigger AI test generation and auto-discovery |
| admin | Create/delete projects, manage environments (including auth credentials), trigger discovery, link repos |
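A client can check that its key carries the right scope before calling an endpoint. This is a hedged sketch: the scope names come from the Available Scopes list, but the operation-to-scope mapping and `hasScope` helper are illustrative, not an official SDK API.

```typescript
// Illustrative mapping from operation to the minimum scope it needs.
// The endpoint labels here are examples, not an exhaustive route list.
type Scope = 'read' | 'run_tests' | 'write_tests' | 'generate' | 'admin';

const REQUIRED_SCOPE: Record<string, Scope> = {
  'GET /projects': 'read',
  'POST /test-runs': 'run_tests',
  'PATCH /test-cases': 'write_tests',
  'POST /discover': 'generate',
  'DELETE /projects': 'admin',
};

function hasScope(keyScopes: Scope[], operation: string): boolean {
  const needed = REQUIRED_SCOPE[operation];
  return needed !== undefined && keyScopes.includes(needed);
}
```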

CI/CD Integration Examples

GitHub Actions (Run All Project Tests)

yaml
name: E2E Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Run zTester E2E Tests
        run: |
          response=$(curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
            -H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{"projectId": "${{ secrets.ZTESTER_PROJECT_ID }}"}' \
            -s)

          echo "$response" | jq .

          status=$(echo "$response" | jq -r '.status')
          passed=$(echo "$response" | jq -r '.passed')
          failed=$(echo "$response" | jq -r '.failed')

          echo "Results: $passed passed, $failed failed"

          if [ "$status" != "passed" ]; then
            echo "❌ E2E Tests failed!"
            exit 1
          fi

          echo "✅ All E2E tests passed!"

Jenkins Pipeline

groovy
pipeline {
  agent any

  environment {
    ZTESTER_API_KEY = credentials('ztester-api-key')
  }

  stages {
    stage('Run Tests') {
      steps {
        script {
          def response = sh(
            script: """
              curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
                -H "Authorization: Bearer ${ZTESTER_API_KEY}" \
                -H "Content-Type: application/json" \
                -d '{"testCaseId": "test-1"}' -s
            """,
            returnStdout: true
          ).trim()

          def json = readJSON text: response

          if (json.status != 'passed') {
            error("Tests failed: ${json.status}")
          }

          echo "Tests passed!"
        }
      }
    }
  }
}

Shell Script (Run All Project Tests)

bash
#!/bin/bash

API_KEY="zt_your_api_key_here"
PROJECT_ID="abc-123"
BASE_URL="https://ztester.zavecoder.com/api/v1"

echo "🚀 Running all E2E tests for project..."

# Run ALL tests for the project (parallel execution)
RESULT=$(curl -s -X POST "$BASE_URL/test-runs" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "{"projectId": "$PROJECT_ID"}")

STATUS=$(echo "$RESULT" | jq -r '.status')
PASSED=$(echo "$RESULT" | jq -r '.passed')
FAILED=$(echo "$RESULT" | jq -r '.failed')
TOTAL=$(echo "$RESULT" | jq -r '.total_tests')
DURATION=$(echo "$RESULT" | jq -r '.duration_ms')
URL=$(echo "$RESULT" | jq -r '.url')

echo ""
echo "📊 Results: $PASSED/$TOTAL passed ($DURATION ms)"
echo ""

# Show failed tests if any
if [ "$FAILED" -gt 0 ]; then
  echo "❌ Failed tests:"
  echo "$RESULT" | jq -r '.results[] | select(.status != "passed") | "  - \(.testCaseName): \(.error)"'
  echo ""
fi

echo "🔗 View details: $URL"

if [ "$STATUS" = "passed" ]; then
  echo "✅ All tests passed!"
  exit 0
else
  echo "❌ $FAILED test(s) failed!"
  exit 1
fi

SDK & Embeddable Widget

Integrate zTester directly into your admin portal or ops dashboard using the JavaScript SDK or embeddable iframe widget. Both support live SSE streaming during test execution.

Option 1: JavaScript SDK

Framework-agnostic TypeScript/JavaScript SDK. Load via <script> tag or ES module import.

Script Tag (no build tools)

html
<script src="https://ztester.zavecoder.com/sdk/ztester.js"></script>
<script>
  const zt = new ZTester.ZTester({ apiKey: 'zt_your_key' });

  // Run all project tests with live streaming
  zt.runProjectStream('your-project-id', {
    onTestStart: ({ testCaseName, index, total }) => {
      console.log(`[${index + 1}/${total}] Running: ${testCaseName}`);
    },
    onStepComplete: ({ stepIndex, status }) => {
      console.log(`  Step ${stepIndex + 1}: ${status}`);
    },
    onTestComplete: ({ testCaseId, status }) => {
      console.log(`  Result: ${status}`);
    },
  }).then(results => {
    const passed = results.filter(r => r.status === 'passed').length;
    console.log(`Done: ${passed}/${results.length} passed`);
  });
</script>

ES Module Import

typescript
import { ZTester } from '@ztester/sdk';

const zt = new ZTester({ apiKey: process.env.ZTESTER_KEY });

// Single test with streaming
const result = await zt.runTestStream('test-uuid', {
  environmentId: 'env-uuid',      // optional — uses project default
  onStepStart: ({ description }) => updateUI(`Running: ${description}`),
  onStepComplete: ({ status }) => updateUI(`Step ${status}`),
  onHealing: ({ originalSelector, healedSelector }) =>
    updateUI(`Self-healed: ${originalSelector} → ${healedSelector}`),
});
console.log(`Test ${result.status} in ${result.durationMs}ms`);

// Discovery with polling
const disc = await zt.discoverTests('project-uuid', { source: 'git' });
const discovery = await zt.pollDiscovery('project-uuid', disc.discoveryId, {
  interval: 3000,
  onProgress: (d) => updateUI(`${d.progress?.testsGenerated} tests found...`),
});

// Submit feedback (selector fix + action fix)
await zt.submitFeedback('test-run-id', [
  {
    testCaseId: 'test-uuid-1',
    verdict: 'selector_fix',
    suggestedFix: {
      stepNumber: 2,
      currentSelector: 'button:nth-of-type(2)',
      suggestedSelector: "button:has-text('Submit')",
    },
  },
  {
    testCaseId: 'test-uuid-2',
    verdict: 'action_fix',
    suggestedFix: {
      stepNumber: 4,
      newAction: 'select',
      newValue: '{{first-option}}',
    },
    notes: 'Field is a <select> dropdown, not a text input',
  },
]);

SDK Methods

| Method | Description |
| --- | --- |
| runTestStream(testCaseId, opts) | Execute single test with live SSE streaming. Returns final result. |
| runProjectStream(projectId, opts) | Run all project tests sequentially with streaming + progress callbacks. |
| runTest(testCaseId, envId?) | Execute single test (non-streaming). Waits for completion. |
| runProject(projectId, envId?) | Batch run all project tests (non-streaming, parallel execution). |
| discoverTests(projectId, opts) | Trigger auto-discovery. Returns discoveryId for polling. |
| pollDiscovery(projectId, discoveryId) | Poll until discovery completes. Optional progress callback. |
| submitFeedback(runId, items[]) | Submit bulk feedback. Selector and action fixes auto-applied. |
| getTestCases(projectId) | List all test cases for a project. |
| getTestRuns(params) | Query test run history by project, test case, or run ID. |
| getFeedback(params) | Query submitted feedback by project, test run, or verdict. |

SSE Streaming Events

The runTestStream() method receives live events during test execution:

| Event | Callback | Data |
| --- | --- | --- |
| start | onEvent | testCaseId, totalSteps, baseUrl |
| step_start | onStepStart | stepIndex, totalSteps, action, target, description |
| step_complete | onStepComplete | stepIndex, status, durationMs, error?, screenshot? |
| healing | onHealing | stepIndex, originalSelector, healedSelector, confidence |
| complete | onComplete | status, durationMs, stepResults[], selfHealingActions[] |
| error | onError | message, step? |
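The event-to-callback routing can be sketched as a small dispatcher. This is a hedged sketch of how a client might wire incoming SSE messages to the callbacks listed above; the event names match the table, but the dispatcher itself is illustrative, not SDK internals.

```typescript
// Illustrative dispatcher: route a parsed SSE message to the matching
// callback from the events table. Unknown events are ignored.
type Handlers = Partial<Record<string, (data: unknown) => void>>;

const EVENT_TO_CALLBACK: Record<string, string> = {
  start: 'onEvent',
  step_start: 'onStepStart',
  step_complete: 'onStepComplete',
  healing: 'onHealing',
  complete: 'onComplete',
  error: 'onError',
};

function dispatch(handlers: Handlers, event: string, data: unknown): boolean {
  const cb = handlers[EVENT_TO_CALLBACK[event]];
  if (!cb) return false; // unknown event, or no handler registered for it
  cb(data);
  return true;
}
```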

Option 2: Embeddable iframe Widget

Drop a pre-built test dashboard into any page with a single script tag. No React or build tools needed.

html
<!-- Add this anywhere in your admin portal -->
<div id="ztester-widget"></div>
<script src="https://ztester.zavecoder.com/sdk/embed.js"></script>
<script>
  ZTesterEmbed.ZTesterEmbed.init({
    container: '#ztester-widget',
    apiKey: 'zt_your_api_key',
    projectId: 'your-project-id',
    theme: 'dark',                                        // 'dark' or 'light'
    features: ['run', 'discover', 'feedback', 'history'], // which tabs to show
    height: '700px',
    onReady: () => console.log('Widget loaded'),
    onTestComplete: (result) => {
      console.log('Test finished:', result.testCaseName, result.status);
    },
  });
</script>

Security: The API key is passed to the iframe via postMessage, never in the URL. The widget communicates results back to your page via postMessage events.

SDK Bundle URLs

| URL | Format | Use Case |
| --- | --- | --- |
| /sdk/ztester.js | UMD | <script> tag, sets window.ZTester |
| /sdk/ztester.esm.js | ES Module | import from bundlers (webpack, vite, etc.) |
| /sdk/embed.js | IIFE | <script> tag, sets window.ZTesterEmbed |

AI Auto-Triage (Feedback Loop)

Use the POST /test-runs/feedback endpoint to build an automated triage agent that analyzes test failures and submits feedback programmatically. This closes the feedback loop: tests run → failures analyzed → fixes applied → tests re-run.

Architecture

text
Test Run (batch)
    │
    ├── All passed → done
    │
    └── Some failed
          │
          ▼
    AI Triage Agent
          │
          ├── Classify failure type (selector? timing? logic?)
          │
          ├── Generate fix (new selector, retry hint, etc.)
          │
          └── POST /test-runs/feedback
                │
                ├── verdict: "selector_fix" → auto-applies fix + re-runs
                ├── verdict: "action_fix"   → changes step action type (fill → select)
                ├── verdict: "flaky"        → increments flaky count, auto-retried next run
                ├── verdict: "false_positive"→ lowers confidence score
                └── verdict: "correct"      → boosts confidence score

Example 1: LLM Selector Auto-Fix

When a test fails on a click/fill step, send the error message + page HTML to an LLM to generate a corrected selector.

typescript
import OpenAI from 'openai';

const openai = new OpenAI();

async function triageFailedTest(testRun: any, testCase: any) {
  // Find the failed step
  const failedStep = testRun.stepResults?.find(
    (s: any) => s.status === 'failed'
  );
  if (!failedStep) return null;

  // Check if it's a selector issue (timeout, not found)
  const isSelector = failedStep.error?.includes('Timeout') ||
    failedStep.error?.includes('not found') ||
    failedStep.error?.includes('waiting for selector');

  if (!isSelector) return null;

  // Ask LLM to classify and suggest fix
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{
      role: 'system',
      content: `You are a Playwright selector expert. Given a failed selector
and the page context, suggest a working CSS selector.
Return JSON: { "selector": "...", "confidence": 0.0-1.0, "reason": "..." }`
    }, {
      role: 'user',
      content: `Failed selector: ${failedStep.target}
Error: ${failedStep.error}
Step action: ${failedStep.type} (step #${failedStep.stepNumber})
Page URL: ${testRun.lastUrl || 'unknown'}
Test name: ${testCase.name}`
    }],
    response_format: { type: 'json_object' },
  });

  const fix = JSON.parse(response.choices[0].message.content || '{}');
  if (!fix.selector || fix.confidence < 0.6) return null;

  // Submit feedback with selector fix
  await fetch('https://ztester.zavecoder.com/api/v1/test-runs/feedback', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer zt_your_key',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      testRunId: testRun.id,
      testCaseId: testCase.id,
      verdict: 'selector_fix',
      feedbackType: 'selector',
      suggestedFix: {
        stepNumber: failedStep.stepNumber,
        currentSelector: failedStep.target,
        suggestedSelector: fix.selector,
      },
      notes: fix.reason,
    }),
  });

  // The API auto-applies the fix and triggers a verification re-run
  return fix;
}

Example 2: Statistical Flaky Detection

No LLM needed. Compare recent run history to detect flaky tests (pass/fail alternation).

typescript
async function detectFlakyTests(projectId: string, apiKey: string) {
  const BASE = 'https://ztester.zavecoder.com/api/v1';
  const headers = {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };

  // Get recent test runs for the project
  const runsRes = await fetch(
    `${BASE}/test-runs?projectId=${projectId}&limit=100`,
    { headers }
  );
  const { results } = await runsRes.json();

  // Group by test case, check for pass/fail alternation
  const byTestCase = new Map<string, string[]>();
  for (const run of results) {
    const existing = byTestCase.get(run.testCaseId) || [];
    existing.push(run.status);
    byTestCase.set(run.testCaseId, existing);
  }

  const flakyFeedback = [];
  for (const [testCaseId, statuses] of byTestCase) {
    if (statuses.length < 3) continue;

    // Count alternations (pass→fail or fail→pass)
    let alternations = 0;
    for (let i = 1; i < statuses.length; i++) {
      if (statuses[i] !== statuses[i - 1]) alternations++;
    }

    const flakyScore = alternations / (statuses.length - 1);
    if (flakyScore > 0.4) {
      // More than 40% alternation = likely flaky
      flakyFeedback.push({
        testCaseId,
        verdict: 'flaky',
        notes: `Flaky score: ${(flakyScore * 100).toFixed(0)}% (${alternations} alternations in ${statuses.length} runs)`,
      });
    }
  }

  if (flakyFeedback.length > 0) {
    // Bulk submit flaky feedback
    const latestRunId = results[0]?.id;
    await fetch(`${BASE}/test-runs/feedback`, {
      method: 'POST',
      headers,
      body: JSON.stringify({
        testRunId: latestRunId,
        results: flakyFeedback,
      }),
    });
    console.log(`Marked ${flakyFeedback.length} tests as flaky`);
  }
}

Example 3: Screenshot + DOM Analysis (Vision)

For failures where the selector exists but the assertion fails, use a vision model to analyze the screenshot and DOM snapshot.

typescript
async function analyzeWithVision(
  testRun: any,
  testCase: any,
  screenshot: Buffer, // from step_results[].screenshot
  domSnapshot: string // from step_results[].html
) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{
      role: 'system',
      content: `Analyze this test failure. The test expected certain behavior but
the page shows something different. Classify as:
- "false_positive": App works correctly, test expectation is wrong
- "selector_fix": Element exists but with different selector
- "flaky": Looks like a timing/loading issue
- "correct": App genuinely has a bug

Return JSON: { "verdict": "...", "reason": "...", "suggestedSelector?": "..." }`
    }, {
      role: 'user',
      content: [
        {
          type: 'text',
          text: `Test: ${testCase.name}
Failed step: ${testRun.stepResults?.find((s: any) => s.status === 'failed')?.description}
Error: ${testRun.error}
DOM snippet (around failed element): ${domSnapshot.substring(0, 2000)}`
        },
        {
          type: 'image_url',
          image_url: {
            url: `data:image/png;base64,${screenshot.toString('base64')}`
          }
        }
      ]
    }],
    response_format: { type: 'json_object' },
  });

  const analysis = JSON.parse(
    response.choices[0].message.content || '{}'
  );

  // Submit the AI's verdict
  await fetch('https://ztester.zavecoder.com/api/v1/test-runs/feedback', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer zt_your_key',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      testRunId: testRun.id,
      testCaseId: testCase.id,
      verdict: analysis.verdict,
      feedbackType: 'assertion',
      notes: `[AI] ${analysis.reason}`,
      ...(analysis.suggestedSelector && {
        suggestedFix: {
          stepNumber: testRun.stepResults?.findIndex(
            (s: any) => s.status === 'failed'
          ) + 1,
          suggestedSelector: analysis.suggestedSelector,
        },
      }),
    }),
  });

  return analysis;
}

Example 4: GitHub Actions with AI Triage

Add an AI triage step to your CI/CD pipeline. Failed tests get analyzed and feedback is submitted automatically.

yaml
name: E2E Tests with AI Triage
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run zTester E2E Tests
        id: tests
        run: |
          RESULT=$(curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs \
            -H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{"projectId": "${{ secrets.ZTESTER_PROJECT_ID }}"}')

          echo "result=$RESULT" >> $GITHUB_OUTPUT
          STATUS=$(echo $RESULT | jq -r '.status')
          echo "status=$STATUS" >> $GITHUB_OUTPUT

      - name: AI Triage Failed Tests
        if: steps.tests.outputs.status != 'passed'
        run: |
          node scripts/ai-triage.js \
            --api-key "${{ secrets.ZTESTER_API_KEY }}" \
            --openai-key "${{ secrets.OPENAI_API_KEY }}" \
            --result '${{ steps.tests.outputs.result }}'

      - name: Re-run After Fixes Applied
        if: steps.tests.outputs.status != 'passed'
        run: |
          # Wait for selector fixes to be applied
          sleep 10

          # Re-run only the failed tests
          FAILED=$(echo '${{ steps.tests.outputs.result }}' | \
            jq -r '[.results[] | select(.status != "passed") | .testCaseId] | join(",")')

          for TEST_ID in $(echo $FAILED | tr ',' ' '); do
            curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs \
              -H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
              -H "Content-Type: application/json" \
              -d "{\"testCaseId\": \"$TEST_ID\"}"
          done

Triage Decision Tree

Use this logic to classify failures before calling an LLM (saves API costs):

typescript
function classifyFailure(stepResult: any): string {
  const error = stepResult.error || '';

  // 1. Timeout / element not found → likely selector issue
  if (error.includes('Timeout') ||
      error.includes('waiting for selector') ||
      error.includes('not found')) {
    return 'selector_fix';
  }

  // 2. Wrong action type (fill on select, selectOption on input)
  if (error.includes('selectOption') ||
      error.includes('Not a SELECT element') ||
      error.includes('Element is not a <select>') ||
      error.includes('is not an <input>')) {
    return 'action_fix';
  }

  // 3. Navigation error → might be flaky (network)
  if (error.includes('net::ERR_') ||
      error.includes('Navigation timeout') ||
      error.includes('frame was detached')) {
    return 'flaky';
  }

  // 4. Assertion failure → needs deeper analysis (use LLM)
  if (error.includes('expect') ||
      error.includes('assertion') ||
      error.includes('toBeVisible') ||
      error.includes('toHaveText')) {
    return 'needs_llm_analysis';
  }

  // 5. Permission / auth errors → likely environment issue
  if (error.includes('403') ||
      error.includes('401') ||
      error.includes('Unauthorized')) {
    return 'not_applicable';
  }

  // 6. Default: send to LLM for classification
  return 'needs_llm_analysis';
}

| Signal | Likely Verdict | LLM Needed? |
| --- | --- | --- |
| Timeout / selector not found | selector_fix | Yes (to generate new selector) |
| Wrong action type (fill on select, etc.) | action_fix | No (detect from error message) |
| Network error / frame detached | flaky | No |
| Assertion mismatch | varies | Yes (screenshot + DOM analysis) |
| Auth / permission denied | not_applicable | No |
| Pass/fail alternation (>40%) | flaky | No (statistical) |

Feedback drives improvement: Every feedback submission updates your test suite. Selector fixes are applied immediately and verified with a re-run. Action fixes (e.g., fill → select for dropdowns) are applied immediately. Flaky tests are auto-retried (up to 3 attempts). Confidence scores adjust based on correctness feedback, which influences future test generation and prioritization.

Rate Limits

API requests are currently not rate-limited, but we may introduce limits in the future. Best practices:

  • Don't poll for test run results more frequently than once per second
  • Use webhooks for notifications instead of polling when possible
  • Implement exponential backoff for retries
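The exponential-backoff recommendation above can be sketched as a small delay schedule. This is a hedged sketch: the 1-second base and 30-second cap are illustrative values, not documented limits.

```typescript
// Illustrative exponential backoff: 1s, 2s, 4s, ... capped at 30s.
// Use the returned delay between polling attempts or request retries.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Example usage: await sleep(backoffDelayMs(attempt)) before each retry.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```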

Error Handling

The API uses standard HTTP status codes:

| Code | Meaning |
| --- | --- |
| 200 | Success |
| 201 | Resource created |
| 202 | Accepted - async operation started (discovery, test run) |
| 400 | Bad request - check your parameters |
| 401 | Unauthorized - invalid or missing API key |
| 403 | Forbidden - insufficient permissions |
| 404 | Resource not found |
| 500 | Server error - please try again or contact support |

Error responses include a JSON body with details:

json
{
  "error": "Bad Request",
  "message": "testCaseId is required"
}
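A client can branch on these status codes and error bodies with a small helper. This is a hedged sketch: `isRetryable` and `describeError` are illustrative helpers built from the table and JSON shape above, not part of any official SDK.

```typescript
// Illustrative classification of the status codes above:
// 5xx errors are worth retrying ("please try again"); 4xx errors
// indicate a caller problem and should not be retried blindly.
function isRetryable(status: number): boolean {
  return status >= 500;
}

// Format the documented error body ({ error, message }) for logging.
function describeError(status: number, body: { error?: string; message?: string }): string {
  const label = body.error ?? `HTTP ${status}`;
  return body.message ? `${label}: ${body.message}` : label;
}
```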

Support

Need help? Contact us at support@zavecoder.com