REST API for integrating zTester with your CI/CD pipeline and external tools
All API requests require authentication using an API key. Navigate to Settings → API Keys, create a new key, and include it in the Authorization header:

Authorization: Bearer zt_your_api_key_here

curl https://ztester.zavecoder.com/api/v1/projects \
  -H "Authorization: Bearer zt_your_api_key_here"

⚠️ Keep your API keys secure. Never commit them to version control or share them publicly.

Base URL: https://ztester.zavecoder.com/api/v1

Get started with zTester in minutes. Automatically generate comprehensive E2E tests and integrate with your CI/CD pipeline.
Connect your GitHub/Bitbucket repository or provide your application URL. zTester intelligently analyzes your application and automatically generates comprehensive multi-step workflow tests (typically 5-8 steps per test covering real user journeys).
Execute hundreds of tests in minutes using our high-performance test runner. Tests run in parallel batches with automatic authentication handling and real-time progress updates.
When UI changes break selectors, our feedback system auto-detects failures, suggests fixes, and verifies corrections automatically. Tests adapt to your evolving application.
Seamlessly integrate with GitHub Actions, GitLab CI, Jenkins, or any CI/CD tool. Run tests on every pull request and get pass/fail reports in minutes.
💡 Zero Manual Test Writing: Unlike traditional E2E tools, zTester eliminates the need to manually write and maintain test scripts. Our AI-powered generation creates production-ready tests automatically, saving your team hundreds of hours.
1. SETUP (One-time)
├─ Create project
├─ Configure environment & authentication
└─ Link GitHub/Bitbucket repository (optional)
2. GENERATE TESTS
├─ Trigger test discovery via API
├─ zTester analyzes your application
└─ Returns 40-100+ ready-to-run tests in 2-5 minutes
3. EXECUTE TESTS
├─ Run tests via batch execution API
├─ Monitor progress in real-time
└─ Get detailed pass/fail results
4. CONTINUOUS IMPROVEMENT
├─ Tests auto-adapt to UI changes
├─ Track flaky tests and pass rates
└─ Re-generate tests when code changes significantly

Navigate to Settings → API Keys and create a new API key with appropriate scopes:

- read - View projects, tests, and results
- write_tests - Create and update test cases
- run_tests - Execute tests
- generate - Trigger test discovery/generation
- admin - Full access (create projects, manage settings)

💡 Recommended: For CI/CD pipelines, use a dedicated API key with generate and run_tests scopes.
curl -X POST https://ztester.zavecoder.com/api/v1/projects \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"name": "My App",
"description": "E2E tests for production app"
}'

For first-time setup, use the bootstrap endpoint to create the project and its first environment in one call. It defaults to a staging environment unless you explicitly request preview or production.
curl -X POST https://ztester.zavecoder.com/api/v1/projects/bootstrap \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"name": "My App",
"description": "Fast setup path",
"baseUrl": "https://staging.myapp.com",
"environmentType": "staging"
}'

Response:

{
"projectId": "proj-abc",
"projectName": "My App",
"environmentId": "env-xyz",
"environmentName": "Staging",
"environmentType": "staging",
"baseUrl": "https://staging.myapp.com"
}

Define authentication strategies for your test environments:
curl -X POST https://ztester.zavecoder.com/api/v1/environments \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"name": "Staging",
"type": "staging",
"baseUrl": "https://staging.myapp.com",
"authStrategy": {
"type": "form_login",
"loginUrl": "/login",
"credentials": {
"usernameSelector": "#email",
"passwordSelector": "#password",
"submitSelector": "button[type=\"submit\"]",
"username": "test@example.com",
"password": "test123"
}
}
}'

Supported auth strategy types:

- none - Public pages
- form_login - Standard email/password login
- cookies - Pre-authenticated session cookies
- bearer_token - JWT or API token in headers
- basic_auth - HTTP Basic Authentication

🔍 Detailed Auth Error Messages: When authentication fails, zTester provides specific, actionable error messages at each step (e.g. for a missing or incorrect loginUrl, username, or password).

zTester automatically generates comprehensive multi-step workflow tests (5-8+ steps) that exercise real user journeys and business logic, not just basic UI checks.
Link your GitHub/Bitbucket repository for the highest quality test generation:
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/discover-tests \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"source": "git",
"repoUrl": "https://github.com/myorg/myapp.git",
"branch": "main",
"sourcePaths": ["src/app", "src/pages", "src/components"],
"environmentId": "env-xyz"
}'

Generated tests also check for render bugs such as undefined or [object Object] leaking to the DOM.

Without repository access, generate tests by crawling your live application:
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/discover-tests \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"source": "crawl",
"baseUrl": "https://staging.myapp.com",
"environmentId": "env-xyz",
"maxDepth": 3,
"maxPages": 80
}'

✅ Best Practice: Repository-based generation produces higher quality tests and is 10x faster. Typically generates 40-100 production-ready tests in 2-5 minutes.
In addition to automatic generation, you can also create custom test cases via API:
curl -X POST https://ztester.zavecoder.com/api/v1/test-cases \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"name": "Complete checkout with coupon",
"description": "Verify discount calculation and order creation",
"purpose": "Critical path: checkout with promo code",
"tags": ["checkout", "payment", "critical"],
"steps": [
{
"action": "navigate",
"target": "/products",
"description": "Go to products page"
},
{
"action": "click",
"target": ".product:first-child button:has-text(\"Add to Cart\")",
"description": "Add first product to cart"
},
{
"action": "click",
"target": "[aria-label=\"Cart\"]",
"description": "Open cart"
},
{
"action": "click",
"target": "button:has-text(\"Checkout\")",
"description": "Proceed to checkout"
},
{
"action": "fill",
"target": "#coupon-code",
"value": "SAVE20",
"description": "Enter coupon code"
},
{
"action": "click",
"target": "button:has-text(\"Apply\")",
"description": "Apply coupon"
},
{
"action": "assert",
"target": ".discount-amount",
"value": "contains:$",
"description": "Verify discount applied"
},
{
"action": "click",
"target": "button:has-text(\"Complete Order\")",
"description": "Submit order"
},
{
"action": "wait",
"target": ".order-confirmation",
"description": "Wait for confirmation"
}
],
"expectedOutcomes": [
"Discount is calculated correctly",
"Order is created in database",
"Confirmation page shows order number"
]
}'

| Action | Description |
|---|---|
| navigate | Go to URL — target = full URL or path |
| click | Click element — target = CSS selector (supports :has-text()) |
| fill | Fill an input — target = selector, value = text to enter |
| assertVisible | Assert element is visible — target = selector |
| assertText | Assert element contains text — target = selector, value = expected text |
| assertNotText | Assert page does not contain text — catches 404 pages, React render bugs (undefined, [object Object]) |
| assertValue | Assert an input's current value — target = input selector, value = expected text (useful after reload for state persistence checks) |
| assertUrl | Assert current URL contains a path segment — value = expected substring |
| assertCount | Assert at least N elements exist — target = selector, value = minimum count |
| reload | Reload the current page and wait for it to settle — used in state persistence tests |
| wait | Wait a fixed duration — value = milliseconds (e.g. "1500") |
| waitForSelector | Wait until element appears — target = selector |
| select | Select a dropdown option — target = selector, value = option value or label |
| hover | Hover over element — target = selector |
| press | Press keyboard key — value = key name (e.g. "Enter", "Escape") |
| clickIfExists | Click element only if it exists — won't fail if absent (for optional UI states) |
| screenshot | Capture a screenshot — value = filename (optional) |
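A step object can be sanity-checked before submission. The sketch below is a hedged illustration: the requirement sets are inferred from the action table above and may not be exhaustive (`validateStep` is a hypothetical helper, not part of the zTester API).

```javascript
// Which actions need a target selector / a value, per the table above.
// These sets are inferred from the docs, not an official schema.
const NEEDS_TARGET = new Set([
  "navigate", "click", "fill", "assertVisible", "assertText", "assertValue",
  "assertCount", "waitForSelector", "select", "hover", "clickIfExists",
]);
const NEEDS_VALUE = new Set([
  "fill", "assertText", "assertValue", "assertUrl", "assertCount",
  "wait", "select", "press",
]);

// Returns a list of problems; an empty list means the step looks well-formed.
function validateStep(step) {
  const errors = [];
  if (NEEDS_TARGET.has(step.action) && !step.target)
    errors.push(`${step.action} requires a target selector`);
  if (NEEDS_VALUE.has(step.action) && step.value == null)
    errors.push(`${step.action} requires a value`);
  return errors;
}
```

For example, `validateStep({ action: "fill", target: "#coupon-code" })` reports a missing value, while a `click` step with only a target passes.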
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"testCaseId": "test-123",
"environmentId": "env-xyz"
}'

Run multiple tests in parallel (up to 200 tests, batched in groups of 15):
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs/execute-batch \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"environmentId": "env-xyz",
"testCaseIds": ["test-1", "test-2", "test-3"],
"tags": ["critical"]
}'

Response:

{
"batchRunId": "run-456",
"status": "running",
"totalTests": 42,
"estimatedDurationMs": 180000
}

Check batch status:

curl https://ztester.zavecoder.com/api/v1/test-runs/run-456 \
-H "Authorization: Bearer zt_your_key"

💡 CI/CD Integration: Use batch execution to run all tests tagged as critical on every deployment. Tests run in parallel and complete in ~3-5 minutes for 50 tests.
When tests fail due to UI changes (new selectors, button text changes), you can provide feedback and zTester will automatically fix and re-verify the tests:
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs/feedback \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"runId": "run-789",
"testCaseId": "test-123",
"stepNumber": 3,
"issue": "selector_not_found",
"details": "Button text changed from \"Submit\" to \"Save Changes\"",
"suggestedFix": {
"newSelector": "button:has-text(\"Save Changes\")"
},
"autoApply": true,
"verifyFix": true
}'

Response:

{
"feedbackId": "fb-101",
"fixApplied": true,
"verificationRunId": "run-790",
"verificationStatus": "passed",
"message": "Test updated and verified successfully"
}

✅ Self-Healing Tests: With autoApply: true and verifyFix: true, tests automatically adapt to UI changes and verify the fix works before saving.
Identify tests with inconsistent pass/fail patterns:
curl https://ztester.zavecoder.com/api/v1/projects/proj-abc/insights \
-H "Authorization: Bearer zt_your_key"

Response:

{
"flakyTests": [
{
"testCaseId": "test-555",
"testName": "Login workflow",
"flakeRate": 0.23,
"totalRuns": 87,
"failures": 20,
"commonErrors": ["Timeout waiting for dashboard"]
}
],
"summary": {
"totalFlakyTests": 3,
"highestFlakeRate": 0.23
}
}

Before re-running discovery, check if incremental mode is possible (10x faster):
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/incremental-analyze \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"repoPath": "/tmp/repo-clone",
"sourcePaths": ["src/app", "src/components"]
}'

Typical GitHub Actions workflow for automated testing on every PR. In production, we recommend pairing your application workflow with a verification gate that runs lint, typecheck, tests, and a zTester smoke check before release:
name: E2E Tests
on: [pull_request]
jobs:
e2e:
runs-on: ubuntu-latest
steps:
- name: Generate Tests (if source code changed)
run: |
curl -X POST https://ztester.zavecoder.com/api/v1/projects/${PROJECT_ID}/discover-tests \
-H "Authorization: Bearer ${ZTESTER_API_KEY}" \
-H "Content-Type: application/json" \
-d '{"source": "git", "branch": "${{ github.head_ref }}", "environmentId": "env-staging"}'
env:
ZTESTER_API_KEY: ${{ secrets.ZTESTER_API_KEY }}
PROJECT_ID: proj-abc
- name: Run Critical Path Tests
id: run_tests
run: |
RESPONSE=$(curl -X POST https://ztester.zavecoder.com/api/v1/test-runs/execute-batch \
-H "Authorization: Bearer ${ZTESTER_API_KEY}" \
-H "Content-Type: application/json" \
-d '{"projectId": "proj-abc", "environmentId": "env-staging", "tags": ["critical"]}')
BATCH_RUN_ID=$(echo $RESPONSE | jq -r '.batchRunId')
echo "batch_run_id=$BATCH_RUN_ID" >> $GITHUB_OUTPUT
- name: Wait for Results
run: |
BATCH_RUN_ID=${{ steps.run_tests.outputs.batch_run_id }}
for i in {1..60}; do
RESPONSE=$(curl https://ztester.zavecoder.com/api/v1/test-runs/$BATCH_RUN_ID \
-H "Authorization: Bearer ${ZTESTER_API_KEY}")
STATUS=$(echo $RESPONSE | jq -r '.status')
if [ "$STATUS" = "completed" ]; then
PASSED=$(echo $RESPONSE | jq -r '.passedCount')
TOTAL=$(echo $RESPONSE | jq -r '.totalTests')
echo "Tests completed: $PASSED/$TOTAL passed"
if [ "$PASSED" != "$TOTAL" ]; then
exit 1
fi
exit 0
fi
sleep 10
done
echo "Timeout waiting for tests"
exit 1

Receive real-time notifications when test runs complete (configure in dashboard):
{
"event": "test_run.completed",
"runId": "run-123",
"projectId": "proj-abc",
"status": "passed",
"duration_ms": 45000,
"passedCount": 42,
"failedCount": 0
}

Link repositories to enable automatic test regeneration on code changes. Configure via dashboard or API:
curl -X POST https://ztester.zavecoder.com/api/v1/github/link-repo \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"repoFullName": "myorg/myapp",
"installationId": 12345,
"defaultBranch": "main"
}'

zTester can participate in a fully autonomous CI/CD fix cycle. When tests fail, it fires a signed webhook to your fix-bot or AI agent. The agent analyzes the failures, patches the code, then POSTs back to zTester to retrigger only the failing tests — looping until all tests pass or a configurable depth limit is hit.
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/failure-webhooks \
-H "x-api-key: zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"url": "https://your-bot.example.com/ztester/failures",
"rerunDepthMax": 3,
"description": "AI fix-bot"
}'

The secret in the response is shown only once. Store it as an environment variable — you need it to verify incoming signatures.
import { createHmac, timingSafeEqual } from 'crypto';
app.post('/ztester/failures', express.raw({ type: 'application/json' }), (req, res) => {
const sig = req.headers['x-ztester-signature'] || '';
const expected = 'sha256=' + createHmac('sha256', process.env.ZTESTER_WEBHOOK_SECRET)
.update(req.body).digest('hex');
const a = Buffer.from(expected), b = Buffer.from(sig);
if (a.length !== b.length || !timingSafeEqual(a, b)) return res.status(401).send('Invalid signature');
const payload = JSON.parse(req.body);
res.status(200).send('ok'); // ack immediately
if (!payload.rerunSession.canRerun) {
escalate(payload); // open GitHub issue / Slack alert
return;
}
applyFixes(payload.failures).then(() =>
fetch(payload.retriggerUrl, {
method: 'POST',
headers: { 'x-api-key': process.env.ZTESTER_API_KEY, 'Content-Type': 'application/json' },
body: JSON.stringify({ scope: 'failed_only', rerunSessionId: payload.rerunSession.sessionId })
})
);
});

Example failure webhook payload:

{
"event": "run_failed",
"projectId": "proj-abc",
"projectName": "My App",
"runId": "run-uuid",
"trigger": "ci",
"summary": { "total": 20, "passed": 16, "failed": 4, "durationMs": 94000 },
"failures": [
{
"testCaseId": "tc-uuid",
"testCaseName": "Create invoice",
"failedStep": "click #submit-btn",
"stepNumber": 4,
"error": "Element not found: #submit-btn",
"suggestedFix": "Try selector: button[type=\"submit\"]",
"selector": "#submit-btn"
}
],
"rerunSession": {
"sessionId": "sess-uuid",
"depth": 1,
"maxDepth": 3,
"canRerun": true
},
"retriggerUrl": "https://ztester.zavecoder.com/api/v1/projects/proj-abc/runs/trigger",
"dashboardUrl": "https://ztester.zavecoder.com/dashboard/projects/proj-abc",
"firedAt": "2026-04-23T10:15:00Z"
}

Each retrigger increments the session depth. When depth >= maxDepth, the trigger endpoint returns HTTP 429 and canRerun: false. The webhook payload always includes rerunSession.canRerun so your bot can gate before attempting another retrigger. Default maxDepth is 3, configurable up to 10.
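The gating rule above can be sketched as a small guard. This is a hedged illustration: `doRetrigger` and `escalate` are placeholders for your bot's own HTTP call and alerting logic, not zTester APIs.

```javascript
// canRerun is authoritative; the depth comparison is a defensive duplicate
// of the server-side loop guard described above.
function canRetrigger(rerunSession) {
  return Boolean(rerunSession && rerunSession.canRerun) &&
    rerunSession.depth < rerunSession.maxDepth;
}

// doRetrigger(url, sessionId) and escalate(reason) are stand-ins you supply.
async function retriggerOrEscalate(payload, doRetrigger, escalate) {
  if (!canRetrigger(payload.rerunSession)) return escalate("loop exhausted");
  const res = await doRetrigger(payload.retriggerUrl, payload.rerunSession.sessionId);
  // The trigger endpoint returns HTTP 429 once the session is exhausted.
  if (res.status === 429) return escalate("server-side loop guard hit");
  return "retriggered";
}
```

Checking `canRerun` client-side avoids burning a request on a session the server will reject anyway.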
Tag tests as critical, smoke, or regression to run different suites.

⚠️ Important: For large-scale testing (> 500 tests/day), contact support for enterprise limits.
Stateful orchestration for hands-off discovery, validation, and improvement loops.
Recommended for AI agents and app-to-app integrations: Start one Autopilot session with a reachable URL, 2-5 user journeys, and auth. If inputs are missing, zTester returns a structured needs_input response with a resumeUrl. Poll the session until it reaches completed or blocked.
Verified Routes (auto-populated): On first source analysis discovery, zTester auto-populates verified_routes by mapping entity slugs to their discovered URL paths (e.g., customers → /dashboard/customers).
- GET /projects?id=ID — check the verified_routes field
- PATCH /projects?id=ID — pass your corrected verified_routes object

Auth Discovery (auto-populated): Source analysis automatically detects your app's authentication setup from the codebase — login page URL, auth library (NextAuth, Supabase, Clerk, etc.), form field selectors, and OAuth providers.

- GET /projects?id=ID — the discovered_auth field shows what was detected
- auth_strategy is auto-populated with the login URL and selectors
- PATCH /environments?id=ID to complete the auth setup

// Example discovered_auth (auto-populated on project)
{
"loginUrl": "/sign-in",
"authLibrary": "supabase-auth",
"authType": "email_password",
"formSelectors": {
"emailSelector": "input[type=\"email\"]",
"passwordSelector": "input[type=\"password\"]",
"submitSelector": "button[type=\"submit\"]"
},
"successRedirect": "/dashboard",
"providers": ["credentials", "google"]
}
// Complete auth setup by adding credentials to environment:
// PATCH /environments?id=env-123
{
"authStrategy": {
"type": "email_password",
"loginUrl": "/sign-in",
"credentials": {
"email": "test@example.com",
"password": "testpassword123"
}
}
}

Quality Structure Gives Quality Testing: zTester returns an inputQuality block on project and discovery responses so teams can see what structure is missing on their side before assuming the generator is shallow.
Manage test environments (staging, production, etc.)
Create and manage test cases
Execute tests and retrieve results
Three Ways to Execute Tests:
1. Single test: testCaseId in request body → Returns 200 OK with immediate results
2. Batch: testCaseIds array in request body → Returns 202 Accepted with async execution
3. Whole project: projectId in request body → Returns 202 Accepted with async execution

Note: environmentId is optional for all three methods — uses the project's default environment if not provided.
Async Parallel Execution: When using projectId or testCaseIds, the API returns 202 Accepted immediately and runs tests in the background. Poll the poll_url to check progress and get results.
- Poll GET /api/v1/test-runs?id=BATCH_ID until status is passed, failed, error, stalled, or cancelled
- If status is stalled (runner went silent mid-run), call POST /api/v1/test-runs/{id}/resume to continue from where it stopped
- To stop early, call POST /api/v1/test-runs/{id}/cancel — returns partial results

Example: 300 tests → Split into 2 runner chunks (200 + 100) → Each chunk split into batches of 10 → Up to 4 batches run in parallel. Each batch authenticates once, so ~30 logins total instead of 300. This is ~10x faster than sequential execution.
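The polling loop can be sketched as below. This is a hedged illustration, not an official client: `fetchStatus` is a stand-in you supply (e.g. a `fetch` against the batch status endpoint), and only the terminal status names come from the docs above.

```javascript
// Terminal statuses per the polling notes above.
const TERMINAL = new Set(["passed", "failed", "error", "stalled", "cancelled"]);

// Polls until a terminal status or the attempt budget is exhausted.
// fetchStatus() should return the parsed status response, e.g. { status: "running" }.
async function pollBatch(fetchStatus, { intervalMs = 10000, maxPolls = 60 } = {}) {
  for (let i = 0; i < maxPolls; i++) {
    const run = await fetchStatus();
    if (TERMINAL.has(run.status)) return run;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for batch run");
}
```

If the terminal status is `stalled`, follow up with the resume endpoint rather than treating the run as failed.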
🛡️ Pre-flight Auth Probe: When auth is configured and you trigger a run with >5 tests, zTester probes authentication first and aborts the entire run in ~4 seconds if auth fails — instead of starting hundreds of tests that all skip.
- The 422 response includes diagnosis: "auth_failure" and a structured actionRequired block — AI agents can read it and call the suggested fix endpoint directly
- For magic-link flows, use type: "cookies" with real session cookies — form login cannot automate magic-link flows

Example 422 response:
{
"error": "Pre-flight auth probe failed — aborting run",
"details": {
"diagnosis": "auth_failure",
"actionRequired": {
"type": "fix_auth",
"severity": "blocker",
"message": "Auth failed — redirected to login page",
"fix": {
"endpoint": "/api/v1/projects/{id}/environments/{envId}",
"method": "PATCH",
"cliCommand": "ztester auth --type cookies --cookies '[...]'"
}
}
}
}

⏸ Stalled Runs & Resume: If the runner goes silent mid-run (network issue, server restart), the run is automatically marked stalled after 15 minutes of no callbacks.

- Call POST /api/v1/test-runs/{id}/resume to re-trigger only the remaining tests
- The run status includes actionRequired.type: "resume_run" with the exact endpoint when stalled

Real-time test execution via Server-Sent Events (SSE)
Direct Runner Access: For external integrations and custom frontends, you can call the runner directly for real-time streaming of test execution events. This bypasses the web API proxy and streams events directly from the test runner.
- Endpoint: https://ztrunner.zavecoder.com/execute-stream
- Authenticate with your zTester API key (prefix zt_)

⚠️ Important Notes:

- The stream sends : heartbeat\n\n every 15 seconds to keep the connection alive
- Expect healing events if selectors fail and get auto-fixed
- The stream ends with a complete or error event

Available Actions:

navigate, click, fill, type, select, hover, wait, waitForSelector, assertVisible, assertText, assertUrl, press, screenshot

Full documentation: GitHub - RUNNER_STREAMING_API.md
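A consumer of the stream needs to split SSE frames and skip heartbeats. The parser below is a hedged sketch under the assumption that events arrive as `data: <json>` frames separated by blank lines and that comment lines start with `:` (standard SSE framing); the exact event payload shape is defined in RUNNER_STREAMING_API.md, not here.

```javascript
// Parses a chunk of SSE text into an array of event objects.
// Heartbeat comments (lines starting with ":") are dropped.
function parseSseChunk(chunk) {
  return chunk
    .split("\n\n")                                  // SSE frames are blank-line separated
    .map((block) => block.trim())
    .filter((block) => block && !block.startsWith(":")) // drop heartbeats / empties
    .map((block) => {
      const data = block
        .split("\n")
        .filter((line) => line.startsWith("data:"))  // keep only data lines
        .map((line) => line.slice(5).trim())
        .join("\n");
      return JSON.parse(data);
    });
}
```

In a real client you would buffer partial frames across network reads before calling the parser; this sketch assumes each chunk ends on a frame boundary.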
Submit feedback on test results to improve test quality over time
Feedback loop: After a test run, submit feedback on individual test results. zTester uses this to:
- Submit selector_fix with a suggested selector, and zTester updates the test case + triggers a verification re-run automatically
- Submit action_fix to change a step's action (e.g., fill → select for dropdowns)
- Confidence scores adjust over time: correct get boosted, false_positive gets lowered
- Tests marked flaky are auto-retried (up to 3 attempts) in future runs

| Verdict | Meaning | Auto-Action |
|---|---|---|
| selector_fix | Selector is wrong or fragile | Updates test case + triggers verification re-run |
| action_fix | Wrong action type (e.g., fill instead of select) | Updates step action type in test case |
| false_positive | Test fails but app works fine | Lowers confidence score by 0.1 |
| flaky | Passes sometimes, fails other times | Increments flaky count, auto-retried in future runs |
| correct | Test result is accurate | Boosts confidence score by 0.05 |
| not_applicable | Test doesn't apply to this app | Lowers confidence score by 0.1 |
Connect GitHub repos to projects for automatic source analysis
Prerequisite: Install the zTester GitHub App on your GitHub account first. The App grants access to your repositories. Then use these endpoints to link repos to projects.
Connect Bitbucket repos to projects for automatic source analysis
Prerequisite: Connect your Bitbucket workspace via OAuth first (Settings → Integrations → Connect Bitbucket). Then use these endpoints to link repos to projects.
Automatically crawl your app and generate test cases
Three discovery modes:
Hybrid mode is triggered automatically when you use "source": "git" with an environmentId or appUrl that points to a live (non-localhost) URL. No extra parameters needed.
GitHub/Bitbucket integration: If your project has a linked repository, git source analysis works with just {"source": "git"} — the repo URL, branch, and access token are resolved automatically.
Polling for status: The POST response returns a discoveryId. To check progress, poll GET /projects/{projectId}/discoveries/{discoveryId} (not /discover-tests). The discovery endpoints are listed below.
🔐 Auto-Detected Login Selectors: During discovery, zTester automatically detects login forms and extracts exact selectors for:
- Email/username field — via type="email", name="email", or autocomplete attributes
- Password field — input[type="password"]
- Submit button — button[type="submit"] or text-based selectors
- Login type — email_password and username_password

Use these selectors to configure your environment's authStrategy — no manual selector hunting required! Check the detectedLoginSelectors field in the discovery response.
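Mapping the detected selectors into an authStrategy can be sketched as below. This is a hedged illustration: the input field names (`loginUrl`, `emailSelector`, etc.) follow the examples in this doc, and `toAuthStrategy` is a hypothetical helper, not part of the zTester API.

```javascript
// Builds a form_login authStrategy (shape per the environment example in this
// doc) from a detectedLoginSelectors-style object plus your test credentials.
function toAuthStrategy(detected, credentials) {
  return {
    type: "form_login",
    loginUrl: detected.loginUrl,
    credentials: {
      usernameSelector: detected.emailSelector,
      passwordSelector: detected.passwordSelector,
      submitSelector: detected.submitSelector,
      ...credentials, // { username, password } supplied by you — never auto-detected
    },
  };
}
```

The resulting object can be sent as the `authStrategy` field in a PATCH to the environments endpoint.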
Immediately executable: Auto-discovered tests are saved with status: "active" and are ready to run right away — no manual review step required. You can run them via the test-runs endpoint as soon as discovery completes.
How zTester combines code analysis with live browser crawling for the best results
Hybrid discovery is the recommended mode for generating high-quality tests. It runs automatically when you use "source": "git" with a live appUrl or environmentId.
| Mode | Understands Workflows | Real Selectors | Test Quality |
|---|---|---|---|
| Crawl only | No | Yes | Surface-level smoke tests (page loads, basic clicks) |
| Git only | Yes | No (guessed) | Deep workflow tests, but selectors may not match DOM |
| Hybrid | Yes | Yes | Deep workflow tests with verified real selectors |
# Option 1: With environment (resolves URL + auth automatically)
curl -X POST ".../discover-tests" -d '{"source": "git", "environmentId": "env-uuid"}'
# Option 2: With explicit URL + auth
curl -X POST ".../discover-tests" -d '{
"source": "git",
"appUrl": "https://your-app.com",
"auth": {"type": "email_password", "credentials": {"email": "...", "password": "..."}}
}'
# Option 3: With ActionExplorer for deep workflow tests
curl -X POST ".../discover-tests" -d '{
"source": "git",
"environmentId": "env-uuid",
"actionExplorer": {"enabled": true}
}'
# Git-only mode (no DOM crawl) — omit appUrl and environmentId
curl -X POST ".../discover-tests" -d '{"source": "git"}'

Generate 5-8 step business workflow tests that click, fill, submit, and verify
ActionExplorer goes beyond basic discovery by actually interacting with your live application. It clicks action buttons (Add, Create, Edit, Delete), observes DOM mutations (dialogs, toasts, table changes), fills forms with contextual test data, submits, and generates meaningful assertions.
| Type | Example | How Detected |
|---|---|---|
| Post-action state | "Success toast appeared after save" | MutationObserver detects new toast/alert element |
| Data persistence | "New role appears in roles table" | Table row count increased after submit |
| Validation | "Required field error on empty submit" | Error message element appeared |
Since ActionExplorer creates real data in your app, you can configure snapshot/restore endpoints on your environment to automatically roll back changes after discovery:
# Configure safety endpoints on your environment
curl -X PATCH ".../environments?id=env-uuid" \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"testSnapshotUrl": "https://your-api.com/test/snapshot",
"testRestoreUrl": "https://your-api.com/test/restore"
}'
# Your snapshot endpoint should:
# POST /test/snapshot → return { "snapshotId": "snap-123" }
# POST /test/restore → accept { "snapshotId": "snap-123" }
# POST /test/reset → reset to known state (simpler alternative)

How zTester handles repeated source analysis runs
Both crawl and git source analysis modes generate a content hash for each test based on project, type, and normalized name. Re-running discovery updates existing tests instead of creating duplicates.
| Scenario | Result |
|---|---|
| Same test regenerated | Updated in place (steps, confidence refreshed) |
| New route/form/page discovered | New test case created |
| Route removed or page no longer accessible | Old test left untouched (user may have edited) |
| User edited an auto-generated test name | Edited test kept, new version also created |
Automatically re-analyze source code on every push
If your project is connected to a GitHub repository with auto_analyze_on_push enabled, zTester automatically:
This means your test suite stays in sync with your codebase automatically. No manual API calls needed after the initial GitHub App setup.
Setup: Install the zTester GitHub App, link a repository to your project, and enable "Auto-analyze on push" in project settings.
Git source analysis works with private repos
With GitHub App or Bitbucket OAuth (recommended): If your project has a linked GitHub or Bitbucket repository, the access token is fetched automatically. Just send {"source": "git"} and it works — even for private repos. zTester checks GitHub first, then Bitbucket.
Manual token: Pass a git.token in the API request. The correct auth format is used automatically based on the provider:
| Provider | Token Type |
|---|---|
| GitHub | ghp_* (Personal Access Token) or fine-grained token |
| Bitbucket | App password or repository access token |
| GitLab | Personal access token or project token |
Manually provided tokens are only used for the shallow clone and are never stored. GitHub App and Bitbucket OAuth tokens are auto-refreshed and cached securely.
| Scenario | Result |
|---|---|
| {"source":"git"} + linked repo (GitHub or Bitbucket) | Auto-fetches URL, branch, and token |
| {"source":"git","git":{"url":"..."}} + linked repo | Uses your URL, auto-fetches token |
| {"source":"git","git":{"url":"...","token":"..."}} | Uses both provided values |
| {"source":"git"} + no linked repo | Returns 400 with GitHub/Bitbucket connect URLs |
Create and manage API keys programmatically
Session Auth Required: These endpoints use your browser session cookie for authentication, not API keys. They are intended for use from the zTester dashboard or programmatic session-based access. You cannot use an API key to manage other API keys.
| Scope | Permissions |
|---|---|
| read | List projects, environments, test cases, test runs, discoveries |
| run_tests | Execute test runs (single or batch) |
| write_tests | Create, update, and delete test cases |
| generate | Trigger AI test generation and auto-discovery |
| admin | Create/delete projects, manage environments (including auth credentials), trigger discovery, link repos |
Register outbound webhooks for the autonomous failure loop. Requires admin scope to create/delete.
Programmatically trigger a targeted test rerun — designed for autonomous CI/CD fix loops.
Pass the rerunSessionId from the failure webhook payload to continue an existing session. The loop guard returns HTTP 429 when the session is exhausted (depth >= maxDepth).

Test coverage metrics — use as a CI/CD gate before deployment.
zTester fixes selector drift automatically. Your bot only gets called for real bugs.
When a test fails because a CSS selector changed, zTester detects it, heals the selector, reruns the test, and writes the fix back — no webhook, no bot, no human. Your webhook fires only when something genuinely requires external action: a real application bug, a timing issue, an infra problem.
| Failure | Handled by | Bot called? |
|---|---|---|
| Selector drifted (confident) | zTester internally — auto-heal + rerun | ❌ No |
| Selector drifted (loop exhausted) | External bot | ✅ Yes |
| Real app bug | External bot | ✅ Yes |
| Flaky timing | External bot | ✅ Yes |
| Environment / infra issue | External bot | ✅ Yes |
| All tests pass after rerun | zTester marks session resolved | ❌ No |
| Loop guard hit (maxDepth) | zTester marks exhausted, fires webhook | ✅ Yes — escalate to human |
Test run completes with failures
│
▼
┌─────────────────────────────────────────┐
│ Internal autonomous healer (built-in) │
│ failureType == 'ui_change' │
│ AND confidence >= 0.6 │
│ AND depth < maxDepth │
│ → persist healed selectors (auto) │
│ → trigger internal rerun │
└────────────┬────────────────────────────┘
│ triggered → done, bot NOT called
▼ otherwise:
┌─────────────────────────────────────────┐
│ Your webhook fires with full context: │
│ · failureType + recommendedAction │
│ · allSteps (full test definition) │
│ · artifacts: traceUrl + videoUrl │
└─────────────────────────────────────────┘

{
"event": "run_failed",
"projectId": "proj-abc",
"projectName": "My App",
"runId": "run-uuid",
"trigger": "ci",
"summary": { "total": 20, "passed": 17, "failed": 3, "durationMs": 94000 },
"failures": [
{
"testCaseId": "tc-uuid",
"testCaseName": "Checkout flow",
// What broke
"failedStep": "click #submit-btn",
"stepNumber": 4,
"error": "Element not found: #submit-btn",
"selector": "#submit-btn",
// AI classification — use this to decide what to do
"failureType": "ui_change",
"confidence": 0.87,
"recommendedAction": "auto_fix_selector",
"suggestedFix": "Try selector: button[type=\"submit\"]",
// Full test definition — bot has all context without a DB lookup
"allSteps": [
{ "action": "navigate", "target": "/checkout", "value": null },
{ "action": "fill", "target": "#email", "value": "test@example.com" },
{ "action": "click", "target": "#submit-btn", "value": null }
]
}
],
// Artifacts for deep AI analysis (signed URLs, valid 7 days)
"artifacts": {
"traceUrl": "https://...supabase.co/storage/.../trace.zip",
"videoUrl": "https://...supabase.co/storage/.../recording.webm",
"dashboardUrl": "https://ztester.zavecoder.com/dashboard/projects/proj-abc"
},
// Loop tracking
"rerunSession": {
"sessionId": "sess-uuid",
"depth": 1, // reruns so far (0 = first failure)
"maxDepth": 3, // configured limit
"canRerun": true // false = loop exhausted, escalate to human
},
"retriggerUrl": "https://ztester.zavecoder.com/api/v1/projects/proj-abc/runs/trigger",
"firedAt": "2026-04-26T10:15:00Z"
}
| failureType | Meaning | Handled internally? |
|---|---|---|
| ui_change | Selector drifted — element exists but CSS selector is stale | ✅ Yes (confidence ≥ 0.6, within depth limit) |
| application_bug | Element missing or wrong behavior — a real bug in the app code | ❌ Always escalates |
| flaky_timing | Passed on retry — race condition or slow render | ❌ Always escalates |
| environment_issue | Network failure, auth error, missing env var | ❌ Always escalates |
| test_data_issue | Test depends on data that no longer exists | ❌ Always escalates |
| runner_issue | Playwright crash or infra problem | ❌ Always escalates |
| unknown | Could not classify | ❌ Always escalates |
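The internal-healing condition described above can be written as a small predicate. A sketch using the webhook payload's field names (`isHealedInternally` is our own illustrative helper, not part of any zTester SDK):

```javascript
// Sketch: mirrors the internal-healing rule from the table above.
// A failure is healed internally (no webhook, no bot) only when all three hold.
function isHealedInternally(failure, rerunSession) {
  return (
    failure.failureType === 'ui_change' &&       // only selector drift is auto-healed
    failure.confidence >= 0.6 &&                 // classifier must be confident
    rerunSession.depth < rerunSession.maxDepth   // loop guard not yet exhausted
  );
}
```

Everything that fails this predicate is what actually reaches your webhook.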
Each failure item includes a recommendedAction field — computed from failureType, confidence, and whether the loop can still rerun. Your bot should switch on this field.
| recommendedAction | What the bot should do |
|---|---|
| auto_fix_selector | Apply suggestedFix or just retrigger — self-healing will find the new selector |
| fix_code | Read allSteps + error + traceUrl, identify the bug in app code, patch it, then retrigger |
| add_wait | Insert a waitForSelector before the failing step, then retrigger |
| check_environment | Verify env vars, network, auth config — fix infra before retriggering |
| investigate | Read the trace, watch the video, classify manually, then act |
| escalate_human | canRerun == false — loop exhausted, notify a human, do NOT retrigger |
Every request includes X-ZTester-Signature: sha256=<hex>. Always verify before acting. Use timing-safe comparison to prevent timing attacks.
import express from 'express';
import { createHmac, timingSafeEqual } from 'crypto';
const app = express();
// Use express.raw so req.body is the exact bytes that were signed
app.post('/ztester', express.raw({ type: 'application/json' }), (req, res) => {
const sig = req.headers['x-ztester-signature'] || '';
const expected = 'sha256=' + createHmac('sha256', process.env.ZTESTER_WEBHOOK_SECRET)
.update(req.body).digest('hex');
const a = Buffer.from(expected), b = Buffer.from(sig);
if (a.length !== b.length || !timingSafeEqual(a, b)) {
return res.status(401).json({ error: 'Invalid signature' });
}
const payload = JSON.parse(req.body);
res.status(200).json({ received: true }); // ACK within 10s
handleAsync(payload); // process in background
});
POST /api/v1/projects/{projectId}/runs/trigger
Authorization: Bearer zt_your_api_key
{
"scope": "failed_only",
"rerunSessionId": "<payload.rerunSession.sessionId>"
}
| HTTP status | Meaning | Action |
|---|---|---|
| 202 | Rerun started | Poll GET /test-runs?id={runId} until complete |
| 429 | Loop guard hit — session exhausted | Stop, escalate to human |
| 409 | Session already resolved (all tests passed) | Nothing needed |
| 404 | Session not found | Check sessionId, or omit it for a fresh rerun |
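The table maps directly onto bot behaviour after a retrigger call. A sketch of that mapping as a pure function (the action strings are our own labels; how you wire them to polling or alerting is up to your bot):

```javascript
// Sketch: decide what the bot should do from the retrigger response status,
// following the status table above.
function retriggerAction(status) {
  switch (status) {
    case 202: return 'poll';           // rerun started — poll the run until complete
    case 429: return 'escalate_human'; // loop guard hit — session exhausted
    case 409: return 'done';           // session already resolved, nothing to do
    case 404: return 'check_session';  // verify sessionId, or omit it for a fresh rerun
    default:  return 'retry_later';    // unexpected status — back off (assumption)
  }
}
```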
Switch on failure.recommendedAction — zTester pre-computes this from the failure classification so your bot doesn't need to re-classify.
// Minimal — everything your bot needs
async function handleZTesterWebhook(payload) {
if (!payload.rerunSession.canRerun) {
await escalateToHuman(payload); // loop exhausted
return;
}
// Each failure tells you exactly what to do
const action = payload.failures[0].recommendedAction;
// 'auto_fix_selector' → retrigger (self-healing handles it)
// 'fix_code' → read trace + allSteps, patch code, retrigger
// 'add_wait' → add wait before failing step, retrigger
// 'check_environment' → fix infra, do NOT retrigger yet
// 'investigate' → read trace/video, decide manually
// 'escalate_human' → canRerun is false, notify engineer
}

async function handleZTesterWebhook(payload) {
  // Always check loop guard first
  if (!payload.rerunSession.canRerun) {
    await escalateToHuman(payload);
    return;
  }
  let shouldRetrigger = false;
  for (const failure of payload.failures) {
    switch (failure.recommendedAction) {
      case 'auto_fix_selector':
        // Self-healing handles it — just retrigger
        // Optionally apply suggestedFix proactively via /test-runs/feedback
        shouldRetrigger = true;
        break;
      case 'fix_code': {
        // Use trace + allSteps to understand the expected behaviour
        const fix = await aiAgent.analyze({
          error: failure.error,
          steps: failure.allSteps,
          traceUrl: payload.artifacts.traceUrl,
          videoUrl: payload.artifacts.videoUrl,
        });
        await applyCodeFix(fix);
        shouldRetrigger = true;
        break;
      }
      case 'add_wait':
        await addWaitBeforeStep(failure.testCaseId, failure.stepNumber);
        shouldRetrigger = true;
        break;
      case 'check_environment':
        await notifyOps(failure);
        return; // don't retrigger until infra is fixed
      case 'escalate_human':
      case 'investigate':
      default:
        await notifyEngineer(failure, payload.artifacts);
        return;
    }
  }
  if (shouldRetrigger) {
    await fetch(payload.retriggerUrl, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.ZTESTER_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        scope: 'failed_only',
        rerunSessionId: payload.rerunSession.sessionId,
      }),
    });
  }
}

artifacts.traceUrl is a Playwright .trace.zip — every DOM snapshot, network request, console log, and screenshot for the entire run. Pass it directly to your AI:
// Open in Playwright Trace Viewer (browser)
const viewerUrl = `https://trace.playwright.dev/?trace=${encodeURIComponent(payload.artifacts.traceUrl)}`;
// Pass to Claude / GPT for programmatic analysis
const analysis = await claude.messages.create({
model: 'claude-opus-4-7',
messages: [{
role: 'user',
content: `A test failed. Here is what it was trying to do:
${JSON.stringify(failure.allSteps, null, 2)}
Error at step ${failure.stepNumber}: ${failure.error}
Failure type: ${failure.failureType} (confidence: ${failure.confidence})
Suggested fix: ${failure.suggestedFix}
Playwright trace (full DOM snapshots + network): ${payload.artifacts.traceUrl}
Session recording: ${payload.artifacts.videoUrl}
What is broken and what code change would fix it?`
}]
});
Signed URLs are valid for 7 days.
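A bot that queues failures for later analysis should check artifact freshness before fetching. A minimal sketch, assuming the payload's firedAt timestamp marks when the URLs were signed (the helper name is ours, not part of any SDK):

```javascript
// Sketch: artifact URLs are signed for 7 days from when the webhook fired.
// Returns true while traceUrl / videoUrl should still be fetchable.
const ARTIFACT_TTL_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

function artifactsStillValid(firedAt, now = new Date()) {
  return now.getTime() - new Date(firedAt).getTime() < ARTIFACT_TTL_MS;
}
```

If the URLs have expired, fall back to the dashboardUrl or retrigger a run to produce fresh artifacts.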
If your AI agent uses the zTester MCP server, these tools close the loop natively:
1. Webhook fires → agent receives payload
2. get_failures(runId)
→ deep failure details: selectors, DOM context, AI explanation
3. If application_bug:
→ analyze code, apply fix, commit
4. submit_feedback(runId, [{ testCaseId, verdict: "selector_fix", suggestedFix }])
→ auto-applies selector fix to test case + triggers verification run
5. trigger_rerun(projectId, scope="failed_only", rerunSessionId=...)
6. Poll get_run_status(runId) until complete
7. If still failing and canRerun → repeat from 2
8. If canRerun=false → escalate
| status | Meaning | canRerun |
|---|---|---|
| active | Loop is running, retriggers allowed | true |
| resolved | All tests passed — loop closed successfully | false (409 on retrigger) |
| exhausted | maxDepth reached — human review required | false (429 on retrigger) |
Use zTester without the dashboard — from your terminal, CI pipeline, or AI assistant.
npx @ztester/cli
Zero-install CLI. Save an API key once, then init → discover → run from any directory.
# 1. Save your API key (get it from Settings → API Keys)
npx @ztester/cli login --key ztk_your_key_here
# 2. Bootstrap a project in your current directory
npx @ztester/cli init --name "My App" --url https://staging.myapp.com
# 3. AI-crawl your app and generate tests (2-5 min)
npx @ztester/cli discover
# 4. Run the full suite (exits 1 on failures — CI-safe)
npx @ztester/cli run --ci
# Env vars work too (no config file needed in CI):
# ZTESTER_API_KEY, ZTESTER_PROJECT_ID, ZTESTER_ENVIRONMENT_ID
| Command | Description |
|---|---|
| ztester login --key <KEY> | Save API key to ~/.ztester/config.json |
| ztester init [--name] [--url] [--env] | Create project + environment, write ztester.config.json |
| ztester auth --type form_login --login-url /sign-in --username u@app.com --password s | Set form-login auth. Required before discovery on any app with a login screen. |
| ztester auth --type cookies --cookies '[{"name":"...","value":"...","domain":"..."}]' | Set session-cookie auth (magic link, Supabase, Clerk, OAuth apps) |
| ztester discover [--max-pages N] | Trigger AI discovery and poll until tests are generated |
| ztester run [--ci] [--timeout N] | Run the test suite; --ci sets exit code 1 on failures |
| ztester status | Show project health and last run summary |
Expose all 15 zTester tools to your AI assistant. Tell Claude to "set up a project, run discovery, and report what failed" — it handles the full flow autonomously.
Claude Desktop — claude_desktop_config.json
{
"mcpServers": {
"ztester": {
"command": "npx",
"args": ["-y", "@ztester/mcp"],
"env": {
"ZTESTER_API_KEY": "ztk_your_key_here"
}
}
}
}
Cursor — .cursor/mcp.json
{
"mcpServers": {
"ztester": {
"command": "npx",
"args": ["-y", "@ztester/mcp"],
"env": {
"ZTESTER_API_KEY": "ztk_your_key_here"
}
}
}
}
Available tools (15)
bootstrap_project, update_environment, list_projects, list_environments, list_test_cases, trigger_discovery, get_discovery_results, run_tests, get_run_status, get_failures, get_recent_runs, cancel_run, approve_test_cases, submit_feedback, trigger_rerun, manage_failure_webhooks
Example: "Bootstrap a project for https://staging.myapp.com, run discovery, run all tests, and show me what failed."
name: zTester E2E
on:
push:
branches: [main, staging]
pull_request:
branches: [main]
jobs:
e2e:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with: { node-version: '20' }
- name: Run zTester
run: npx --yes @ztester/cli run --ci
env:
ZTESTER_API_KEY: ${{ secrets.ZTESTER_API_KEY }}
ZTESTER_PROJECT_ID: ${{ vars.ZTESTER_PROJECT_ID }}
ZTESTER_ENVIRONMENT_ID: ${{ vars.ZTESTER_ENVIRONMENT_ID }}
Add ZTESTER_API_KEY as a secret and ZTESTER_PROJECT_ID / ZTESTER_ENVIRONMENT_ID as variables in repo Settings → Secrets and variables.
- name: Run zTester E2E Tests
run: |
response=$(curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
-H "Content-Type: application/json" \
-d '{"projectId":"${{ vars.ZTESTER_PROJECT_ID }}","environmentId":"${{ vars.ZTESTER_ENV_ID }}","trigger":"ci"}')
run_id=$(echo $response | jq -r '.id')
echo "Run ID: $run_id"
# Poll until complete
for i in $(seq 1 30); do
sleep 10
result=$(curl -s https://ztester.zavecoder.com/api/v1/test-runs/$run_id \
-H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}")
status=$(echo $result | jq -r '.status')
if [ "$status" = "completed" ] || [ "$status" = "failed" ]; then
echo "$(echo $result | jq -r '.passed') passed, $(echo $result | jq -r '.failed') failed"
[ "$(echo $result | jq -r '.failed')" = "0" ] || exit 1
break
fi
done
pipeline {
agent any
environment {
ZTESTER_API_KEY = credentials('ztester-api-key')
}
stages {
stage('Run Tests') {
steps {
script {
def response = sh(
script: """
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer ${ZTESTER_API_KEY}" \
-H "Content-Type: application/json" \
-d '{"testCaseId": "test-1"}' -s
""",
returnStdout: true
).trim()
def json = readJSON text: response
if (json.status != 'passed') {
error("Tests failed: ${json.status}")
}
echo "Tests passed!"
}
}
}
}
}
#!/bin/bash
API_KEY="zt_your_api_key_here"
PROJECT_ID="abc-123"
BASE_URL="https://ztester.zavecoder.com/api/v1"
echo "🚀 Running all E2E tests for project..."
# Run ALL tests for the project (parallel execution)
RESULT=$(curl -s -X POST "$BASE_URL/test-runs" \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d "{"projectId": "$PROJECT_ID"}")
STATUS=$(echo "$RESULT" | jq -r '.status')
PASSED=$(echo "$RESULT" | jq -r '.passed')
FAILED=$(echo "$RESULT" | jq -r '.failed')
TOTAL=$(echo "$RESULT" | jq -r '.total_tests')
DURATION=$(echo "$RESULT" | jq -r '.duration_ms')
URL=$(echo "$RESULT" | jq -r '.url')
echo ""
echo "📊 Results: $PASSED/$TOTAL passed ($DURATION ms)"
echo ""
# Show failed tests if any
if [ "$FAILED" -gt 0 ]; then
echo "❌ Failed tests:"
echo "$RESULT" | jq -r '.results[] | select(.status != "passed") | " - \(.testCaseName): \(.error)"'
echo ""
fi
echo "🔗 View details: $URL"
if [ "$STATUS" = "passed" ]; then
echo "✅ All tests passed!"
exit 0
else
echo "❌ $FAILED test(s) failed!"
exit 1
fi
Integrate zTester directly into your admin portal or ops dashboard using the JavaScript SDK or embeddable iframe widget. Both support live SSE streaming during test execution.
Framework-agnostic TypeScript/JavaScript SDK. Load via <script> tag or ES module import.
<script src="https://ztester.zavecoder.com/sdk/ztester.js"></script>
<script>
const zt = new ZTester.ZTester({ apiKey: 'zt_your_key' });
// Run all project tests with live streaming
zt.runProjectStream('your-project-id', {
onTestStart: ({ testCaseName, index, total }) => {
console.log(`[${index + 1}/${total}] Running: ${testCaseName}`);
},
onStepComplete: ({ stepIndex, status }) => {
console.log(` Step ${stepIndex + 1}: ${status}`);
},
onTestComplete: ({ testCaseId, status }) => {
console.log(` Result: ${status}`);
},
}).then(results => {
const passed = results.filter(r => r.status === 'passed').length;
console.log(`Done: ${passed}/${results.length} passed`);
});
</script>
import { ZTester } from '@ztester/sdk';
const zt = new ZTester({ apiKey: process.env.ZTESTER_KEY });
// Single test with streaming
const result = await zt.runTestStream('test-uuid', {
environmentId: 'env-uuid', // optional — uses project default
onStepStart: ({ description }) => updateUI(`Running: ${description}`),
onStepComplete: ({ status }) => updateUI(`Step ${status}`),
onHealing: ({ originalSelector, healedSelector }) =>
updateUI(`Self-healed: ${originalSelector} → ${healedSelector}`),
});
console.log(`Test ${result.status} in ${result.durationMs}ms`);
// Discovery with polling
const disc = await zt.discoverTests('project-uuid', { source: 'git' });
const discovery = await zt.pollDiscovery('project-uuid', disc.discoveryId, {
interval: 3000,
onProgress: (d) => updateUI(`${d.progress?.testsGenerated} tests found...`),
});
// Submit feedback (selector fix + action fix)
await zt.submitFeedback('test-run-id', [
{
testCaseId: 'test-uuid-1',
verdict: 'selector_fix',
suggestedFix: {
stepNumber: 2,
currentSelector: 'button:nth-of-type(2)',
suggestedSelector: "button:has-text('Submit')",
},
},
{
testCaseId: 'test-uuid-2',
verdict: 'action_fix',
suggestedFix: {
stepNumber: 4,
newAction: 'select',
newValue: '{{first-option}}',
},
notes: 'Field is a <select> dropdown, not a text input',
},
]);
| Method | Description |
|---|---|
| runTestStream(testCaseId, opts) | Execute single test with live SSE streaming. Returns final result. |
| runProjectStream(projectId, opts) | Run all project tests sequentially with streaming + progress callbacks. |
| runTest(testCaseId, envId?) | Execute single test (non-streaming). Waits for completion. |
| runProject(projectId, envId?) | Batch run all project tests (non-streaming, parallel execution). |
| discoverTests(projectId, opts) | Trigger auto-discovery. Returns discoveryId for polling. |
| pollDiscovery(projectId, discoveryId) | Poll until discovery completes. Optional progress callback. |
| submitFeedback(runId, items[]) | Submit bulk feedback. Selector and action fixes auto-applied. |
| getTestCases(projectId) | List all test cases for a project. |
| getTestRuns(params) | Query test run history by project, test case, or run ID. |
| getFeedback(params) | Query submitted feedback by project, test run, or verdict. |
The runTestStream() method receives live events during test execution:
| Event | Callback | Data |
|---|---|---|
| start | onEvent | testCaseId, totalSteps, baseUrl |
| step_start | onStepStart | stepIndex, totalSteps, action, target, description |
| step_complete | onStepComplete | stepIndex, status, durationMs, error?, screenshot? |
| healing | onHealing | stepIndex, originalSelector, healedSelector, confidence |
| complete | onComplete | status, durationMs, stepResults[], selfHealingActions[] |
| error | onError | message, step? |
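If you ever consume the raw event stream yourself rather than passing callbacks to the SDK, the table suggests a simple dispatcher. A sketch — the `{ type, data }` envelope shape is an assumption for illustration, and the SDK normally does this routing for you:

```javascript
// Sketch: route a raw stream event to the matching callback from the table above.
const EVENT_CALLBACKS = {
  start: 'onEvent',
  step_start: 'onStepStart',
  step_complete: 'onStepComplete',
  healing: 'onHealing',
  complete: 'onComplete',
  error: 'onError',
};

function dispatchStreamEvent(event, handlers) {
  const name = EVENT_CALLBACKS[event.type];
  if (name && typeof handlers[name] === 'function') {
    handlers[name](event.data);
    return name;       // which callback fired
  }
  return null;         // unknown event type — ignore silently
}
```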
Drop a pre-built test dashboard into any page with a single script tag. No React or build tools needed.
<!-- Add this anywhere in your admin portal -->
<div id="ztester-widget"></div>
<script src="https://ztester.zavecoder.com/sdk/embed.js"></script>
<script>
ZTesterEmbed.ZTesterEmbed.init({
container: '#ztester-widget',
apiKey: 'zt_your_api_key',
projectId: 'your-project-id',
theme: 'dark', // 'dark' or 'light'
features: ['run', 'discover', 'feedback', 'history'], // which tabs to show
height: '700px',
onReady: () => console.log('Widget loaded'),
onTestComplete: (result) => {
console.log('Test finished:', result.testCaseName, result.status);
},
});
</script>
Security: The API key is passed to the iframe via postMessage, never in the URL. The widget communicates results back to your page via postMessage events.
| URL | Format | Use Case |
|---|---|---|
| /sdk/ztester.js | UMD | <script> tag, sets window.ZTester |
| /sdk/ztester.esm.js | ES Module | import from bundlers (webpack, vite, etc.) |
| /sdk/embed.js | IIFE | <script> tag, sets window.ZTesterEmbed |
Use the POST /test-runs/feedback endpoint to build an automated triage agent that analyzes test failures and submits feedback programmatically. This closes the feedback loop: tests run → failures analyzed → fixes applied → tests re-run.
Test Run (batch)
│
├── All passed → done
│
└── Some failed
│
▼
AI Triage Agent
│
├── Classify failure type (selector? timing? logic?)
│
├── Generate fix (new selector, retry hint, etc.)
│
└── POST /test-runs/feedback
│
├── verdict: "selector_fix" → auto-applies fix + re-runs
├── verdict: "action_fix" → changes step action type (fill → select)
├── verdict: "flaky" → increments flaky count, auto-retried next run
├── verdict: "false_positive"→ lowers confidence score
└── verdict: "correct" → boosts confidence scoreWhen a test fails on a click/fill step, send the error message + page HTML to an LLM to generate a corrected selector.
import OpenAI from 'openai';
const openai = new OpenAI();
async function triageFailedTest(testRun: any, testCase: any) {
// Find the failed step
const failedStep = testRun.stepResults?.find(
(s: any) => s.status === 'failed'
);
if (!failedStep) return null;
// Check if it's a selector issue (timeout, not found)
const isSelector = failedStep.error?.includes('Timeout') ||
failedStep.error?.includes('not found') ||
failedStep.error?.includes('waiting for selector');
if (!isSelector) return null;
// Ask LLM to classify and suggest fix
const response = await openai.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{
role: 'system',
content: `You are a Playwright selector expert. Given a failed selector
and the page context, suggest a working CSS selector.
Return JSON: { "selector": "...", "confidence": 0.0-1.0, "reason": "..." }`
}, {
role: 'user',
content: `Failed selector: ${failedStep.target}
Error: ${failedStep.error}
Step action: ${failedStep.type} (step #${failedStep.stepNumber})
Page URL: ${testRun.lastUrl || 'unknown'}
Test name: ${testCase.name}`
}],
response_format: { type: 'json_object' },
});
const fix = JSON.parse(response.choices[0].message.content || '{}');
if (!fix.selector || fix.confidence < 0.6) return null;
// Submit feedback with selector fix
await fetch('https://ztester.zavecoder.com/api/v1/test-runs/feedback', {
method: 'POST',
headers: {
'Authorization': 'Bearer zt_your_key',
'Content-Type': 'application/json',
},
body: JSON.stringify({
testRunId: testRun.id,
testCaseId: testCase.id,
verdict: 'selector_fix',
feedbackType: 'selector',
suggestedFix: {
stepNumber: failedStep.stepNumber,
currentSelector: failedStep.target,
suggestedSelector: fix.selector,
},
notes: fix.reason,
}),
});
// The API auto-applies the fix and triggers a verification re-run
return fix;
}
No LLM needed. Compare recent run history to detect flaky tests (pass/fail alternation).
async function detectFlakyTests(projectId: string, apiKey: string) {
const BASE = 'https://ztester.zavecoder.com/api/v1';
const headers = {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json',
};
// Get recent test runs for the project
const runsRes = await fetch(
`${BASE}/test-runs?projectId=${projectId}&limit=100`,
{ headers }
);
const { results } = await runsRes.json();
// Group by test case, check for pass/fail alternation
const byTestCase = new Map<string, string[]>();
for (const run of results) {
const existing = byTestCase.get(run.testCaseId) || [];
existing.push(run.status);
byTestCase.set(run.testCaseId, existing);
}
const flakyFeedback = [];
for (const [testCaseId, statuses] of byTestCase) {
if (statuses.length < 3) continue;
// Count alternations (pass→fail or fail→pass)
let alternations = 0;
for (let i = 1; i < statuses.length; i++) {
if (statuses[i] !== statuses[i - 1]) alternations++;
}
const flakyScore = alternations / (statuses.length - 1);
if (flakyScore > 0.4) {
// More than 40% alternation = likely flaky
flakyFeedback.push({
testCaseId,
verdict: 'flaky',
notes: `Flaky score: ${(flakyScore * 100).toFixed(0)}% (${alternations} alternations in ${statuses.length} runs)`,
});
}
}
if (flakyFeedback.length > 0) {
// Bulk submit flaky feedback
const latestRunId = results[0]?.id;
await fetch(`${BASE}/test-runs/feedback`, {
method: 'POST',
headers,
body: JSON.stringify({
testRunId: latestRunId,
results: flakyFeedback,
}),
});
console.log(`Marked ${flakyFeedback.length} tests as flaky`);
}
}
For failures where the selector exists but the assertion fails, use a vision model to analyze the screenshot and DOM snapshot.
async function analyzeWithVision(
testRun: any,
testCase: any,
screenshot: Buffer, // from step_results[].screenshot
domSnapshot: string // from step_results[].html
) {
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [{
role: 'system',
content: `Analyze this test failure. The test expected certain behavior but
the page shows something different. Classify as:
- "false_positive": App works correctly, test expectation is wrong
- "selector_fix": Element exists but with different selector
- "flaky": Looks like a timing/loading issue
- "correct": App genuinely has a bug
Return JSON: { "verdict": "...", "reason": "...", "suggestedSelector?": "..." }`
}, {
role: 'user',
content: [
{
type: 'text',
text: `Test: ${testCase.name}
Failed step: ${testRun.stepResults?.find((s: any) => s.status === 'failed')?.description}
Error: ${testRun.error}
DOM snippet (around failed element): ${domSnapshot.substring(0, 2000)}`
},
{
type: 'image_url',
image_url: {
url: `data:image/png;base64,${screenshot.toString('base64')}`
}
}
]
}],
response_format: { type: 'json_object' },
});
const analysis = JSON.parse(
response.choices[0].message.content || '{}'
);
// Submit the AI's verdict
await fetch('https://ztester.zavecoder.com/api/v1/test-runs/feedback', {
method: 'POST',
headers: {
'Authorization': 'Bearer zt_your_key',
'Content-Type': 'application/json',
},
body: JSON.stringify({
testRunId: testRun.id,
testCaseId: testCase.id,
verdict: analysis.verdict,
feedbackType: 'assertion',
notes: `[AI] ${analysis.reason}`,
...(analysis.suggestedSelector && {
suggestedFix: {
stepNumber: testRun.stepResults?.findIndex(
(s: any) => s.status === 'failed'
) + 1,
suggestedSelector: analysis.suggestedSelector,
},
}),
}),
});
return analysis;
}
Add an AI triage step to your CI/CD pipeline. Failed tests get analyzed and feedback is submitted automatically.
name: E2E Tests with AI Triage
on: [push]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run zTester E2E Tests
id: tests
run: |
RESULT=$(curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
-H "Content-Type: application/json" \
-d '{"projectId": "${{ secrets.ZTESTER_PROJECT_ID }}"}')
echo "result=$RESULT" >> $GITHUB_OUTPUT
STATUS=$(echo $RESULT | jq -r '.status')
echo "status=$STATUS" >> $GITHUB_OUTPUT
- name: AI Triage Failed Tests
if: steps.tests.outputs.status != 'passed'
run: |
node scripts/ai-triage.js \
--api-key "${{ secrets.ZTESTER_API_KEY }}" \
--openai-key "${{ secrets.OPENAI_API_KEY }}" \
--result '${{ steps.tests.outputs.result }}'
- name: Re-run After Fixes Applied
if: steps.tests.outputs.status != 'passed'
run: |
# Wait for selector fixes to be applied
sleep 10
# Re-run only the failed tests
FAILED=$(echo '${{ steps.tests.outputs.result }}' | \
jq -r '[.results[] | select(.status != "passed") | .testCaseId] | join(",")')
for TEST_ID in $(echo $FAILED | tr ',' ' '); do
curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
-H "Content-Type: application/json" \
-d "{\"testCaseId\": \"$TEST_ID\"}"
done
Use this logic to classify failures before calling an LLM (saves API costs):
function classifyFailure(stepResult: any): string {
const error = stepResult.error || '';
// 1. Timeout / element not found → likely selector issue
if (error.includes('Timeout') ||
error.includes('waiting for selector') ||
error.includes('not found')) {
return 'selector_fix';
}
// 2. Wrong action type (fill on select, selectOption on input)
if (error.includes('selectOption') ||
error.includes('Not a SELECT element') ||
error.includes('Element is not a <select>') ||
error.includes('is not an <input>')) {
return 'action_fix';
}
// 3. Navigation error → might be flaky (network)
if (error.includes('net::ERR_') ||
error.includes('Navigation timeout') ||
error.includes('frame was detached')) {
return 'flaky';
}
// 4. Assertion failure → needs deeper analysis (use LLM)
if (error.includes('expect') ||
error.includes('assertion') ||
error.includes('toBeVisible') ||
error.includes('toHaveText')) {
return 'needs_llm_analysis';
}
// 5. Permission / auth errors → likely environment issue
if (error.includes('403') ||
error.includes('401') ||
error.includes('Unauthorized')) {
return 'not_applicable';
}
// 6. Default: send to LLM for classification
return 'needs_llm_analysis';
}
| Signal | Likely Verdict | LLM Needed? |
|---|---|---|
| Timeout / selector not found | selector_fix | Yes (to generate new selector) |
| Wrong action type (fill on select, etc.) | action_fix | No (detect from error message) |
| Network error / frame detached | flaky | No |
| Assertion mismatch | varies | Yes (screenshot + DOM analysis) |
| Auth / permission denied | not_applicable | No |
| Pass/fail alternation (>40%) | flaky | No (statistical) |
Feedback drives improvement: Every feedback submission updates your test suite. Selector fixes are applied immediately and verified with a re-run. Action fixes (e.g., fill → select for dropdowns) are applied immediately. Flaky tests are auto-retried (up to 3 attempts). Confidence scores adjust based on correctness feedback, which influences future test generation and prioritization.
API requests are currently not rate-limited, but we may introduce limits in the future. As a best practice, poll with backoff and avoid tight request loops.
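Even without enforced limits, polling politely keeps you safe if limits arrive later. One common pattern is capped exponential backoff; a sketch (the delay values are illustrative, not a documented requirement):

```javascript
// Sketch: capped exponential backoff delay for poll attempt n.
// Attempt 0 → 2s, 1 → 4s, 2 → 8s, ... capped at 30s.
function backoffDelayMs(attempt, baseMs = 2000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}
```

Use it between status polls: `await new Promise(r => setTimeout(r, backoffDelayMs(attempt)))`.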
The API uses standard HTTP status codes:
| Code | Meaning |
|---|---|
| 200 | Success |
| 201 | Resource created |
| 202 | Accepted - async operation started (discovery, test run) |
| 400 | Bad request - check your parameters |
| 401 | Unauthorized - invalid or missing API key |
| 403 | Forbidden - insufficient permissions |
| 404 | Resource not found |
| 500 | Server error - please try again or contact support |
Error responses include a JSON body with details:
{
"error": "Bad Request",
"message": "testCaseId is required"
}
Production responses include tracing metadata that you should capture in logs and support tickets:
X-Request-Id is returned on API and health responses for correlation across services
X-API-Version is returned by the web API health endpoint so clients can detect the deployed contract version
GET /api/health returns service status, deployment version, and server timestamp for simple uptime probes
curl -i https://ztester.zavecoder.com/api/health
{
"status": "ok",
"timestamp": "2026-04-20T12:00:00.000Z",
"version": "0.1.0",
"service": "web"
}
Need help? Contact us at support@zavecoder.com
When reporting an issue, include the request path, approximate timestamp, and the X-Request-Id header value if available. That lets us trace the request through production quickly.
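Putting the status-code table, the JSON error body, and the X-Request-Id header together, a client can build a support-ready error message before throwing. A sketch (the helper is ours; feed it `res.status`, the parsed body, and `res.headers.get('x-request-id')` from your HTTP client of choice):

```javascript
// Sketch: build a support-ready message from a failed zTester API response.
// body is the parsed JSON error body (may be null if the body was not JSON).
function formatApiError(status, path, body, requestId) {
  const detail = (body && (body.message || body.error)) || 'unknown error';
  const suffix = requestId ? ` (X-Request-Id: ${requestId})` : '';
  return `zTester API ${status} on ${path}: ${detail}${suffix}`;
}
```

Logging this string gives support the request path, status, message, and correlation id in one line.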