REST API for integrating zTester with your CI/CD pipeline and external tools
Navigate to Settings → API Keys and create a new API key.

All API requests require authentication with an API key. Include it in the Authorization header:

Authorization: Bearer zt_your_api_key_here

curl https://ztester.zavecoder.com/api/v1/projects \
  -H "Authorization: Bearer zt_your_api_key_here"

⚠️ Keep your API keys secure. Never commit them to version control or share them publicly.

Base URL: https://ztester.zavecoder.com/api/v1

Get started with zTester in minutes: automatically generate comprehensive E2E tests and integrate them with your CI/CD pipeline.
Connect your GitHub/Bitbucket repository or provide your application URL. zTester intelligently analyzes your application and automatically generates comprehensive multi-step workflow tests (typically 5-8 steps per test covering real user journeys).
Execute hundreds of tests in minutes using our high-performance test runner. Tests run in parallel batches with automatic authentication handling and real-time progress updates.
When UI changes break selectors, our feedback system auto-detects failures, suggests fixes, and verifies corrections automatically. Tests adapt to your evolving application.
Seamlessly integrate with GitHub Actions, GitLab CI, Jenkins, or any CI/CD tool. Run tests on every pull request and get pass/fail reports in minutes.
💡 Zero Manual Test Writing: Unlike traditional E2E tools, zTester eliminates the need to manually write and maintain test scripts. Our AI-powered generation creates production-ready tests automatically, saving your team hundreds of hours.
1. SETUP (One-time)
├─ Create project
├─ Configure environment & authentication
└─ Link GitHub/Bitbucket repository (optional)
2. GENERATE TESTS
├─ Trigger test discovery via API
├─ zTester analyzes your application
└─ Returns 40-100+ ready-to-run tests in 2-5 minutes
3. EXECUTE TESTS
├─ Run tests via batch execution API
├─ Monitor progress in real-time
└─ Get detailed pass/fail results
4. CONTINUOUS IMPROVEMENT
├─ Tests auto-adapt to UI changes
├─ Track flaky tests and pass rates
   └─ Re-generate tests when code changes significantly

Navigate to Settings → API Keys and create a new API key with appropriate scopes:

- read — View projects, tests, and results
- write_tests — Create and update test cases
- run_tests — Execute tests
- generate — Trigger test discovery/generation
- admin — Full access (create projects, manage settings)

💡 Recommended: For CI/CD pipelines, use a dedicated API key with only the generate and run_tests scopes.
curl -X POST https://ztester.zavecoder.com/api/v1/projects \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"name": "My App",
"description": "E2E tests for production app"
}'

Define authentication strategies for your test environments:
curl -X POST https://ztester.zavecoder.com/api/v1/environments \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"name": "Staging",
"type": "staging",
"baseUrl": "https://staging.myapp.com",
"authStrategy": {
"type": "form_login",
"loginUrl": "/login",
"credentials": {
"usernameSelector": "#email",
"passwordSelector": "#password",
"submitSelector": "button[type=\"submit\"]",
"username": "test@example.com",
"password": "test123"
}
}
}'

Supported auth strategy types:

- none — Public pages
- form_login — Standard email/password login
- cookies — Pre-authenticated session cookies
- bearer_token — JWT or API token in headers
- basic_auth — HTTP Basic Authentication

🔍 Detailed Auth Error Messages: When authentication fails, zTester provides specific, actionable error messages at each step:
- e.g., a missing required field (loginUrl, username, password)

zTester automatically generates comprehensive multi-step workflow tests (5-8+ steps) that exercise real user journeys and business logic, not just basic UI checks.
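For the cookies strategy listed above, an environment payload might look like the following sketch. The shape of the cookies field is our assumption by analogy with the form_login example; consult the environments reference for the exact schema:

```json
{
  "projectId": "proj-abc",
  "name": "Staging (cookie auth)",
  "type": "staging",
  "baseUrl": "https://staging.myapp.com",
  "authStrategy": {
    "type": "cookies",
    "cookies": [
      {
        "name": "session_id",
        "value": "pre-authenticated-session-value",
        "domain": "staging.myapp.com"
      }
    ]
  }
}
```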
Link your GitHub/Bitbucket repository for the highest quality test generation:
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/discover-tests \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"source": "git",
"repoUrl": "https://github.com/myorg/myapp.git",
"branch": "main",
"sourcePaths": ["src/app", "src/pages", "src/components"],
"environmentId": "env-xyz"
}'

Generated assertions also catch rendering bugs such as undefined or [object Object] leaking into the DOM.

Without repository access, generate tests by crawling your live application:
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/discover-tests \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"source": "crawl",
"baseUrl": "https://staging.myapp.com",
"environmentId": "env-xyz",
"maxDepth": 3,
"maxPages": 80
}'

✅ Best Practice: Repository-based generation produces higher-quality tests and is 10x faster. It typically generates 40-100 production-ready tests in 2-5 minutes.
In addition to automatic generation, you can also create custom test cases via API:
curl -X POST https://ztester.zavecoder.com/api/v1/test-cases \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"name": "Complete checkout with coupon",
"description": "Verify discount calculation and order creation",
"purpose": "Critical path: checkout with promo code",
"tags": ["checkout", "payment", "critical"],
"steps": [
{
"action": "navigate",
"target": "/products",
"description": "Go to products page"
},
{
"action": "click",
"target": ".product:first-child button:has-text(\"Add to Cart\")",
"description": "Add first product to cart"
},
{
"action": "click",
"target": "[aria-label=\"Cart\"]",
"description": "Open cart"
},
{
"action": "click",
"target": "button:has-text(\"Checkout\")",
"description": "Proceed to checkout"
},
{
"action": "fill",
"target": "#coupon-code",
"value": "SAVE20",
"description": "Enter coupon code"
},
{
"action": "click",
"target": "button:has-text(\"Apply\")",
"description": "Apply coupon"
},
{
"action": "assert",
"target": ".discount-amount",
"value": "contains:$",
"description": "Verify discount applied"
},
{
"action": "click",
"target": "button:has-text(\"Complete Order\")",
"description": "Submit order"
},
{
"action": "wait",
"target": ".order-confirmation",
"description": "Wait for confirmation"
}
],
"expectedOutcomes": [
"Discount is calculated correctly",
"Order is created in database",
"Confirmation page shows order number"
]
}'

| Action | Description |
|---|---|
| navigate | Go to URL — target = full URL or path |
| click | Click element — target = CSS selector (supports :has-text()) |
| fill | Fill an input — target = selector, value = text to enter |
| assertVisible | Assert element is visible — target = selector |
| assertText | Assert element contains text — target = selector, value = expected text |
| assertNotText | Assert page does not contain text — catches 404 pages, React render bugs (undefined, [object Object]) |
| assertValue | Assert an input's current value — target = input selector, value = expected text (useful after reload for state-persistence checks) |
| assertUrl | Assert current URL contains a path segment — value = expected substring |
| assertCount | Assert at least N elements exist — target = selector, value = minimum count |
| reload | Reload the current page and wait for it to settle — used in state-persistence tests |
| wait | Wait a fixed duration — value = milliseconds (e.g. "1500") |
| waitForSelector | Wait until element appears — target = selector |
| select | Select a dropdown option — target = selector, value = option value or label |
| hover | Hover over element — target = selector |
| press | Press keyboard key — value = key name (e.g. "Enter", "Escape") |
| clickIfExists | Click element only if it exists — won't fail if absent (for optional UI states) |
| screenshot | Capture a screenshot — value = filename (optional) |
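Combining these actions, a hypothetical state-persistence check might fill a field, save, reload, and assert the value survived. The selectors below are invented purely for illustration:

```json
[
  { "action": "fill", "target": "#display-name", "value": "Test User", "description": "Set display name" },
  { "action": "click", "target": "button:has-text(\"Save\")", "description": "Save settings" },
  { "action": "reload", "description": "Reload the page" },
  { "action": "assertValue", "target": "#display-name", "value": "Test User", "description": "Value persisted after reload" }
]
```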
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"testCaseId": "test-123",
"environmentId": "env-xyz"
}'

Run multiple tests in parallel (up to 200 tests, batched in groups of 15):
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs/execute-batch \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"environmentId": "env-xyz",
"testCaseIds": ["test-1", "test-2", "test-3"],
"tags": ["critical"]
}'

Response:

{
"batchRunId": "run-456",
"status": "running",
"totalTests": 42,
"estimatedDurationMs": 180000
}

curl https://ztester.zavecoder.com/api/v1/test-runs/run-456 \
  -H "Authorization: Bearer zt_your_key"

💡 CI/CD Integration: Use batch execution to run all tests tagged critical on every deployment. Tests run in parallel and complete in ~3-5 minutes for 50 tests.
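In CI it is convenient to map the polled status field onto shell exit codes. The helper below is an illustrative sketch (not part of zTester): the terminal status names are taken from the responses shown above, and anything else is treated as still-running.

```shell
#!/bin/sh
# Map a zTester run status string onto a CI-friendly exit code.
# The code is echoed (not returned) so the helper itself never "fails":
#   passed → 0 (success), failed/error → 1 (fail the build),
#   anything else (queued/running/unknown) → 2 (keep polling).
status_to_exit() {
  case "$1" in
    passed)        echo 0 ;;
    failed|error)  echo 1 ;;
    *)             echo 2 ;;
  esac
}

echo "passed  -> $(status_to_exit passed)"
echo "failed  -> $(status_to_exit failed)"
echo "running -> $(status_to_exit running)"
```

In a pipeline you would feed it the polled status, e.g. `status_to_exit "$(curl -s ... | jq -r '.status')"`, and exit the job with the result once it is no longer 2.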
When tests fail due to UI changes (new selectors, button text changes), you can provide feedback and zTester will automatically fix and re-verify the tests:
curl -X POST https://ztester.zavecoder.com/api/v1/test-runs/feedback \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"runId": "run-789",
"testCaseId": "test-123",
"stepNumber": 3,
"issue": "selector_not_found",
"details": "Button text changed from \"Submit\" to \"Save Changes\"",
"suggestedFix": {
"newSelector": "button:has-text(\"Save Changes\")"
},
"autoApply": true,
"verifyFix": true
}'

Response:

{
"feedbackId": "fb-101",
"fixApplied": true,
"verificationRunId": "run-790",
"verificationStatus": "passed",
"message": "Test updated and verified successfully"
}

✅ Self-Healing Tests: With autoApply: true and verifyFix: true, tests automatically adapt to UI changes, and the fix is verified before being saved.
Identify tests with inconsistent pass/fail patterns:
curl https://ztester.zavecoder.com/api/v1/projects/proj-abc/insights \
  -H "Authorization: Bearer zt_your_key"

Response:

{
"flakyTests": [
{
"testCaseId": "test-555",
"testName": "Login workflow",
"flakeRate": 0.23,
"totalRuns": 87,
"failures": 20,
"commonErrors": ["Timeout waiting for dashboard"]
}
],
"summary": {
"totalFlakyTests": 3,
"highestFlakeRate": 0.23
}
}

Before re-running discovery, check whether incremental mode is possible (10x faster):
curl -X POST https://ztester.zavecoder.com/api/v1/projects/proj-abc/incremental-analyze \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"repoPath": "/tmp/repo-clone",
"sourcePaths": ["src/app", "src/components"]
}'

A typical GitHub Actions workflow for automated testing on every PR:
name: E2E Tests
on: [pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    env:
      ZTESTER_API_KEY: ${{ secrets.ZTESTER_API_KEY }}
      PROJECT_ID: proj-abc
    steps:
      - name: Generate Tests (if source code changed)
        run: |
          curl -X POST https://ztester.zavecoder.com/api/v1/projects/${PROJECT_ID}/discover-tests \
            -H "Authorization: Bearer ${ZTESTER_API_KEY}" \
            -H "Content-Type: application/json" \
            -d '{"source": "git", "branch": "${{ github.head_ref }}", "environmentId": "env-staging"}'
      - name: Run Critical Path Tests
        id: run_tests
        run: |
          RESPONSE=$(curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs/execute-batch \
            -H "Authorization: Bearer ${ZTESTER_API_KEY}" \
            -H "Content-Type: application/json" \
            -d '{"projectId": "proj-abc", "environmentId": "env-staging", "tags": ["critical"]}')
          BATCH_RUN_ID=$(echo "$RESPONSE" | jq -r '.batchRunId')
          echo "batch_run_id=$BATCH_RUN_ID" >> "$GITHUB_OUTPUT"
      - name: Wait for Results
        run: |
          BATCH_RUN_ID=${{ steps.run_tests.outputs.batch_run_id }}
          for i in {1..60}; do
            RESPONSE=$(curl -s https://ztester.zavecoder.com/api/v1/test-runs/$BATCH_RUN_ID \
              -H "Authorization: Bearer ${ZTESTER_API_KEY}")
            STATUS=$(echo "$RESPONSE" | jq -r '.status')
            if [ "$STATUS" = "completed" ]; then
              PASSED=$(echo "$RESPONSE" | jq -r '.passedCount')
              TOTAL=$(echo "$RESPONSE" | jq -r '.totalTests')
              echo "Tests completed: $PASSED/$TOTAL passed"
              if [ "$PASSED" != "$TOTAL" ]; then
                exit 1
              fi
              exit 0
            fi
            sleep 10
          done
          echo "Timeout waiting for tests"
          exit 1

Receive real-time notifications when test runs complete (configure in dashboard):
{
"event": "test_run.completed",
"runId": "run-123",
"projectId": "proj-abc",
"status": "passed",
"duration_ms": 45000,
"passedCount": 42,
"failedCount": 0
}

Link repositories to enable automatic test regeneration on code changes. Configure via the dashboard or API:
curl -X POST https://ztester.zavecoder.com/api/v1/github/link-repo \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"projectId": "proj-abc",
"repoFullName": "myorg/myapp",
"installationId": 12345,
"defaultBranch": "main"
}'

Tag tests with critical, smoke, or regression to run different suites.

⚠️ Important: For large-scale testing (> 500 tests/day), contact support for enterprise limits.
Verified Routes (auto-populated): On first source analysis discovery, zTester auto-populates verified_routes by mapping entity slugs to their discovered URL paths (e.g., customers → /dashboard/customers).
- GET /projects?id=ID — check the verified_routes field
- PATCH /projects?id=ID — pass your corrected verified_routes object

Auth Discovery (auto-populated): Source analysis automatically detects your app's authentication setup from the codebase — login page URL, auth library (NextAuth, Supabase, Clerk, etc.), form field selectors, and OAuth providers.
- GET /projects?id=ID — the discovered_auth field shows what was detected
- auth_strategy is auto-populated with the login URL and selectors
- PATCH /environments?id=ID to complete the auth setup

// Example discovered_auth (auto-populated on project)
{
"loginUrl": "/sign-in",
"authLibrary": "supabase-auth",
"authType": "email_password",
"formSelectors": {
"emailSelector": "input[type=\"email\"]",
"passwordSelector": "input[type=\"password\"]",
"submitSelector": "button[type=\"submit\"]"
},
"successRedirect": "/dashboard",
"providers": ["credentials", "google"]
}
// Complete auth setup by adding credentials to environment:
// PATCH /environments?id=env-123
{
"authStrategy": {
"type": "email_password",
"loginUrl": "/sign-in",
"credentials": {
"email": "test@example.com",
"password": "testpassword123"
}
}
}

Manage test environments (staging, production, etc.)
Create and manage test cases
Execute tests and retrieve results
Three Ways to Execute Tests:
- testCaseId in request body → returns 200 OK with immediate results
- testCaseIds array in request body → returns 202 Accepted with async execution
- projectId in request body → returns 202 Accepted with async execution

Note: environmentId is optional for all three methods — if omitted, the project's default environment is used.
Async Parallel Execution: When using projectId or testCaseIds, the API returns 202 Accepted immediately and runs tests in the background. Poll the poll_url to check progress and get results.
Poll GET /api/v1/test-runs?id=BATCH_ID until status is passed, failed, or error.

Example: 300 tests → split into 2 runner chunks (200 + 100) → each chunk split into batches of 10 → up to 4 batches run in parallel. Each batch authenticates once, so ~30 logins total instead of 300 — roughly 10x faster than sequential execution.
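The chunking arithmetic in the example above can be sketched as a small shell calculation. The chunk size (200) and batch size (10) are the figures from that example; ceiling division models "one more partial chunk/batch for the remainder":

```shell
#!/bin/sh
# Sketch of the batch math: tests split into runner chunks of 200,
# each chunk split into batches of 10, and each batch logs in once.
# Ceiling division without floating point: ceil(a/b) = (a + b - 1) / b.
ceil_div() { echo $(( ($1 + $2 - 1) / $2 )); }

TESTS=300
CHUNK_SIZE=200
BATCH_SIZE=10

CHUNKS=$(ceil_div "$TESTS" "$CHUNK_SIZE")    # 300 tests -> 2 chunks
BATCHES=$(ceil_div "$TESTS" "$BATCH_SIZE")   # 300 tests -> 30 batches, so ~30 logins
echo "chunks=$CHUNKS batches(logins)=$BATCHES"
```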
🛡️ Graceful Auth Failure: If authentication fails or is not configured, zTester intelligently handles the situation:
- Tests targeting protected paths (/admin, /dashboard, /portal, /staff, etc.) are marked as skipped
- Skipped results include failureDetails with skipReason: "auth_required_but_unavailable"

Example: 50 tests, no auth configured → 30 public tests run successfully, 20 protected tests are skipped with a clear message: "Test skipped: No authentication configured for environment \"Staging\". Configure auth in the environment settings to test protected pages."
Real-time test execution via Server-Sent Events (SSE)
Direct Runner Access: For external integrations and custom frontends, you can call the runner directly for real-time streaming of test execution events. This bypasses the web API proxy and streams events directly from the test runner.
- Endpoint: https://ztrunner.zavecoder.com/execute-stream
- Authenticate with your zTester API key (prefixed zt_)

⚠️ Important Notes:
- The stream sends ": heartbeat\n\n" every 15 seconds to keep the connection alive
- Watch for healing events if selectors fail and get auto-fixed
- The stream ends with a complete or error event

Available Actions:
navigate, click, fill, type, select, hover, wait, waitForSelector, assertVisible, assertText, assertUrl, press, screenshot

Full documentation: GitHub - RUNNER_STREAMING_API.md
Submit feedback on test results to improve test quality over time
Feedback loop: After a test run, submit feedback on individual test results. zTester uses this to:
- Submit selector_fix with a suggested selector, and zTester updates the test case + triggers a verification re-run automatically
- Submit action_fix to change a step's action (e.g., fill → select for dropdowns)
- Confidence scores adjust: correct gets boosted, false_positive gets lowered
- Tests marked flaky are auto-retried (up to 3 attempts) in future runs

| Verdict | Meaning | Auto-Action |
|---|---|---|
| selector_fix | Selector is wrong or fragile | Updates test case + triggers verification re-run |
| action_fix | Wrong action type (e.g., fill instead of select) | Updates step action type in test case |
| false_positive | Test fails but app works fine | Lowers confidence score by 0.1 |
| flaky | Passes sometimes, fails other times | Increments flaky count, auto-retried in future runs |
| correct | Test result is accurate | Boosts confidence score by 0.05 |
| not_applicable | Test doesn't apply to this app | Lowers confidence score by 0.1 |
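The confidence adjustments in the table can be sketched as a tiny scoring helper. This is illustrative only: the ±0.1 and +0.05 deltas come from the table above, while clamping the score to [0, 1] is our assumption.

```shell
#!/bin/sh
# Apply the per-verdict confidence delta from the verdict table,
# clamping to [0, 1] (the clamping is an assumption, not documented).
adjust_confidence() {
  score="$1"; verdict="$2"
  awk -v s="$score" -v v="$verdict" 'BEGIN {
    if (v == "correct")
      s += 0.05
    else if (v == "false_positive" || v == "not_applicable")
      s -= 0.1
    # selector_fix / action_fix / flaky leave the score unchanged here
    if (s > 1) s = 1
    if (s < 0) s = 0
    printf "%.2f\n", s
  }'
}

echo "correct:        $(adjust_confidence 0.80 correct)"
echo "false_positive: $(adjust_confidence 0.80 false_positive)"
```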
Connect GitHub repos to projects for automatic source analysis
Prerequisite: Install the zTester GitHub App on your GitHub account first. The App grants access to your repositories. Then use these endpoints to link repos to projects.
Connect Bitbucket repos to projects for automatic source analysis
Prerequisite: Connect your Bitbucket workspace via OAuth first (Settings → Integrations → Connect Bitbucket). Then use these endpoints to link repos to projects.
Automatically crawl your app and generate test cases
Three discovery modes:
Hybrid mode is triggered automatically when you use "source": "git" with an environmentId or appUrl that points to a live (non-localhost) URL. No extra parameters needed.
GitHub/Bitbucket integration: If your project has a linked repository, git source analysis works with just {"source": "git"} — the repo URL, branch, and access token are resolved automatically.
Polling for status: The POST response returns a discoveryId. To check progress, poll GET /projects/{projectId}/discoveries/{discoveryId} (not /discover-tests). The discovery endpoints are listed below.
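The poll-until-terminal pattern (used for both discoveries and batch runs) can be factored into a reusable shell helper. This is a sketch, not part of zTester; the status-producing command is injected so the helper can be exercised without network access — in real use it would be a curl + jq pipeline against the discovery or test-run endpoint.

```shell
#!/bin/sh
# Poll a status-producing command until it reports a terminal state.
#   $1 = command that prints the current status
#   $2 = max attempts
#   $3 = sleep seconds between attempts
poll_until_done() {
  cmd="$1"; max="$2"; delay="$3"
  i=0
  while [ "$i" -lt "$max" ]; do
    status=$(eval "$cmd")
    case "$status" in
      completed|passed|failed|error) echo "$status"; return 0 ;;
    esac
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timeout"; return 1
}

# Stand-in for `curl ... | jq -r '.status'`: reports "running" twice,
# then "completed" (purely for illustration).
COUNT_FILE=$(mktemp)
echo 0 > "$COUNT_FILE"
fake_status() {
  n=$(cat "$COUNT_FILE"); echo $((n + 1)) > "$COUNT_FILE"
  if [ "$n" -ge 2 ]; then echo "completed"; else echo "running"; fi
}

poll_until_done fake_status 10 0
rm -f "$COUNT_FILE"
```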
🔐 Auto-Detected Login Selectors: During discovery, zTester automatically detects login forms and extracts exact selectors for:
- Email/username fields — via type="email", name="email", or autocomplete attributes
- Password fields — via input[type="password"]
- Submit buttons — via button[type="submit"] or text-based selectors
- Supported auth types: email_password and username_password

Use these selectors to configure your environment's authStrategy — no manual selector hunting required. Check the detectedLoginSelectors field in the discovery response.
Immediately executable: Auto-discovered tests are saved with status: "active" and are ready to run right away — no manual review step required. You can run them via the test-runs endpoint as soon as discovery completes.
How zTester combines code analysis with live browser crawling for the best results
Hybrid discovery is the recommended mode for generating high-quality tests. It runs automatically when you use "source": "git" with a live appUrl or environmentId.
| Mode | Understands Workflows | Real Selectors | Test Quality |
|---|---|---|---|
| Crawl only | No | Yes | Surface-level smoke tests (page loads, basic clicks) |
| Git only | Yes | No (guessed) | Deep workflow tests, but selectors may not match DOM |
| Hybrid | Yes | Yes | Deep workflow tests with verified real selectors |
# Option 1: With environment (resolves URL + auth automatically)
curl -X POST ".../discover-tests" -d '{"source": "git", "environmentId": "env-uuid"}'
# Option 2: With explicit URL + auth
curl -X POST ".../discover-tests" -d '{
"source": "git",
"appUrl": "https://your-app.com",
"auth": {"type": "email_password", "credentials": {"email": "...", "password": "..."}}
}'
# Option 3: With ActionExplorer for deep workflow tests
curl -X POST ".../discover-tests" -d '{
"source": "git",
"environmentId": "env-uuid",
"actionExplorer": {"enabled": true}
}'
# Git-only mode (no DOM crawl) — omit appUrl and environmentId
curl -X POST ".../discover-tests" -d '{"source": "git"}'

Generate 5-8 step business workflow tests that click, fill, submit, and verify.
ActionExplorer goes beyond basic discovery by actually interacting with your live application. It clicks action buttons (Add, Create, Edit, Delete), observes DOM mutations (dialogs, toasts, table changes), fills forms with contextual test data, submits, and generates meaningful assertions.
| Type | Example | How Detected |
|---|---|---|
| Post-action state | "Success toast appeared after save" | MutationObserver detects new toast/alert element |
| Data persistence | "New role appears in roles table" | Table row count increased after submit |
| Validation | "Required field error on empty submit" | Error message element appeared |
Since ActionExplorer creates real data in your app, you can configure snapshot/restore endpoints on your environment to automatically roll back changes after discovery:
# Configure safety endpoints on your environment
curl -X PATCH ".../environments?id=env-uuid" \
-H "Authorization: Bearer zt_your_key" \
-H "Content-Type: application/json" \
-d '{
"testSnapshotUrl": "https://your-api.com/test/snapshot",
"testRestoreUrl": "https://your-api.com/test/restore"
}'
# Your snapshot endpoint should:
# POST /test/snapshot → return { "snapshotId": "snap-123" }
# POST /test/restore → accept { "snapshotId": "snap-123" }
# POST /test/reset → reset to known state (simpler alternative)

How zTester handles repeated source analysis runs
Both crawl and git source analysis modes generate a content hash for each test based on project, type, and normalized name. Re-running discovery updates existing tests instead of creating duplicates.
| Scenario | Result |
|---|---|
| Same test regenerated | Updated in place (steps, confidence refreshed) |
| New route/form/page discovered | New test case created |
| Route removed or page no longer accessible | Old test left untouched (user may have edited) |
| User edited an auto-generated test name | Edited test kept, new version also created |
Automatically re-analyze source code on every push
If your project is connected to a GitHub repository with auto_analyze_on_push enabled, zTester automatically:
This means your test suite stays in sync with your codebase automatically. No manual API calls needed after the initial GitHub App setup.
Setup: Install the zTester GitHub App, link a repository to your project, and enable "Auto-analyze on push" in project settings.
Git source analysis works with private repos
With GitHub App or Bitbucket OAuth (recommended): If your project has a linked GitHub or Bitbucket repository, the access token is fetched automatically. Just send {"source": "git"} and it works — even for private repos. zTester checks GitHub first, then Bitbucket.
Manual token: Pass a git.token in the API request. The correct auth format is used automatically based on the provider:
| Provider | Token Type |
|---|---|
| GitHub | ghp_* (Personal Access Token) or fine-grained token |
| Bitbucket | App password or repository access token |
| GitLab | Personal access token or project token |
Manually provided tokens are only used for the shallow clone and are never stored. GitHub App and Bitbucket OAuth tokens are auto-refreshed and cached securely.
| Scenario | Result |
|---|---|
| {"source":"git"} + linked repo (GitHub or Bitbucket) | Auto-fetches URL, branch, and token |
| {"source":"git","git":{"url":"..."}} + linked repo | Uses your URL, auto-fetches token |
| {"source":"git","git":{"url":"...","token":"..."}} | Uses both provided values |
| {"source":"git"} + no linked repo | Returns 400 with GitHub/Bitbucket connect URLs |
Create and manage API keys programmatically
Session Auth Required: These endpoints use your browser session cookie for authentication, not API keys. They are intended for use from the zTester dashboard or programmatic session-based access. You cannot use an API key to manage other API keys.
| Scope | Permissions |
|---|---|
| read | List projects, environments, test cases, test runs, discoveries |
| run_tests | Execute test runs (single or batch) |
| write_tests | Create, update, and delete test cases |
| generate | Trigger AI test generation and auto-discovery |
| admin | Create/delete projects, manage environments (including auth credentials), trigger discovery, link repos |
name: E2E Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Run zTester E2E Tests
        run: |
          response=$(curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
            -H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{"projectId": "${{ secrets.ZTESTER_PROJECT_ID }}"}' \
            -s)
          echo "$response" | jq .
          status=$(echo "$response" | jq -r '.status')
          passed=$(echo "$response" | jq -r '.passed')
          failed=$(echo "$response" | jq -r '.failed')
          echo "Results: $passed passed, $failed failed"
          if [ "$status" != "passed" ]; then
            echo "❌ E2E Tests failed!"
            exit 1
          fi
          echo "✅ All E2E tests passed!"

pipeline {
    agent any
    environment {
        ZTESTER_API_KEY = credentials('ztester-api-key')
    }
    stages {
        stage('Run Tests') {
            steps {
                script {
                    def response = sh(
                        script: """
                            curl -X POST https://ztester.zavecoder.com/api/v1/test-runs \
                                -H "Authorization: Bearer ${ZTESTER_API_KEY}" \
                                -H "Content-Type: application/json" \
                                -d '{"testCaseId": "test-1"}' -s
                        """,
                        returnStdout: true
                    ).trim()
                    def json = readJSON text: response
                    if (json.status != 'passed') {
                        error("Tests failed: ${json.status}")
                    }
                    echo "Tests passed!"
                }
            }
        }
    }
}

#!/bin/bash
API_KEY="zt_your_api_key_here"
PROJECT_ID="abc-123"
BASE_URL="https://ztester.zavecoder.com/api/v1"
echo "🚀 Running all E2E tests for project..."
# Run ALL tests for the project (parallel execution)
RESULT=$(curl -s -X POST "$BASE_URL/test-runs" \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
  -d "{\"projectId\": \"$PROJECT_ID\"}")
STATUS=$(echo "$RESULT" | jq -r '.status')
PASSED=$(echo "$RESULT" | jq -r '.passed')
FAILED=$(echo "$RESULT" | jq -r '.failed')
TOTAL=$(echo "$RESULT" | jq -r '.total_tests')
DURATION=$(echo "$RESULT" | jq -r '.duration_ms')
URL=$(echo "$RESULT" | jq -r '.url')
echo ""
echo "📊 Results: $PASSED/$TOTAL passed ($DURATION ms)"
echo ""
# Show failed tests if any
if [ "$FAILED" -gt 0 ]; then
echo "❌ Failed tests:"
echo "$RESULT" | jq -r '.results[] | select(.status != "passed") | " - \(.testCaseName): \(.error)"'
echo ""
fi
echo "🔗 View details: $URL"
if [ "$STATUS" = "passed" ]; then
echo "✅ All tests passed!"
exit 0
else
echo "❌ $FAILED test(s) failed!"
exit 1
fi

Integrate zTester directly into your admin portal or ops dashboard using the JavaScript SDK or the embeddable iframe widget. Both support live SSE streaming during test execution.
Framework-agnostic TypeScript/JavaScript SDK. Load via <script> tag or ES module import.
<script src="https://ztester.zavecoder.com/sdk/ztester.js"></script>
<script>
const zt = new ZTester.ZTester({ apiKey: 'zt_your_key' });
// Run all project tests with live streaming
zt.runProjectStream('your-project-id', {
onTestStart: ({ testCaseName, index, total }) => {
console.log(`[${index + 1}/${total}] Running: ${testCaseName}`);
},
onStepComplete: ({ stepIndex, status }) => {
console.log(` Step ${stepIndex + 1}: ${status}`);
},
onTestComplete: ({ testCaseId, status }) => {
console.log(` Result: ${status}`);
},
}).then(results => {
const passed = results.filter(r => r.status === 'passed').length;
console.log(`Done: ${passed}/${results.length} passed`);
});
</script>

import { ZTester } from '@ztester/sdk';
const zt = new ZTester({ apiKey: process.env.ZTESTER_KEY });
// Single test with streaming
const result = await zt.runTestStream('test-uuid', {
environmentId: 'env-uuid', // optional — uses project default
onStepStart: ({ description }) => updateUI(`Running: ${description}`),
onStepComplete: ({ status }) => updateUI(`Step ${status}`),
onHealing: ({ originalSelector, healedSelector }) =>
updateUI(`Self-healed: ${originalSelector} → ${healedSelector}`),
});
console.log(`Test ${result.status} in ${result.durationMs}ms`);
// Discovery with polling
const disc = await zt.discoverTests('project-uuid', { source: 'git' });
const discovery = await zt.pollDiscovery('project-uuid', disc.discoveryId, {
interval: 3000,
onProgress: (d) => updateUI(`${d.progress?.testsGenerated} tests found...`),
});
// Submit feedback (selector fix + action fix)
await zt.submitFeedback('test-run-id', [
{
testCaseId: 'test-uuid-1',
verdict: 'selector_fix',
suggestedFix: {
stepNumber: 2,
currentSelector: 'button:nth-of-type(2)',
suggestedSelector: "button:has-text('Submit')",
},
},
{
testCaseId: 'test-uuid-2',
verdict: 'action_fix',
suggestedFix: {
stepNumber: 4,
newAction: 'select',
newValue: '{{first-option}}',
},
notes: 'Field is a <select> dropdown, not a text input',
},
]);

| Method | Description |
|---|---|
| runTestStream(testCaseId, opts) | Execute single test with live SSE streaming. Returns final result. |
| runProjectStream(projectId, opts) | Run all project tests sequentially with streaming + progress callbacks. |
| runTest(testCaseId, envId?) | Execute single test (non-streaming). Waits for completion. |
| runProject(projectId, envId?) | Batch run all project tests (non-streaming, parallel execution). |
| discoverTests(projectId, opts) | Trigger auto-discovery. Returns discoveryId for polling. |
| pollDiscovery(projectId, discoveryId) | Poll until discovery completes. Optional progress callback. |
| submitFeedback(runId, items[]) | Submit bulk feedback. Selector and action fixes auto-applied. |
| getTestCases(projectId) | List all test cases for a project. |
| getTestRuns(params) | Query test run history by project, test case, or run ID. |
| getFeedback(params) | Query submitted feedback by project, test run, or verdict. |
The runTestStream() method receives live events during test execution:
| Event | Callback | Data |
|---|---|---|
| start | onEvent | testCaseId, totalSteps, baseUrl |
| step_start | onStepStart | stepIndex, totalSteps, action, target, description |
| step_complete | onStepComplete | stepIndex, status, durationMs, error?, screenshot? |
| healing | onHealing | stepIndex, originalSelector, healedSelector, confidence |
| complete | onComplete | status, durationMs, stepResults[], selfHealingActions[] |
| error | onError | message, step? |
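The stream above uses standard SSE framing: event: and data: lines grouped into frames separated by blank lines, with comment lines (the ": heartbeat" keep-alives) starting with a colon. A minimal, illustrative shell sketch of extracting the data payload for a chosen event type — not an official client, just the framing rules applied:

```shell
#!/bin/sh
# Read SSE frames from stdin; print the data payload of every frame
# whose event type matches $1. Comment lines (": heartbeat") are skipped.
sse_data_for() {
  want="$1"
  event=""; data=""
  while IFS= read -r line; do
    case "$line" in
      ":"*)       ;;                           # SSE comment (heartbeat) — ignore
      "event: "*) event="${line#event: }" ;;   # remember the frame's event type
      "data: "*)  data="${line#data: }" ;;     # remember the frame's payload
      "")         [ "$event" = "$want" ] && echo "$data"
                  event=""; data="" ;;         # blank line ends the frame
    esac
  done
}

# Feed it a sample stream; prints the payload of the "complete" frame.
printf 'event: step_complete\ndata: {"stepIndex":0,"status":"passed"}\n\n: heartbeat\n\nevent: complete\ndata: {"status":"passed"}\n\n' \
  | sse_data_for complete
```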
Drop a pre-built test dashboard into any page with a single script tag. No React or build tools needed.
<!-- Add this anywhere in your admin portal -->
<div id="ztester-widget"></div>
<script src="https://ztester.zavecoder.com/sdk/embed.js"></script>
<script>
ZTesterEmbed.ZTesterEmbed.init({
container: '#ztester-widget',
apiKey: 'zt_your_api_key',
projectId: 'your-project-id',
theme: 'dark', // 'dark' or 'light'
features: ['run', 'discover', 'feedback', 'history'], // which tabs to show
height: '700px',
onReady: () => console.log('Widget loaded'),
onTestComplete: (result) => {
console.log('Test finished:', result.testCaseName, result.status);
},
});
</script>

Security: The API key is passed to the iframe via postMessage, never in the URL. The widget communicates results back to your page via postMessage events.
| URL | Format | Use Case |
|---|---|---|
| /sdk/ztester.js | UMD | <script> tag, sets window.ZTester |
| /sdk/ztester.esm.js | ES Module | import from bundlers (webpack, vite, etc.) |
| /sdk/embed.js | IIFE | <script> tag, sets window.ZTesterEmbed |
Use the POST /test-runs/feedback endpoint to build an automated triage agent that analyzes test failures and submits feedback programmatically. This closes the feedback loop: tests run → failures analyzed → fixes applied → tests re-run.
Test Run (batch)
│
├── All passed → done
│
└── Some failed
│
▼
AI Triage Agent
│
├── Classify failure type (selector? timing? logic?)
│
├── Generate fix (new selector, retry hint, etc.)
│
└── POST /test-runs/feedback
│
├── verdict: "selector_fix" → auto-applies fix + re-runs
├── verdict: "action_fix" → changes step action type (fill → select)
├── verdict: "flaky" → increments flaky count, auto-retried next run
├── verdict: "false_positive"→ lowers confidence score
└── verdict: "correct" → boosts confidence score

When a test fails on a click/fill step, send the error message + page HTML to an LLM to generate a corrected selector.
import OpenAI from 'openai';
const openai = new OpenAI();
async function triageFailedTest(testRun: any, testCase: any) {
// Find the failed step
const failedStep = testRun.stepResults?.find(
(s: any) => s.status === 'failed'
);
if (!failedStep) return null;
// Check if it's a selector issue (timeout, not found)
const isSelector = failedStep.error?.includes('Timeout') ||
failedStep.error?.includes('not found') ||
failedStep.error?.includes('waiting for selector');
if (!isSelector) return null;
// Ask LLM to classify and suggest fix
const response = await openai.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{
role: 'system',
content: `You are a Playwright selector expert. Given a failed selector
and the page context, suggest a working CSS selector.
Return JSON: { "selector": "...", "confidence": 0.0-1.0, "reason": "..." }`
}, {
role: 'user',
content: `Failed selector: ${failedStep.target}
Error: ${failedStep.error}
Step action: ${failedStep.type} (step #${failedStep.stepNumber})
Page URL: ${testRun.lastUrl || 'unknown'}
Test name: ${testCase.name}`
}],
response_format: { type: 'json_object' },
});
const fix = JSON.parse(response.choices[0].message.content || '{}');
if (!fix.selector || fix.confidence < 0.6) return null;
// Submit feedback with selector fix
await fetch('https://ztester.zavecoder.com/api/v1/test-runs/feedback', {
method: 'POST',
headers: {
'Authorization': 'Bearer zt_your_key',
'Content-Type': 'application/json',
},
body: JSON.stringify({
testRunId: testRun.id,
testCaseId: testCase.id,
verdict: 'selector_fix',
feedbackType: 'selector',
suggestedFix: {
stepNumber: failedStep.stepNumber,
currentSelector: failedStep.target,
suggestedSelector: fix.selector,
},
notes: fix.reason,
}),
});
// The API auto-applies the fix and triggers a verification re-run
return fix;
}

No LLM needed. Compare recent run history to detect flaky tests (pass/fail alternation).
async function detectFlakyTests(projectId: string, apiKey: string) {
const BASE = 'https://ztester.zavecoder.com/api/v1';
const headers = {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json',
};
// Get recent test runs for the project
const runsRes = await fetch(
`${BASE}/test-runs?projectId=${projectId}&limit=100`,
{ headers }
);
const { results } = await runsRes.json();
// Group by test case, check for pass/fail alternation
const byTestCase = new Map<string, string[]>();
for (const run of results) {
const existing = byTestCase.get(run.testCaseId) || [];
existing.push(run.status);
byTestCase.set(run.testCaseId, existing);
}
const flakyFeedback = [];
for (const [testCaseId, statuses] of byTestCase) {
if (statuses.length < 3) continue;
// Count alternations (pass→fail or fail→pass)
let alternations = 0;
for (let i = 1; i < statuses.length; i++) {
if (statuses[i] !== statuses[i - 1]) alternations++;
}
const flakyScore = alternations / (statuses.length - 1);
if (flakyScore > 0.4) {
// More than 40% alternation = likely flaky
flakyFeedback.push({
testCaseId,
verdict: 'flaky',
notes: `Flaky score: ${(flakyScore * 100).toFixed(0)}% (${alternations} alternations in ${statuses.length} runs)`,
});
}
}
if (flakyFeedback.length > 0) {
// Bulk submit flaky feedback
const latestRunId = results[0]?.id;
await fetch(`${BASE}/test-runs/feedback`, {
method: 'POST',
headers,
body: JSON.stringify({
testRunId: latestRunId,
results: flakyFeedback,
}),
});
console.log(`Marked ${flakyFeedback.length} tests as flaky`);
}
}

For failures where the selector exists but the assertion fails, use a vision model to analyze the screenshot and DOM snapshot.
async function analyzeWithVision(
testRun: any,
testCase: any,
screenshot: Buffer, // from step_results[].screenshot
domSnapshot: string // from step_results[].html
) {
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [{
role: 'system',
content: `Analyze this test failure. The test expected certain behavior but
the page shows something different. Classify as:
- "false_positive": App works correctly, test expectation is wrong
- "selector_fix": Element exists but with different selector
- "flaky": Looks like a timing/loading issue
- "correct": App genuinely has a bug
Return JSON: { "verdict": "...", "reason": "...", "suggestedSelector?": "..." }`
}, {
role: 'user',
content: [
{
type: 'text',
text: `Test: ${testCase.name}
Failed step: ${testRun.stepResults?.find((s: any) => s.status === 'failed')?.description}
Error: ${testRun.error}
DOM snippet (around failed element): ${domSnapshot.substring(0, 2000)}`
},
{
type: 'image_url',
image_url: {
url: `data:image/png;base64,${screenshot.toString('base64')}`
}
}
]
}],
response_format: { type: 'json_object' },
});
const analysis = JSON.parse(
response.choices[0].message.content || '{}'
);
// Submit the AI's verdict
await fetch('https://ztester.zavecoder.com/api/v1/test-runs/feedback', {
method: 'POST',
headers: {
'Authorization': 'Bearer zt_your_key',
'Content-Type': 'application/json',
},
body: JSON.stringify({
testRunId: testRun.id,
testCaseId: testCase.id,
verdict: analysis.verdict,
feedbackType: 'assertion',
notes: `[AI] ${analysis.reason}`,
...(analysis.suggestedSelector && {
suggestedFix: {
stepNumber: testRun.stepResults?.findIndex(
(s: any) => s.status === 'failed'
) + 1,
suggestedSelector: analysis.suggestedSelector,
},
}),
}),
});
return analysis;
}

Add an AI triage step to your CI/CD pipeline. Failed tests get analyzed and feedback is submitted automatically.
name: E2E Tests with AI Triage
on: [push]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run zTester E2E Tests
id: tests
run: |
RESULT=$(curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
-H "Content-Type: application/json" \
-d '{"projectId": "${{ secrets.ZTESTER_PROJECT_ID }}"}')
echo "result=$RESULT" >> $GITHUB_OUTPUT
STATUS=$(echo $RESULT | jq -r '.status')
echo "status=$STATUS" >> $GITHUB_OUTPUT
- name: AI Triage Failed Tests
if: steps.tests.outputs.status != 'passed'
run: |
node scripts/ai-triage.js \
--api-key "${{ secrets.ZTESTER_API_KEY }}" \
--openai-key "${{ secrets.OPENAI_API_KEY }}" \
--result '${{ steps.tests.outputs.result }}'
- name: Re-run After Fixes Applied
if: steps.tests.outputs.status != 'passed'
run: |
# Wait for selector fixes to be applied
sleep 10
# Re-run only the failed tests
FAILED=$(echo '${{ steps.tests.outputs.result }}' | \
jq -r '[.results[] | select(.status != "passed") | .testCaseId] | join(",")')
for TEST_ID in $(echo $FAILED | tr ',' ' '); do
curl -s -X POST https://ztester.zavecoder.com/api/v1/test-runs \
-H "Authorization: Bearer ${{ secrets.ZTESTER_API_KEY }}" \
-H "Content-Type: application/json" \
-d "{\"testCaseId\": \"$TEST_ID\"}"
done

Use this logic to classify failures before calling an LLM (saves API costs):
function classifyFailure(stepResult: any): string {
const error = stepResult.error || '';
// 1. Timeout / element not found → likely selector issue
if (error.includes('Timeout') ||
error.includes('waiting for selector') ||
error.includes('not found')) {
return 'selector_fix';
}
// 2. Wrong action type (fill on select, selectOption on input)
if (error.includes('selectOption') ||
error.includes('Not a SELECT element') ||
error.includes('Element is not a <select>') ||
error.includes('is not an <input>')) {
return 'action_fix';
}
// 3. Navigation error → might be flaky (network)
if (error.includes('net::ERR_') ||
error.includes('Navigation timeout') ||
error.includes('frame was detached')) {
return 'flaky';
}
// 4. Assertion failure → needs deeper analysis (use LLM)
if (error.includes('expect') ||
error.includes('assertion') ||
error.includes('toBeVisible') ||
error.includes('toHaveText')) {
return 'needs_llm_analysis';
}
// 5. Permission / auth errors → likely environment issue
if (error.includes('403') ||
error.includes('401') ||
error.includes('Unauthorized')) {
return 'not_applicable';
}
// 6. Default: send to LLM for classification
return 'needs_llm_analysis';
}

| Signal | Likely Verdict | LLM Needed? |
|---|---|---|
| Timeout / selector not found | selector_fix | Yes (to generate new selector) |
| Wrong action type (fill on select, etc.) | action_fix | No (detect from error message) |
| Network error / frame detached | flaky | No |
| Assertion mismatch | varies | Yes (screenshot + DOM analysis) |
| Auth / permission denied | not_applicable | No |
| Pass/fail alternation (>40%) | flaky | No (statistical) |
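The table reduces to a simple lookup that decides whether a failure needs an LLM pass at all; a sketch (verdict strings match the table, the helper name is illustrative):

```typescript
// Whether a given verdict needs an LLM pass, per the decision table above.
// Verdict strings match the table; this helper name is illustrative.
const llmNeeded: Record<string, boolean> = {
  selector_fix: true,        // LLM generates the replacement selector
  action_fix: false,         // detectable from the error message alone
  flaky: false,              // network signals / statistical detection
  needs_llm_analysis: true,  // assertion mismatches need screenshot + DOM
  not_applicable: false,     // auth/permission issues are environmental
};

function needsLlm(verdict: string): boolean {
  return llmNeeded[verdict] ?? true; // unknown verdicts default to LLM triage
}

console.log(needsLlm('action_fix')); // false
```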
Feedback drives improvement: Every feedback submission updates your test suite. Selector fixes are applied immediately and verified with a re-run. Action fixes (e.g., fill → select for dropdowns) are applied immediately. Flaky tests are auto-retried (up to 3 attempts). Confidence scores adjust based on correctness feedback, which influences future test generation and prioritization.
API requests are currently not rate-limited, but we may introduce limits in the future.
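Even without enforced limits today, a defensive client can retry with exponential backoff when a 429 or transient 5xx appears; a minimal sketch (the retry helper is illustrative, not part of an official SDK):

```typescript
// Illustrative fetch wrapper with exponential backoff, for when
// rate limits (429) or transient server errors (5xx) appear.
async function fetchWithBackoff(
  url: string,
  init: RequestInit,
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    // Success or a non-retryable client error: return immediately
    if (res.status !== 429 && res.status < 500) return res;
    if (attempt >= maxRetries) return res;
    // 500ms, 1s, 2s, ... plus a little jitter to avoid thundering herds
    const delayMs = 500 * 2 ** attempt + Math.random() * 100;
    await new Promise((r) => setTimeout(r, delayMs));
  }
}
```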
The API uses standard HTTP status codes:
| Code | Meaning |
|---|---|
| 200 | Success |
| 201 | Resource created |
| 202 | Accepted - async operation started (discovery, test run) |
| 400 | Bad request - check your parameters |
| 401 | Unauthorized - invalid or missing API key |
| 403 | Forbidden - insufficient permissions |
| 404 | Resource not found |
| 500 | Server error - please try again or contact support |
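A typical client-side check against these codes might look like the following sketch (the `checkResponse` helper is illustrative; the error-body fields follow the JSON error format this API returns):

```typescript
// Sketch: branch on the status codes above and surface the error body.
interface ApiError { error: string; message: string }

async function checkResponse(res: Response): Promise<unknown> {
  if (res.status === 200 || res.status === 201) return res.json();
  if (res.status === 202) return res.json(); // async op started; poll for completion
  // 4xx/5xx: the API returns a JSON body with details
  const body = (await res.json().catch(() => null)) as ApiError | null;
  const detail = body ? `${body.error}: ${body.message}` : res.statusText;
  throw new Error(`zTester API ${res.status} - ${detail}`);
}
```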
Error responses include a JSON body with details:
{
"error": "Bad Request",
"message": "testCaseId is required"
}

Need help? Contact us at support@zavecoder.com