The change
Zero manual work. Full automation.
Ops tickets
"Can someone provision a staging DB?"
Shared staging
Everyone steps on each other's changes
Manual teardown
Orphaned environments pile up and cost money
Schema drift
Invisible until it breaks production
Zero tickets
Every environment provisioned automatically via API
Clone-per-PR
Every pull request gets its own isolated database
Auto-teardown
Clones destroy themselves on merge or TTL expiry
Drift detection
Caught automatically in every PR pipeline
Fully Automated
Webhook in. Clone out. Teardown on merge.
Connect your CI tool once. Every PR gets an isolated database automatically. Every merge triggers automatic cleanup. No scripts to maintain.
PR opens — clone created automatically
A webhook fires on PR open. Guepard provisions a production-identical clone in seconds. No human involved.
CI runs against real data
Your test suite runs against a live clone with real production data. Not mocks. Not stale snapshots. The clone URL is injected as an environment variable.
PR merges — clone destroyed automatically
On merge or close, a webhook triggers automatic teardown. Zero orphaned environments. Zero idle costs. Zero cleanup scripts.
Deployment validated — promote with confidence
Schema changes tested against production data before they touch production. Every deployment is a safe deployment.
# .github/workflows/ci.yml
- name: Clone DB for PR
  id: clone                # referenced below as steps.clone
  uses: guepard/clone-action@v2
  with:
    source: production
    name: pr-${{ github.event.number }}
    token: ${{ secrets.GUEPARD_TOKEN }}
    auto_teardown: on_merge

- name: Run tests
  env:
    DATABASE_URL: ${{ steps.clone.outputs.url }}
  run: npm test

pipeline {
  agent any
  stages {
    stage('Clone DB') {
      steps {
        sh '''
          gfs init --provider postgres --version 17
          gfs import --file prod-snapshot.dump
          gfs commit -m "CI build #${BUILD_NUMBER}"
          gfs checkout -b "ci-${BUILD_NUMBER}"
        '''
      }
    }
    stage('Test') {
      steps {
        sh 'DATABASE_URL=$(gfs status --url) npm test'
      }
    }
    stage('Cleanup') {
      steps { sh 'gfs compute stop' }
    }
  }
}

# Create a clone — auto-teardown on completion
curl -s -X POST https://api.guepard.run/v1/clones \
  -H "Authorization: Bearer $GUEPARD_TOKEN" \
  -H "Content-Type: application/json" \
  -o response.json \
  -d "{
    \"source\": \"production\",
    \"name\": \"ci-job-$CI_JOB_ID\",
    \"auto_teardown\": true
  }"

# Response includes the connection string
# → { "url": "postgres://ci-job-842.guepard.run/db" }
export DATABASE_URL=$(jq -r '.url' response.json)
npm test

Plug into any automation tool
Features
Every database operation, fully automated
Webhook-driven provisioning
PR opened? Clone created. PR merged? Clone destroyed. Connect GitHub, GitLab, or Bitbucket webhooks and never provision manually again.
API & CLI first
Every operation is an API call or CLI command. gfs clone, gfs teardown. Script it in bash, call it from Jenkins, pipe it anywhere.
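As a sketch of scripting this in plain shell: the `gfs clone` and `gfs teardown` commands come from this page, but the `--source`/`--name` flags are assumptions inferred from the other snippets here, so the command is printed rather than executed.

```shell
#!/bin/sh
# Derive a stable clone name from a branch so a later teardown can find it.
clone_for_branch() {
  branch="$1"
  # Slashes aren't valid in most resource names; slug-ify the branch.
  name="ci-$(echo "$branch" | tr '/' '-')"
  # Printed, not executed: the flags below are illustrative assumptions.
  echo "gfs clone --source production --name $name"
}

clone_for_branch "feature/login"
# → gfs clone --source production --name ci-feature-login
```

Because every step is a plain command, the same function drops into a Jenkins `sh` step or a Makefile unchanged.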
Clone-per-PR
Every pull request gets its own production-identical database. No shared staging. No conflicts. Automatic isolation for every branch.
Automatic teardown
Set a TTL, trigger on merge, or let CI handle it. Environments self-destruct when done. No orphaned databases. No surprise bills.
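A hedged sketch of requesting a TTL at creation time — the `ttl_seconds` field is an assumption, not a documented parameter. The practical point is that the JSON body must be built so shell variables actually expand; single-quoted JSON would send the literal text `$CI_JOB_ID` to the API.

```shell
#!/bin/sh
# Build the request body with printf so $CI_JOB_ID expands.
# "ttl_seconds" is a hypothetical field name, shown for illustration only.
CI_JOB_ID=842
payload=$(printf '{"source":"production","name":"ci-job-%s","ttl_seconds":%s}' \
  "$CI_JOB_ID" 3600)
echo "$payload"
# → {"source":"production","name":"ci-job-842","ttl_seconds":3600}

# Then POST it, e.g.:
#   curl -X POST https://api.guepard.run/v1/clones \
#     -H "Authorization: Bearer $GUEPARD_TOKEN" -d "$payload"
```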
5-second provisioning
Any database, any size. Cloned from production via copy-on-write in under 5 seconds. No pg_dump. No restore. No waiting.
Zero ops tickets
Developers self-serve environments through CI or the API. No more "can someone spin up a DB?" in Slack. No more blocked pipelines.
Zero production access
Developers get realistic data without production credentials. PII masking ensures compliance while keeping data shapes real.
Runs anywhere
Docker-based engine. Self-host on your infra or use the managed cloud. Works wherever your CI runner lives.
Works with every CI tool
GitHub Actions, GitLab CI, Jenkins, CircleCI, Buildkite, Argo CD. If it can call a REST API, it works with Guepard.
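For GitLab CI, the REST call shown earlier drops straight into a job script. This fragment is an illustrative sketch, not official documentation — the job name and image are assumptions; `$CI_JOB_ID` is GitLab's predefined job variable and `$GUEPARD_TOKEN` would be a masked CI/CD variable.

```yaml
# Illustrative .gitlab-ci.yml fragment (job name and image are assumptions)
test:
  image: node:20
  script:
    # Create the clone and capture its connection string
    - |
      export DATABASE_URL=$(curl -s -X POST https://api.guepard.run/v1/clones \
        -H "Authorization: Bearer $GUEPARD_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"source\": \"production\", \"name\": \"ci-job-$CI_JOB_ID\", \"auto_teardown\": true}" \
        | jq -r '.url')
    - npm test
```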
“We eliminated every ops ticket related to database provisioning. CI handles everything now. Engineers never wait for a database.”
Sarah R.
Engineering Lead, Series B SaaS company