This repository demonstrates a comprehensive CI/CD pipeline for dbt projects using GitHub Actions. The pipeline provides safe, efficient, and isolated testing of dbt changes while maintaining production data integrity.
Slim CI/CD Benefits:
- Faster Testing: Only tests modified models and dependencies (minutes vs hours)
- Cost Efficiency: Reduces warehouse usage during testing
- Production Safety: Tests against real production data structures
- Isolated Testing: Each PR gets its own schema to prevent conflicts
- Incremental Deployment: Only rebuilds changed models in production
Trigger: Pull requests to the `main` branch
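The trigger block itself isn't shown in the snippets below; a minimal sketch of what it might look like in the workflow file:

```yaml
on:
  pull_request:
    branches: [main]
```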
Purpose: Tests dbt changes in isolated schemas before merging
Steps:
- Setup: Install dbt and dependencies

  ```yaml
  - name: Install dbt
    run: pip install -r dbt-requirements.txt
  ```

  Installs `dbt-snowflake` and the other required packages.
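  The contents of `dbt-requirements.txt` aren't shown in this README; a plausible minimal version might be (exact versions are an assumption):

  ```text
  # dbt-requirements.txt (sketch; pin versions that match your project)
  dbt-core~=1.8.0
  dbt-snowflake~=1.8.0
  ```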
- Download Manifest: Get the latest production manifest for state comparison

  ```yaml
  - name: Download latest manifest artifact
    shell: bash
    run: |
      curl -s -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
        "https://api.github.com/repos/${{ github.repository }}/actions/artifacts" \
        -o artifacts.json
      artifact_id=$(grep -A20 '"name": "dbt-manifest"' artifacts.json | grep '"id":' | head -n1 | sed 's/[^0-9]*\([0-9]\+\).*/\1/')
      curl -sL -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
        "https://api.github.com/repos/${{ github.repository }}/actions/artifacts/$artifact_id/zip" \
        -o artifact.zip
      unzip -q artifact.zip -d state
  ```

  Downloads the latest production manifest to enable state comparison and Slim CI.
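  The grep/sed pipeline above is sensitive to the exact formatting of the API's JSON response. Since `jq` is preinstalled on GitHub-hosted runners, a more robust way to extract the artifact ID might be:

  ```bash
  # Select the first (most recent) artifact named "dbt-manifest";
  # assumes the API lists artifacts newest-first, as it does by default.
  artifact_id=$(jq -r '[.artifacts[] | select(.name == "dbt-manifest")][0].id' artifacts.json)
  ```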
- Generate Schema: Create a unique schema name from the PR number and commit SHA

  ```yaml
  - name: Generate schema ID
    run: echo "SCHEMA_ID=${{ github.event.pull_request.number }}__${{ github.sha }}" >> $GITHUB_ENV
  ```

  Creates a unique schema name like `pr_123__abc123def456` to isolate PR testing.
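  The `schema_id` var has to be consumed somewhere to take effect; a sketch of how the `pr` target in `profiles.yml` might build the schema name from it (env var names other than `SNOWFLAKE_DATABASE` are assumptions):

  ```yaml
  # profiles.yml (sketch): the pr target derives its schema from the schema_id var
  my_project:
    target: pr  # prod output omitted in this sketch
    outputs:
      pr:
        type: snowflake
        account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
        user: "{{ env_var('SNOWFLAKE_USER') }}"
        password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
        database: "{{ env_var('SNOWFLAKE_DATABASE') }}"
        warehouse: "{{ env_var('SNOWFLAKE_WAREHOUSE') }}"
        schema: "pr_{{ var('schema_id') }}"
  ```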
- Run Tests: Execute dbt build with state comparison (only modified models)

  ```yaml
  - name: Run dbt build
    run: |
      if [ -f "./state/manifest.json" ]; then
        cp ./state/manifest.json ./manifest.json
        dbt build -s 'state:modified+' --defer --state ./ --target pr --vars "schema_id: $SCHEMA_ID"
      else
        dbt build --target pr --vars "schema_id: $SCHEMA_ID"
      fi
  ```

  Uses Slim CI to build only the modified models and their downstream dependents (`state:modified+`), while `--defer` resolves references to unmodified upstream models against production. For example, if only `orders` changed in a `stg_orders -> orders -> orders_summary` chain, the build covers `orders` and `orders_summary` and reads `stg_orders` from production. If no production manifest is available (e.g., on the very first run), the workflow falls back to a full build.
- Cleanup: Remove temporary files

  ```yaml
  - name: Cleanup
    run: rm -rf state/ artifact.zip artifacts.json
  ```

  Removes the downloaded artifacts and other temporary files.
Trigger: Pushes to the `main` branch
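A minimal sketch of the corresponding trigger:

```yaml
on:
  push:
    branches: [main]
```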
Purpose: Deploys tested changes to production environment
Steps:
- Setup: Install dbt and dependencies

  ```yaml
  - name: Install dbt
    run: pip install -r dbt-requirements.txt
  ```

  Installs `dbt-snowflake` and the other required packages.
- Download Manifest: Get the previous deployment manifest for incremental builds

  ```yaml
  - name: Download latest manifest artifact
    shell: bash
    run: |
      curl -s -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
        "https://api.github.com/repos/${{ github.repository }}/actions/artifacts" \
        -o artifacts.json
      artifact_id=$(grep -A20 '"name": "dbt-manifest"' artifacts.json | grep '"id":' | head -n1 | sed 's/[^0-9]*\([0-9]\+\).*/\1/')
      curl -sL -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
        "https://api.github.com/repos/${{ github.repository }}/actions/artifacts/$artifact_id/zip" \
        -o artifact.zip
      unzip -q artifact.zip -d state
  ```

  Downloads the previous production manifest to enable incremental deployment.
- Deploy: Run dbt build with state comparison (only changed models)

  ```yaml
  - name: Deploy to production
    run: |
      if [ -f "./state/manifest.json" ]; then
        cp ./state/manifest.json ./manifest.json
        dbt build -s 'state:modified+' --state ./ --target prod
      else
        dbt build --target prod
      fi
  ```

  Deploys only the modified models and their dependents to production using state comparison, falling back to a full build when no previous manifest exists.
- Upload Artifact: Save the new manifest for future deployments

  ```yaml
  - name: Upload new manifest artifact
    uses: actions/upload-artifact@v4
    with:
      name: dbt-manifest
      path: ./target/manifest.json
      retention-days: 7
  ```

  Saves the new production manifest for future incremental deployments.
Trigger: Pull request closure (merged, closed, or abandoned)
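On GitHub Actions, the `closed` activity type fires whether the PR was merged or closed without merging. A minimal sketch of the trigger, including one plausible way to expose the PR number used by the steps below (the `PR_NUM` derivation is an assumption, not shown in this repo's snippets):

```yaml
on:
  pull_request:
    types: [closed]

# Assumption: PR_NUM is made available to all steps via a workflow-level env block
env:
  PR_NUM: ${{ github.event.pull_request.number }}
```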
Purpose: Automatically cleans up temporary CI schemas
Steps:
- Setup: Install dbt and dependencies

  ```yaml
  - name: Install dbt
    run: pip install -r dbt-requirements.txt
  ```

  Installs `dbt-snowflake` and the other required packages.
- Cleanup: Drop all schemas created for the specific PR

  ```yaml
  - name: Cleanup PR schemas
    run: |
      dbt run-operation drop_pr_schemas \
        --target pr \
        --args '{"database": "'"$SNOWFLAKE_DATABASE"'", "schema_prefix": "pr", "pr_number": "'"$PR_NUM"'"}'
  ```

  Drops all temporary schemas created during PR testing to free up resources (a sketch of the macro follows below).
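  The `drop_pr_schemas` macro itself isn't included in these snippets. A minimal sketch of what such a macro might look like on Snowflake, assuming the `pr_<number>__<sha>` naming convention used above:

  ```sql
  -- macros/drop_pr_schemas.sql (sketch, not necessarily the repo's actual macro)
  {% macro drop_pr_schemas(database, schema_prefix, pr_number) %}
    {# Find every schema created for this PR, e.g. PR_123__ABC123DEF456.      #}
    {# Note: "_" is a single-character wildcard in ILIKE, which is acceptable #}
    {# for this naming scheme.                                                #}
    {% set find_schemas %}
      select schema_name
      from {{ database }}.information_schema.schemata
      where schema_name ilike '{{ schema_prefix }}_{{ pr_number }}__%'
    {% endset %}
    {% for row in run_query(find_schemas).rows %}
      {% do run_query('drop schema if exists ' ~ database ~ '.' ~ row[0] ~ ' cascade') %}
      {% do log('Dropped schema ' ~ row[0], info=true) %}
    {% endfor %}
  {% endmacro %}
  ```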
- Logging: Record cleanup operations and results

  ```yaml
  - name: Log cleanup results
    run: echo "✅ Cleanup completed for PR #$PR_NUM"
  ```

  Records successful cleanup completion for audit purposes.
The workflows in this repository are designed for Snowflake but can be easily adapted for other dbt-supported platforms. Here's what you need to change:
- Update dbt Requirements - Replace `dbt-snowflake` with your platform's adapter (e.g., `dbt-bigquery`, `dbt-postgres`, `dbt-redshift`, `dbt-databricks`)
- Update Environment Variables - Change the environment variables to match your data platform's connection requirements
- Update profiles.yml - Modify your profiles configuration to use the new environment variables and platform type (see the sketch after this list)
- Update Cleanup Operations - Modify the cleanup macro to work with your platform's resource management approach
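For example, moving to Postgres would mostly mean swapping the adapter in `dbt-requirements.txt` and rewriting the connection block. A hypothetical sketch (profile and env var names are illustrative):

```yaml
# profiles.yml (hypothetical Postgres variant)
my_project:
  target: prod
  outputs:
    prod:
      type: postgres
      host: "{{ env_var('POSTGRES_HOST') }}"
      port: 5432
      user: "{{ env_var('POSTGRES_USER') }}"
      password: "{{ env_var('POSTGRES_PASSWORD') }}"
      dbname: "{{ env_var('POSTGRES_DB') }}"
      schema: analytics
```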
The core CI/CD logic remains the same - only the connection details and resource management need to be updated for your specific platform.