Running TYPO3 in Docker and automating deployments through GitLab CI is a solid setup that gives you reproducible builds, clean environment separation, and a fully automated delivery pipeline. This post walks through how the CI/CD pipeline for this project is structured — from the first push to a live container on the server.
The Setup
The project runs two Docker containers: one for the web server and one for PHP. Both are built inside the GitLab pipeline and deployed to a dedicated server over SSH. The pipeline has three stages:
stages:
- test
- build
- deploy
Both the build and deploy jobs are restricted to the master branch, so only reviewed, merged code ever reaches the server. A dedicated GitLab runner tagged typo3blogs handles all execution.
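In `.gitlab-ci.yml`, that restriction plus the runner pinning is only a few lines per job. A minimal sketch of such a job skeleton (the job name and script line are illustrative, not the project's actual file):

```yaml
build:
  stage: build
  tags:
    - typo3blogs   # run only on the dedicated runner
  only:
    - master       # never build unreviewed branches
  script:
    - echo "build steps go here"
```

Newer GitLab versions favor `rules:` over `only:`, but the effect for a simple branch restriction is the same.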
A pipeline that runs is worth more than a perfect pipeline that doesn't exist yet. Start simple; add complexity only when the pain of not having it becomes real.
Handling SSH Authentication
Every job that touches the server needs SSH access. Rather than configuring this in each job individually, the pipeline sets it up once in before_script so it's available everywhere:
before_script:
- which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
The private key is stored as a GitLab CI variable and never touches the repository. The tr -d '\r' is worth calling out explicitly: SSH keys edited or copied on Windows silently gain carriage returns that break ssh-add with cryptic, unhelpful errors. Stripping them here prevents that entirely.
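The failure mode is easy to reproduce locally. A minimal sketch, using stand-in key content rather than a real key:

```shell
# Simulate a key that picked up Windows CRLF line endings
printf -- '-----BEGIN KEY-----\r\nAAAA\r\n-----END KEY-----\r\n' > key_crlf
# ssh-add would choke on this; stripping the carriage returns repairs it
tr -d '\r' < key_crlf > key_clean
# Verify no carriage returns remain
if grep -q "$(printf '\r')" key_clean; then echo "still dirty"; else echo "clean"; fi
```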
The most dangerous bugs in a deployment pipeline are the ones that fail silently. A carriage return, a missing newline, a variable that looks right but isn't — always verify your secrets actually work before you need them at 11pm on a Friday.
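One cheap pre-flight check is to confirm the key material actually parses before a deployment depends on it. A sketch using a throwaway key generated on the spot (file names are illustrative):

```shell
# Generate a throwaway key just for the demo
ssh-keygen -t ed25519 -N '' -f ./ci_key -q
# If the private key is intact, ssh-keygen can derive its public half;
# a corrupted key (e.g. stray carriage returns) fails here immediately
ssh-keygen -y -f ./ci_key > derived.pub && echo "key is valid"
```

Running the same check against the decoded CI variable in a pipeline job catches a broken secret long before it surfaces as a vague SSH connection error.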
Building the Docker Images
The build stage compiles both Docker images and injects the current commit SHA as the image tag. This means every image is directly traceable to the exact code that produced it:
- docker build -t serie3/php_container:$CI_COMMIT_SHA --no-cache ./env/prod/PHP/
- docker build -t serie3/typo3blogs_web:$CI_COMMIT_SHA --no-cache ./env/prod/Web/
Using --no-cache ensures every build starts clean, with no stale layers and no surprises from cached intermediate states, at the cost of somewhat longer build times.
Transferring Images Without a Registry
Instead of pushing images to a container registry and pulling them on the server, the pipeline exports them as .tar archives and transfers them directly via SCP. This keeps the entire deployment within a controlled private network with no external dependencies:
- docker save -o php_image.tar serie3/php_container:$CI_COMMIT_SHA
- scp php_image.tar username@server:~/pathtotpyo3/
- docker save -o web_image.tar serie3/typo3blogs_web:$CI_COMMIT_SHA
- scp web_image.tar username@server:~/pathtotpyo3/
The server loads both images from the transferred archives:
- ssh username@server 'docker load -i ~/pathtotpyo3/php_image.tar'
- ssh username@server 'docker load -i ~/pathtotpyo3/web_image.tar'
No registry authentication, no rate limits, no outbound internet required on the server. Cleanup of old .tar files and unused Docker images is handled by a cronjob on the server, so the pipeline itself stays lean.
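The cleanup cronjob itself can be a single crontab entry. A hypothetical sketch (the schedule, retention window, and prune filter are assumptions, not taken from the project):

```shell
# Sundays at 03:00: delete transferred archives older than 7 days,
# then remove images no longer used by any container
0 3 * * 0  find ~/pathtotpyo3 -name '*.tar' -mtime +7 -delete; docker image prune -af --filter "until=168h"
```

Keeping this on the server rather than in the pipeline means a failed cleanup never blocks a deployment.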
The Deploy Stage
Once the images are loaded, the deploy stage brings up the updated containers and runs the final TYPO3 setup steps:
- ssh username@server "cd ~/pathtotpyo3/ && docker-compose up -d --no-deps php_container"
- ssh username@server "cd ~/pathtotpyo3/ && docker-compose up -d --no-deps typo3blogs_web"
- ssh username@server "docker exec php_container sh -c 'cp -Rvf /var/www/code/* /var/www/site/web/'"
- ssh username@server "docker exec php_container composer install --no-dev --working-dir /var/www/site/web/"
The --no-deps flag tells Compose to recreate only the targeted service without starting or restarting its linked dependencies, so the database container is left untouched. The updated docker-compose.yml, with the new image tag baked in via sed during the build, is transferred alongside the images, so the compose file and the running containers always stay in sync.
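The sed step deserves a concrete sketch, since it is what keeps the compose file aligned with the freshly built images. This assumes the compose template carries a placeholder tag (IMAGE_TAG here is an assumption; the real template may differ):

```shell
# A minimal compose template with a placeholder image tag
cat > docker-compose.yml <<'EOF'
services:
  php_container:
    image: serie3/php_container:IMAGE_TAG
  typo3blogs_web:
    image: serie3/typo3blogs_web:IMAGE_TAG
EOF
# During the build stage, bake in the commit SHA
CI_COMMIT_SHA=abc1234
sed -i "s/IMAGE_TAG/$CI_COMMIT_SHA/g" docker-compose.yml
# Both services now reference the exact image built in this pipeline
grep 'image:' docker-compose.yml
```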
The Full Picture
Put together, every push to master triggers a fully automated cycle: build two Docker images, tag them with the commit SHA, ship them to the server, load them, restart the containers, and deploy the TYPO3 codebase. No manual steps, no external services required, and every deployed state is traceable back to a specific commit.
Complexity is not a proxy for quality. The best pipeline is the one that ships reliably.