I wrote a CircleCI config that builds a Docker image of my app, runs it, runs the tests, and pushes the image to Docker Hub.
But I can't figure out how to run tests that require a database.
Here is the part of my config that runs the tests:
executors:
  docker-executor:
    environment:
      DOCKER_BUILDKIT: "1"
    docker:
      - image: cimg/base:2021.12

jobs:
  run-tests:
    executor: docker-executor
    steps:
      - setup_remote_docker:
          version: 20.10.7
          docker_layer_caching: true
      - run:
          name: Load archived test image
          command: docker load -i /tmp/workspace/testimage.tar
      - run:
          name: Start Container
          command: |
            docker create --name app_container << pipeline.parameters.app_image >>:testing
            docker start app_container
      - run:
          name: Run Tests
          command: |
            docker exec -it app_container ./vendor/bin/phpunit --log-junit testresults.xml --colors=never
How do I add a MySQL service here, and how do I connect it to my app container so I can run the tests that require a database?
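Note: with setup_remote_docker, containers started through the docker CLI run on a separate remote VM, so a service container declared on the executor would not be reachable from app_container. One sketch of a workaround is to run MySQL as a sibling container on the remote Docker host and join both containers to a user-defined network. The DB_HOST/DB_PORT variables and the credentials below are assumptions about how the app reads its database settings:

      - run:
          name: Start MySQL
          command: |
            # shared network so the app and the database can resolve each other
            docker network create test-net
            # hypothetical credentials/database name; align with your test config
            docker run -d --name mysql --network test-net \
              -e MYSQL_ROOT_PASSWORD=root \
              -e MYSQL_DATABASE=app_test \
              mysql:8.0
      - run:
          name: Start Container
          command: |
            # the app can now reach the database at host "mysql", port 3306
            docker create --name app_container --network test-net \
              -e DB_HOST=mysql -e DB_PORT=3306 \
              << pipeline.parameters.app_image >>:testing
            docker start app_container

The test suite may still need a short wait or retry loop while MySQL finishes initializing before the first connection attempt.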
Related
I have this pipeline to execute:
stages:
  - build-gitlab
  - deploy-uat

build:
  image: node:14-alpine
  stage: build-gitlab
  services:
    - docker
  before_script:
    - docker login $CI_REGISTRY_URL -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
  script:
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_FRONTEND_REGISTRY_URL
    - docker push $CI_FRONTEND_REGISTRY_URL

deploy:
  image:
    name: bitnami/kubectl:latest
  stage: deploy-uat
  before_script:
    - kubectl config set-cluster deploy-cluster --server="$K8S_SERVER" --insecure-skip-tls-verify
    - kubectl config set-credentials gitlab --token=$(echo $K8S_TOKEN | base64 -d)
    - kubectl config set-context deploy-cluster --cluster=deploy-cluster --namespace=ns-frontend-dev --user=gitlab
    - kubectl config use-context deploy-cluster
  script:
    - envsubst < deploy.tmpl > deploy.yaml
    - kubectl apply -f deploy.yaml
Initially I registered a runner for my GitLab with the shell executor. Docker is installed on that runner, which is why the build stage completed successfully. But since I want to use multiple Docker images, as you can see in my gitlab-ci.yaml file, the shell executor is not the appropriate one.
I saw this documentation about GitLab executors, but it is not explicit enough.
I registered a new runner with the docker executor, and then I got this result:
Preparing the "docker" executor
Using Docker executor with image node:14-alpine ...
Starting service docker:latest ...
Pulling docker image docker:latest ...
Using docker image sha256:0f8d12a73562adf6588be88e37974abd42168017f375a1e160ba08a7ee3ffaa9 for docker:latest with digest docker@sha256:75026b00c823579421c1850c00def301a6126b3f3f684594e51114c997f76467 ...
Waiting for services to be up and running (timeout 30 seconds)...
*** WARNING: Service runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0 probably didn't start properly.
Health check error:
service "runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0-wait-for-service" health check: exit code 1
Health check container logs:
2023-01-18T15:50:31.037166246Z FATAL: No HOST or PORT found
and the deploy part did not succeed. Which is the right executor to choose among:
docker, shell, ssh, kubernetes, custom, parallels, virtualbox, docker+machine, docker-ssh+machine, instance, docker-ssh
and how do I use it?
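For a pipeline like this, the docker executor is the usual choice. The failing health check above ("FATAL: No HOST or PORT found") typically means the service container exposed no port, which is what happens when plain docker is used as the service instead of docker:dind. A sketch of the standard Docker-in-Docker setup follows; the URL, token, tag, and versions are placeholders:

# one-time registration on the host running gitlab-runner
gitlab-runner register \
  --url https://gitlab.example.com/ \
  --registration-token "<project registration token>" \
  --executor docker \
  --docker-image docker:20.10 \
  --docker-privileged \
  --tag-list build_docker

# .gitlab-ci.yml: declare the dind daemon as the service
build:
  image: docker:20.10
  stage: build-gitlab
  tags:
    - build_docker
  services:
    - docker:20.10-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""   # disable TLS so dind listens on plain 2375
  script:
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .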
We have a Spring Boot project with integration tests that run locally. To run them on GitLab CI, we have a Docker Compose file which spins up a few images, one of which hosts a GraphQL engine on 8080.
The integration tests work fine locally, but when we run them as part of the pipeline, they fail to connect to the GraphQL container, even though docker ps says it is up and running.
GitLab CI file - integration test:
integration-test:
  stage: test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker-compose --version
    - docker-compose -f docker-compose.yml up -d --force-recreate
    - docker-compose ps
    - docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' springboot-streamengine_metadata-db_1
    - docker logs springboot-streamengine_graphql-engine_1
  tags:
    - docker
  services:
    - alias: docker
      command: [ "dockerd", "--host=tcp://0.0.0.0:2375" ]
      name: docker:19.03-dind
  script:
    - ./gradlew integrationTest -Pgraphql_url=http://docker:8080/v1/graphql -Pmodelservice_url=docker:8010 -Ptimeseries_db_url=jdbc:postgresql://docker:5435/postgres --info
  after_script:
    - docker-compose down -v
Logs show the image is up (screenshot omitted).
Error while connecting to the GraphQL engine (screenshot omitted).
Has anyone seen this error before? We have played around with different versions of dind, and all report the same issue.
Any leads would be helpful.
TIA
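One thing worth checking, as a sketch rather than a confirmed fix: with Docker-in-Docker, the compose services run inside the dind service container, so the job container can only reach them through the docker alias on ports the compose file publishes. The image name and tag below are hypothetical:

# docker-compose.yml (excerpt)
services:
  graphql-engine:
    image: hasura/graphql-engine:v2.0.0   # hypothetical image/tag
    ports:
      - "8080:8080"   # published on the dind daemon, so reachable at http://docker:8080

A container that is merely up on the compose network, without a published port, is visible to docker ps but not addressable from the job container.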
I'm using GitLab CI/CD to build Docker images of our Node server.
I am wondering if there is a way to test that docker run of the image was OK.
We've had a few occasions where the Docker image builds but is missing some files/env variables, and it fails to start the server.
Is there any way to run the Docker image and test whether it starts up correctly in the CI/CD pipeline?
Cheers.
With GitLab you are able to use a docker runner.
When you use the docker runner rather than a shell runner, the job's image and its services have to start up first, and you should get an error if something fails.
Check these docs from GitLab:
This is a typical YAML example from those docs:
default:
  image:
    name: ruby:2.2
    entrypoint: ["/bin/bash"]
  services:
    - name: my-postgres:9.4
      alias: db-postgres
      entrypoint: ["/usr/local/bin/db-postgres"]
      command: ["start"]
  before_script:
    - bundle install

test:
  script:
    - bundle exec rake spec
As you can see, the test section is executed only after the image and its services are up, so you should not have to worry: GitLab should detect any errors when loading the image.
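To make that dependency explicit, the test job can address the service through its alias; the DATABASE_URL value here is an assumption about how the test suite reads its connection settings:

test:
  variables:
    # the postgres service is resolvable via its alias "db-postgres"
    DATABASE_URL: "postgres://postgres@db-postgres:5432/test_db"
  script:
    - bundle exec rake spec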
If you are doing it with the shell gitlab-runner, you should start the Docker image yourself, like this:
stages:
  - dockerStartup
  - build
  - test
  - deploy
  - dockerStop

job 0:
  stage: dockerStartup
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests

[...] # your jobs here

job 5:
  stage: dockerStop
  script: docker stop whatever
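If the worry is specifically that the server fails to start, the startup job can be extended into a small smoke test. A sketch, assuming the Node server listens on port 3000 and answers on / (adjust both to your app):

job 0:
  stage: dockerStartup
  script:
    - docker build -t my-docker-image .
    - docker run -d --name app -p 3000:3000 my-docker-image
    # give the server a moment to boot, then fail the job if it died
    - sleep 10
    - docker inspect -f '{{.State.Running}}' app | grep true
    - curl --fail http://localhost:3000/

The docker inspect check catches a container that exited immediately (missing files, bad env vars), while the curl check catches a process that is running but not serving.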
I tested a gitlab-runner on a virtual machine, and it worked perfectly. I followed this tutorial, specifically the "Use Docker-in-Docker executor" part:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
When I register a runner with exactly the same configuration on my dev server, the runner is triggered on commit, but I get a lot of errors:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-XXX-project-XX-concurrent-X-docker-X AS /runner-XXX-project-XX-concurrent-X-docker-X-wait-for-service/service (executor_docker.go:1337:1s)
DEPRECATION: this GitLab server doesn't support refspecs, gitlab-runner 12.0 will no longer work with this version of GitLab
$ docker info
error during connect: Get http://docker:2375/v1.39/info: dial tcp: lookup docker on MY.DNS.IP:53: no such host
ERROR: Job failed: exit code 1
I believe all these errors are due to the first warning. I tried to:
- add a second DNS server (8.8.8.8) to the machine: same error;
- add privileged = true manually in /etc/gitlab-runner/config.toml: same error, so it's not due to the privileged = true parameter;
- replace tcp://docker:2375 with tcp://localhost:2375: docker info then can't find a Docker daemon on the machine.
gitlab-ci.yml content:
image: docker:stable

stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

build-folder1:
  stage: build
  script:
    - docker build -t image1 folder1/
    - docker run --name docker1 -p 3001:5000 -d image1
  only:
    refs:
      - dev
    changes:
      - folder1/**/*

build-folder2:
  stage: build
  script:
    - docker build -t image2 folder2/
    - docker run --name docker2 -p 3000:3000 -d image2
  only:
    refs:
      - dev
    changes:
      - folder2/**/*
If folder1 of branch dev is modified, we build and run docker1; if folder2 of branch dev is modified, we build and run docker2.
Docker version on the dev server:
docker -v
Docker version 17.03.0-ce, build 3a232c8
gitlab-runner version on the dev server:
gitlab-runner -v
Version: 11.10.1
I will try to provide an answer for you, as I ran into and fixed this same problem when trying to run DinD.
This message:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
means that either you have not properly configured your runner, or it is not the one picked up by your gitlab-ci.yml file. You should be able to check the ID of the runner used on the job's log page in GitLab.
To start with, verify that you entered the gitlab-runner register command correctly, with the proper registration token.
Second, since you are setting up a specific runner manually, verify that you have given it some unique tag (e.g. build_docker) and that you reference that tag from your gitlab-ci.yml file. For example:
...
build-folder1:
  stage: build
  script:
    - docker build -t image1 folder1/
    - docker run --name docker1 -p 3001:5000 -d image1
  tags:
    - build_docker
...
That way it should work.
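For completeness, the resulting entry in /etc/gitlab-runner/config.toml should look roughly like this (a sketch; the name, URL, and token are placeholders). The privileged = true setting is what lets the docker:dind service start:

[[runners]]
  name = "dev-server-docker"
  url = "https://gitlab.example.com/"
  token = "<runner token>"
  executor = "docker"
  # the tag referenced from .gitlab-ci.yml is set at registration time, not here
  [runners.docker]
    image = "docker:stable"
    privileged = true
    volumes = ["/cache"]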
The issue I am experiencing with Wercker is that the specific linked services in my wercker.yml are not being linked to my main docker container.
I noticed this issue when my node app was not running on port 3001 after a successful Wercker deploy (output screenshot omitted).
Therefore I SSH'd into my server and into my docker container that was running after the Wercker deploy using:
docker exec -i -t <my-container-name> ./bin/bash
and found the following MongoDB error in my PM2 logs:
[MongoError: connect EHOSTUNREACH 172.17.0.7:27017
The strange fact is that both environment variables that I need from the respective services have been set (screenshots omitted):
Does anyone know why the service containers cannot be accessed from my main container even though their environment variables have been set?
The following is the wercker.yml file that I am using:
box: node
services:
  - id: mongo
  - id: redis
build:
  steps:
    - npm-install
deploy:
  steps:
    - npm-install
    - script:
        name: install pm2
        code: npm install pm2 -g
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: /
        ports: "3001"
        cmd: /bin/bash -c "cd /pipeline/source && pm2 start processes_prod.json --no-daemon"
        env: "MONGO_PORT_27017_TCP_ADDR"=$MONGO_PORT_27017_TCP_ADDR,"REDIS_PORT_6379_TCP_ADDR"=$REDIS_PORT_6379_TCP_ADDR
    - add-ssh-key:
        keyname: DIGITAL_OCEAN_KEY
    - add-to-known_hosts:
        hostname:
    - script:
        name: pull latest image
        code: ssh root@ docker pull /:latest
    - script:
        name: stop running container
        code: ssh root@ docker stop || echo 'failed to stop running container'
    - script:
        name: remove stopped container
        code: ssh root@ docker rm || echo 'failed to remove stopped container'
    - script:
        name: remove image behind stopped container
        code: ssh root@ docker rmi /:current || echo 'failed to remove image behind stopped container'
    - script:
        name: tag newly pulled image
        code: ssh root@ docker tag /:latest /:current
    - script:
        name: run new container
        code: ssh root@ docker run -d -p 8080:3001 --name /:current
    - script:
        name: env
        code: env
AFAIK the Wercker services are available only during the build process, not the deploy one. Mongo and Redis are persistent data stores, meaning they are not supposed to be reinstalled every time you deploy.
So make sure you manually set up Redis and Mongo in your deploy environment.
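In practice that means the "run new container" step has to inject the addresses of the externally provisioned stores itself. A sketch, where the angle-bracket values are placeholders for your own host, Mongo/Redis instances, and image names:

- script:
    name: run new container
    code: |
      ssh root@<your-host> docker run -d -p 8080:3001 \
        -e MONGO_PORT_27017_TCP_ADDR=<mongo-host> \
        -e REDIS_PORT_6379_TCP_ADDR=<redis-host> \
        --name app <registry>/<repo>:current

That way the app reads the same variable names it already expects, but they point at the long-lived stores instead of the ephemeral build-time service containers.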