I previously deployed my app from my local machine with:
> docker context create remote --docker "host=ssh://user@myhost"
> docker --context remote ps
> docker-compose --context remote build
> docker-compose --context remote up -d
This works, so all the Dockerfiles are correct.
Now I want to do the same in GitLab CI. This is my .gitlab-ci.yml file for building:
image: docker:19.03.12
services:
  - docker:dind
stages:
  - build
install_dependencies:
  stage: build
  before_script:
    - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "StrictHostKeyChecking no" > ~/.ssh/config
  script:
    - echo "Building deploy package"
    - echo "$NPMRC" > ~/.npmrc
    - apk add --no-cache docker-compose
    - docker context create remote --docker "host=ssh://user@myhost"
    - docker --context remote ps
    - docker context use remote
    - docker-compose --context remote build
    - echo "Build successful"
Everything goes well until docker-compose --context remote build, where the --context argument is not recognized, and I can't understand why.
$ docker context use remote
Current context is now "remote"
Warning: DOCKER_HOST environment variable overrides the active context. To use "remote", either set the global --context flag, or unset DOCKER_HOST environment variable.
remote
$ docker-compose --context remote build
Define and run multi-container applications with Docker.
Usage:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
-f, --file FILE Specify an alternate compose file
(default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name
(default: directory name)
--verbose Show more output
--log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi Do not print ANSI control characters
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to
--tls Use TLS; implied by --tlsverify
--tlscacert CA_PATH Trust certs signed only by this CA
--tlscert CLIENT_CERT_PATH Path to TLS certificate file
--tlskey TLS_KEY_PATH Path to TLS key file
--tlsverify Use TLS and verify the remote
--skip-hostname-check Don't check the daemon's hostname against the
name specified in the client certificate
--project-directory PATH Specify an alternate working directory
(default: the path of the Compose file)
--compatibility If set, Compose will attempt to convert keys
in v3 files to their non-Swarm equivalent
--env-file PATH Specify an alternate environment file
Commands:
build Build or rebuild services
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
ERROR: Job failed: exit code 1
To fix this, the docker-compose version needs to be at least 1.26.0, the first release that recognizes the --context flag; the docker-compose installed via apk here is older than that.
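One way to get a newer docker-compose in the job is to install it from PyPI instead of apk. This is only a sketch, not tested against this exact image; the Alpine package names below are assumptions for an Alpine-based docker image:

  script:
    - echo "Building deploy package"
    - echo "$NPMRC" > ~/.npmrc
    # Install build dependencies and a docker-compose release that understands --context
    - apk add --no-cache py3-pip python3-dev libffi-dev openssl-dev gcc musl-dev make
    - pip3 install "docker-compose>=1.26.0"
    - docker context create remote --docker "host=ssh://user@myhost"
    - docker-compose --context remote build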
I'm trying to write a Dockerfile to create and register a new runner with a private GitLab repository. According to the GitLab documentation, I wrote the following Dockerfile:
FROM gitlab/gitlab-runner:latest
RUN gitlab-runner register \
--non-interactive \
--url "https://gitlab.com/" \
--registration-token "GITLAB_REPO_TOKEN" \
--executor "docker" \
--docker-image alpine:latest \
--description "docker-runner" \
--maintenance-note "Free-form maintainer notes about this runner" \
--run-untagged="true" \
--locked="false"
Then build it with:
docker build -t test .
And then run it in a container via:
docker run test:latest
The runner is correctly seen by GitLab (it is listed under Settings > CI/CD > Runners).
Then, I set up the following CI, for testing:
image: python:3.7-alpine
testci:
  stage: test
  script:
    - python test.py
The job is then pulled by the runner, but I immediately get the following error:
Running with gitlab-runner 15.8.2 (4d1ca121)
on docker-runner yVa1JDny, system ID: xxxxxxxxx
Preparing the "docker" executor
00:09
ERROR: Failed to remove network for build
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? (docker.go:753:0s)
Can anyone help with this? I don't understand what is missing from my configuration.
I've tried modifying the docker run call using the volume mount guide found here, but nothing changes.
I've also found a similar Dockerfile here, but it uses gitlab-ci-multi-runner, which is not the service I want.
You're attempting to use the docker executor for your runner, but your runner doesn't have access to the Docker socket it needs in order to create new containers. Your runner manager (what your Dockerfile creates) is attempting to start new Docker containers to handle each of your jobs, but it fails to connect to Docker.
In your docker run command, you will need to do a couple of things:
Set your container to use privileged mode: --privileged
Map in the docker socket: -v /var/run/docker.sock:/var/run/docker.sock
With those two things, you can likely connect to the docker daemon and start new containers. If you want to perform docker builds within CI using this runner, note you'll need to configure your runner manager (again, what your docker file is creating) to allow these same two settings on the build containers. You can get information about how to do that here: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
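Putting the two settings together, the docker run command from the question might look like this (just a sketch; the image tag test:latest comes from the question, while the detach, name, and restart flags are assumptions):

# Run the runner manager with the Docker socket mapped in and privileged mode enabled
docker run -d --name gitlab-runner \
  --restart always \
  --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  test:latest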
I have Python code that passes all the tests on my local machine. The code uses tkinter and provides a GUI. However, none of the test functions actually opens the GUI (they do call tk.Tk(), though).
I created a Docker container locally and could use X11 forwarding to pass the tests in the "local" container as well.
Now I'm trying to run the tests on Jenkins, which I have set up on an EC2 instance. Jenkins is supposed to build a Docker image using the Dockerfile in my repository and then call "docker run -e ... -v ..." (similar to what I had on my local computer) to run the tests. I understand my EC2 instance does not have a GUI, so X11 forwarding is not as simple as it was on my computer. There should be a way for tests that use a GUI to be checked through a Jenkins setup on AWS. Any help is appreciated.
EDIT
Here is the build script that I have on AWS; it creates the Docker container using the Dockerfile:
IMAGE_NAME="test-image"
CONTAINER_NAME="deidentifier_clinical"
echo "Check current working directory"
pwd
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
echo $DISPLAY
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY $IMAGE_NAME bash -c "cd /$CONTAINER_NAME;make test"
echo "Copy coverage.xml into Jenkins container"
rm -rf reports; mkdir reports
docker cp $CONTAINER_NAME:/deidentifier_clinical/htmlcov/* reports/.
echo "Cleanup"
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
docker rmi $IMAGE_NAME
This fails on the docker run line. The same script runs with no problem on my local computer after setting up X11 forwarding.
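For reference, one common way around the missing display (not something the question mentions, just a sketch) is to run the tests under a virtual framebuffer such as Xvfb inside the container, so no X11 forwarding is needed at all. This assumes the xvfb package is installed in the image:

# Hypothetical variant of the failing line: let xvfb-run provide a virtual
# display instead of mounting /tmp/.X11-unix from the host.
docker run $IMAGE_NAME bash -c "cd /$CONTAINER_NAME && xvfb-run -a make test"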
I am trying to configure a Bitbucket CI pipeline to run tests. Stripping out the details, I have a Makefile that looks as follows to run some integration tests:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
	godog
	docker-compose -f ${DOCKER_COMPOSE_FILE} down
The Docker Compose file defines a single web server with its ports exposed.
The pipeline looks as follows:
- step: &integration-testing
    name: Run integration tests
    script: # do this to make go modules work with a private repo
      - apk add libc-dev py-pip python-dev libffi-dev openssl-dev gcc libc-dev make bash
      - pip install docker-compose
      - git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
      - go get github.com/onsi/ginkgo/ginkgo
      - go get github.com/onsi/gomega/...
      - go get github.com/DATA-DOG/godog/cmd/godog
      - make build-only && make test-e2e
I am facing two separate issues, and I have not been able to find a solution for either:
I keep getting connection refused when the tests are run.
To elaborate on the above, docker-compose brings up a server with a proper host:port mapping ("127.0.0.1:10077:10077"). The godog command is intended to run the tests by querying the server. However, this always ends in connection refused. This link has a possible solution, so I am exploring that.
The pipeline almost always runs commands before the container is up. I've tried fixing this by changing the invocation to:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && sleep 10 && docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
However, the container is always brought up only after the sleep (almost instantaneously once it ends).
Example:
Creating oracle-go ...
Sleep 10
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
docker exec -i oracle-go godog
Creating oracle-go ... done
Error response from daemon: Container 7bab5322203756b972e7f0a3c6e5827413279914a68c705221b8af7daadc1149 is not running
Please let me know if there is a way around it.
If I understood your question correctly, you want to wait for the server to start before running tests.
Instead of manually sleeping, you should use wait-for-it.sh (or an alternative). See the relevant Docker docs for more information.
For example:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && bash wait-for-it.sh <HOST>:<PORT> -- docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
Change <HOST> and <PORT> to your service's host name and port respectively; with the mapping in the question, that would be 127.0.0.1:10077. Alternatively, you could invoke wait-for-it.sh from within your Docker Compose setup or the like.
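If adding the script is inconvenient, a plain retry loop can serve the same purpose. A rough sketch, assuming the host port 10077 from the question is the one to probe and that nc is available in the build image:

test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
	# poll the published port for up to 30 seconds before running the tests
	for i in $$(seq 1 30); do nc -z 127.0.0.1 10077 && break; sleep 1; done
	godog
	docker-compose -f ${DOCKER_COMPOSE_FILE} down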
I'm taking my first steps with Docker/CI/CD.
For that, I'm trying to deploy a raw create-react-app to my Digital Ocean droplet (Docker One-Click Application) using GitLab CI. These are my files:
Dockerfile.yml
# STAGE 1 - Building assets
FROM node:alpine as building_assets_stage
WORKDIR /workspace
## Preparing the image (installing dependencies and building static files)
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build
# STAGE 2 - Serving static content
FROM nginx as serving_static_content_stage
ENV NGINX_STATIC_FILE_SERVING_PATH=/usr/share/nginx/html
EXPOSE 80
COPY --from=building_assets_stage /workspace/build ${NGINX_STATIC_FILE_SERVING_PATH}
.gitlab-ci.yml
## Use a Docker image with "docker-compose" installed on top of it.
image: tmaier/docker-compose:latest
services:
  - docker:dind
variables:
  DOCKER_CONTAINER_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: ${SECRETS_DOCKER_LOGIN_USERNAME}/${CI_PROJECT_NAME}:latest
before_script:
  ## Install the ssh agent (so we can access the Digital Ocean droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give it the right permissions.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test that everything is set up correctly
  - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
stages:
  - deploy
deploy:
  stage: deploy
  script:
    ## Log this machine into the Docker registry, create a production build and push it to the registry.
    - docker login -u ${SECRETS_DOCKER_LOGIN_USERNAME} -p ${SECRETS_DOCKER_LOGIN_PASSWORD}
    - docker build -t ${DOCKER_IMAGE_TAG} .
    - docker push ${DOCKER_IMAGE_TAG}
    ## Connect to the Digital Ocean droplet, stop/remove all running containers, pull the latest image and execute it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
    # Everything works, exit.
    - exit 0
  only:
    - master
In a nutshell, on GitLab CI, I do the following:
(before_script) Install the ssh agent and copy my private SSH key to this machine, so we can connect to the Digital Ocean droplet;
(deploy) Build my image and push it to my public Docker Hub repository;
(deploy) Connect to my Digital Ocean droplet via SSH, pull the image I've just built and run it.
The problem is that if I do everything from my computer's terminal, the container is created and the application is deployed successfully.
If I execute it from the GitLab CI job, the container is created but nothing is deployed because the container dies right after (click here to see the CI job output).
I can confirm that the container is being erased, because if I manually SSH into the server and run docker ps -a, it doesn't list anything.
I'm mostly confused by the fact that this image's CMD is CMD ["nginx", "-g", "daemon off;"], which shouldn't let my container be deleted, because it keeps a process running in the foreground.
What am I doing wrong? I'm lost.
Thank you in advance.
My question was answered by d g - thank you very much!
The problem lies in the fact that I was connecting to my Digital Ocean droplet via SSH and expecting the following commands to run inside it, when I should have been passing the entire command to execute as an argument to the ssh invocation.
I changed my .gitlab-ci.yml file from:
## Connect to the Digital Ocean droplet, stop/remove all running containers, pull latest image and execute it.
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
- docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
- docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
To:
# Execute as follows:
# ssh -T digital-ocean-server "docker cmd1; docker cmd2"
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker ps -q --filter \"name=${DOCKER_CONTAINER_NAME}\" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}; docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}"
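An equivalent, somewhat easier-to-read variant (just a sketch of the same commands, not part of the original answer) is to feed the remote script to ssh on stdin via a heredoc instead of one long argument:

- |
  ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} /bin/sh <<EOF
  docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
  docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
  EOF

As with the one-line version, the variables are expanded on the CI runner before the script is sent to the droplet.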
I am using a Docker image for the functional tests of my project. This image is based on Alpine and has the nginx and php-fpm services running under supervisord, and my functional tests make REST calls to this Docker instance.
Basically, the .travis.yml does the following:
Builds the image
Starts the container
Calls PHPUnit to run the tests
The image is created fine and the container is up. I added some debug info to verify this:
>> docker run -d --rm --name resttemplate-test-instance -v /home/travis/build/byjg/php-rest-template:/srv/web -p "127.0.0.1:80:80" resttemplate-test
f3986de1c86629123896a0aa7f6ec407f617f261383c3b6a358e9dfcd3d06d77
Exit status : 0
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3986de1c866 resttemplate-test "docker-php-entryp..." Less than a second ago Up Less than a second 443/tcp, 127.0.0.1:80->80/tcp, 9000/tcp resttemplate-test-instance
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' resttemplate-test-instance
172.17.0.2
But when PHPUnit runs, I get the following message for every single test:
cURL error 56: Recv failure: Connection reset by peer (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
If I repeat all the steps from the .travis.yml line by line in my local environment, everything works fine. This error occurs only on Travis.
Here is the link to the failing build:
https://travis-ci.org/byjg/php-rest-template/jobs/305476720#L728
Here is my .travis.yml:
sudo: required
language: php
php:
  - "7.1"
  - "7.0"
  - "5.6"
env:
  - APPLICATION_ENV=test
services:
  - docker
install:
  - composer install
  - composer restdocs
  - composer migrate -- reset --yes
  - composer build
  - docker ps -a
  - docker inspect resttemplate-test-instance
  - docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' resttemplate-test-instance
  - sudo chown 82:82 src/sample.db
  - sudo chown 82:82 src/
script:
  - phpunit
Digging around the internet, I found this issue: https://github.com/travis-ci/travis-ci/issues/6461
There is a suggestion to wait for the Docker instance to become up and running by adding a "sleep 15". It worked!
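A slightly more robust alternative to a fixed sleep (just a sketch; the 127.0.0.1:80 mapping is taken from the docker run line above) is to poll the published port until it answers before starting PHPUnit, for example at the end of the install section:

install:
  - composer build
  # wait up to 30 seconds for the container's web server to accept connections
  # (curl without -f succeeds on any HTTP response, so even a 404 counts as "up")
  - for i in $(seq 1 30); do curl -s -o /dev/null http://127.0.0.1:80/ && break; sleep 1; done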