Unable to run ‘docker-compose build’ on “circleci local execute” - docker

This is a Docker, docker-compose and Django project.
Locally, it works when I run
docker-compose build
docker-compose run --rm app sh -c "python manage.py test"
However, it fails when I run
circleci local execute
The error I get is
docker-compose build
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
Error:
Exited with code exit status 1
Here's the .circleci/config.yml.
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.8.5
    working_directory: ~/app
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          command: |
            docker-compose build
      - run:
          command: |
            docker-compose run --rm app sh -c "python manage.py test"
I was using Travis CI and it just works with the .travis.yml config file below:
language: python
python:
  - "3.8"
services:
  - docker
before_script: pip install docker-compose
script:
  - docker-compose run app sh -c "python manage.py test"
Would appreciate some pointers here. Thank you.
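One detail that may matter here (an assumption based on the error, not something confirmed above): setup_remote_docker is a CircleCI cloud feature and is not supported under circleci local execute, which would leave the job with no Docker daemon to connect to. A sketch of a config that sidesteps it by using the machine executor, which provides a full VM with a local Docker daemon:

```yaml
version: 2
jobs:
  build:
    # The machine executor ships with Docker and docker-compose preinstalled,
    # so no setup_remote_docker step is needed.
    machine: true
    working_directory: ~/app
    steps:
      - checkout
      - run:
          command: |
            docker-compose build
      - run:
          command: |
            docker-compose run --rm app sh -c "python manage.py test"
```

This is only a sketch; whether it also fixes the local run depends on the CircleCI CLI version in use.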

Related

Docker in GitHub Actions: Error response from daemon: Container [container_id] is not running

Locally, I've set Docker to mount the application path: from Docker Desktop I enabled File Sharing (Docker Desktop > Settings > Resources > File Sharing) so Docker can mount my apps. But I cannot find how to do the same thing with GitHub Actions, so I just pushed my updated code to GitHub. Below is my docker-compose service:
web:
  container_name: oe-web
  build:
    context: ./
    dockerfile: Dockerfile
  depends_on:
    - db
  ports:
    - 8000:8000
  working_dir: /app
  volumes:
    - ./:/app
The workflow:
name: Docker Image CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run docker-compose
        run: docker-compose up -d
      - name: Sleep for 20s
        uses: juliangruber/sleep-action@v1
        with:
          time: 10s
      - name: database migration with docker
        run: docker exec oe-web php artisan migrate
      - name: database seed with docker
        run: docker exec oe-web php artisan db:seed
and the GitHub Action returns an error when trying to use the container:
Run docker exec oe-web php artisan migrate
Error response from daemon: Container 27479cda84fb7f7c393bceeedbb2e2cf5ecd086917390728ac635748ac4411df is not running
Error: Process completed with exit code 1.
You can visit my pull request here:
https://github.com/dhanyn10/open-ecommerce/pull/189
[UPDATED]
Error log:
1s
Run chmod -R 777 ./
chmod -R 777 ./
docker-compose ps
docker-compose logs
shell: /usr/bin/bash -e {0}
Name Command State Ports
-------------------------------------------------------------------------------------------------
oe-adminer entrypoint.sh php -S [::]: ... Up 0.0.0.0:8080->8080/tcp,:::8080->8080/tcp
oe-db docker-entrypoint.sh --def ... Up 3306/tcp, 33060/tcp
oe-web docker-php-entrypoint /bin ... Exit 255

Installing NPM during build fails Docker build

I'm trying to get the GitLab CI runner to build my project off the Docker image and install an NPM package during the build. My .gitlab-ci.yml file was inspired by this topic, Gitlab CI with Docker and NPM, where the OP was dealing with an identical problem:
image: docker:stable
services:
  - docker:dind
stages:
  - build
cache:
  paths:
    - node_modules/
before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
compile:
  image: node:8
  stage: build
  script:
    - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
    - npm install
    - bash test.sh
after_script:
  - docker-compose down
Sadly, that solution didn't work well, but I feel like I'm a little bit closer to the actual solution now. I'm getting two errors during the build:
/bin/bash: line 89: apk: command not found
Running after script...
$ docker-compose down
/bin/bash: line 88: docker-compose: command not found
How can I troubleshoot this?
Edit:
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - test
before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
compile:
  stage: build
  script:
    - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
testing:
  image: node:alpine
  stage: test
  script:
    - npm install
    - bash test.sh
after_script:
  - docker-compose down
I moved the tests into a separate testing stage, which I should've done anyway, and I figured I'd define the image there to separate it from the build stage. No change: Docker can't be found, and the bash test can't be run either:
$ bash test.sh
/bin/sh: eval: line 87: bash: not found
Running after script...
$ docker-compose down
/bin/sh: eval: line 84: docker-compose: not found
image: node:8 — this image is not based on Alpine, so as a result you got the error
apk: command not found
From the documentation for node:<version>:
These are the suite code names for releases of Debian and indicate which release the image is based on. If your image needs to install any additional packages beyond what comes with the image, you'll likely want to specify one of these explicitly to minimize breakage when there are new releases of Debian.
Just replace the image with
node:alpine
and it should work.
The second error occurs because docker-compose is not installed.
You can check this answer for more details about docker-compose.
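Putting both fixes together, the compile job might look like the sketch below. This is an assumption, not a tested config: the apk package names are taken from the original script, and bash is added explicitly because node:alpine does not ship with it (which also explains the bash: not found error in the edited version):

```yaml
compile:
  image: node:alpine              # Alpine-based, so apk is available
  stage: build
  script:
    - apk add --no-cache bash py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose  # pip now exists, fixing "docker-compose: not found"
    - docker-compose up -d
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
    - npm install
    - bash test.sh
  after_script:
    - docker-compose down
```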

Docker and Travis CI failing on build

I am trying to dockerize my app as part of Travis CI so I can then publish it to Docker Hub.
I have set up my Dockerfile, docker-compose and travis.yml.
When the pipeline in GitHub finishes, I get this error message:
0.60s$ docker run mysite /bin/sh -c "cd /root/mysite; bundle exec rake test"
/bin/sh: 1: cd: can't cd to /root/mysite
/bin/sh: 1: bundle: not found
The command "docker run mysite /bin/sh -c "cd /root/mysite; bundle exec rake test"" failed and exited with 127.
My Dockerfile:
# Server
FROM node:latest
# create the app dir in the container
RUN mkdir -p /usr/src/app
# sets the working directory for the app;
# this allows running all the commands
# like RUN, CMD, etc.
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm config set strict-ssl false
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3006
CMD [ "npm", "run", "start:unsafe" ]
Docker-compose:
version: '3'
services:
  web:
    build: .
travis.yml:
sudo: required
language: node_js
node_js:
  - "stable"
services:
  - docker
before_install:
  - docker build -t mysite .
  - docker run -d -p 127.0.0.1:80:4567 mysite /bin/sh -c "cd /root/mysite; bundle exec foreman start;"
  - docker ps -a
  - docker run mysite /bin/sh -c "cd /root/mysite; bundle exec rake test"
cache:
  directories:
    - node_modules
script:
  - bundle exec rake test
  - npm test
  - npm run build
I have tried running the commands from travis.yml locally and get the same error:
/bin/sh: 1: cd: can't cd to /usr/src/app/mysite
/bin/sh: 1: bundle: not found
I tried going into the container to see if the directories match, but the container always exits right after it starts.
To execute a command on an existing running container you must call docker exec, not docker run.
You possibly mixed up node_js and Ruby. Rewrite your .travis.yml to something like:
sudo: required
language: node_js
node_js:
  - "stable"
cache:
  directories:
    - "node_modules"
services:
  - docker
before_install:
  - docker build -t mysite:travis-$TRAVIS_BUILD_NUMBER .
script:
  - npm test
  - npm run build
  - docker images "$DOCKER_USERNAME"/mysite
after_success:
  - if [ "$TRAVIS_BRANCH" == "master" ]; then
      docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD";
      docker tag mysite:travis-$TRAVIS_BUILD_NUMBER "$DOCKER_USERNAME"/mysite:travis-$TRAVIS_BUILD_NUMBER;
      docker push "$DOCKER_USERNAME"/mysite:travis-$TRAVIS_BUILD_NUMBER;
    fi

Jenkins inside Docker on Windows 10 Pro. Build failing - docker-compose not found

I am trying to set up Jenkins inside Docker on Windows 10 Pro.
I have a Python app that runs successfully from a PowerShell command.
However, when I run the following command in the build's "Execute shell" step in Jenkins,
docker-compose run app sh -c python manage.py test && flake8
I keep getting the error
/tmp/jenkins7355151386125740055.sh: 2: /tmp/jenkins7355151386125740055.sh: docker-compose: not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
What I have tried:
- installed docker-compose using pip install docker-compose
- set the path of docker-compose in the PATH environment variable
- created a .env file in the same directory as docker-compose.yml and included the following variable in it:
COMPOSE_CONVERT_WINDOWS_PATHS=1
My docker-compose.yml is this:
version: "3"
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -C "python manage.py runserver 0.0.0.0:8000"
Can anyone help me figure out where I am going wrong and how I could fix the docker-compose not found error?
Add the env file path to your docker-compose file:
version: "3"
services:
  app:
    build:
      context: .
    env_file:
      - {PATH/TO/ENV_FILE}
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -C "python manage.py runserver 0.0.0.0:8000"
Also, if you're building in a container in Jenkins, make sure you have docker-compose set up on the Jenkins server (there should be a plug-in).
You can try running this at the beginning of your shell script in Jenkins:
curl -L --fail https://github.com/docker/compose/releases/download/1.23.2/run.sh -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Run docker-compose build in .gitlab-ci.yml

I have a .gitlab-ci.yml file which contains the following:
image: docker:latest
services:
  - docker:dind
before_script:
  - docker info
  - docker-compose --version
buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
But in the CI log I receive this message:
$ docker-compose --version
/bin/sh: eval: line 46: docker-compose: not found
What am I doing wrong?
Docker also provides an official image: docker/compose.
This is the ideal solution if you don't want to install it on every pipeline run.
Note that in the latest versions of GitLab CI/Docker you will likely need to give privileged access to your GitLab CI runner and configure/disable TLS. See Use Docker-in-Docker workflow with Docker executor.
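The runner-side change mentioned above lives in the runner's config.toml, roughly like this (a sketch: the runner name and image are placeholders, only the privileged flag is the point):

```toml
[[runners]]
  name = "dind-runner"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true  # required for the docker:dind service to start
```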
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
# Official docker compose image.
image:
  name: docker/compose:latest
services:
  - docker:dind
before_script:
  - docker version
  - docker-compose version
build:
  stage: build
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose up tester-image
Note that in versions of docker-compose earlier than 1.25:
Since the image uses docker-compose-entrypoint.sh as its entrypoint, you'll need to override it back to /bin/sh -c in your .gitlab-ci.yml; otherwise your pipeline will fail with No such command: sh.
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]
Following the official documentation:
# .gitlab-ci.yml
image: docker
services:
  - docker:dind
build:
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
Sample docker-compose.yml:
version: "3.7"
services:
  foo:
    image: alpine
    command: sleep 3
  bar:
    image: alpine
    command: sleep 3
We personally do not follow this flow anymore, because you lose control of the running containers and they might end up running endlessly. This is because of the docker-in-docker executor. We developed a Python script as a workaround to kill all old containers in our CI, which can be found here. But I no longer suggest starting containers like this.
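The linked script is not reproduced here, but the core of such a cleanup can be sketched in Python. Everything below is illustrative (the helper names and the age threshold are made up, not taken from the actual script); the age-filtering logic is self-contained, while the removal itself shells out to the docker CLI:

```python
import subprocess
from datetime import datetime, timedelta, timezone

def select_stale(containers, max_age_hours=2, now=None):
    """Return the names of containers older than max_age_hours.

    `containers` is a list of (name, created_at) tuples, where
    created_at is a timezone-aware datetime.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=max_age_hours)
    return [name for name, created in containers if created < cutoff]

def kill_stale(containers, max_age_hours=2):
    """Force-remove every container that exceeds the age limit."""
    for name in select_stale(containers, max_age_hours):
        subprocess.run(["docker", "rm", "-f", name], check=False)
```

Run periodically from CI, this keeps leftover docker-in-docker containers from accumulating.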
I created a simple Docker image which has docker-compose installed on top of docker:latest. See https://hub.docker.com/r/tmaier/docker-compose/
Your .gitlab-ci.yml file would look like this:
image: tmaier/docker-compose:latest
services:
  - docker:dind
before_script:
  - docker info
  - docker-compose --version
buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
EDIT I added another answer providing a minimal example for a .gitlab-ci.yml configuration supporting docker-compose.
docker-compose can be installed as a Python package, which is not shipped with your image. The image you chose does not even provide an installation of Python:
$ docker run --rm -it docker sh
/ # find / -iname "python"
/ #
Looking for Python gives an empty result, so you have to choose a different image which fits your needs and ideally has docker-compose installed, or you manually create one.
The Docker image you chose uses Alpine Linux. You can use it as a base for your own image, or try a different one first if you are not familiar with Alpine Linux.
I had the same issue, so I created a Dockerfile in a public GitHub repository, connected it with my Docker Hub account and chose an automated build to build my image on each push to the GitHub repository. Then you can easily access your own images with GitLab CI.
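Such a Dockerfile can be quite small. The sketch below is an assumption about what it might contain, not the actual file from that repository:

```dockerfile
# Start from the official Docker CLI image (Alpine-based)
FROM docker:latest
# Install pip plus build dependencies, then docker-compose itself
RUN apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make \
 && pip install docker-compose
```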
If you don't want to provide a custom Docker image with docker-compose preinstalled, you can get it working by installing Python at build time. With Python installed, you can finally install docker-compose, ready for spinning up your containers.
image: docker:latest
services:
  - docker:dind
before_script:
  - apk add --update python py-pip python-dev && pip install docker-compose # install docker-compose
  - docker version
  - docker-compose version
test:
  cache:
    paths:
      - vendor/
  script:
    - docker-compose up -d
    - docker-compose exec -T php-fpm composer install --prefer-dist
    - docker-compose exec -T php-fpm vendor/bin/phpunit --coverage-text --colors=never --whitelist src/ tests/
Use docker-compose exec with -T if you receive this or a similar error:
$ docker-compose exec php-fpm composer install --prefer-dist
Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 9, in <module>
    load_entry_point('docker-compose==1.8.1', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 62, in main
    command()
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 114, in perform_command
    handler(command, command_options)
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 442, in exec_command
    pty.start()
  File "/usr/lib/python2.7/site-packages/dockerpty/pty.py", line 338, in start
    io.set_blocking(pump, flag)
  File "/usr/lib/python2.7/site-packages/dockerpty/io.py", line 32, in set_blocking
    old_flag = fcntl.fcntl(fd, fcntl.F_GETFL)
ValueError: file descriptor cannot be a negative integer (-1)
ERROR: Build failed: exit code 1
I think most of the answers above are helpful; however, I needed to apply them collectively to solve this problem. Below is the script which worked for me.
I hope it works for you too.
Also note that in your docker-compose file this is the format you have to use for the image name:
<registry base url>/<username>/<repo name>/<image name>:<tag>
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
stages:
  - build_images
before_script:
  - docker version
  - docker-compose version
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
build:
  stage: build_images
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose push
There is tiangolo/docker-with-compose, which works:
image: tiangolo/docker-with-compose
stages:
  - build
  - test
  - release
  - clean
before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
build:
  stage: build
  script:
    - docker-compose -f docker-compose-ci.yml build --pull
test1:
  stage: test
  script:
    - docker-compose -f docker-compose-ci.yml up -d
    - docker-compose -f docker-compose-ci.yml exec -T php ...
It really took me some time to get this working with GitLab.com shared runners.
I'd like to say "use docker/compose:latest and that's it", but unfortunately I was not able to make it work; I was getting the Cannot connect to the Docker daemon at tcp://docker:2375/. Is the docker daemon running? error even when all the env variables were set.
Nor do I like the option of installing thousands of dependencies just to install docker-compose via pip.
Fortunately, for recent Alpine versions (3.10+) there is a docker-compose package in the Alpine repository. It means that #n2o's answer can be simplified to:
test:
  image: docker:19.03.0
  variables:
    DOCKER_DRIVER: overlay2
    # Create the certificates inside this directory for both the server
    # and client. The certificates used by the client will be created in
    # /certs/client so we only need to share this directory with the
    # volume mount in `config.toml`.
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.0-dind
  before_script:
    - apk --no-cache add docker-compose # <---------- Mind this line
    - docker info
    - docker-compose --version
  stage: test
  script:
    - docker-compose build
This worked perfectly on the first try for me. Maybe the reason the other answers didn't work was some configuration of the GitLab.com shared runners, I don't know...
Alpine Linux now has a docker-compose package in its "edge" branch, so you can install it this way in .gitlab-ci.yml:
a-job-with-docker-compose:
  image: docker
  services:
    - docker:dind
  script:
    - apk add docker-compose --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted
    - docker-compose -v