The use case is to build an image and deploy it to Rancher 2.5.5 with gitlab-ci.yml. Since envs can't be passed directly in my situation, I'm trying to bake envs into the Docker image with docker-compose build (the dev/stage split is a separate concern; let's leave it for now). docker-compose run --env-file works, but docker-compose build ignores the envs.
Any advice would be appreciated.
P.S. If you know a way to pass envs to the rancher2 container from gitlab-ci somehow, that would also resolve the problem.
I've tried the following:
set it in docker-compose
version: '3'
services:
  testproject:
    build:
      context: .
    env_file: .env-dev
    image: example.registry/testimage:latest
set it in gitlab-ci
variables:
  IMAGE: "$CI_REGISTRY_IMAGE:latest"

build-image:
  stage: build
  allow_failure: false
  tags:
    - docker
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker-compose --env-file .env-dev build
    - docker-compose push

deploy:
  stage: deploy
  image: kpolszewski/rancher2-gitlab-deploy
  tags:
    - docker
  script:
    - upgrade --cluster $CLUSTER --environment $ENV --stack $NAMESPACE --service $SERVICE --new-image $IMAGE
source it in the Dockerfile entrypoint (roughly as in the sketch after this question)
set it in a .env file
Nothing works.
I can see the new image in the registry, and locally when I test it, but there are no envs inside when I run the container.
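To be concrete, the entrypoint-sourcing attempt looked roughly like this (a reconstructed sketch, not my exact files; entrypoint.sh is a hypothetical name, and it assumes .env-dev is copied into the image):

# Dockerfile (sketch)
FROM alpine:3
COPY .env-dev /app/.env-dev
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

# entrypoint.sh (sketch)
#!/bin/sh
set -a            # export every variable assigned from here on
. /app/.env-dev   # source the env file baked into the image
set +a
exec "$@"         # run the container's original command with the env populated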
If you want to set env values at the build stage, you can use build args as follows:
FROM ...
ARG SOME_ARG
ENV SOME_ENV=$SOME_ARG
Then, in your docker-compose.yml:
version: '3'
services:
  testproject:
    build:
      context: .
      args:
        SOME_ARG: "SOME_VALUE"
    env_file: .env-dev
    image: example.registry/testimage:latest
But think twice: are you sure you want your ENV variables to be set dynamically at the build stage? Anything baked into the image at build time is visible to anyone who can pull that image.
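To avoid hardcoding the value, compose can also substitute it from the environment at build time, which is one way to feed it from a GitLab CI/CD variable. A minimal sketch, assuming a CI variable named SOME_VALUE (a hypothetical name):

# docker-compose.yml: ${SOME_VALUE} is read from the environment running the build
services:
  testproject:
    build:
      context: .
      args:
        SOME_ARG: "${SOME_VALUE}"

# .gitlab-ci.yml: SOME_VALUE is defined as a project CI/CD variable,
# so it is already in the job's environment when the build runs
script:
  - docker-compose build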
I have a Dockerfile as below:
FROM jenkins/jenkins:latest
USER root
RUN whoami
USER jenkins
RUN whoami
and this docker-compose file
version: '2'
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: test
    hostname: test
    ports:
      - '8080:8080'
    user: root
I am wondering:
What is the difference between the user defined in docker-compose and the user defined in the Dockerfile?
How can I see the logs of the build stage? When I RUN whoami, how and where can I see the result?
What happens if I change the user in docker-compose to another one, like this:
version: '2'
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: test
    hostname: test
    ports:
      - '8080:8080'
    user: other
Why isn't it working?
And if I change the dockerfile to
FROM jenkins/jenkins:latest
USER root
RUN whoami
RUN groupadd -g 999 docker && \
usermod -aG staff,docker jenkins
USER jenkins
RUN whoami
and change the docker-compose to
version: '2'
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: test
    hostname: test
    ports:
      - '8080:8080'
    user: jenkins
It's still not working. What is the problem?
Another question: when I do docker exec -it container_name bash, I get access to the container as the root user. How do I change that?
The user config in the docker-compose file overrides the user used to execute the command in your container. If you omit it, the container uses the last USER set in your Dockerfile. If you have neither, it defaults to root.
Note that your whoami commands run at build time (not at run time), so they are not affected by what you specify in your docker-compose file, which overrides the user at run time only. user: other fails because no such user exists in the image's /etc/passwd; Docker can only switch to a user the image actually contains (see the sketch below).
To see the entire build log, you can use docker compose build --progress plain.
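To illustrate that constraint (a minimal sketch, not the asker's exact setup; the user name other is hypothetical), creating the user at build time makes the compose override resolvable:

# Dockerfile: create the user so it exists in the image's /etc/passwd
FROM jenkins/jenkins:latest
USER root
RUN useradd --create-home --shell /bin/bash other
USER jenkins

# docker-compose.yml
version: '2'
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile
    user: other # now resolvable, because the image contains this user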
I've got a workflow where I build a specific image and then (after pushing it to an ECR repo and pulling it onto an AWS server) essentially run it with a docker-compose file. My docker-compose file looks as follows:
version: "3.8"
services:
web:
image: <my-aws-server>/my-repo:latest
command: gunicorn vms.wsgi:application --bind 0.0.0.0:8000
expose:
- 8000
nginx:
build: ../nginx
ports:
- 1337:80
depends_on:
- web
and my Dockerfile is something like this:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
EXPOSE 8000
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000"]
I'd like to be able to do something like this in my docker-compose:
version: "3.8"
services:
web:
image: <my-aws-server>/my-repo:latest
env: SECRET_PASSWORD #note change here
command: gunicorn vms.wsgi:application --bind 0.0.0.0:8000
expose:
- 8000
nginx:
build: ../nginx
ports:
- 1337:80
depends_on:
- web
where I specify the environment variables, which are stored in a file on the server. Is there any way I can do this? Perhaps it's impossible if the image file is just a binary.
Or do I have to actually pass in the environment variables from the get-go, when I build the image in my GitHub action, here:
steps:
  - uses: actions/checkout@v2
    name: Check out code
  - uses: mr-smithers-excellent/docker-build-push@v5
    name: Build & Push Docker image
    with:
      image: my-image
      registry: ${{ secrets.AWS_ECR_REGISTRY }}
      tags: latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Edit: in my GitHub actions, I tried something like this:
- name: Start new container
run: ssh staging 'cd my_dir; sudo docker-compose --env-file ~/code/secrets/.env -f docker-compose.prod.yml up -d'
but that didn't seem to work. Is there something I'm doing wrong there? Or should that have worked, with whatever environment variables are in that file being used by the pre-built image? (I'm not building it again, just starting the image, as is evident from the docker-compose file.)
There is the env_file directive. That will pass variables from the specified file to the container at runtime. (The --env-file flag you used is different: it only supplies values for variable substitution inside the compose file itself, not environment variables for the containers.)
Reference:
https://docs.docker.com/compose/environment-variables/#the-env_file-configuration-option
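A minimal sketch of the directive in your compose file (the path is hypothetical; point it at the secrets file already on the server):

version: "3.8"
services:
  web:
    image: <my-aws-server>/my-repo:latest
    env_file:
      - /home/ec2-user/code/secrets/.env # hypothetical path on the server
    command: gunicorn vms.wsgi:application --bind 0.0.0.0:8000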
I have an issue with a GitLab runner using the docker:dind service.
I'm trying to run a docker-compose file with a simple volume in a job. Here is the job:
test_e2e:
  image: tmaier/docker-compose
  stage: test
  services:
    - docker:dind
  variables:
    GIT_STRATEGY: none
    GIT_CHECKOUT: "false"
    DOCKER_DRIVER: overlay2
  before_script:
    - ls
  script:
    - cp .env.dist .env
    - docker-compose -f docker-compose.yml -f docker-compose-ci.yml up -d
The job starts normally, but a container from docker-compose-ci.yml doesn't seem to mount the volume specified in it. Here is docker-compose-ci.yml:
version: '3.3'
services:
  wait_app:
    image: dadarek/wait-for-dependencies
    networks:
      - internal
    depends_on:
      - traefik
      - webapp
    command: webapp:3000
  cypress:
    # the Docker image to use from https://github.com/cypress-io/cypress-docker-images
    image: "cypress/included:6.5.0"
    networks:
      - internal
    depends_on:
      - traefik
      - webapp
      - api
      - mysql
      - redis
    environment:
      # pass base url to test pointing at the web application
      - CYPRESS_baseUrl=http://app.localhost:3000
    working_dir: /cypress
    volumes:
      - ./cypress/:/cypress
If I run docker exec app_cypress_1 sh -c "ls -al" against the /cypress folder inside the cypress container, it comes back empty, even though I do have files in there on the host.
But I tried a different version of the runner, 13.7.0 instead of 13.5.0, and it works as expected.
Where could the issue be? Is it the GitLab runner, or is there another parameter I can change to make it work?
Thank you
I am trying to set up a job with gitlab CI to build a docker image from a dockerfile, but I am behind a proxy.
My .gitlab-ci.yml is as follows:
image: docker:stable

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  HTTP_PROXY: $http_proxy
  HTTPS_PROXY: $http_proxy
  http_proxy: $http_proxy
  https_proxy: $http_proxy

services:
  - docker:dind

before_script:
  - wget -O - www.google.com # just to test
  - docker search node # just to test
  - docker info # just to test

build:
  stage: build
  script:
    - docker build -t my-docker-image .
wget works, meaning that the proxy setup is, in theory, correct.
But the commands docker search, docker info, and docker build do not work, apparently because of a proxy issue.
An excerpt from the job output:
$ docker search node
Warning: failed to get default registry endpoint from daemon (Error response from daemon:
[and here comes a huge raw HTML output including the following message: "504 - server did not respond to proxy"]
It appears docker does not read from the environment variables to set up the proxy.
Note: I am indeed using a runner in --privileged mode, as the documentation instructs.
How do I fix this?
If you want to be able to use docker-in-docker (dind) in GitLab CI behind a proxy, you also need to set the NO_PROXY variable in your gitlab-ci.yml file, with NO_PROXY covering the host "docker"; otherwise the docker client tries to reach the dind daemon itself through the proxy, which fails.
This is the gitlab-ci.yml that works with my dind:
image: docker:19.03.12

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  HTTPS_PROXY: "http://my_proxy:3128"
  HTTP_PROXY: "http://my_proxy:3128"
  NO_PROXY: "docker"

services:
  - docker:19.03.12-dind

before_script:
  - docker info

build:
  stage: build
  script:
    - docker run hello-world
Good luck!
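If commands inside the image build (for example RUN apt-get) also need the proxy, one option is to forward the values as build args; http_proxy, https_proxy, and no_proxy are predefined build args in Docker, so no matching ARG declarations are needed in the Dockerfile. A sketch, reusing the variables above:

build:
  stage: build
  script:
    - docker build --build-arg http_proxy=$HTTP_PROXY --build-arg https_proxy=$HTTPS_PROXY --build-arg no_proxy=$NO_PROXY -t my-docker-image .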
Oddly, the solution was to use a special dind (docker-in-docker) image provided by GitLab instead, and it works without setting up services or anything else. The .gitlab-ci.yml that worked was as follows:
image: gitlab/dind:latest

before_script:
  - wget -O - www.google.com
  - docker search node
  - docker info

build:
  stage: build
  script:
    - docker build -t my-docker-image .
Don't forget that the gitlab-runner must be registered with the --privileged flag.
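For reference, a sketch of registering the runner that way (non-interactive; only the flags relevant here, with required ones like --url and --registration-token omitted):

gitlab-runner register \
  --executor docker \
  --docker-image docker:stable \
  --docker-privileged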
I was unable to get docker-in-docker (dind) working behind our corporate proxy.
In particular, even when following the instructions here, a docker build command would still fail when executing FROM <some_image>, as it was not able to download the image.
I had far more success using kaniko, which appears to be GitLab's current recommendation for doing Docker builds.
A simple build script for a .NET Core project then looks like:
build:
  stage: build
  image: $BUILD_IMAGE
  script:
    - dotnet build
    - dotnet publish Console --output publish
  artifacts:
    # Upload all build artifacts to make them available for the deploy stage.
    when: always
    paths:
      - "publish/*"
    expire_in: 1 week

kaniko:
  stage: dockerise
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Construct a docker-file
    - echo "FROM $RUNTIME_IMAGE" > Dockerfile
    - echo "WORKDIR /app" >> Dockerfile
    - echo "COPY /publish ." >> Dockerfile
    - echo "CMD [\"dotnet\", \"Console.dll\"]" >> Dockerfile
    # Authenticate against the Gitlab Docker repository.
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Run kaniko
    - /kaniko/executor --context . --dockerfile Dockerfile --destination $CI_REGISTRY_IMAGE:$VersionSuffix
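If a proxy matters inside the kaniko build as well, the executor accepts --build-arg, so the predefined proxy args can be forwarded the same way; a hedged one-line variant of the last command (assuming HTTP_PROXY and HTTPS_PROXY variables are set):

- /kaniko/executor --context . --dockerfile Dockerfile --build-arg http_proxy=$HTTP_PROXY --build-arg https_proxy=$HTTPS_PROXY --destination $CI_REGISTRY_IMAGE:$VersionSuffix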
I need to be able to fork a process. As I understand it, I need to set security-opt. I have tried doing this with the docker command and it works fine. However, when I do it in a docker-compose file it seems to do nothing; maybe I'm not using compose right.
Docker
docker run --security-opt=seccomp:unconfined <id> dlv debug --listen=:2345 --headless --log ./cmd/main.go
Docker-compose
Setup
docker-compose.yml
networks:
  backend:

services:
  example:
    build: .
    security_opt:
      - seccomp:unconfined
    networks:
      - backend
    ports:
      - "5002:5002"
Dockerfile
FROM golang:1.8
RUN go get -u github.com/derekparker/delve/cmd/dlv
RUN dlv debug --listen=:2345 --headless --log ./cmd/main.go
command
docker-compose -f docker-compose.yml up --build --abort-on-container-exit
Result
2017/09/04 15:58:33 server.go:73: Using API v1
2017/09/04 15:58:33 debugger.go:97: launching process with args: [/go/src/debug]
could not launch process: fork/exec /go/src/debug: operation not permitted
The compose syntax is correct, but security_opt is applied to the new container instance, so it is not available at build time, which is what you are attempting with the Dockerfile RUN command.
The correct way should be:
Dockerfile:
FROM golang:1.8
RUN go get -u github.com/derekparker/delve/cmd/dlv
docker-compose.yml
networks:
  backend:

services:
  example:
    build: .
    security_opt:
      - seccomp:unconfined
    networks:
      - backend
    ports:
      - "5002:5002"
    entrypoint: ['/go/bin/dlv', 'debug', '--listen=:2345', '--headless=true', '--api-version=2', '--log', './cmd/main.go']
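One usage note on this sketch: only 5002 is published above, so to attach to Delve from the host you would also publish its listen port (assuming you debug from the host, which the question implies):

    ports:
      - "5002:5002"
      - "2345:2345"

Then docker-compose up --build starts the service, and dlv connect localhost:2345 attaches to it.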