I have a private GitLab repo in which I use .gitlab-ci.yml to deploy my project to staging and production.
Inside the .gitlab-ci.yml I pass two environment variables: NODE_ENV (where I specify whether it is stage/production) and NODE_TARGET (just info for the app about which template to use). My .gitlab-ci.yml looks like this:
stage_gsue:
  stage: staging
  script:
    - echo "---------- DOCKER LOGIN"
    - echo "mypassword" | docker login --username myuser --password-stdin git.example.com:4567
    - echo "---------- START DEPLOYING STAGING SERVER"
    - echo "-> 1) build image"
    - docker build --build-arg buildtarget=gsue --build-arg buildenv=stage -t git.example.com:4567/root/myproject .
    - echo "-> 2) push image to registry"
    - docker push git.example.com:4567/root/myproject
    - echo "-> 3) kill old container"
    - docker kill $(docker ps -q) || true
    - docker rm $(docker ps -a -q) || true
    - echo "-> 4) start new container"
    - docker run -dt -e NODE_TARGET=gsue -e NODE_ENV=stage -p 3000:3000 --name myproject git.example.com:4567/root/myproject
    - echo "########## END DEPLOYING DOCKER IMAGE"
  tags:
    - stagerunner
  when: manual
This works well so far. But inside myproject there is a .env file in which I have some further variables. I changed the values of these variables and ran the stage script multiple times, but inside the built image and the started container the .env file still contains the old values.
How can that be?
Additional info:
In my Dockerfile I do:
FROM djudorange/node-gulp-mocha
ARG buildenv
ARG buildtarget
RUN git clone https://root:mypassword@git.example.com/root/myproject.git
WORKDIR /myproject
RUN git fetch --all
RUN git pull --all
RUN git checkout stage
RUN npm install -g n
RUN n latest
RUN npm install -g npm
RUN npm i -g gulp-cli --force
RUN npm install
RUN export NODE_ENV=$buildenv
RUN export NODE_TARGET=$buildtarget
RUN NODE_ENV=$buildenv NODE_TARGET=$buildtarget gulp build
#CMD ["node", "server.js"]
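One likely explanation for the stale values (an assumption, since the build log is not shown): the RUN git clone layer is cached after the first build, so later commits to the repo never reach the image. Rebuilding without the cache rules this out:

```shell
# Rebuild ignoring all cached layers, so the clone step actually
# re-runs and pulls the current state of the repository:
docker build --no-cache \
  --build-arg buildtarget=gsue --build-arg buildenv=stage \
  -t git.example.com:4567/root/myproject .

# Cheaper alternative: declare a throwaway ARG (the CACHEBUST name here
# is hypothetical) in the Dockerfile right before the clone step, and
# pass a value that changes on every build so only the layers from the
# clone onward are invalidated:
docker build --build-arg CACHEBUST="$(date +%s)" \
  --build-arg buildtarget=gsue --build-arg buildenv=stage \
  -t git.example.com:4567/root/myproject .
```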
RUN export has no effect on the final image: each RUN line runs in its own shell, and the environment passed at docker run time overrides anything exported during the build anyway. So better to write a fresh .env file during the build. Use the following in your Dockerfile:
ARG NODE_ENV
ARG NODE_TARGET
RUN rm -f .env
RUN printf 'NODE_TARGET=%s\nNODE_ENV=%s\n' "$NODE_TARGET" "$NODE_ENV" > ./.env
(fill in the rest of the Dockerfile depending on your requirements)
Now the build command will look like:
docker-compose build --build-arg NODE_ENV="<your env value>" --build-arg NODE_TARGET="<your target value>"
So the GitLab job will be:
build_app:
  stage: build
  script:
    - docker-compose build --build-arg NODE_ENV="${NODE_ENV}" --build-arg NODE_TARGET="${NODE_TARGET}"
    - echo "Build successful."
    - docker-compose up -d
    - echo "Deployed!!"
Don't forget to define your NODE_ENV and NODE_TARGET variables in the CI/CD settings page.
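A quick way to confirm the build args actually landed in the generated file (a sketch, reusing the image name from the question):

```shell
# Build with the new values, then print the .env baked into the image.
docker build --build-arg NODE_ENV=stage --build-arg NODE_TARGET=gsue \
  -t git.example.com:4567/root/myproject .
docker run --rm git.example.com:4567/root/myproject cat .env
# The output should show the freshly passed values, not the old ones.
```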
Related
I run automated tests on GitLab CI with a GitLab runner. Everything works well except the reports: after the tests run, the JUnit reports are not updated. They always show the same passed and failed tests, even though the console output shows a different number of passed tests.
GitLab script:
stages:
  - build
  - test

docker-build-master:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker build ./AutomaticTests --pull -t "dockerImage"
    - docker image tag dockerImage xxx/dockerImage:0.0.1
    - docker push "xxx/dockerImage:0.0.1"

test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run "xxx/dockerImage:0.0.1"
  artifacts:
    when: always
    paths:
      - AutomaticTests/bin/Release/artifacts/test-result.xml
    reports:
      junit:
        - AutomaticTests/bin/Release/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /AutomaticTests
RUN chmod 777 /AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=..\artifacts\test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
I had a similar issue when using docker-in-docker for my GitLab pipeline. You run your tests inside your container, so the test results are stored inside that "container-under-test". However, the gitlab-ci paths reference not the "container-under-test" but the outer container of your docker-in-docker environment.
You could try to copy the test results from the image directly to your outer container with something like this:
mkdir reports
docker cp $(docker create --rm DOCKER_IMAGE):/ABSOLUTE/FILEPATH/IN/DOCKER/CONTAINER reports/.
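Note that the one-liner leaves the created container behind: --rm only removes a container after it exits, and this one is never started. A slightly longer sketch that cleans up explicitly (image name and path are placeholders):

```shell
# Create a stopped container from the image, copy the report out of its
# filesystem, then remove the container explicitly:
mkdir -p reports
cid="$(docker create xxx/dockerImage:0.0.1)"
docker cp "$cid":/AutomaticTests/bin/Release/artifacts/test-result.xml reports/
docker rm "$cid" >/dev/null
```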
So, this would be something like this in your case (untested...!):
...
test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - mkdir reports
    - docker cp $(docker create --rm xxx/dockerImage:0.0.1):/AutomaticTests/bin/Release/artifacts/test-result.xml reports/.
  artifacts:
    when: always
    reports:
      junit:
        - reports/test-result.xml
...
Also, see this post for further explanation of the docker cp command: https://stackoverflow.com/a/59055906/6603778
Keep in mind that docker cp requires an absolute path to the file you want to copy out of your container.
When my Dockerfile was like the one below, it worked well.
...
RUN pip install git+https://user_name:my_password@github.com/repo_name.git#egg=repo_name==1.0.0
...
But when I changed the Dockerfile to the one below
...
RUN pip install git+https://user_name:${GITHUB_PASSWORD}@github.com/repo_name.git#egg=repo_name==1.0.0
...
and used the command below, it stopped working:
docker build -t my_repo:tag_name . --build-arg GITHUB_PASSWORD=my_password
You need to add an ARG declaration into the Dockerfile:
FROM ubuntu
ARG PASSWORD
RUN echo ${PASSWORD} > /password
Then build your docker image:
$ docker build -t foo . --build-arg PASSWORD="foobar"
After this, you can check for the existence of the parameter in your docker container:
$ docker run -it foo bash
root@ebeb5b33941e:/# cat /password
foobar
Therefore, add an ARG GITHUB_PASSWORD declaration to your Dockerfile to get it to work.
I have a Dockerfile that includes several build stages. One of them is the frontend stage, where I recently added ARG BRANCH_NAME so I can use the branch name later as a variable (I do not have a .git folder in my frontend, which is why I use this approach):
FROM node:8.11 as frontend-builder
COPY frontend/package.json /frontend/package.json
COPY frontend/package-lock.json /frontend/package-lock.json
COPY ./VERSION /frontend/VERSION
WORKDIR /frontend
RUN npm install
COPY frontend/ /frontend
ARG BRANCH_NAME
RUN sed -i "s|VERSION|${BRANCH_NAME}-frontend#$(cat "VERSION")|g" src/environments/environment.prod.ts
RUN npm run build-prod
In gitlab-ci.yml I pass it like this:
build:
  stage: build
  before_script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.xx.it
  script:
    - export TARGET=backend-builder
    - export IMAGE=$CI_REGISTRY_IMAGE/$TARGET
    - docker pull $IMAGE:$CI_COMMIT_REF_NAME || echo "no image"
    - docker pull $IMAGE:latest || echo "no latest image"
    - docker build --target $TARGET -t $IMAGE:$CI_COMMIT_REF_NAME .
    - docker push $IMAGE:$CI_COMMIT_REF_NAME
    - export TARGET=frontend-builder
    - export IMAGE=$CI_REGISTRY_IMAGE/$TARGET
    - docker pull $IMAGE:$CI_COMMIT_REF_NAME || echo "no image"
    - docker pull $IMAGE:latest || echo "no latest image"
    - docker build --target $TARGET -t $IMAGE:$CI_COMMIT_REF_NAME --build-arg BRANCH_NAME=$CI_COMMIT_REF_NAME .
    - docker push $IMAGE:$CI_COMMIT_REF_NAME
    - export IMAGE=$CI_REGISTRY_IMAGE
    - docker pull $IMAGE:$CI_COMMIT_REF_NAME || echo "no branch image"
    - docker pull $IMAGE:latest || echo "no latest image"
    - docker build -t $IMAGE:$CI_COMMIT_REF_NAME .
    - docker push $IMAGE:$CI_COMMIT_REF_NAME
  tags:
    - local-docker
After - export TARGET=frontend-builder I changed the line:
- docker build --target $TARGET -t $IMAGE:$CI_COMMIT_REF_NAME .
to:
- docker build --target $TARGET -t $IMAGE:$CI_COMMIT_REF_NAME --build-arg BRANCH_NAME=$CI_COMMIT_REF_NAME .
so all I did was add --build-arg BRANCH_NAME=$CI_COMMIT_REF_NAME.
But now the stage seems to execute twice: once $BRANCH_NAME is set as it should be, and the other time $BRANCH_NAME is empty. Does anyone have an idea why that happens?
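A plausible cause, offered as an assumption based on the script shown: the final, untargeted docker build re-runs every stage of the multi-stage Dockerfile, including frontend-builder, and that invocation passes no --build-arg BRANCH_NAME, so Docker rebuilds the stage with the ARG empty. Passing the arg to the final build as well would keep the two runs consistent:

```shell
# Hypothetical fix: give the final, untargeted build the same build-arg,
# so the frontend-builder stage is not rebuilt with an empty BRANCH_NAME.
docker build -t "$IMAGE:$CI_COMMIT_REF_NAME" \
  --build-arg BRANCH_NAME="$CI_COMMIT_REF_NAME" .
```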
I am trying to make different commands execute depending on the OS of the original host. Part of the process involves a docker build, so I do not think that using the $(OS) string will help.
My current idea is to set an environment variable with uname in my Makefile and pass it as an environment variable to docker-compose:
copy:
	cp docker-compose.override.yml.dist docker-compose.override.yml
	cp .env.dist .env

dev: copy restart
	docker-compose exec cli sh

create: export TARGET=$(shell sh -c uname)
create: copy restart
	TARGET="$(TARGET)" docker-compose exec -T cli make build
	echo $(TARGET)
	echo $(TARGET)

build: export TARGET=$(shell sh -c uname)
build:
ifeq ($(TARGET),Darwin)
	cp terra/static.go.dist terra/static.go
	go run builder/main.go
	rm -rf coverage.out
	rm -rf dist/${CLI_VERSION}/osx
	mkdir -p dist/${CLI_VERSION}/osx
	GOOS=darwin GOARCH=amd64 CGO_ENABLED=0 go build -a -installsuffix cgo -o dist/${CLI_VERSION}/osx/mjolnir
	ls -la dist/${CLI_VERSION}/osx/mjolnir
endif
Unfortunately, this fails with the following output:
TARGET="Darwin" docker-compose exec -T cli make build
make: Nothing to be done for 'build'.
echo Darwin
Darwin
echo Darwin
Darwin
I would appreciate any pointers as to what I am doing wrong.
A typical setup here is to have a separate Make target for each target platform.
TARGET := $(shell uname)

build: build-$(TARGET)

build-Darwin:
	...
	GOOS=darwin go build ...
Once you have that, you can explicitly specify that build target in your command.
create: copy restart
	docker-compose run cli make build-$(TARGET)
You can also pass Make variables as command-line arguments, which will pass through the layers of Docker more easily than environment variables.
create: copy restart
	docker-compose run cli make build TARGET=$(TARGET)
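For illustration, the dispatch that the Make targets perform boils down to this plain-shell sketch (the platform names are the usual uname outputs):

```shell
# Pick a build recipe from the host platform, exactly as
# `build: build-$(TARGET)` does on the Make side:
case "$(uname)" in
  (Darwin) echo "running the macOS build recipe" ;;
  (Linux)  echo "running the Linux build recipe" ;;
  (*)      echo "unsupported platform: $(uname)" >&2; exit 1 ;;
esac
```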
I want to use GitLab runners to deploy a successfully built Docker image, but I am not sure how to use the deploy stage in .gitlab-ci.yml to do this. The build log shows the database is properly created in the Docker image during the build process.
I use Docker locally on a Mac (OSX 10.11.6) to build my Docker container; GitLab is running remotely. I registered a specific local runner to handle the build. When I push changes to my project, GitLab CI runs the build script to create a test database. But what happens to the image after it is built? There is no Docker image for the completed build listed on my local machine. The gitlab-runner-prebuilt-x86_64 image is a barebones Linux image that is not connected to the build.
https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
http://container-solutions.com/running-docker-in-jenkins-in-docker/
>gitlab-ci-multi-runner list
Listing configured runners ConfigFile=/Users/username/.gitlab-runner/config.toml
local-docker-executor Executor=docker Token=[token] URL=http://gitlab.url/ci
>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gitlab-runner-prebuilt-x86_64 f6fdece [id1] 25 hours ago 50.87 MB
php7 latest [id2] 26 hours ago 881.8 MB
ubuntu latest [id3] 13 days ago 126.6 MB
docker latest [id4] 2 weeks ago 104.9 MB
.gitlab-ci.yml:
image: php7:latest

# build_image:
#   script:
#     - docker build -t php7 .

# Define commands that run before each job's script
# before_script:
#   - docker info

# Define build stages
# First, all jobs of build are executed in parallel.
# If all jobs of build succeed, the test jobs are executed in parallel.
# If all jobs of test succeed, the deploy jobs are executed in parallel.
# If all jobs of deploy succeed, the commit is marked as success.
# If any of the previous jobs fails, the commit is marked as failed and no jobs of further stages are executed.
stages:
  - build
  - test
  - deploy

variables:
  db_name: db_test
  db_schema: "db_test_schema.sql"

build_job1:
  stage: build
  script:
    - service mysql start
    - echo "create database $db_name" | mysql -u root
    - mysql -u root $db_name < $db_schema
    - mysql -u root -e "show databases; use $db_name; show tables;"
    #- echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('root');" | mysql -u root
    #- echo "run unit test command here"
  # Defines a list of tags which are used to select the runner
  tags:
    - docker

deploy_job1:
  stage: deploy
  # this script is run inside the docker container
  script:
    - whoami
    - pwd
    - ls -la
    - ls /
    # Usage: docker push [OPTIONS] NAME[:TAG]
    # Push an image or a repository to a registry
    - docker push deploy:latest
  # gitlab runners will look for and run jobs with these tags
  tags:
    - docker
config.toml:
concurrent = 1
check_interval = 0

[[runners]]
  name = "local-docker-executor"
  url = "http://gitlab.url/ci"
  token = "[token]"
  executor = "docker"
  builds_dir = "/Users/username/DOCKER_BUILD_DIR"
  [runners.docker]
    tls_verify = false
    image = "ubuntu:latest"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
  [runners.cache]
Dockerfile:
FROM ubuntu:latest
#https://github.com/sameersbn/docker-mysql/blob/master/Dockerfile
ENV DEBIAN_FRONTEND noninteractive
ENV MYSQL_USER mysql
ENV MYSQL_DATA_DIR /var/lib/mysql
ENV MYSQL_RUN_DIR /run/mysqld
ENV MYSQL_LOG_DIR /var/log/mysql
ENV DB_NAME "db_test"
ENV DB_IMPORT "db_test_schema.sql"
# RUN apt-get update && \
# apt-get -y install sudo
# RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
# USER docker
# CMD /bin/bash
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
# \
# && rm -rf ${MYSQL_DATA_DIR} \
# && rm -rf /var/lib/apt/lists/*
ADD ${DB_IMPORT} /tmp/${DB_IMPORT}
# #RUN /usr/bin/sudo service mysql start \
# RUN service mysql start \
# && mysql -u root -e "CREATE DATABASE $DB_NAME" \
# && mysql -u root $DB_NAME < /tmp/$DB_IMPORT
RUN locale-gen en_US.UTF-8 \
&& export LANG=en_US.UTF-8 \
&& apt-get update \
&& apt-get -y install apache2 libapache2-mod-php7.0 php7.0 php7.0-cli php-xdebug php7.0-mbstring php7.0-mysql php-memcached php-pear php7.0-dev php7.0-json vim git-core libssl-dev libsslcommon2-dev openssl libssl-dev \
&& a2enmod headers
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
ln -sf /dev/stderr /var/log/apache2/error.log
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
#VOLUME [ "/var/www/html" ]
WORKDIR /var/www/html
EXPOSE 80 3306
#ENTRYPOINT [ "/usr/sbin/apache2" ]
#CMD ["-D", "FOREGROUND"]
#ENTRYPOINT ["/bin/bash"]
You're not building any Docker image on CI.
You are using the php7 image from Docker Hub to execute all jobs. This includes the job deploy_job1, which tries to use the docker binary to push an image (deploy:latest) that does not exist inside that container. Additionally, the docker binary is probably not even included in the php7 image.
I guess you want to push the image that you built locally on your Mac, right? In that case, you need to use another runner whose executor is shell. In that scenario you will have two runners: one using docker to run the build_job1 job, and another one to push the locally built image. But there is a better solution than building the Docker image manually: let GitLab CI build it.
So, modifying your .gitlab-ci.yml (removing your comments, adding mine for explanation):
# Removed global image definition
stages:
  - build
  - test
  - deploy

variables:
  db_name: db_test
  db_schema: "db_test_schema.sql"

build_job1:
  stage: build
  # Use image ONLY in docker runner
  image: php7:latest
  script:
    - service mysql start
    - echo "create database $db_name" | mysql -u root
    - mysql -u root $db_name < $db_schema
    - mysql -u root -e "show databases; use $db_name; show tables;"
  # Run on runner with docker executor, this is ok
  tags:
    - docker

deploy_job1:
  stage: deploy
  script:
    # Build the docker image first, and then push it
    - docker build -t deploy:latest .
    - docker push deploy:latest
  # Run on runner with shell executor, set proper tag
  tags:
    - docker_builder
When you register the new runner, set the executor to shell and the tag to docker_builder. I'm assuming that you have Docker Engine installed on your Mac.
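A possible registration command for that second runner (a sketch; the URL, token, and description are placeholders):

```shell
# Register a runner with the shell executor and the docker_builder tag,
# so deploy_job1 is picked up by it:
gitlab-runner register \
  --non-interactive \
  --url "http://gitlab.url/" \
  --registration-token "<token>" \
  --executor "shell" \
  --description "mac-shell-builder" \
  --tag-list "docker_builder"
```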
On the other hand, this example makes little sense, at least to me: the build stage does nothing persistent, as the container is ephemeral. I guess you should do that work in the Dockerfile instead.