I am new to Docker. I want to pass a gitlab-ci variable to my Dockerfile. I have tried a lot of things but nothing works. Below is my gitlab-ci.yml:
api-tests:
  image: test-img
  stage: test
  services:
    - docker:abc
  variables:
    privileged: "true"
    DOCKER_HOST: tcp://localhost:2375
  script:
    - apk add dialog && apk add bind-tools
    - docker login -u $ci_account -p $ci_token $REGISTRY_HOST
    - git clone --single-branch --depth 1 --recurse-submodules --branch master ssh://git@git.easygroup.co:1234/test/code.git test && cd test
    - cd -
    - make test-project
    - TEST_PATH=first make test; r=$?
  after_script:
    - docker ps -a
    - docker logs --tail=50 test-project
Thanks
You need to create environment variables using the variables section of .gitlab-ci.yml. Once created, environment variables can be accessed by your script or any other process in the job.
GitLab docs:
Creating a custom environment variable
Assume you have something you want to repeat through your scripts in GitLab CI/CD’s configuration file. To keep this example simple, let’s say you want to output HELLO WORLD for a TEST variable.
You can either set the variable directly in the .gitlab-ci.yml file or through the UI.
Via .gitlab-ci.yml
To create a new custom env_var variable via .gitlab-ci.yml, define the variable/value pair under variables:
variables:
  TEST: "HELLO WORLD"
For a deeper look into them, see .gitlab-ci.yml defined variables.
More info here: https://docs.gitlab.com/ee/ci/variables/#gitlab-ciyml-defined-variables
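Note that a variable defined this way is only an environment variable in the job's shell; to get it into the Dockerfile itself you still have to pass it through docker build as a build-arg. A minimal sketch, assuming a hypothetical variable MY_VAR:
  variables:
    MY_VAR: "some value"
  script:
    - docker build --build-arg MY_VAR="$MY_VAR" -t my-image .
And in the Dockerfile, declare it before first use:
  # pulls in the value passed via --build-arg; available only at build time
  ARG MY_VAR
  RUN echo "building with $MY_VAR"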
We have used the technique detailed here to expose host environment variables to the Docker build in a secure fashion.
# syntax=docker/dockerfile:1.2
FROM golang:1.18 AS builder
# move secrets out of the build process (and docker history)
RUN --mount=type=secret,id=github_token,dst=/app/secret_github_token,required=true,uid=10001 \
export GITHUB_TOKEN=$(cat /app/secret_github_token) && \
<nice command that uses $GITHUB_TOKEN>
And this command to build the image:
export DOCKER_BUILDKIT=1
docker build --secret id=github_token,env=GITHUB_TOKEN -t cool-image-bro .
The above works perfectly.
Now we also have a docker-compose file running in CI that needs to be modified. However, even though I confirmed that the env vars are present in that job, I do not know how to assign the environment variable to the secret ID github_token.
In other words, what is the equivalent docker-compose command (up --build, or build) that can accept a mapping of an environment variable with a secret ID?
Turns out I was a bit ahead of the times. Docker Compose v2.5.0 brings support for secrets.
After modifying the Dockerfile as explained above, we must then update the docker-compose file to define the secrets.
docker-compose.yml
services:
  my-cool-app:
    build:
      context: .
      secrets:
        - github_user
        - github_token
    ...

secrets:
  github_user:
    file: secrets_github_user
  github_token:
    file: secrets_github_token
But where are those files secrets_github_user and secrets_github_token coming from? In your CI you also need to export the environment variables and save them to the expected secrets file locations. In our project we are using Tasks, so we added these two lines.
Note that we are running this task from our CI, so you could do it differently, without Tasks for example.
- printenv GITHUB_USER > /root/project/secrets_github_user
- printenv GITHUB_TOKEN > /root/project/secrets_github_token
We then update the CircleCI config and add two environment variables to our job:
.config.yml
name-of-our-job:
  environment:
    DOCKER_BUILDKIT: 1
    COMPOSE_DOCKER_CLI_BUILD: 1
You might also need a more recent Docker version; I think it was introduced in a late 19.x or early 20.x release. I have used this and it works:
steps:
  - setup_remote_docker:
      version: 20.10.11
Now when running your docker-compose based commands, the secrets should be successfully mounted through docker-compose and available to correctly build or run your Dockerfile instructions!
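For reference, with the variables above set and the secrets files written, the build itself is just the regular compose invocation; a sketch, assuming the compose file shown earlier:
  export DOCKER_BUILDKIT=1
  export COMPOSE_DOCKER_CLI_BUILD=1
  docker-compose build my-cool-app   # or: docker-compose up --build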
I have a Node.js project which I run as a Docker container in different environments (local, stage, production) and therefore configure via .env files. As is generally advised, I don't store the .env files in my remote repository, which is GitLab. My production and stage systems run as Kubernetes clusters.
What I want to achieve is an automated build via GitLab CI for different environments (e.g. stage) depending on the commit branch (named stage as well), meaning when I push to origin/stage I want a Docker image to be built for my stage environment with the corresponding .env file in it.
On my local machine it's pretty simple: since I have all the different .env files in the root folder of my app, I just use this in my Dockerfile
COPY .env-stage ./.env
and everything is fine.
Since I don't store the .env files in my remote repo, this approach doesn't work, so I used GitLab CI variables and created a variable named DOTENV_STAGE of type file with the contents of my local .env-stage file.
Now my problem is: how do I get that content as a .env file inside the Docker image that is going to be built by GitLab, given that the file is not yet in my repo but a variable instead?
I tried using cp (see below, also in the before_script section) to copy the file to a .env file during the build process, but that obviously doesn't work.
My current build stage looks like this:
image: docker:git
services:
  - docker:dind

build stage:
  only:
    - stage
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - cp $DOTENV_STAGE .env
    - docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
    - docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
    - docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
This results in
Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
I also tried cp $DOTENV_STAGE .env as well as cp $DOTENV_STAGE $CI_BUILDS_DIR/.env and cp $DOTENV_STAGE $CI_PROJECT_DIR/.env but none of them worked.
So the part I actually don't know is: Where do I have to put the file in order to make it available to docker during build?
Thanks
You should avoid copying the .env file into the container altogether. Rather, feed it from outside at runtime. There's a dedicated property for that: env_file.
web:
  env_file:
    - .env
You can store the contents of the .env file itself in a masked variable in the GitLab CI backend, then dump it to a .env file on the runner and feed it to the Docker Compose pipeline.
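A minimal sketch of that job, assuming a hypothetical masked variable named DOTENV_CONTENTS:
  deploy:
    script:
      # write the masked variable out as the .env file Compose reads via env_file
      - echo "$DOTENV_CONTENTS" > .env
      - docker-compose up -d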
After some more research I stumbled upon a support-forum entry on gitlab.com which exactly described my situation (unfortunately it has since been deleted), and it was solved by the same approach I was trying to use, namely this:
...
script:
  - cp $DOTENV_STAGE $CI_PROJECT_DIR/.env
...
in my .gitlab-ci.yml
The part I was actually missing was adjusting my .dockerignore file accordingly (removing .env from it) and then removing the line
COPY .env ./.env
from my Dockerfile.
An alternative approach I thought about after joyarjo's answer could be to use a ConfigMap in Kubernetes. But I didn't try it yet.
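For completeness, that untried alternative might look roughly like this (a sketch only; the ConfigMap name is hypothetical):
  # create a ConfigMap from the local .env-stage file
  kubectl create configmap app-env-stage --from-env-file=.env-stage
The deployment would then pull it in via envFrom:
  envFrom:
    - configMapRef:
        name: app-env-stage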
I have the following .gitlab-ci.yml
stages:
  - test
  - build
  - art

image: golang:1.9.2

variables:
  BIN_NAME: example
  ARTIFACTS_DIR: artifacts
  GO_PROJECT: example
  GOPATH: /go

before_script:
  - mkdir -p ${GOPATH}/src/${GO_PROJECT}
  - mkdir -p ${CI_PROJECT_DIR}/${ARTIFACTS_DIR}
  - go get -u github.com/golang/dep/cmd/dep
  - cp -r ${CI_PROJECT_DIR}/* ${GOPATH}/src/${GO_PROJECT}/
  - cd ${GOPATH}/src/${GO_PROJECT}

test:
  stage: test
  script:
    # Run all tests
    - go test -run ''

build:
  stage: build
  script:
    # Compile and name the binary as `hello`
    - go build -o hello
    - pwd
    - ls -l hello
    # Execute the binary
    - ./hello
    # Move to gitlab build directory
    - mv ./hello ${CI_PROJECT_DIR}
  artifacts:
    paths:
      - ./hello
The issue is my program depends on both Go and MySQL...
I am aware I can have a different Docker image for each stage, but my test stage needs both
go test and MySQL.
What I have looked into:
I have learned how to create my own Docker image using docker commit and also how to use a Dockerfile to build an image.
However, I have heard there are ways to link Docker containers together using docker-compose, and this seems like a better method...
I have no idea how to go about this in GitLab. I know I need a compose.yml file, but I'm not sure where to put it or what needs to go in it. Does it create an image that I then link to from my .gitlab-ci.yml file?
Perhaps this is overkill and there is a simpler way?
I understand your tests need a MySQL server in order to work and that you are using some kind of MySQL client or driver in your Go tests.
You can use a GitLab CI service, which will be made available during your test job. GitLab CI will run a MySQL container beside your Go container, reachable from the Go container via its name. For example:
test:
  stage: test
  services:
    - mysql:5.7
  variables:
    # Configure mysql environment variables (https://hub.docker.com/_/mysql/)
    MYSQL_DATABASE: mydb
    MYSQL_ROOT_PASSWORD: password
  script:
    # Run all tests
    - go test -run ''
This will start a MySQL container reachable from the Go container via the hostname mysql. Note you'll need to define variables for the MySQL startup as per the Environment Variables section of the image documentation (such as the root password or the database to create).
You can also define the service globally (it will then be made available for each job in your pipeline) and use an alias so the MySQL server is reachable under another hostname.
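A sketch of that global form, with a hypothetical db alias:
  # top-level: this service starts for every job in the pipeline
  services:
    - name: mysql:5.7
      alias: db  # jobs can now reach MySQL at hostname "db"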
I was previously using the shell executor for my GitLab runner to build my project. So far I have set up the pipeline so that it runs whatever commands I have set in the gitlab-ci.yml file, seen below:
gitlab-ci.yml using shell runner
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Now I want to switch to a Docker image. I have reconfigured the runner to use a Docker image, and I specified the image in my new gitlab-ci.yml file, seen below. I followed the gitlab-ci Docker tutorial and this is where it left off, so I'm not entirely sure where to go from here:
gitlab-ci.yml using docker runner
image: node:8.10.0

before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Questions:
With my current gitlab-ci.yml file, does this build a Docker image at all, and if so, what does that mean? The pipeline currently passes, but I have no idea whether it ran inside a Docker image or not (am I supposed to be able to tell?).
Also, let's say the Docker image was created, the tests ran, and the pipeline passed; it should then push the code to a new repository (not included in the yml file yet). From what I gathered, the image isn't being pushed, just the code, right? So what do I do with this created Docker image?
How does the Dockerfile get used? I see no link between the gitlab-ci.yml file and the Dockerfile.
Do I need to surround all commands in the gitlab-ci.yml file with docker run <commands> or docker exec <commands>? Without one of these two commands, it seems like everything would just run on the server and not in a Docker image.
I've seen people specify an image in both the gitlab-ci.yml file and the Dockerfile. I have an Angular project, and I specified an image of image: node:8.10.0. In the Dockerfile, should I specify the same image? I've seen some projects where they are completely different, and I'm wondering what the use of both images is and whether picking one image over another will severely impact my builds.
You have to take a different approach to building your app if you want to fully dockerize it. Move the Angular steps into a Dockerfile and put Docker operations inside your .gitlab-ci.yml instead of the Angular commands, like here:
stages:
  - build
# - release
# - deploy

.build_template: &build_definition
  stage: build
  image: docker:17.06
  services:
    - docker:17.06-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_IMAGE -f $DOCKERFILE ./
    - docker push $CONTAINER_IMAGE

build_app_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/app:latest
    DOCKERFILE: ./Dockerfile.app

build_nginx_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/nginx:latest
    DOCKERFILE: ./Dockerfile
You can set up a few build jobs - for production, development, staging etc.
Right next to your .gitlab-ci.yaml you can put Dockerfile and Dockerfile.app - Dockerfile.app is responsible for building your Angular app:
FROM node:10.5.0-stretch
RUN mkdir -p /usr/src/app
RUN mkdir -p /usr/src/remote
WORKDIR /usr/src/app
COPY . .
# do your commands here
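What those elided commands are depends on your project; a plausible sketch for an Angular app (hypothetical, adjust to your setup):
  # hypothetical build steps: install dependencies and compile the app
  RUN npm ci
  RUN npx ng build --prod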
Now with your app built, it can be served via a web server - it's your choice, and a different configuration follows with each choice - I can't even scratch the surface here. That'd be implemented in the Dockerfile - we usually use Nginx in our company.
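As an illustration only, the Nginx Dockerfile could be as simple as the following (the dist/ path is an assumption; in practice you'd copy the output from the app image or a multi-stage build):
  # serve the compiled app with Nginx; dist/ is the assumed build output
  FROM nginx:1.21-alpine
  COPY dist/ /usr/share/nginx/html/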
From here on it's about releasing your images and deploying them. I've only specified how to build them in docker as it seems this is what the question is about.
If you want to deploy your image and run it somewhere - choose a provider - AWS, Heroku, your own infrastructure - have it your way, but this is far too much to cover in a single answer, so I'll leave it for another question when you specify where you'd like to deploy your newly built images and how you'd like to serve them. In our company, we orchestrate things with Rancher, but there are multiple awesome and competing options in the market.
Edit for a custom registry
The above .gitlab-ci configuration works with GitLab's "internal" registry only. In case you want to use your own registry, change the values accordingly:
#previous configs
script:
- docker login -u mysecretlogin -p mysecretpasswd registry.local.com
# further configs
- from -u gitlab-ci-token to your login in the registry,
- from $CI_JOB_TOKEN to your password,
- from $CI_REGISTRY to your registry address.
Those values should be stored in GitLab's secret CI variables and referenced via env variables so that they are not saved in the repository.
Finally, your script might look like the below in case you decided to protect these values. Refer to GitLab's official docs on how to add secret CI variables - a super easy task.
#previous configs
script:
- docker login -u $registrylogin -p $registrypasswd $registryaddress
# further configs
If I use an environment variable in the circle.yml below, it fails, but if I statically type the machine name it works.
How can I properly reference environment variables in CircleCI?
version: 2
executorType: machine
stages:
  build:
    workDir: ~/app
    enviroment:
      - IMAGE_NAME: "nginx-ks8-circleci-hello-world"
      # - AWS_REGISTER: "096957576271.dkr.ecr.us-east-1.amazonaws.com"
    steps:
      - type: checkout
      - type: shell
        name: Build the Docker image
        shell: /bin/bash
        command: |
          docker build --rm=false -t $IMAGE_NAME .
I checked your syntax against this example from the CircleCI docs, https://circleci.com/docs/2.0/language-python/#config-walkthrough, so you have to remove the hyphen (note also that the key is spelled environment):
environment:
  IMAGE_NAME: "nginx-ks8-circleci-hello-world"
That's for environment variables inside the Docker image on CircleCI 2.0.
CircleCI runs each command in a subshell, so there isn't a way to set environment variables for the CircleCI build from within the build itself.
Instead, use the actual CircleCI environment variables:
https://circleci.com/gh/{yourOrganization}/{yourRepo}/edit#env-vars
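Built-in variables are also available in every step without any setup; for example (a sketch: CIRCLE_SHA1 is a real built-in, the tag scheme is just an illustration):
  command: |
    docker build --rm=false -t $IMAGE_NAME:$CIRCLE_SHA1 .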