I am trying to use GitLab CI for Docker-in-Docker builds and I have it up and running.
My issue is that the nice pipeline functionality of GitLab is somewhat lost, since the whole build, test, etc. flow is defined in the Dockerfile rather than in the YAML.
I only have a single stage in the YAML right now.
Is there a way to use Docker-in-Docker and still leverage GitLab pipelines?
Relevant files:
Dockerfile
# Stage 1 - require and update image
FROM microsoft/aspnetcore-build:2.0.0 AS build
# Update the image OS
RUN apt-get update \
&& apt-get install -y
# set the container working directory
WORKDIR /code
# Stage 2 - copy data
# caches restore result by copying csproj file separately
COPY src/Nuget.Config ./src/
COPY src/Inflow.NewsMl.FileLoader/Inflow.NewsMl.FileLoader.csproj ./src/Inflow.NewsMl.FileLoader/
RUN dotnet restore src/Inflow.NewsMl.FileLoader/Inflow.NewsMl.FileLoader.csproj --configfile /code/src/Nuget.Config
COPY src/Inflow.NewsMl.FileLoader.Test/Inflow.NewsMl.FileLoader.Test.csproj ./src/Inflow.NewsMl.FileLoader.Test/
RUN dotnet restore src/Inflow.NewsMl.FileLoader.Test/Inflow.NewsMl.FileLoader.Test.csproj --configfile /code/src/Nuget.Config
# Copy the rest
COPY . .
# Stage 3 - Build and test
RUN dotnet test src/Inflow.NewsMl.FileLoader.Test/Inflow.NewsMl.FileLoader.Test.csproj
# Stage 4 - Deploy to image
RUN dotnet publish src/Inflow.NewsMl.FileLoader --output /code/output --configuration Release
FROM microsoft/aspnetcore:2.0.0
COPY --from=build /code/output /app
WORKDIR /app
ENTRYPOINT [ "dotnet", "Inflow.NewsMl.FileLoader.dll" ]
And the CI runner YAML:
image: docker:latest

.before_script:
  - docker info

variables:
  DOCKER_DRIVER: overlay
  CONTAINER_TEST_IMAGE: infomedia/newsml.fileloader:$CI_COMMIT_SHA
  #$CI_COMMIT_TAG

services:
  - docker:dind

stages:
  - build

build:
  stage: build
  #only:
  #  - tags
  #except:
  #  - master
  script:
    - docker build -t $CONTAINER_TEST_IMAGE .
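One option (a sketch, not part of the original setup; the stage and tag names here are illustrative) is to keep Docker-in-Docker but have each GitLab stage build the Dockerfile only up to a named build stage with docker build --target, so that restore/test failures surface in their own pipeline stage rather than inside one monolithic image build. Using the build stage name from the Dockerfile above:

stages:
  - test
  - release

test:
  stage: test
  script:
    # stops after the SDK stage, which is where dotnet test runs
    - docker build --target build -t $CONTAINER_TEST_IMAGE-test .

release:
  stage: release
  script:
    # full build, producing the final runtime image
    - docker build -t $CONTAINER_TEST_IMAGE .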
Related
I'm kind of stuck on how to deploy something.
As a .NET developer I created a web app (just the simple WeatherForecast template) and tried to deploy it.
My CI/CD has 3 stages: build, test and deploy.
My first approach was to use an SSH key to SSH into my build server and transfer the files, because I had gitlab-runner running on my GitLab server rather than on my build server.
For security and other reasons, I have now installed gitlab-runner on my build server.
My build server includes all the tools I need: docker, docker-compose, dotnet, npm, ...
So I registered a new runner with the shell executor.
It starts to build the image, but once it gets into my Dockerfile it gets stuck because it can't find the csproj file. I have been trying to figure out where the executor is looking for the project file, but so far I can't work it out.
Hope someone can help me.
My Dockerfile (generated by Visual Studio):
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Test/Test.csproj", "Test/"]
RUN dotnet restore "Test/Test.csproj"
COPY . .
WORKDIR "/src/Test"
RUN dotnet build "Test.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Test.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Test.dll"]
My GitLab CI/CD file:
image: mcr.microsoft.com/dotnet/sdk:latest

stages:
  - build
  - test
  - deploy

before_script:
  - dotnet restore Test.sln

build:
  stage: build
  tags:
    - dev
  script:
    - 'dotnet build --no-restore Test.sln'

tests:
  stage: test
  tags:
    - dev
  script:
    - 'dotnet test --no-restore Test.sln'

deploy:
  stage: deploy
  environment: production
  tags:
    - dev
  before_script:
    - cd Test
  script:
    - 'docker build - < Dockerfile'
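Worth noting (an observation on the snippet above, not something stated in the question): docker build - < Dockerfile pipes only the Dockerfile to the daemon, so the build runs with no build context, and the COPY ["Test/Test.csproj", "Test/"] instruction has nothing to copy from, which would match the "can't find the csproj file" symptom. A minimal sketch of the deploy job that keeps the repository root as the build context (the image name test-app is just a placeholder):

deploy:
  stage: deploy
  environment: production
  tags:
    - dev
  script:
    # run from the repository root so the Dockerfile's COPY paths resolve
    - docker build -f Test/Dockerfile -t test-app .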
We're using GitLab CI/CD for deployment. This is the publish stage, where the Dockerfile is used. If you look at the script, I've integrated one environment variable (environment_name), because we use two publish jobs, one for developer and one for stage; the stage job is shown here.
docker_build_stage:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --build-arg environment_name=stage -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
    - stage
This is the Dockerfile we're using:
FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/provider-service-*.jar /app/provider-service.jar
ENV PORT 8092
ENV env_var_name=$environment_name
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active="$env_var_name,"-jar","/app/provider-service.jar"]
In the Dockerfile, the last line (the ENTRYPOINT) uses the environment variable defined by ENV env_var_name=$environment_name. Previously the value was hard-coded as active=stage, because we maintain a separate branch per environment. Now we have merged the developer and stage environments into a single script, and we are facing a fetching issue: the pipeline is successful, but the environment value is not picked up.
I did not completely understand the issue you are describing, but I see you are missing an ARG instruction in the Dockerfile, which is required to declare the build-arg that you are passing if you want to use it in the build stage, in this case the "$environment_name" argument.
You might change it like this:
FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
ARG environment_name
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/provider-service-*.jar /app/provider-service.jar
ENV PORT 8092
ENV env_var_name=$environment_name
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active="$env_var_name,"-jar","/app/provider-service.jar"]
I have installed GitLab on an Ubuntu machine, and I have a .NET Core project named ABC in GitLab.
The ABC repo contains multiple small .NET Core applications in different directories, such as abc1, abc2, abc3 and abc4.
I want to write a single pipeline under ABC that creates a Docker image whenever a developer pushes code to one of those directories, but it should build the image only for that directory.
E.g.: a developer pushes code under the abc3 directory; the pipeline runs and creates the Docker image for the abc3 directory only.
Please help me out with it.
Thanks in advance!
Below are my pipeline and Dockerfile:
stages:
  - docker
  - build

services:
  - docker:dind

before_script:
  - "echo $gitlab"

docker-job:
  stage: docker
  image: docker:dind
  script:
    - docker login -u username -p password $CI_REGISTRY
    - docker build -t dotnetcore .
    #- docker push $IMAGE_PUSH:latest

build:
  stage: build
  tags:
    - shell
  image: mcr.microsoft.com/dotnet/sdk
  script:
    - dotnet restore
    - dotnet build
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
ENV ASPNETCORE_URLS=http://+:80
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["dotnetcore.csproj","./"]
RUN dotnet restore "dotnetcore.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "dotnetcore.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "dotnetcore.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "dotnetcore.dll"]
With this pipeline and Dockerfile I am only able to build the "dotnetcore" project, but I have dotnetcore1, dotnetcore2 and dotnetcore3 projects under the same repo.
GitLab CI has an only:changes feature. You can define multiple jobs, one per Docker image.
job-a:
  script: docker build a
  only:
    changes:
      - a/**/*

job-b:
  script: docker build b
  only:
    changes:
      - b/**/*
Assuming you have a small number of directories, you can create a job for each directory and use rules with changes to check whether changes were made in that particular directory, as in the sketch below.
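For example, with rules:changes a per-directory job could look roughly like this (assuming each project directory, e.g. dotnetcore1, contains its own Dockerfile):

build-dotnetcore1:
  stage: docker
  image: docker:dind
  rules:
    - changes:
        - dotnetcore1/**/*
  script:
    # build only the project whose files changed
    - docker build -t dotnetcore1 dotnetcore1/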
I'm trying to build my React / Node.js project using Docker and GitLab CI.
When I build my images manually, I use a .env file containing env vars, and everything is fine.
docker build --no-cache -f client/docker/local/Dockerfile . -t espace_client_client:local
docker build --no-cache -f server/docker/local/Dockerfile . -t espace_client_api:local
But when deploying with GitLab, I can build the image successfully, but when I run it, the env vars are empty in the client.
Here is my GitLab CI:
image: node:10.15

variables:
  REGISTRY_PACKAGE_CLIENT_NAME: registry.gitlab.com/company/espace_client/client
  REGISTRY_PACKAGE_API_NAME: registry.gitlab.com/company/espace_client/api
  REGISTRY_URL: https://registry.gitlab.com
  DOCKER_DRIVER: overlay
  # Client Side
  REACT_APP_API_URL: https://api.espace-client.company.fr
  REACT_APP_DB_NAME: company
  REACT_APP_INFLUX: https://influx-prod.company.fr
  REACT_APP_INFLUX_LOGIN: admin
  REACT_APP_HOUR_GMT: 2

stages:
  - publish

docker-push-client:
  stage: publish
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY_URL
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build --no-cache -f client/docker/prod/Dockerfile . -t $REGISTRY_PACKAGE_CLIENT_NAME:latest
    - docker push $REGISTRY_PACKAGE_CLIENT_NAME:latest
Here is the Dockerfile for the client
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
Why is there such a difference between the two processes?
According to your answer in the comments, GitLab CI/CD environment variables don't solve your issue. The GitLab CI environment only exists in the context of the GitLab Runner that builds and/or deploys your app.
So, if you want to propagate env vars to the app, there are several ways to deliver variables from .gitlab-ci.yml to your app:
ENV instructions in the Dockerfile
E.g.
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
ENV REACT_APP_API_URL https://api.espace-client.company.fr
ENV REACT_APP_DB_NAME company
ENV REACT_APP_INFLUX https://influx-prod.company.fr
ENV REACT_APP_INFLUX_LOGIN admin
ENV REACT_APP_HOUR_GMT 2
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
docker-compose environment directive
web:
  environment:
    - NODE_ENV=production
    - REACT_APP_API_URL=https://api.espace-client.company.fr
    - REACT_APP_DB_NAME=company
    - REACT_APP_INFLUX=https://influx-prod.company.fr
    - REACT_APP_INFLUX_LOGIN=admin
    - REACT_APP_HOUR_GMT=2
docker run -e
(Not your case, just for information)
docker run -e REACT_APP_DB_NAME="company" <image>
P.S. Try GitLab CI variables.
There is a convenient way to store variables outside of your code: custom environment variables.
You can set them up easily from the UI. That can be very powerful, as it can be used for scripting without the need to specify the value itself.
(source: gitlab.com)
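Note that variables defined in the UI (or in .gitlab-ci.yml) are only visible to the runner process, not inside docker build, so for a React bundle compiled during the image build they still have to be handed over explicitly. A minimal sketch of that hand-off, reusing REACT_APP_API_URL from the configuration above:

# .gitlab-ci.yml job script
- docker build --build-arg REACT_APP_API_URL=$REACT_APP_API_URL -f client/docker/prod/Dockerfile . -t $REGISTRY_PACKAGE_CLIENT_NAME:latest

# Dockerfile (before RUN npm run build)
ARG REACT_APP_API_URL
ENV REACT_APP_API_URL $REACT_APP_API_URL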
I'm doing a multi-stage Docker build:
# Dockerfile
########## Build stage ##########
FROM golang:1.10 as build
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
########## Final stage ##########
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
and the Makefile contains in part:
build: bin
	go build -v -o bin/my-daemon cmd/my-daemon/main.go

bin:
	mkdir $@
This all works just fine with a docker build.
Now I want to use Codeship, so I have:
# codeship-services.yml
cachemanager:
  build:
    image: my-daemon
    dockerfile: Dockerfile
and:
# codeship-steps.yml
- name: my-daemon build
  tag: master
  service: my-service
  command: true
The issue is that if I run jet steps --master, it builds everything OK, but then it runs the container as if I had done a docker run. Why? I don't want it to do that.
It's as if I needed two separate Dockerfiles: one only for the build stage and one only for the run stage, and had to use the former with jet. But that defeats the point of Docker multi-stage builds.
I was able to solve this problem using multi-stage builds split into two different files, following this guide: https://documentation.codeship.com/pro/common-issues/caching-multi-stage-dockerfile/
Basically, you take your existing Dockerfile and split it into two files like so, with the second referencing the first:
# Dockerfile.build-stage
FROM golang:1.10 as build-stage
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
# Dockerfile
FROM build-stage as build-stage
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build-stage $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
Then, in your codeship-services.yml file:
# codeship-services.yml
cachemanager-build:
  build:
    dockerfile: Dockerfile.build-stage

cachemanager-app:
  build:
    image: my-daemon
    dockerfile: Dockerfile
And in your codeship-steps.yml file:
# codeship-steps.yml
- name: cachemanager build
  tag: master
  service: cachemanager-build
  command: <here you can run tests or linting>

- name: publish to registry
  tag: master
  service: cachemanager-app
  ...
I don't think you want to actually run the image built from the Dockerfile, because that will start your app. We use the second stage to push a smaller build to an image registry.