How to use ${CI_JOB_TOKEN} > .netrc without messing up the Docker cache

I have some repos on GitLab with CI/CD configured.
This is the build job:
Build Staging:
  stage: build
  image: docker:19.03.1
  services:
    - docker:19.03.1-dind
  before_script:
    - apk --update --no-cache add openssh-client curl py-pip gettext
    - pip install awscli
    - echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > .netrc
  script:
    - $(aws ecr get-login --no-include-email --region sa-east-1)
    - docker pull $AWS_ECR:latest || true
    - docker build --cache-from $AWS_ECR:latest...
And my Dockerfile is the following:
FROM golang:latest
WORKDIR $GOPATH/src/api-v2
COPY go.mod go.sum ./
COPY .netrc /root/
RUN go mod download && go mod verify
COPY . $GOPATH/src/api-v2
...
RUN go build
EXPOSE 8080
CMD [ "api-v2" ]
With this Dockerfile, if my dependencies don't change, Docker is supposed to use the cache up to the 6th line, and that is exactly what happens when I run docker build locally.
That said, whenever GitLab CI triggers a build, it stops using the cache at line 4:
COPY .netrc /root/
That happens because the .netrc content changes on every pipeline (CI_JOB_TOKEN is unique per job), thanks to this line:
- echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > .netrc
I thought about using a fixed user/password that would be obtained from GitLab variables:
- echo -e "machine gitlab.com\nlogin ${gitlab-user-var}\npassword ${gitlab-pwd-var}" > .netrc
But that doesn't seem right.
What is the better / recommended / right way of using a .netrc to authenticate against GitLab without messing up the Docker image cache?
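One way to avoid the cache bust entirely (a sketch of mine, not from the thread, assuming the runner's Docker 19.03 can enable BuildKit) is to mount the .netrc as a BuildKit secret instead of COPYing it, so the token never enters a layer checksum:

# syntax=docker/dockerfile:1.0-experimental
FROM golang:latest
WORKDIR $GOPATH/src/api-v2
COPY go.mod go.sum ./
# The secret is visible only during this RUN step and is never stored in a
# layer, so a CI_JOB_TOKEN that differs per job cannot invalidate the cache.
RUN --mount=type=secret,id=netrc,target=/root/.netrc go mod download && go mod verify

The build command then becomes DOCKER_BUILDKIT=1 docker build --secret id=netrc,src=.netrc . (the secret id "netrc" is my own choice here).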

Related

Cannot clone a private Bitbucket repo from inside a Docker image using docker-compose

I am trying to clone a private Git repo hosted on Bitbucket Cloud from inside a docker image. I build the image using docker-compose (ver 3.9).
I have added the public key as an Access Key in the Repo settings in Bitbucket. 
Here is the error I get:
=> ERROR [16/19] RUN git clone git@bitbucket.org:some_repo/imp_hmi.git 0.7s
------
> [16/19] RUN git clone git@bitbucket.org:some_repo/imp_hmi.git:
#0 0.331 Cloning into 'imp_hmi'...
#0 0.636 Host key verification failed.
#0 0.636 fatal: Could not read from remote repository.
#0 0.636
#0 0.636 Please make sure you have the correct access rights
#0 0.636 and the repository exists.
I can clone the repo using the same SSH keys on the host machine.
Now, for the Dockerfile:
# Update this value when the version changes.
ARG UNITY_VERSION=2020.3.13f1
#ARG HMI_CONFIG=niro_av71oxu.yaml
FROM unityci/editor:ubuntu-${UNITY_VERSION}-linux-il2cpp-1.0.1 AS base
USER root
ENV HOME /home/root
# # don't ask interactive questions
ENV DEBIAN_FRONTEND noninteractive
# Create user bobsaccamano
RUN useradd -m -r bobsaccamano
RUN usermod -aG adm,cdrom,sudo,audio,dip,video,plugdev bobsaccamano
# Setup SSH keys
RUN mkdir -p -m 0700 /home/bobsaccamano/.ssh
COPY id-docker-unity /home/bobsaccamano/.ssh/
RUN chown bobsaccamano:bobsaccamano /home/bobsaccamano/.ssh/id-docker-unity
RUN chmod 600 /home/bobsaccamano/.ssh/id-docker-unity
COPY id-docker-unity.pub /home/bobsaccamano/.ssh/
RUN chown bobsaccamano:bobsaccamano /home/bobsaccamano/.ssh/id-docker-unity.pub
RUN touch /home/bobsaccamano/.ssh/known_hosts && chown bobsaccamano:bobsaccamano /home/bobsaccamano/.ssh/known_hosts
RUN ssh-keyscan bitbucket.org >> /home/bobsaccamano/.ssh/known_hosts
RUN cat /home/bobsaccamano/.ssh/id-docker-unity
# Change to bobsaccamano user
USER bobsaccamano
ENV HOME /home/bobsaccamano
ENV HMI_BUILT ${HOME}/HMI_built
# Create folders
RUN mkdir -p -m 0700 /home/bobsaccamano/proj/
RUN mkdir -p -m 0700 ${HMI_BUILT}
# Pull Repositories
WORKDIR /home/bobsaccamano/proj/
RUN git clone git@bitbucket.org:some_repo/imp_hmi.git
# Build HMI
RUN cd imp_hmi && chmod +x build_hmi.sh
RUN . build_hmi.sh DEV
WORKDIR ${HOME}
#RUN apt-get -y update
# WORKDIR /home/unity_volume
The docker-compose.yml file:
version: "3.9"
services:
  unity_base:
    build:
      context: .
      dockerfile: Dockerfile.unity
      # args:
      #   progress: plain
    volumes:
      - hmi_built:/home/bobsaccamano/HMI_built
    container_name: unity-base
  hmi_app:
    build:
      context: .
      dockerfile: Dockerfile.hmi
    depends_on:
      - unity_base
    volumes:
      - hmi_built:/home/bobsaccamano/HMI_built
    container_name: hmi-app
volumes:
  hmi_built:
Any help is much appreciated!
You should use a Personal Access Token instead. Check the PAT docs. A PAT also gives you finer-grained control over what its holder can do.
Don't put your SSH keys inside the Docker image: if you ever distribute the image, you distribute your keys with it.
On a more general note, the workflow you are trying to apply is wrong in my opinion. It doesn't really make sense to perform those operations in a Dockerfile. What I would do instead is fork the Git repo (if it is not already yours, of course) and add a Dockerfile and docker-compose.yml to it. Then whoever has access to the project can also build an image from it directly.
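If a clone during the build is unavoidable, a way to keep the key out of the image entirely (my sketch, assuming BuildKit is available; this is not part of the answer above) is SSH agent forwarding into the build:

# syntax=docker/dockerfile:1
FROM unityci/editor:ubuntu-2020.3.13f1-linux-il2cpp-1.0.1 AS base
# Trust bitbucket.org so the clone does not fail host key verification.
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
# The key is borrowed from the host's ssh-agent for this step only and is
# never written into a layer.
RUN --mount=type=ssh git clone git@bitbucket.org:some_repo/imp_hmi.git

Run ssh-add id-docker-unity on the host first, then build with docker build --ssh default .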
You need to add these lines to the Dockerfile:
RUN eval "$(ssh-agent -s)" && \
    chmod 600 /home/bobsaccamano/.ssh/id-docker-unity && \
    ssh-add /home/bobsaccamano/.ssh/id-docker-unity
RUN ssh -o UserKnownHostsFile=/home/bobsaccamano/.ssh/known_hosts -o StrictHostKeyChecking=no git@bitbucket.org

cloud build pass secret env to dockerfile

I am using Google Cloud Build to build a Docker image and deploy it in Cloud Run. The module has private dependencies on GitHub. In the cloudbuild.yaml file I can access secret keys, for example the GitHub token, but I don't know the correct and secure way to pass this token to the Dockerfile.
I was following the official guide "Accessing GitHub from a build via SSH keys", but it only works in the cloudbuild.yaml scope, not in the Dockerfile.
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "."]
  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA"]
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: [
      "run", "deploy", "$REPO_NAME",
      "--image", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA",
      "--platform", "managed",
      "--region", "us-east1",
      "--allow-unauthenticated",
      "--use-http2",
    ]
images:
  - gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/GITHUB_USER/versions/1
      env: "GITHUB_USER"
    - versionName: projects/$PROJECT_ID/secrets/GITHUB_TOKEN/versions/1
      env: "GITHUB_TOKEN"
Dockerfile
# [START cloudrun_grpc_dockerfile]
# [START run_grpc_dockerfile]
FROM golang:buster as builder
# Create and change to the app directory.
WORKDIR /app
# Create /root/.netrc with GitHub credentials
RUN echo machine github.com >> /root/.netrc
RUN echo login "GITHUB_USER" >> /root/.netrc
RUN echo password "GITHUB_PASSWORD" >> /root/.netrc
# Configure Git; this creates the file /root/.gitconfig
RUN git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"
# GOPRIVATE
RUN go env -w GOPRIVATE=github.com/org/repo
# Do I need to remove the /root/.netrc file? I don't want this information to be propagated or seen by third parties.
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
# RUN go build -mod=readonly -v -o server ./cmd/server
RUN go build -mod=readonly -v -o server
# Use the official Debian slim image for a lean production container.
# https://hub.docker.com/_/debian
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM debian:buster-slim
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
# [END run_grpc_dockerfile]
# [END cloudrun_grpc_dockerfile]
After trying for two days I have not found a solution; the simplest thing I could do was to generate the vendor folder, commit it to the repository, and avoid go mod download.
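For reference, that vendoring workaround boils down to the following (my sketch; the question doesn't show it):

# copy all module dependencies into ./vendor and commit them
go mod vendor
git add vendor && git commit -m "vendor dependencies"
# the Dockerfile then builds without any network access to GitHub:
# RUN go build -mod=vendor -v -o server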
You have several ways to do this.
With Docker, when you run a build, it runs in an isolated environment (that's the principle of isolation), so you don't have access to your environment variables from inside the build process.
To solve that, you can use build args and put your secret values in that parameter.
But there is a trap: in Cloud Build you have to use bash code, not the built-in step syntax. Let me show you:
# Doesn't work
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "--build-arg", "GITHUB_USER=$GITHUB_USER", "--build-arg", "GITHUB_TOKEN=$GITHUB_TOKEN", "."]

# Working version
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  entrypoint: bash
  args:
    - -c
    - |
      docker build -t gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA --build-arg GITHUB_USER=$$GITHUB_USER --build-arg GITHUB_TOKEN=$$GITHUB_TOKEN .
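On the Dockerfile side, a minimal sketch of consuming those build args (my illustration, not part of the original answer) could be:

FROM golang:buster as builder
ARG GITHUB_USER
ARG GITHUB_TOKEN
WORKDIR /app
COPY go.* ./
# Write the credentials, download modules, and delete the file in the same
# layer so it never persists in the image filesystem. Build args still show
# up in `docker history`, so BuildKit secrets are the safer long-term option.
RUN printf 'machine github.com\nlogin %s\npassword %s\n' "$GITHUB_USER" "$GITHUB_TOKEN" > /root/.netrc && \
    go env -w GOPRIVATE=github.com/org/repo && \
    go mod download && \
    rm /root/.netrc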
You can also perform the actions outside of the Dockerfile. It's roughly the same idea: load a container, perform the operation, load another container, and continue.

Problem running a Docker container in Gitlab CI/CD

I am trying to build and run my Docker image using Gitlab CI/CD, but there is one issue I can't fix even though locally everything works well.
Here's my Dockerfile:
FROM <internal_docker_repo_image>
RUN apt update && \
apt install --no-install-recommends -y build-essential gcc
COPY requirements.txt /requirements.txt
RUN pip install --no-cache-dir --user -r /requirements.txt
COPY . /src
WORKDIR /src
ENTRYPOINT ["python", "-m", "dvc", "repro"]
This is how I run the container:
docker run --volume ${PWD}:/src --env=GOOGLE_APPLICATION_CREDENTIALS=<path_to_json> <image_name> ./dvc_configs/free/dvc.yaml --force
Everything works great when running this locally, but it fails when run on Gitlab CI/CD.
stages:
  - build_image

build_image:
  stage: build_image
  image: <internal_docker_repo_image>
  script:
    - echo "Building Docker image..."
    - mkdir ~/.docker
    - cat $GOOGLE_CREDENTIALS > ${CI_PROJECT_DIR}/key.json
    - docker build . -t <image_name>
    - docker run --volume ${PWD}:/src --env=GOOGLE_APPLICATION_CREDENTIALS=<path_to_json> <image_name> ./dvc_configs/free/dvc.yaml --force
  artifacts:
    paths:
      - "./data/*csv"
    expire_in: 1 week
This results in the following error:
ERROR: you are not inside of a DVC repository (checked up to mount point '/src')
Just in case you don't know what DVC is: it is a tool used in machine learning for versioning models, datasets, and metrics and, in addition, for setting up pipelines, which is what I use it for here.
Essentially, it requires the two folders .dvc and .git to exist in the directory from which dvc repro is executed.
In this particular case, I have no idea why it is not able to run this command, given that the contents of the folders are exactly the same and both .dvc and .git exist.
Thanks in advance!
Your COPY . /src is problematic for the same reason as Hidden file .env not copied using Docker COPY. You probably need !.dvc in your .dockerignore.
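For instance, if your .dockerignore excludes hidden files with a pattern such as .* (an assumption; the question doesn't show the file), re-include the DVC metadata explicitly:

# .dockerignore (hypothetical)
.*
!.dvc
!.git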
Additionally, docker run --volume ${PWD}:/src will overwrite the container's /src, so $PWD itself will need .git, .dvc, etc. You don't seem to have cloned a repo before running these script commands.

How do you use Docker build secrets with Docker Compose?

Using the docker build command line, I can pass in a build secret as follows:
docker build \
--secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties \
--build-arg project=template-ms \
.
Then use it in a Dockerfile
# syntax = docker/dockerfile:1.0-experimental
FROM gradle:jdk12 AS build
COPY *.gradle .
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle dependencies
COPY src/ src/
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle build
RUN ls -lR build
FROM alpine AS unpacker
ARG project
COPY --from=build /home/gradle/build/libs/${project}.jar /tmp
RUN mkdir -p /opt/ms && unzip -q /tmp/${project}.jar -d /opt/ms && \
mv /opt/ms/BOOT-INF/lib /opt/lib
FROM openjdk:12
EXPOSE 8080
WORKDIR /opt/ms
USER nobody
CMD ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8000", "-Dnetworkaddress.cache.ttl=5", "org.springframework.boot.loader.JarLauncher"]
HEALTHCHECK --start-period=600s CMD curl --silent --output /dev/null http://localhost:8080/actuator/health
COPY --from=unpacker /opt/lib /opt/ms/BOOT-INF/lib
COPY --from=unpacker /opt/ms/ /opt/ms/
I want to do the build using docker-compose, but I can't find in the docker-compose.yml reference how to pass the secret.
That way the developer would just need to type docker-compose up.
You can use environment or args to pass variables to the container in docker-compose:
args:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
environment:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
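For completeness: newer Compose releases (a sketch assuming Docker Compose v2, which implements the Compose Specification; this is not what the answer above describes) can feed BuildKit secrets into the build directly:

services:
  app:
    build:
      context: .
      secrets:
        - gradle.properties
secrets:
  gradle.properties:
    file: $HOME/.gradle/gradle.properties

With that, the RUN --mount=type=secret,id=gradle.properties steps in the Dockerfile work unchanged under docker compose build.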

Env vars lost when building docker image from Gitlab CI

I'm trying to build my React / NodeJS project using Docker and GitLab CI.
When I build my images manually, I use a .env file containing env vars, and everything is fine.
docker build --no-cache -f client/docker/local/Dockerfile . -t espace_client_client:local
docker build --no-cache -f server/docker/local/Dockerfile . -t espace_client_api:local
But when deploying with GitLab, I can build the image successfully, but when I run it, the env vars are empty in the client.
Here is my GitLab CI config:
image: node:10.15

variables:
  REGISTRY_PACKAGE_CLIENT_NAME: registry.gitlab.com/company/espace_client/client
  REGISTRY_PACKAGE_API_NAME: registry.gitlab.com/company/espace_client/api
  REGISTRY_URL: https://registry.gitlab.com
  DOCKER_DRIVER: overlay
  # Client Side
  REACT_APP_API_URL: https://api.espace-client.company.fr
  REACT_APP_DB_NAME: company
  REACT_APP_INFLUX: https://influx-prod.company.fr
  REACT_APP_INFLUX_LOGIN: admin
  REACT_APP_HOUR_GMT: 2

stages:
  - publish

docker-push-client:
  stage: publish
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY_URL
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build --no-cache -f client/docker/prod/Dockerfile . -t $REGISTRY_PACKAGE_CLIENT_NAME:latest
    - docker push $REGISTRY_PACKAGE_CLIENT_NAME:latest
Here is the Dockerfile for the client
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
Why is there such a difference between the two processes?
According to your answer in the comments, GitLab CI/CD environment variables don't solve your issue. The GitLab CI environment only exists in the context of the GitLab Runner that builds and/or deploys your app.
So, if you are going to propagate env vars to the app, there are several ways to deliver variables from .gitlab-ci.yml to your app:
The ENV instruction in the Dockerfile
E.g.
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
ENV REACT_APP_API_URL https://api.espace-client.company.fr
ENV REACT_APP_DB_NAME company
ENV REACT_APP_INFLUX https://influx-prod.company.fr
ENV REACT_APP_INFLUX_LOGIN admin
ENV REACT_APP_HOUR_GMT 2
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
The docker-compose environment directive
web:
  environment:
    - NODE_ENV=production
    - REACT_APP_API_URL=https://api.espace-client.company.fr
    - REACT_APP_DB_NAME=company
    - REACT_APP_INFLUX=https://influx-prod.company.fr
    - REACT_APP_INFLUX_LOGIN=admin
    - REACT_APP_HOUR_GMT=2
docker run -e
(Not your case, just for information)
docker run -e REACT_APP_DB_NAME="company"
P.S. Try GitLab CI variables.
There is a convenient way to store variables outside of your code: custom environment variables.
You can set them up easily from the UI, for example as shown below. That can be very powerful, as they can be used for scripting without the need to specify the value itself.
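For example (a hypothetical job; MY_API_TOKEN stands for a variable defined under Settings > CI/CD > Variables):

deploy:
  script:
    # MY_API_TOKEN never appears in the repository; the runner injects it
    # at job time from the project's CI/CD settings.
    - curl --header "PRIVATE-TOKEN: $MY_API_TOKEN" "https://gitlab.com/api/v4/projects"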
