Using GitHub secrets in a Dockerfile does not work with GitHub Actions - docker

I have a GitHub Action to build an image from a Dockerfile located in the same repo as the GitHub Action.
In the Dockerfile I use sensitive data, so I chose to use GitHub Secrets.
Here is my Dockerfile:
FROM python:3.9.5
ARG NEXUS_USER
ARG NEXUS_PASS
RUN pip install --upgrade pip
RUN pip config set global.extra-index-url https://${NEXUS_USER}:${NEXUS_PASS}@<my nexus endpoint>
RUN pip config set global.trusted-host <my nexus endpoint>
COPY ./src/python /python-scripts
ENTRYPOINT [ "python", "/python-scripts/pipe.py" ]
The Action builds an image using this Dockerfile:
jobs:
  docker:
    runs-on: self-hosted
    ...
    - name: build
      run: |
        docker build -t ${GITHUB_REPO} .
The Action fails when referencing the GitHub secrets from the Dockerfile. What is the proper way to do that? As you can see, I tried adding ARG in the Dockerfile, but that didn't work either.

It's not clear how you are referencing the secrets from the Dockerfile. In any case, you can pass the credentials to the build command using the --build-arg flag, like:
docker build \
--build-arg "NEXUS_USER=${{ secrets.NEXUS_USER }}" \
--build-arg "NEXUS_PASS=${{ secrets.NEXUS_PASS }}" \
-t ${GITHUB_REPO} .
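In the workflow, the full build step would then look roughly like this (a sketch; the step name and the ${GITHUB_REPO} variable follow the question's own snippet):

- name: build
  run: |
    docker build \
      --build-arg "NEXUS_USER=${{ secrets.NEXUS_USER }}" \
      --build-arg "NEXUS_PASS=${{ secrets.NEXUS_PASS }}" \
      -t ${GITHUB_REPO} .

Note that build args are recorded in the image metadata (docker history), so for real credentials a BuildKit secret mount, as shown in the CodeArtifact answer below, is the safer option.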

Related

ENV vars for docker build in multi-stage build

I have a multi-stage build where a Python script runs in the first stage and uses several env vars.
How do I set these variables in the docker build command?
Here's the Dockerfile:
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
My export.py script reads several env vars, and I have a .env file. If I run a container built with the first stage and pass --env-file, it works, but I can't seem to get it to work in the build stage.
How can I get the env vars to be available when building the first stage?
I don't care if they are saved in the image or not...
It seems you are looking for the ARG instruction. It's only available at build time and won't be available at image runtime. Don't use it for secrets, which are not meant to stick around!
# default value if not overridden with the --build-arg flag
ARG GLOBAL_AVAILABLE=iamglobal
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
ARG GLOBAL_AVAILABLE
ENV GLOBAL_AVAILABLE=$GLOBAL_AVAILABLE
# only visible at exporter build stage:
ARG LOCAL_AVAILABLE=iamlocal
# multistage visible:
RUN echo ${GLOBAL_AVAILABLE}
# local stage visible (exporter build stage):
RUN echo ${LOCAL_AVAILABLE}
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
You can pass custom ARG values by using the --build-arg flag:
docker build -t <image-name>:<tag> --build-arg GLOBAL_AVAILABLE=abc .
The general format to pass multiple args is:
docker build -t <image-name>:<tag> --build-arg <key1>=<value1> --build-arg <key2>=<value2> .
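If the values already live in a .env file, as in the question, one option is to export them into the shell and use the bare --build-arg KEY form, which reads the value from the calling environment. A minimal sketch, assuming simple KEY=value lines without quoting or embedded spaces:

# export every non-comment KEY=value line from .env into the current shell
export $(grep -v '^#' .env | xargs)
# a bare --build-arg KEY picks its value up from the environment
docker build -t <image-name>:<tag> --build-arg GLOBAL_AVAILABLE .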
Some refs:
https://docs.docker.com/engine/reference/builder/
https://blog.bitsrc.io/how-to-pass-environment-info-during-docker-builds-1f7c5566dd0e
https://vsupalov.com/docker-arg-env-variable-guide/

AWS Codeartifact and docker build cache

I'm trying to use AWS CodeArtifact as my pip repo.
Every time I build a Docker image I need to log in or generate a token.
I tried this: How to use AWS CodeArtifact *within* a Dockerfile in AWS CodeBuild
But in each build the pip.conf file is different (new token), which breaks the Docker cache.
For now I want to avoid a base image with all the packages pre-installed.
Anyone have a solution for this problem?
Thanks!
Looks like Docker BuildKit is the answer.
Makefile:
docker_build:
	@$(eval CODEARTIFACT_AUTH_TOKEN := $(shell aws codeartifact get-authorization-token --domain your-domain --domain-owner your-id --region your-region --query authorizationToken --output text --duration-seconds 900))
	@pip config set global.index-url "https://aws:${CODEARTIFACT_AUTH_TOKEN}@<your-domain>-<your-id>.d.codeartifact.<your-region>.amazonaws.com/pypi/your-repo/simple/"
	cp ~/.config/pip/pip.conf /tmp/pip.conf
	DOCKER_BUILDKIT=1 docker build --progress=plain --secret id=pip.conf,src=/tmp/pip.conf -t tmp_docker_image .
Dockerfile:
FROM python:3.8.8-slim-buster
WORKDIR /code
ADD requirements.txt /code/requirements.txt
RUN --mount=type=secret,id=pip.conf,dst=/root/.pip/pip.conf \
    pip install -r ./requirements.txt
I have tested it a couple of times, changing the token on each run, and the cache still works.
This one helped: https://dev.to/hugoprudente/managing-secrets-during-docker-build-3682
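The key point is that a secret mounted with --mount=type=secret is never written into a layer, so a fresh token does not invalidate the build cache. The same flow also works without a Makefile; a minimal shell sketch using the same placeholders as above:

# fetch a fresh token and write a throwaway pip.conf outside the build context
CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain your-domain --domain-owner your-id --region your-region --query authorizationToken --output text)
printf '[global]\nindex-url = https://aws:%s@<your-domain>-<your-id>.d.codeartifact.<your-region>.amazonaws.com/pypi/your-repo/simple/\n' "$CODEARTIFACT_AUTH_TOKEN" > /tmp/pip.conf
DOCKER_BUILDKIT=1 docker build --secret id=pip.conf,src=/tmp/pip.conf -t tmp_docker_image .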

docker build --build-arg with github password is not working

When my Dockerfile was like the one below, it was working well.
...
RUN pip install git+https://user_name:my_password@github.com/repo_name.git#egg=repo_name==1.0.0
...
But when I changed the Dockerfile to the one below
...
RUN pip install git+https://user_name:${GITHUB_PASSWORD}@github.com/repo_name.git#egg=repo_name==1.0.0
...
and used the command below, it no longer works:
docker build -t my_repo:tag_name . --build-arg GITHUB_PASSWORD=my_password
You need to add an ARG declaration into the Dockerfile:
FROM ubuntu
ARG PASSWORD
RUN echo ${PASSWORD} > /password
Then build your docker image:
$ docker build -t foo . --build-arg PASSWORD="foobar"
After this, you can check for the existence of the parameter in your docker container:
$ docker run -it foo bash
root@ebeb5b33941e:/# cat /password
foobar
Therefore, add the ARG GITHUB_PASSWORD declaration to your Dockerfile to get it to work.
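Applied to the question's Dockerfile, that would look roughly like this (a sketch; user_name and repo_name are the question's own placeholders):

...
ARG GITHUB_PASSWORD
RUN pip install git+https://user_name:${GITHUB_PASSWORD}@github.com/repo_name.git#egg=repo_name==1.0.0
...

Keep in mind that the value still shows up in docker history; for a real password, a BuildKit secret mount (see the CodeArtifact answer above) avoids that.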

Env vars lost when building docker image from Gitlab CI

I'm trying to build my React / NodeJS project using Docker and GitLab CI.
When I build my images manually, I use a .env file containing env vars, and everything is fine.
docker build --no-cache -f client/docker/local/Dockerfile . -t espace_client_client:local
docker build --no-cache -f server/docker/local/Dockerfile . -t espace_client_api:local
But when deploying with GitLab, I can build the image successfully, but when I run it, the env vars are empty in the client.
Here is my GitLab CI:
image: node:10.15

variables:
  REGISTRY_PACKAGE_CLIENT_NAME: registry.gitlab.com/company/espace_client/client
  REGISTRY_PACKAGE_API_NAME: registry.gitlab.com/company/espace_client/api
  REGISTRY_URL: https://registry.gitlab.com
  DOCKER_DRIVER: overlay
  # Client Side
  REACT_APP_API_URL: https://api.espace-client.company.fr
  REACT_APP_DB_NAME: company
  REACT_APP_INFLUX: https://influx-prod.company.fr
  REACT_APP_INFLUX_LOGIN: admin
  REACT_APP_HOUR_GMT: 2

stages:
  - publish

docker-push-client:
  stage: publish
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY_URL
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build --no-cache -f client/docker/prod/Dockerfile . -t $REGISTRY_PACKAGE_CLIENT_NAME:latest
    - docker push $REGISTRY_PACKAGE_CLIENT_NAME:latest
Here is the Dockerfile for the client:
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
Why is there such a difference between the two processes?
According to your answer in the comments, GitLab CI/CD environment variables don't solve your issue. The GitLab CI environment only exists in the context of the GitLab Runner that builds and/or deploys your app.
So, if you want to propagate env vars to the app, there are several ways to deliver variables from .gitlab-ci.yml to your app:
The ENV instruction in the Dockerfile
E.g.
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV=production
ENV REACT_APP_API_URL=https://api.espace-client.company.fr
ENV REACT_APP_DB_NAME=company
ENV REACT_APP_INFLUX=https://influx-prod.company.fr
ENV REACT_APP_INFLUX_LOGIN=admin
ENV REACT_APP_HOUR_GMT=2
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
docker-compose environment directive
web:
  environment:
    - NODE_ENV=production
    - REACT_APP_API_URL=https://api.espace-client.company.fr
    - REACT_APP_DB_NAME=company
    - REACT_APP_INFLUX=https://influx-prod.company.fr
    - REACT_APP_INFLUX_LOGIN=admin
    - REACT_APP_HOUR_GMT=2
Docker run -e
(Not your case, just for information)
docker run -e REACT_APP_DB_NAME="company" <image>
P.S. Try GitLab CI variables.
There is a convenient way to store variables outside of your code: Custom environment variables.
You can set them up easily from the UI, which can be very powerful, as they can be used for scripting without the need to specify the value itself.

docker build --build-arg with shell command in docker-compose file

How can I convert the command below to its docker-compose version?
docker build -t xxx --build-arg SSH_PRV_KEY="$(cat ~/.ssh/id_rsa)" .
I tried the block below, but it does not work. Please help. Thanks.
xxx:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      SSH_PRV_KEY: "$(cat ~/.ssh/id_rsa)"
docker-compose doesn't interpret shell code like that. You can do it this way:
xxx:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      - SSH_PRV_KEY
Now, before running docker-compose, export your SSH_PRV_KEY env var:
export SSH_PRV_KEY="$(cat ~/.ssh/id_rsa)"
# now run docker-compose up as you normally do
Then SSH_PRV_KEY will have the right value.
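Alternatively, you can scope the variable to the single command instead of exporting it (a minimal sketch):

SSH_PRV_KEY="$(cat ~/.ssh/id_rsa)" docker-compose build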
Two things you need to consider:
1. It may not work as expected if your id_rsa has a passphrase.
2. The SSH_PRV_KEY will actually be visible in Docker metadata such as docker history or docker image inspect. To get around that, look into multi-stage builds: https://docs.docker.com/develop/develop-images/multistage-build/. In your build stage, use the key to do whatever you need; then, in your final image, don't declare SSH_PRV_KEY and simply copy the result from the previous stage. A more specific example, where a private key is used to install dependencies:
FROM base AS builder
ARG SSH_PRV_KEY
RUN mkdir -p ~/.ssh && echo "$SSH_PRV_KEY" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa
RUN npm install # this may need access to that rsa key

FROM node
COPY --from=builder node_modules node_modules
Notice that in the second stage we don't declare the ARG, therefore we don't expose it.
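For completeness, and not part of the original answer: BuildKit's SSH forwarding avoids writing the key into any layer at all. A hedged sketch, assuming an ssh-agent is running with the key loaded and dependencies are fetched from github.com:

# syntax=docker/dockerfile:1
FROM node
# trust the dependency host so git+ssh fetches succeed
RUN mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# the agent socket is forwarded only for this instruction; the key never lands in a layer
RUN --mount=type=ssh npm install

Build it with:
DOCKER_BUILDKIT=1 docker build --ssh default .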
