How can I convert this command below to docker-compose version?
docker build -t xxx --build-arg SSH_PRV_KEY="$(cat ~/.ssh/id_rsa)" .
I tried the block below, but it does not work. Please help. Thanks.
xxx:
build:
context: .
dockerfile: Dockerfile
args:
SSH_PRV_KEY: "$(cat ~/.ssh/id_rsa)"
docker-compose doesn't run shell code like that. You can do it this way:
xxx:
build:
context: .
dockerfile: Dockerfile
args:
- SSH_PRV_KEY
Now, before running docker-compose, export your SSH_PRV_KEY env var:
export SSH_PRV_KEY="$(cat ~/.ssh/id_rsa)"
# now run docker-compose up as you normally do
Then SSH_PRV_KEY will have the right value.
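Alternatively, Compose interpolates ${VAR} references from your shell environment when it parses the file, so a roughly equivalent sketch (using the same exported variable as above) is:
xxx:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      SSH_PRV_KEY: "${SSH_PRV_KEY}"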
Two things you need to consider:
It may not work as expected if your id_rsa has a passphrase.
SSH_PRV_KEY will actually be available in the Docker metadata, for example via docker history or docker image inspect. To get around that, look into multi-stage builds: https://docs.docker.com/develop/develop-images/multistage-build/. In your build stage, use that key to do whatever you need. Then, in your final stage, don't declare SSH_PRV_KEY but simply copy the result from the previous stage. A more specific example, where the private key is used to install dependencies:
FROM node AS build
ARG SSH_PRV_KEY
RUN mkdir -p ~/.ssh && echo "$SSH_PRV_KEY" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa
RUN npm install # this may need access to that rsa key
FROM node
COPY --from=build node_modules node_modules
Notice that in the second stage we don't declare the ARG, so we don't expose it.
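If you want to verify nothing leaked into the final image, you can check the metadata mentioned above; a quick sketch, using the example tag from the question:
docker history xxx          # build args declared in the final stage would show up in the layer history
docker image inspect xxx    # and in the recorded image configuration
Since the final stage never declares the ARG, neither command should show the key.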
Related
I have a Github Action to build image from a Dockerfile located in the same repo with the Github Action.
In the Dockerfile I use sensitive data so I chose to use Github Secrets.
Here is my Dockerfile:
FROM python:3.9.5
ARG NEXUS_USER
ARG NEXUS_PASS
RUN pip install --upgrade pip
RUN pip config set global.extra-index-url https://${NEXUS_USER}:${NEXUS_PASS}@<my nexus endpoint>
RUN pip config set global.trusted-host <my nexus endpoint>
COPY ./src/python /python-scripts
ENTRYPOINT [ "python", "/python-scripts/pipe.py" ]
Actions builds an image using this Dockerfile:
jobs:
docker:
runs-on: self-hosted
.
.
.
.
.
- name: build
run: |
docker build -t ${GITHUB_REPO} .
The Action fails when calling the GitHub secrets from the Dockerfile. What is the proper way to do that? As you can see, I tried to add ARG in the Dockerfile but that didn't work either.
It's not clear where you are calling the secrets from in the Dockerfile. In any case, you can pass the credentials to the build command using the --build-arg flag, like:
docker build \
--build-arg "NEXUS_USER=${{ secrets.NEXUS_USER }}" \
--build-arg "NEXUS_PASS=${{ secrets.NEXUS_PASS }}" \
-t ${GITHUB_REPO} .
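Put together inside the workflow, the build step would look roughly like this; it's a sketch that assumes the secrets are named NEXUS_USER and NEXUS_PASS in the repository settings and that GITHUB_REPO is already set in the job environment:
- name: build
  run: |
    docker build \
      --build-arg "NEXUS_USER=${{ secrets.NEXUS_USER }}" \
      --build-arg "NEXUS_PASS=${{ secrets.NEXUS_PASS }}" \
      -t ${GITHUB_REPO} .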
I have:
docker-compose.yml
version: "3.9"
services:
test_name:
image: ${PROJECT_NAME}/test_service
build:
dockerfile: Dockerfile
env_file: .env
Dockerfile
FROM alpine:3.15
RUN echo $TEST >> test1.txt
CMD echo $TEST >> test2.txt
As a result:
test1.txt is empty and test2.txt contains data.
My problem is that there are too many of these variables. Can I get environment variables in a RUN command from the .env file without enumerating all of them as ARGs?
To use variables in a RUN instruction, you need to use ARG. ARG values are available at build time, while ENV values are available when the container runs.
FROM alpine:3.15
ARG FOO="you see me on build"
ENV BAR="you see me on run"
RUN echo $FOO >> test1.txt
CMD echo $BAR >> test2.txt
docker build --build-arg FOO="hi" --tag test .
docker run --env BAR="there" test
There is one thing that comes close to using env variables, but you still need to provide the --build-arg flag.
You can define an env variable with the same name as the build arg and reference it by name without setting a value. The value will then be taken from the env variable in your shell.
export FOO="bar"
docker build --build-arg FOO --tag test .
This also works in compose.
Additionally, when you use compose you can place a .env file next to your compose file. Variables found there are read and are available under the build args key as well as the environment key, but you still have to name them.
# env file
FOO=bar
BAZ=qux
services:
test_name:
build:
context: ./
args:
FOO:
BAZ:
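Assuming that layout (the .env file next to the compose file), a plain build picks the values up without any --build-arg flags; a minimal usage sketch:
docker compose build test_name   # compose reads the .env file automatically; older installs use docker-compose build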
I have a multi-stage build where a python script runs in the first stage and uses several env vars.
How do I set these variables in the docker build command?
Here's the Dockerfile:
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
My export.py script reads several env vars, and I have a .env file. If I run a container built with the first stage and pass --env-file, it works, but I can't seem to get it to work in the build stage.
How can I get the env vars to be available when building the first stage?
I don't care if they are saved in the image or not...
It seems you are looking for the ARG instruction. It's only available at build time and won't be available at image runtime. Don't use it for secrets, which are not meant to stick around!
# default value if not using --build-arg instruction
ARG GLOBAL_AVAILABLE=iamglobal
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
ARG GLOBAL_AVAILABLE
ENV GLOBAL_AVAILABLE=$GLOBAL_AVAILABLE
# only visible at exporter build stage:
ARG LOCAL_AVAILABLE=iamlocal
# declared before the first FROM, so usable in any stage that re-declares it:
RUN echo ${GLOBAL_AVAILABLE}
# local stage visible (exporter build stage):
RUN echo ${LOCAL_AVAILABLE}
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
you can pass custom ARG values by using the --build-arg flag:
docker build -t <image-name>:<tag> --build-arg GLOBAL_AVAILABLE=abc .
the general format to pass multiple args is:
docker build -t <image-name>:<tag> --build-arg <key1>=<value1> --build-arg <key2>=<value2> .
some refs:
https://docs.docker.com/engine/reference/builder/
https://blog.bitsrc.io/how-to-pass-environment-info-during-docker-builds-1f7c5566dd0e
https://vsupalov.com/docker-arg-env-variable-guide/
I have a docker-compose file which includes the following:
environment:
DOCUMENT_ROOT: /var/some/dir
I would like to add that path to my container.
In my Dockerfile I add:
RUN echo "export PATH=$PATH:${DOCUMENT_ROOT}" >> /root/.bashrc
But it doesn't work. It seems the ENV parameter isn't available.
What's the problem?
Yaron
ARG some_variable_name
RUN echo "export PATH=$PATH:${some_variable_name}" >> /root/.bashrc
You should use ARG in Dockerfile and set arguments in build command:
docker build --build-arg some_variable_name=a_value .
ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD). You can use ARG values to set ENV values to work around that.
or in docker-compose:
version: '3'
services:
somename:
build:
context: ./app
dockerfile: Dockerfile
args:
some_variable_name: a_value
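To make the value survive into the running container (so that appending to PATH, as in the question, still holds at runtime), the ARG-to-ENV workaround quoted above would look roughly like this; the variable name comes from the question and the default value is just a placeholder:
ARG DOCUMENT_ROOT=/var/some/dir
# promote the build arg to an env var so it is still defined when the container runs
ENV DOCUMENT_ROOT=${DOCUMENT_ROOT}
ENV PATH="${PATH}:${DOCUMENT_ROOT}"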
Understanding Docker Build Args, Environment Variables and Docker Compose Variables
I'm doing a multi-stage Docker build:
# Dockerfile
########## Build stage ##########
FROM golang:1.10 as build
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
########## Final stage ##########
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
and the Makefile contains in part:
build: bin
go build -v -o bin/my-daemon cmd/my-daemon/main.go
bin:
mkdir $@
This all works just fine with a docker build.
Now I want to use Codeship, so I have:
# codeship-services.yml
cachemanager:
build:
image: my-daemon
dockerfile: Dockerfile
and:
# codeship-steps.yml
- name: my-daemon build
tag: master
service: my-service
command: true
The issue is if I do jet steps --master, it builds everything OK, but then runs the container as if I did a docker run. Why? I don't want it to do that.
It's as if I would have to have two separate Dockerfiles: one only for the build stage and one only for the run stage and use the former with jet. But then this defeats the point of Docker multi-stage builds.
I was able to solve this problem using multi-stage builds split into two different files following this guide: https://documentation.codeship.com/pro/common-issues/caching-multi-stage-dockerfile/
Basically, you'll take your existing Dockerfile and split it into two files like so, with the second referencing the first:
# Dockerfile.build-stage
FROM golang:1.10 as build-stage
ENV TEMP /go/src/github.com/my-id/my-go-project
WORKDIR $TEMP
COPY . .
RUN make build
# Dockerfile
FROM build-stage as build-stage
FROM alpine:3.4
# ...
ENV HOME /home/$USER
ENV TEMP /go/src/github.com/my-id/my-go-project
COPY --from=build-stage $TEMP/bin/my-daemon $HOME/bin/
RUN chown -R $USER:$GROUP $HOME
USER $USER
ENTRYPOINT ["my-daemon"]
Then, in your codeship-services.yml file:
# codeship-services.yml
cachemanager-build:
build:
dockerfile: Dockerfile.build-stage
cachemanager-app:
build:
image: my-daemon
dockerfile: Dockerfile
And in your codeship-steps.yml file:
# codeship-steps.yml
- name: cachemanager build
tag: master
service: cachemanager-build
command: <here you can run tests or linting>
- name: publish to registry
tag: master
service: cachemanager-app
...
I don't think you want to actually run the Dockerfile because it will start your app. We use the second stage to push a smaller build to an image registry.
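For reference, the elided "publish to registry" step is typically a Codeship push step; a rough sketch, where the image name, registry URL, and encrypted dockercfg path are placeholders you would replace with your own:
- name: publish to registry
  tag: master
  service: cachemanager-app
  type: push
  image_name: your-registry/my-daemon
  registry: https://index.docker.io/v1/
  encrypted_dockercfg_path: dockercfg.encrypted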