Build args not passing from docker-compose.yml to Dockerfile - docker

My docker-compose.yml:
version: "3.8"
services:
web:
build:
context: .
args:
file_url: 'some-url'
My Dockerfile:
FROM ruby:2.7.4-bullseye
ARG file_url
RUN curl $file_url -L -o "file"
When I run:
docker-compose up --build
I expected Docker to build and start my containers, but instead I got:
Step 4/12 : RUN curl $file_url -L -o "file"
---> Running in 59696c6274c7
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
So obviously the build args are not being passed to the Dockerfile. I've read a lot of similar threads on Stack Overflow but couldn't figure out what went wrong.
One more thing: I can actually build the image with docker-compose build web just fine. It's only when I run docker-compose up --build or docker-compose up that the error occurs.

The variable is not referenced correctly in your Dockerfile.
See the Dockerfile reference for how variables are used inside a Dockerfile: https://docs.docker.com/engine/reference/builder/
Ideally the variable reference is wrapped in ${variable_name}.
The line RUN curl $file_url -L -o "file" needs rectification.
In your Dockerfile, change $file_url to ${file_url}.
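For reference, a minimal sketch of the corrected Dockerfile, keeping the same base image and argument name:
FROM ruby:2.7.4-bullseye
ARG file_url
# Brace the build arg so the reference is unambiguous
RUN curl "${file_url}" -L -o "file"
The value still has to be supplied at build time, either through the args: block in docker-compose.yml or on the command line with docker build --build-arg file_url=some-url .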

Related

Why is Docker not binding my volumes to the container?

I have a very simple project:
Dockerfile:
from node:lts
VOLUME /scripts
WORKDIR /scripts
RUN bash -c 'ls /'
RUN bash -c 'ls /scripts'
RUN script.sh
docker-compose.yml:
version: '3.7'
services:
  service:
    build: .
    volumes:
      - .:/scripts
Then I run docker-compose build, but it fails with /bin/sh: 1: script.sh: not found
From the output of ls /scripts I can see that Docker isn't binding my script into the container. I have Docker 19.03.8. Do you know what I am doing wrong?
When Docker Compose builds a service, only the build: block is used; all of the options outside that block are ignored during the build. A Dockerfile build never sees mounted volumes, can never make network calls to other Compose containers, and won't see environment: variables that are set elsewhere.
That means you must explicitly COPY code into your image before you can RUN it.
FROM node:lts
WORKDIR /scripts
COPY script.sh .
RUN ./script.sh
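If the intention was to run the script against the bind-mounted copy instead, that has to happen at container run time rather than at image build time; one possible sketch, not part of the original answer and reusing the service from the question, would be:
version: '3.7'
services:
  service:
    build: .
    volumes:
      - .:/scripts
    # The bind mount only exists in the running container,
    # so invoke the script with command: instead of RUN
    command: bash /scripts/script.sh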

$GOPATH/go.mod exists but should not when building docker container, but works if I manually run commands

I'm building a golang:1.14.2 docker container with go-redis from a Dockerfile.
FROM golang:1.14.2
# project setup and install go-redis
RUN mkdir -p /go/delivery && cd /go/delivery && \
    go mod init example.com/delivery && \
    go get github.com/go-redis/redis/v7
# important to copy to /go/delivery
COPY ./src /go/delivery
RUN ls -la /go/delivery
RUN go install example.com/delivery
ENTRYPOINT ["delivery"]
However, when I try to build the container using docker-compose up --build -d, I get this error: $GOPATH/go.mod exists but should not
ERROR: Service 'delivery' failed to build: The command '/bin/sh -c go get github.com/go-redis/redis/v7' returned a non-zero code: 1.
However, I can create a container from the same base image with docker container run -it --rm golang:1.14.2, run the exact same commands as in the Dockerfile, and delivery does what I expect it to.
Here is deliver.go:
package main

import (
    "fmt"

    "github.com/go-redis/redis/v7"
)

func main() {
    // redis client created here...
    fmt.Println("inside main...")
}
What am I doing wrong? I looked up this error message and none of the solutions I've seen worked for me.
EDIT: Here is the compose file:
version: '3.4'
services:
  ...
  delivery:
    build: ./delivery
    environment:
      - REDIS_PORT=${REDIS_PORT}
      - REDIS_PASS=${REDIS_PASS}
      - QUEUE_NAME=${QUEUE_NAME}
    volumes:
      - ./logs:/logs
I had the same problem. You need to set WORKDIR /go/delivery in your Dockerfile.
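A sketch of how the Dockerfile from the question might look with that change (same module path and source layout, untested):
FROM golang:1.14.2
# Run the go commands from inside the module directory
WORKDIR /go/delivery
RUN go mod init example.com/delivery && \
    go get github.com/go-redis/redis/v7
# important to copy to /go/delivery
COPY ./src /go/delivery
RUN go install example.com/delivery
ENTRYPOINT ["delivery"]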

Pass argument to Dockerfile from a file with docker-compose (SSH private key)

Hi!
I'm kinda stuck with docker-compose, as I need to pass my private SSH key to the Dockerfile declared in my docker-compose.yml, as below:
docker-compose.yml
version: '3.7'
services:
  worker:
    build: .
    args:
      - SSH_PRIVATE_KEY
Dockerfile
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/ && \
    echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa
With docker itself, that's quite easy, as I just need to run the following command:
docker build . --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
But in docker-compose... The problem with the args configuration in docker-compose, as described in another question, is that I can't put the private key inside the docker-compose.yml file.
I need to let docker-compose access the key in ~/.ssh/id_rsa. Any clue on how to do that?
Thank you!
The docs on args state that:
You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running.
In your case, you probably want to build the worker service with the following command:
SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)" docker-compose build
By the way, your docker-compose.yml is wrong (the build context is missing) and should be:
version: '3.7'
services:
  worker:
    build:
      context: .
      args:
        - SSH_PRIVATE_KEY

docker-compose execute command

I'm trying to put some commands in my docker-compose file to be run in my container, and they don't work.
I map a volume from the host into the container where I have a root certificate. All I want to do is run the command update-ca-certificates, so that the directory /etc/ssl/certs in the container gets updated with my cert; however, that is not happening.
I tried to solve this in a Dockerfile, and I can see that the command runs, but it seems the cert is not present at that point and only appears after I log in to the container.
What I end up doing is getting into the container and running the needed commands manually.
This is the piece of my docker-compose file that I have been trying to use:
build:
  context: .
  dockerfile: Dockerfile
command: >
  sh -c "ls -la /usr/local/share/ca-certificates &&
  update-ca-certificates"
security_opt:
  - seccomp:unconfined
volumes:
  - "c:/certs_for_docker:/usr/local/share/ca-certificates"
In the same way, I cannot run apt update or anything like that, but after connecting to the container with docker exec -it test_alerting_comp /bin/bash I can pull anything from any repo.
My goal is to execute any needed commands at build time, so that when I log in to the container I already have the packages I will use and the root cert updated. Thanks.
Why don't you do the package update/install and copy the certificates in the Dockerfile?
Dockerfile
...
RUN apt-get update && apt-get -y install whatever
COPY ./local/certificates /usr/local/share/my-certificates
RUN your-command-for-certificates
docker-compose.yml
version: "3.7"
services:
your-service:
build: ./dir
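One way to sanity-check the result after rebuilding (service name taken from the snippet above; /etc/ssl/certs is the directory the question wants updated):
docker-compose build your-service
docker-compose run --rm your-service ls /etc/ssl/certs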

command/CMD in docker-compose is not equivalent to CMD in Dockerfile

I have a container that uses a volume in its entrypoint, for example:
CMD bash /some/volume/bash_script.sh
I moved this to Compose, but it only works if my compose file points to a Dockerfile in the build section; if I try to write the same line in the command section, it doesn't act as I expect and throws a file-not-found error.
I also tried docker-compose run <specific service> bash /some/volume/bash_script.sh, which gave me the same error.
The question is: why don't I have this volume at the time the docker-compose command is executed? Is there any way to make this work / override the CMD in my Dockerfile?
EDIT:
I'll show specifically how I do this in my files:
docker-compose:
version: '3'
services:
  base:
    build:
      context: ..
      dockerfile: BaseDockerfile
    volumes:
      - code:/volumes/code/
  my_service:
    volumes:
      - code:/volumes/code/
    container_name: my_service
    image: my_service_image
    ports:
      - 1337:1337
    build:
      context: ..
      dockerfile: Dockerfile
volumes:
  code:
BaseDockerfile:
FROM python:3.6-slim
WORKDIR /volumes/code/
COPY code.py code.py
CMD tail -f /dev/null
Dockerfile:
FROM python:3.6-slim
RUN apt-get update && apt-get install -y redis-server \
    alien \
    unixodbc
WORKDIR /volumes/code/
CMD python code.py;
This works.
But if I try to add this line to docker-compose.yml:
command: python code.py
Then the file doesn't exist at the time the command runs. I was expecting this to behave the same as the CMD instruction.
Hmm, nice point!
command: python code.py is not exactly the same as CMD python code.py;!
The Dockerfile's CMD python code.py; is a shell-form command (it gets wrapped in /bin/sh -c), while a plain string given to command: in docker-compose is split into an argument list and executed without a shell, like an exec-form command.
The problem comes down to the differences between these two kinds of CMD (i.e. CMD ["something"] vs CMD "something").
For more info about the two forms, see the Dockerfile reference: https://docs.docker.com/engine/reference/builder/
But you may still be wondering what's wrong with your example.
In your case, python code.py in command: python code.py is a plain YAML string that Compose turns into the argument list ["python", "code.py"] and runs directly, with no shell involved.
On the other hand, python code.py; in the above-mentioned Dockerfile is a shell-form CMD, so it is executed through /bin/sh -c.
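To make the contrast concrete, here is a small side-by-side illustration (not from the original thread) of the equivalent forms:
# Dockerfile side:
CMD python code.py            # shell form: becomes /bin/sh -c "python code.py"
CMD ["python", "code.py"]     # exec form: runs the binary directly, no shell
# docker-compose.yml side:
command: python code.py                   # string: split into ["python", "code.py"], no shell
command: ["sh", "-c", "python code.py"]   # list: spell out the shell yourself if you need one
Whether a shell is involved determines whether features like ; chaining and $VAR expansion are available to the command.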
The (partial) answer is that the error that was thrown was not at all what the problem was.
Running the command as bash -c 'python code.py' worked fine. I still can't explain why there was a difference between CMD in the Dockerfile and the docker-compose command option, but this solved it for me.
I found out this will work:
command: python ./code.py
