How to give Dockerfile input parameters from docker run command - docker

FROM centos
RUN yum -y update
ENV zk=dx
RUN mkdir $zk
After building the image and running the following command:
docker run -it -e zk="hifi" <image ID>
I get a directory with the name dx but not hifi.
Can anyone help me with how to set a Dockerfile variable from the docker run command?

This behaves this way because:
The RUN commands in the Dockerfile are executed when the Docker image is built (like almost all Dockerfile instructions), i.e. when you run docker build.
The docker run command executes when a container is started from the image.
So when you run docker run and set the value to "hifi", the image already exists which has a directory called "dx" in it. The directory creation task has already been performed - updating the environment variable to "hifi" won't change it.
You cannot set a Dockerfile build variable at run time. The build has already happened.
Incidentally, you're overwriting the value of the zk variable right before you create the directory. If you did successfully pass "hifi" into the docker build, it would be overwritten and the folder would always be called "dx".
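If the intent is to choose the directory name at image build time instead, a minimal sketch using a build argument could look like this (ARG and --build-arg are not part of the original answer; the ENV line is replaced so the value is not overwritten, and the image tag is a placeholder):
FROM centos
RUN yum -y update
# build-time argument with a default value; can be overridden via --build-arg
ARG zk=dx
RUN mkdir $zk
# build with: docker build --build-arg zk=hifi -t myimage .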

Related

RUN pwd does not seem to work in my dockerfile

I am studying Docker these days and am confused about why RUN pwd does not seem to work when running my Dockerfile.
I am working on macOS, and the full content of my Dockerfile is below:
FROM ubuntu:latest
MAINTAINER xxx
RUN mkdir -p /ln && echo hello world > /ln/wd6.txt
WORKDIR /ln
RUN pwd
CMD ["more" ,"wd6.txt"]
As far as I understand, after building the Docker image with the tag 'wd8' and running it, I supposed the result should look like this:
~ % docker run wd8
::::::::::::::
wd6.txt
::::::::::::::
hello world
ln
However, the actual output is without the ln line.
I have tried RUN $pwd, and also added ENV at the beginning of my Dockerfile; neither works.
Please help me point out where the problem is.
PS: I should not expect to see the directory 'ln' on my disk, right? Since it is supposed to be created within the container...?
There are actually multiple reasons you don't see the output of the pwd command, some of them already mentioned in the comments:
the RUN statements in your Dockerfile are only executed during the build stage, i.e. using docker build and not with docker run
when using the BuildKit backend (which is the case here) the output of successfully run commands is collapsed; to see them anyway use the --progress=plain flag
running the same build multiple times will use the build cache of the previous build and not execute the command again; you can disable this with the --no-cache flag
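Putting the last two points together, a hedged example of how to re-run the build so the pwd output is visible (assuming the tag wd8 from the question and the fixed RUN pwd line):
docker build --progress=plain --no-cache -t wd8 .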

How can I prevent Docker from removing intermediate containers when executing RUN command?

The problem I'm experiencing is that I want to execute a "change directory" command during my Docker build, but every time a RUN instruction in my Dockerfile executes, Docker deletes the container it ran in (the intermediate container).
DOCKERFILE (posted as an image in the original question)
This happens when I build the Dockerfile above.
How can I prevent Docker from doing that?
docker build --rm=false
--rm: Remove intermediate containers after a successful build (default true)
There are two different "current paths" here: the Dockerfile's working directory and the path inside the container during a RUN command.
Each RUN command starts from the Dockerfile's working directory (e.g. '/').
When you do RUN cd /app, the path inside that RUN changes, but not the Dockerfile's working directory; the next RUN command will again start at '/'.
To change the Dockerfile's working directory, use WORKDIR (see the reference), for example WORKDIR /opt/firefox.
The alternative would be chaining the executed RUN commands, as EvgeniySharapov pointed out: RUN cd opt; ls; cd firefox; ls
or, on multiple lines:
RUN cd opt; \
ls; \
cd firefox; \
ls
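For comparison, a minimal sketch that uses WORKDIR so each RUN starts in the intended directory (paths reused from the example above; the base image is an assumption):
FROM ubuntu:latest
WORKDIR /opt
RUN ls
# WORKDIR creates the directory if it does not already exist
WORKDIR /opt/firefox
RUN ls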
(To clarify: It doesn't matter that Docker removes intermediate containers, that is not the problem in this case.)
When you use docker build --no-cache, Docker disables the build cache, so the commands are executed again in fresh intermediate containers instead of being reused; this may affect build times when you run the build multiple times. Alternatively, you can put multiple shell commands into one command using \ and use that as a single RUN argument.
More tips can be found here.

How to copy a file from the host into a container while starting?

I am trying to build a Docker image using a Dockerfile; my purpose is to copy a file into a specific folder when I run the "docker run" command.
This is my Dockerfile:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
CMD ["/usr/src/myapp"]
CMD ls /usr/src/myapp
After building my image without any error (using the docker build command), I tried to run my new image:
docker run myjavaimage MainClass.java
I got this error: cp: missing destination file operand after 'MainClass.java'
How can I resolve this? Thanks.
I think you want this Dockerfile:
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY MainClass.java .
RUN javac MainClass.java
ENV CLASSPATH=/usr/src/myapp
CMD java MainClass
When you docker build this image, it COPYs your Java source file from your local directory into the image, compiles it, and sets some metadata telling the JVM where to find the resulting .class files. Then when you launch the container, it will run the single application you've packaged there.
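A hedged example of the corresponding commands (the image name is a placeholder):
docker build -t my-java-app .
docker run --rm my-java-app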
It's common enough to use a higher-level build tool like Maven or Gradle to compile multiple files into a single .jar file. Make sure to COPY all of the source files you need in before running the build. In Java it seems to be common to build the .jar file outside of Docker and just COPY that in without needing a JDK, and that's a reasonable path too.
In the Dockerfile you show, Docker combines ENTRYPOINT and CMD into a single command and runs that command as the single main process of the container. If you provide a command of some sort on the docker run command line, that overrides CMD but does not override ENTRYPOINT. You only get one ENTRYPOINT and one CMD, and the last one in the Dockerfile wins. So you're trying to run container processes like
# What's in the Dockerfile
cp /bin/sh -c "ls /usr/src/myapp"
# Via your docker run command
cp MainClass.java
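For illustration only, if you really did want to list the directory with the image as built, the entrypoint can be overridden at run time (the image name is taken from the question):
docker run --rm --entrypoint /bin/sh myjavaimage -c "ls /usr/src/myapp"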
As #QuintenScheppermans suggests in their answer you can use a docker run -v option to inject the file at run time, but this will happen after commands like RUN javac have already happened. You don't really want a workflow where the entire application gets rebuilt every time you docker run the container. Build the image during docker build time, or before.
Two things.
You have used CMD twice.
Only the last CMD in the Dockerfile takes effect; think of it as the purpose of your Docker image. Every time a container is run, it will execute CMD. If you want multiple commands, use RUN for the earlier steps and, lastly, a single CMD:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/
ENTRYPOINT ["cp"]
RUN /usr/src/myapp
RUN ls /usr/src/myapp
Copying stuff into image
There is a simple command, COPY, the syntax being COPY <from-here> <to-here>.
It seems like you want to run myjavaimage, so what you will do is
COPY /path/to/myjavaimage /myjavaimage
CMD myjavaimage MainClass.java
Where you see the arrows (the <...> placeholders), I've just written dummy code. Replace that with the correct code.
Also, your Dockerfile is badly constructed.
ENTRYPOINT -> not sure why you'd use "cp" here; an entrypoint could point to the root dir of your project or to an app that will be run.
I don't understand why you want to do ls /usr/src/myapp, but if you do want to do it, use RUN and not CMD.
Lastly,
the best way to debug Docker containers is in interactive mode. That means shelling into your container, having a look around, running code, and seeing what the problem is.
Run this: docker run -it <image-name> /bin/bash, then have a look inside; it's usually the best way to see what causes issues.
This stackoverflow page perfectly answers your question.
COPY foo.txt /data/foo.txt
# where foo.txt is the relative path on host
# and /data/foo.txt is the absolute path in the image
If you need to mount a file when running the command:
docker run --name=foo -d -v ~/foo.txt:/data/foo.txt -p 80:80 image_name

Source files are updated, but CMD does not reflect

I'm new to docker and am trying to dockerize an app I have. Here is the dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or my dockerfile setup?
UPDATE: I have found the issue. The issue is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of locally. I will be moving to Go 1.11, which supports Go modules, to fix this.
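For reference, a minimal sketch of what a module-based Dockerfile might look like (this assumes the project has go.mod and go.sum files; it is not part of the original question):
FROM golang:1.11
WORKDIR /app
# copy module files first so dependency downloads are cached separately from the source
COPY go.mod go.sum ./
RUN go mod download
COPY . .
CMD ["go", "run", "cmd/pkg/main.go"]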
I see several things:
According to your Dockerfile:
Maybe you need a dep init before dep ensure.
Probably you need to check that the main.go path is correct.
According to Docker philosophy:
In my humble opinion, you should create the image with docker build -t <your_image_name> ., executing that where your Dockerfile is, but without the CMD line.
I would then execute your go run <your main.go> at run time: docker run -d <your_image_name> go run <cmd/pkg/main.go>, or whatever your command is.
If something is wrong, you can check exited containers with docker ps -a and furthermore check logs with docker logs <your_CONTAINER_name/id>
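Putting those steps together, a hedged sketch (image and container names are placeholders):
docker build -t mygoapp .
docker run -d --name mygoapp_run mygoapp go run cmd/pkg/main.go
docker ps -a
docker logs mygoapp_run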
Another way to check the logs is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla

docker build how to run intermediate containers with centos:systemd

I am trying to build a docker image that is based on centos:systemd. In my Dockerfile I am running a command that depends on systemd running; this fails with the following error:
Failed to get D-Bus connection: Operation not permitted
error: %pre(mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64
How can I get the intermediate containers to run with --privileged and with -v /sys/fs/cgroup:/sys/fs/cgroup:ro mapped?
If I comment out the installer and just run the container and manually execute the installer it works fine.
Here is the Dockerfile
FROM centos/systemd
COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt
RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/
RUN /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
If your installer needs systemd running, I think you will need to launch a container with the base centos/systemd image, manually run the commands, and then save the result using docker commit. The base image ENTRYPOINT and CMD are not run while child images are getting built, but they do run if you launch a container and make your changes. After manually executing the installer, run docker commit {my_intermediate_container} {my_image}:{my_version}, replacing the bits in curly braces with the container name/hash, your desired image name, and image version.
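A hedged sketch of that manual workflow (container and image names are placeholders; the --privileged flag and cgroup mount come from the question):
docker run -d --name zend_install --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos/systemd
docker cp ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz zend_install:/opt/
docker exec -it zend_install bash
# inside the container: untar the archive and run install_zs.sh 7.1 java --automatic
docker commit zend_install myorg/zendserver:9.1.0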
You might also be able to change your Dockerfile to launch init before running your installer. I am not sure if that will work here in the context of building an image, but that would look like:
FROM centos/systemd
COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt
RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/ \
&& /usr/sbin/init \
&& /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic
A LAMP stack inside a Docker container does not need systemd; I have made it work with the docker-systemctl-replacement script. It is able to start and stop a service according to what's written in the *.service file. You could try it with what ZendServer normally does outside a Docker container.
