Docker run without setting WORKDIR doesn't work

This doesn't work:
docker run -d -p 80:80 image /go/src/hello
This works:
docker run -w /go/src/ -d -p 80:80 image ./hello
I'm confused about the above result.
I prefer the first one but it doesn't work. Can anyone help with this?

It depends on the ENTRYPOINT (you can see it with a docker inspect --format='{{.Config.Entrypoint}}' image).
With a default ENTRYPOINT of sh -c, both should work.
With any other ENTRYPOINT, it might expect to be in the right folder before doing anything.
A docker logs <container> would also help, to see what the first attempt emitted as output.
As the OP (Clerk) comments, it is built from a golang Dockerfile/image, which means hello refers either to a compiled hello executable in $GOPATH/bin or to one built with go build in the source folder.
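For reference, a minimal sketch of such a Go image (this is not the OP's actual Dockerfile; the layout and names are assumptions) that bakes the working directory into the image so the relative ./hello form works without -w:
FROM golang
# assumed layout: go.mod and a main package at the build-context root
WORKDIR /go/src
COPY . .
RUN go build -o hello .
# with WORKDIR set in the image, docker run -d -p 80:80 image ./hello starts in /go/src
CMD ["./hello"]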

Related

Docker run commands work but Dockerfile does not

I have a problem running the image built from my Dockerfile. The CLI commands work fine, but when I use the Dockerfile I get an error from localhost:
localhost didn’t send any data.
What I am doing is simple. Via the CLI:
docker run -d --name mytomcat -p 8080:8080 tomcat:latest
docker exec -it mytomcat /bin/bash
mv webapps webapps2
mv webapps.dist/ webapps
exit
Which works fine.
My Dockerfile:
FROM tomcat:latest
CMD mv webapps webapps2 && mv webapps.dist/ webapps && /bin/bash
Build and run:
docker build -t myrepo/tomacat:1.00 .
docker run -d --name mytomcat -p 8080:8080 myrepo/tomacat:1.00
This doesn't work and shows the above error.
Note: I am using the mv command because otherwise I get a 404 error!
Does anybody know the problem here?
When your Dockerfile has a CMD, it replaces the CMD from the base image. The tomcat base image's CMD starts the Tomcat server; with this Dockerfile the container tries to run a bash shell instead, and without any input that shell exits immediately.
To just move files around, it's usually better to use COPY and RUN directives that set the image up once, rather than repeating these steps every time the container starts. Since the base image already has a reasonable CMD, you don't need to repeat it in your own Dockerfile.
FROM tomcat:latest
RUN mv webapps webapps2 && mv webapps.dist/ webapps
# no particular mention of bash; use the `CMD` from the base image
It's not uncommon for a base image to include some sort of runtime that needs to be configured, but for the base image's CMD to still be correct. In addition to tomcat, nginx and php:fpm work similarly; so long as their configuration files and code are in the right place, you don't need to repeat the CMD.
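For example, a customized nginx image follows the same pattern (a sketch; the ./site/ directory is an assumption): put the content where the stock configuration expects it and leave the base image's CMD alone.
FROM nginx:latest
# the default configuration serves /usr/share/nginx/html;
# no CMD here, the base image already starts nginx
COPY ./site/ /usr/share/nginx/html/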

How to `docker-compose run` outside the working directory

Suppose I have the following Dockerfile:
FROM node:alpine
WORKDIR /src/mydir
Now, suppose I want docker-compose run to execute from the /src folder rather than /src/mydir, which it uses by default.
I tried the following:
docker-compose run my_container ../ my-task
However the above failed.
Any guidance is much appreciated!
You want to use the --workdir (or -w) option of the docker-compose run command.
See the official documentation of the command here: https://docs.docker.com/compose/reference/run/
For instance, given your above example:
docker-compose run -w /src my_container my-task
(note that run options go before the service name; anything after it is treated as the command)
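If the override should apply on every run rather than per invocation, a working_dir key in the Compose file does the same thing (a sketch; the service definition is assumed):
# docker-compose.yml (sketch)
services:
  my_container:
    build: .
    working_dir: /src   # overrides the Dockerfile's WORKDIR /src/mydir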

A script copied through the dockerfile cannot be found after I run docker with -w

I have the following Dockerfile
FROM ros:kinetic
COPY . ~/sourceshell.sh
RUN["/bin/bash","-c","source /opt/ros/kinetic/setup.bash"]
When I did this (after building it with docker build -t test):
docker run --rm -it test /bin/bash
I got a bash terminal and could clearly see a sourceshell.sh file copied from the host, which I could even execute.
However I modified the docker run like this
docker run --rm -it -w "/root/afolder/" test /bin/bash
and now the file sourceshell.sh is nowhere to be seen.
Where do the files copied in the Dockerfile go when the working directory is reassigned with docker run?
Option "-w" is telling your container execute commands and access on/to "/root/afolder" while your are COPYing "sourceshell.sh" to the context of the build, Im not sure, you can check into documentation but i think also "~" is not valid either. In order to see your file exactly where you access you should use your dockerfile like this bellow otherwise you would have to navigate to your file with "cd":
FROM ros:kinetic
WORKDIR /root/afolder
COPY . ./
RUN ["/bin/bash", "-c", "source /opt/ros/kinetic/setup.bash"]
Just in case the difference between the build and the run step is unclear: the Dockerfile above belongs to the build step, which produces the image (the one you tagged test). Then the command:
docker run --rm -it -w "/root/afolder/" test /bin/bash
runs a container from the test image with /root/afolder as its working directory.
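A quick way to check this yourself (a hypothetical session, assuming the revised Dockerfile above):
docker build -t test .
# the COPY landed in /root/afolder at build time, so the file is there
docker run --rm -it test ls /root/afolder
# -w only changes the starting directory; it does not move any files
docker run --rm -it -w /root/afolder test ls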

How can I run docker image using singularity?

I would like to run a docker image with singularity (I have never used either).
The person who made the Docker image suggested opening a terminal in the directory where the input files for the image are located and then running:
docker run -v ${PWD}:/DATA -w /DATA -i image/myimage -specifications
I am able to run this image using Singularity when I omit -v ${PWD}:/DATA -w /DATA and give the paths to the input files and the Docker image explicitly. But I would prefer to run it as in the example above. Can someone tell me how I can do this using Singularity? I saw that singularity run --bind might be a way, but couldn't figure out how. I know this is very basic, but I'm just starting to learn this. Thank you!
With Docker, -v ${PWD}:/DATA -w /DATA mounts the current directory inside the container at the specified location (/DATA) and makes it the working directory.
You can easily emulate this behaviour with Singularity if you use --bind instead of -v:
--bind ${PWD}:/DATA
However, the Docker working-directory option (-w/--workdir) is not the same as the Singularity option -W/--workdir (which sets where temporary directories go). Depending on what you want to do exactly, the --pwd option of singularity exec or singularity run can replace Docker's -w argument.
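Putting that together, a sketch of a Singularity equivalent of the Docker command above (the docker:// URI and image name are assumptions; use the path to a local .sif file if you already have one). singularity run executes the image's default entrypoint, and --pwd works with both run and exec:
singularity run --bind ${PWD}:/DATA --pwd /DATA docker://image/myimage -specifications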

Run command as --privileged in Dockerfile

I need to be --privileged to run a specific command in the Dockerfile but I can't find a way to tell Docker to do so.
The command is RUN echo core > /proc/sys/kernel/core_pattern
If I put that in the Dockerfile the build process fails.
If I run the Dockerfile with that line commented but with the flag --privileged then I can run the command well within the container.
Is there any solution to make everything work from the Dockerfile?
Thank you
Not exactly "Dockerfile", but you can do this with an entrypoint script provided you always run the container with --privileged
That being said, I would warn against this if at all possible, since part of the beauty of Docker is that you aren't running things as root.
A better alternative, IMHO, is to change this on the host system instead; that way it is reflected within the container as well.
The only caveat is that the change affects all containers on that host (and, of course, the host itself).
Here is a proof of concept for my suggested solution:
root@terrorbyte:~# docker run -it alpine cat /proc/sys/kernel/core_pattern
core
root@terrorbyte:~# echo core2 > /proc/sys/kernel/core_pattern
root@terrorbyte:~# docker run -it alpine cat /proc/sys/kernel/core_pattern
core2
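For completeness, a sketch of the entrypoint-script route mentioned above (file names and the final CMD are placeholders; the container still has to be started with --privileged for the write to succeed):
# entrypoint.sh (sketch)
#!/bin/sh
set -e
echo core > /proc/sys/kernel/core_pattern   # only succeeds under --privileged
exec "$@"
# Dockerfile additions (sketch)
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["your-original-command"]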
