How to `docker-compose run` outside the working directory - docker

Suppose I have the following dockerfile:
FROM node:alpine
WORKDIR /src/mydir
Now, suppose I want to run docker-compose from the src/ folder as opposed to src/mydir as happens by default.
I tried the following:
docker-compose run my_container ../ my-task
However the above failed.
Any guidance is much appreciated!

You want to use the --workdir (or -w) option of the docker-compose run command.
See the official documentation of the command here: https://docs.docker.com/compose/reference/run/
For instance, given your example above:
docker-compose run -w /src my_container my-task
Note that options such as -w must come before the service name, otherwise they are passed to the container as part of the command.
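If you would rather make /src the default instead of passing a flag on every run, a compose-level alternative (a minimal sketch; the service name my_container and build: . are assumptions based on your example) is the working_dir key in docker-compose.yml:
version: '3'
services:
  my_container:
    build: .
    working_dir: /src
With that in place, docker-compose run my_container my-task starts in /src.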

Related

Folder created in Dockerfile not visible after running container

I am using Docker and have a docker-compose.yml and a Dockerfile. In my Dockerfile I create a folder. When I build the image I can see that the folder is created, but when I run the container I can see all the files, yet the folder I created during the build is not visible.
Both of these files are in the same location.
Here is my docker-compose.yml:
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    command: tail -f /dev/null # keep the container running
Here is my Dockerfile
FROM alpine
WORKDIR /app
COPY . /app
RUN mkdir "test"
RUN ls
To build the image I use the command: docker-compose build --progress=plain --no-cache. The RUN ls command in the Dockerfile shows three entries: Dockerfile, docker-compose.yml, and the test directory created in the Dockerfile.
When my container is running and I enter it to check which files are there, the 'test' directory is missing.
I probably have the volumes mounted incorrectly. When I remove them, I do see the 'test' folder after entering the container. Unfortunately,
I want the files in the container and on the host to stay in sync.
This is a simple example. I have the same thing creating dependencies in nodejs. I think that the question written in this way will be more helpful to others.
When you bind-mount ./ onto /app in docker-compose, that host folder is mounted over the image's /app at run time, so the test folder created during the build is hidden by the mount.
This may work for you (creating the folder from the docker-compose file instead), but I'm not sure it fits your use case. Note the sh -c wrapper, otherwise && is passed to mkdir as an argument instead of being interpreted by a shell:
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    command: sh -c "mkdir -p /app/test && tail -f /dev/null"
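To check the result (standard Compose commands; app is the service name from the file above):
docker-compose up -d
docker-compose exec app ls /app
Because ./ is bind-mounted, the test directory created by the command also appears on the host.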
Based on your comment, below is an example of how I would use a Dockerfile to build the node packages and copy node_modules back to the host:
Dockerfile
FROM node:latest
WORKDIR /app
COPY . /app
RUN npm install
Shell script
#!/bin/bash
docker build -t my-image .
container_id=$(docker run -d my-image)
docker cp $container_id:/app/node_modules .
docker stop $container_id
docker rm $container_id
Or, a simpler way that I often use: run the Docker image interactively with the project mounted and open a shell in it:
docker run --rm -it -p 80:80 -v $(pwd):/home/app node:14.19-alpine sh
Inside the container, cd to /home/app, run the npm commands, then exit.
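Since you mention the same problem with node_modules: a common pattern (just a sketch, and note that it keeps node_modules inside a volume rather than syncing it to the host) is to declare an anonymous volume for /app/node_modules, assuming the image installs dependencies there at build time, so the bind mount of ./ does not hide them:
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
      - /app/node_modules
    command: tail -f /dev/null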

Apache/Nifi 1.12.1 Docker Image Issue

I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
Thing is that the folder isn't created when I'm building the image from Linux distros like Ubuntu and CentOS. Build succeeds, I run it with docker run -it -d --rm --name nifi nifi-test but when I enter the container through docker exec there's no flow dir.
The strange thing is that the flow dir is created normally when I build the image on Windows with Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ... but still...
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Take a look at this as well (screenshot of the CLI output omitted).
Thanks in advance.
If you take a look at the Dockerfile provided, you can see that it declares a VOLUME for several directories, including /opt/nifi/nifi-current/conf. You can confirm this by running:
docker image inspect apache/nifi:1.12.1
and checking the Volumes section of the output.
The RUN command that creates a folder under the conf directory succeeds at build time, BUT when you run the container the volumes are mounted, and they overwrite everything under the mountpoint /opt/nifi/nifi-current/conf,
in your case the flow directory.
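A quick way to list just the declared volumes (plain docker CLI, nothing NiFi-specific):
docker image inspect --format '{{json .Config.Volumes}}' apache/nifi:1.12.1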
You can test this by editing your Dockerfile:
FROM apache/nifi:1.12.1
# this will be overridden by the volume mount
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the container environment
RUN mkdir -p /opt/nifi/nifi-current/flow
To tackle this you could clone the Dockerfile of the image you use as a base (the one in FROM), remove the VOLUME directive manually, and build it to use as your own base image.
Alternatively, avoid adding directories under the mount points specified in the Dockerfile.
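Another option, sketched here under the assumption that you only need the directory to exist at run time, is to bind-mount a host directory at that exact path when starting the container; the bind mount takes precedence over the image's anonymous volume for that subpath:
docker run -it -d --rm --name nifi -v "$(pwd)/flow:/opt/nifi/nifi-current/conf/flow" nifi-test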

A script copied through the dockerfile cannot be found after I run docker with -w

I have the following Dockerfile
FROM ros:kinetic
COPY . ~/sourceshell.sh
RUN["/bin/bash","-c","source /opt/ros/kinetic/setup.bash"]
When I did this (after building it with docker build -t test):
docker run --rm -it test /bin/bash
I had a bash terminal and I could clearly see there was a sourceshell.sh file that I could even execute from the Host
However I modified the docker run like this
docker run --rm -it -w "/root/afolder/" test /bin/bash
and now the file sourceshell.sh is nowhere to be seen.
Where do the files copied in the Dockerfile go when the working directory is reassigned with docker run?
Option "-w" is telling your container execute commands and access on/to "/root/afolder" while your are COPYing "sourceshell.sh" to the context of the build, Im not sure, you can check into documentation but i think also "~" is not valid either. In order to see your file exactly where you access you should use your dockerfile like this bellow otherwise you would have to navigate to your file with "cd":
FROM ros:kinetic
WORKDIR /root/afolder
COPY . .
RUN ["/bin/bash","-c","source /opt/ros/kinetic/setup.bash"]
Just in case the difference between the build and the run process is unclear: the Dockerfile above belongs to the build step, which first produces an image (the one you tagged test). Then the command:
docker run --rm -it -w "/root/afolder/" test /bin/bash
runs a container from the test image with /root/afolder as its working directory.
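A quick way to verify the behaviour with the Dockerfile above (test is the image tag from the question):
docker build -t test .
docker run --rm -it -w /root/afolder test /bin/bash -c "pwd && ls -la"
With WORKDIR and -w pointing at the same directory, sourceshell.sh shows up right where the shell starts.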

Run a command as root with docker-compose?

While in the process of painstakingly sculpting a Dockerfile and docker-compose.yml, what is THE RIGHT WAY to run a root shell in work-in-progress containers (without actually starting their services!) in order to debug issues? I need to be able to run the shell as root, because only root has full access to files containing the information that I need to examine.
I can modify Dockerfile and docker-compose.yml in order to achieve this goal; as I wrote above, I am in the process of sculpting those anyway.
The problem however is that about the only way I can think of is putting USER root in Dockerfile or user: root in docker-compose.yml, but those SimplyHaveNoEffect™ in the docker-compose run <service> bash scenario. whoami in the shell thus started says neo4j instead of root, no matter what I try.
I might add sudo to the image, which doesn't have sudo, but this should be considered last resort. Also using docker directly instead of docker-compose is less than preferable.
All of the commands that launch shells in containers (including, for example, docker-compose run) have a --user option, so you can specify an arbitrary user for your debugging shell.
docker-compose run -u root <service> bash
If you're in the process of debugging your image build, note that each build step produces an image, and you can run a debugging shell on that image. (For example, examine the step before a RUN step to see what the filesystem looks like before it executes, or after to see its results.)
$ docker build .
...
Step 7/9 : RUN ...
---> Using cache
---> 55c91a5dca05
...
$ docker run --rm -it -u root 55c91a5dca05 bash
In both of these cases the command (bash) overrides the CMD in the Dockerfile. If you have an ENTRYPOINT wrapper script, that will still run, but the standard exec "$@" command will launch your debugging shell. If you've put your default command to run as ENTRYPOINT, change it to CMD to better support this use case (and also the wrapper entrypoint pattern, should you need it).
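For reference, a minimal wrapper entrypoint of the kind described might look like this (entrypoint.sh is a hypothetical name; the important part is the final exec "$@", which hands control to whatever command was passed, including a debugging bash):
#!/bin/sh
# entrypoint.sh: one-time setup, then run the given command
set -e
# ...setup steps (config templating, waiting for dependencies, etc.) go here...
exec "$@"
With ENTRYPOINT ["./entrypoint.sh"] and your service command in CMD, docker-compose run -u root <service> bash still runs the setup and then drops you into the root shell.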
If you really can't change the Dockerfile, you can override the ENTRYPOINT too, but it's a little awkward.
docker run --rm -it -u root --entrypoint ls myimage -al /app
You can also use it this way:
version: '3'
services:
  jenkins:
    user: root
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /jenkins:/var/jenkins_home
You can refer to How to configure docker-compose.yml to up a container as root.

Docker run without setting WORKDIR doesn't work

This doesn't work:
docker run -d -p 80:80 image /go/src/hello
This works:
docker run -w /go/src/ -d -p 80:80 image ./hello
I'm confused about the above result. I prefer the first form, but it doesn't work. Can anyone help with this?
It depends on the ENTRYPOINT (you can see it with a docker inspect --format='{{.Config.Entrypoint}}' image)
With a default ENTRYPOINT of sh -c, both should work.
With any other ENTRYPOINT, it might expect to be in the right folder before doing anything.
Also, docker logs <container> would be helpful to know what the first attempt emitted as output.
As the OP Clerk comments, it is a golang Dockerfile/image, which means hello refers either to a compiled hello executable in $GOPATH/bin or to an executable built with go build in the src folder.
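A quick diagnosis sketch (image is a placeholder for your image name):
docker inspect --format='{{.Config.Entrypoint}} {{.Config.Cmd}}' image
container_id=$(docker run -d -p 80:80 image /go/src/hello)
docker logs "$container_id"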
