How can I run a docker image using singularity?

I would like to run a docker image with singularity (I have never used either).
The person who made the docker image suggested changing the terminal's working directory to the location of the files that are used as input for the docker image, and then running:
docker run -v ${PWD}:/DATA -w /DATA -i image/myimage -specifications
I am able to run this image using singularity when I omit ${PWD}:/DATA -w /DATA and spell out the paths to the input files and the docker image, but I would prefer to run it as in the example above. Can someone tell me how I can do this using singularity? I saw that singularity run --bind might be a way, but I couldn't figure out how. I know this is very basic, but I'm just starting to learn this. Thank you!

With Docker, -v ${PWD}:/DATA mounts the current directory inside the container at /DATA, and -w /DATA makes that the working directory.
You can easily emulate the mount with Singularity if you use --bind instead of -v:
--bind ${PWD}:/DATA
However, the Docker WORKDIR option (-w/--workdir) is not the same as the Singularity option -W/--workdir, which sets a scratch location for /tmp and similar directories. Depending on what exactly you want to do, singularity exec --pwd might be able to replace Docker's -w argument.
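Putting the pieces together, here is a sketch of a full invocation, assuming Singularity can pull the image through its docker:// support (or that you have already built a .sif from it) and that the image accepts the same -specifications flag as in your Docker example:

singularity run --bind ${PWD}:/DATA --pwd /DATA docker://image/myimage -specifications

singularity exec takes the same --bind and --pwd options if you need to call a specific program inside the image instead of its default runscript.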

Related

Understanding a Docker .sh file

The .sh file I am working with is:
docker run -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder tr_xl_container nohup python /Path/file.py -p/Path/ |& tee $HOME/Path/log.txt
I am confused about the -v option and everything after it, specifically the -v $HOME/Folder:/Folder tr_xl_container section and -p/Path/. If someone would be able to help break down what those commands mean, or point me to a reference that does, that would be very much appreciated. I checked the Docker documentation and Linux command line documentation and did not come up with anything too helpful.
A docker run command is split into 3 parts:
docker options
the image to run
a command for the container
In your case -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder are docker options.
tr_xl_container is the image name.
nohup python /Path/file.py -p/Path/ is the command sent to the container.
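Laid out with one part per line (functionally identical, and leaving the trailing |& tee for the next point), the command reads:

docker run \
  -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host \
  -v $HOME/Folder:/Folder \
  tr_xl_container \
  nohup python /Path/file.py -p/Path/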
The last part, |& tee $HOME/Path/log.txt, isn't run in the container; it pipes the output (stdout and stderr) of the docker run command through tee, which saves it in $HOME/Path/log.txt.
As for -v $HOME/Folder:/Folder, it's a volume mapping, or more precisely, a bind mount. It makes the directory $HOME/Folder on the host machine appear in the container at the path /Folder. Files in the host directory are visible inside the container, and if the container does anything with files in the /Folder directory, those changes are visible in the host directory too.
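A quick way to see the bind mount in action (alpine and the test file are only for illustration):

$ mkdir -p $HOME/Folder && echo hello > $HOME/Folder/test.txt
$ docker run --rm -v $HOME/Folder:/Folder alpine cat /Folder/test.txt
hello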
The command after the image name is for the container, and it's up to the container what to do with it. From the looks of it, it runs a Python program stored at /Path/file.py in the image, but to be sure, you'll need to know what the image does.

understanding docker : how come my docker container content is dynamic?

I want to make sure I understand Docker correctly: when I build an image from the current directory, I run:
docker build -t imgfile .
What happens when I change the content of a file in the directory AFTER the image is built? From what I've tried, it seems the content of the docker image also changes dynamically.
I thought the docker image was like a zip file that could only be changed with docker commands or logging into the image and running commands.
The Dockerfile is:
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
EXPOSE 8000
RUN echo 'export PS1="\[\e[36m\]zappashell>\[\e[m\] "' >> /root/.bashrc
CMD ["bash"]
And the docker run command is:
docker run -ti -p 8000:8000 -e AWS_PROFILE=zappa -v "$(pwd):/var/task" -v ~/.aws/:/root/.aws --rm zappa-docker-image
Thank you!
Your docker run command isn't really running your image at all. The docker run -v $(pwd):/var/task syntax overwrites whatever was in /var/task in the image with a bind mount of the current directory on the host. So when you edit a file on your host, the container sees the same host directory (and not the content from the image), and the changes show up inside the container as well.
You're right that the image is immutable. The image you show doesn't really contain anything beyond a .bashrc file that won't usually be used. You can try running the image without the -v options to see:
docker run --rm zappa-docker-image ls -al
# just shows `.` and `..` directories
I'd recommend making sure you COPY your application into the image, setting its CMD to actually run the application, and removing the -v option that overwrites its main directory. If your goal is to run host code against host files with host supporting data like your AWS credentials, you're not really getting much benefit from introducing Docker in between your application and every single file it uses.
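For example, a sketch of a Dockerfile along those lines, assuming a hypothetical app.py as the entry point and that python is on the image's PATH:

FROM lambci/lambda:build-python3.8
WORKDIR /var/task
COPY . /var/task
EXPOSE 8000
CMD ["python", "app.py"]

With the code baked in, docker run --rm -p 8000:8000 zappa-docker-image behaves the same everywhere, regardless of what happens to be in the host's current directory.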

Docker volumes not keeping data

I've built a docker image with a python script that works with two different commands. The first one creates a file that is used when executing the second one.
As far as I know, I must use a Docker volume to store data between executions so I've created a volume with:
docker volume create myvol
and then used it when running the container:
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
But then, when executing the other command, it seems that the Docker directory /data is empty...
$ docker run predict -v myvol:/data
Error: /data/model.h5 not found
Is there any point that I'm missing?
The docker command line is order sensitive. The syntax is:
docker $args_to_docker run $args_to_run $image_name $override_to_cmd
In your command you pass the -v option after the image name, so it becomes the CMD value in your container:
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
That runs the command fit -v myvol:/data inside the container.
The solution is to change the order so that -v is an option to run and defines a volume:
$ docker run -v myvol:/data myimg fit
$ docker run -v myvol:/data predict
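To convince yourself that the volume keeps data between runs, here is a quick check with alpine standing in for your image:

$ docker run --rm -v myvol:/data alpine sh -c 'echo test > /data/model.h5'
$ docker run --rm -v myvol:/data alpine ls /data
model.h5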
Make sure you pass the -v or --mount argument as an option to docker run, before the image name. That way the data is really stored outside of the container, and you'll lose nothing.
See: https://docs.docker.com/storage/volumes/ for details.
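For reference, the --mount equivalent of the corrected command above is more verbose but more explicit about what is being mounted:

$ docker run --mount type=volume,source=myvol,target=/data myimg fit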
When running these commands, remember that the -v option always takes the form <path_on_host>:<path_in_container>. The host part must be an absolute path or a volume name; a bare name such as myvol refers to a named volume rather than a host directory. So one approach is to create a directory on your host called data, mount it with the -v switch, and model.h5 will be written there.
So if my data directory were C:\data on a Windows machine, I would use:
docker run -v C:\data:/data <img_name>
If I were on Unix, with my data directory at /usr/data, and I wanted it mounted at /data in the container, I would use:
docker run -v /usr/data:/data <img_name>

Docker volume mount: "no such file or directory"

I'm pretty new to Docker and trying to get my first (real) image up and running. I've written my Dockerfile:
WORKDIR /myapp
VOLUME /myapp/configuration
VOLUME /etc/asterisk
VOLUME /usr/share/asterisk/sounds
So my container should start in /myapp and I should be able to mount external volumes to configure my app (configuration), Asterisk and its sounds.
Though, when I start a container with my image:
docker run -it \
--entrypoint /bin/bash myapp \
-v $(pwd)/asterisk:/etc/asterisk \
-v $(pwd)/configuration:/myapp/configuration \
-v $(pwd)/asterisk/sounds:/usr/share/asterisk/sounds
It gives me the following error:
/bin/bash: /home/me/Docker/asterisk:/etc/asterisk: No such file or directory
I really don't understand why. I've verified it's not a line-ending issue (CRLF instead of the expected LF, for example).
If it counts, I'm running elementary OS Loki.
Any suggestions?
I found what the problem was; my hint was the unusual /bin/bash at the beginning of the error line.
Docker was interpreting my -v options as arguments for the entrypoint (/bin/bash), and bash didn't understand them.
I reordered the options (--entrypoint is now the last one, just before the image name) and it works like a charm.
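For reference, the reordered command, with every option ahead of the image name:

docker run -it \
  -v $(pwd)/asterisk:/etc/asterisk \
  -v $(pwd)/configuration:/myapp/configuration \
  -v $(pwd)/asterisk/sounds:/usr/share/asterisk/sounds \
  --entrypoint /bin/bash \
  myapp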

Docker run without setting WORKDIR doesn't work

This doesn't work:
docker run -d -p 80:80 image /go/src/hello
This works:
docker run -w /go/src/ -d -p 80:80 image ./hello
I'm confused about the above result.
I prefer the first one, but it doesn't work. Can anyone help with this?
It depends on the ENTRYPOINT (you can see it with a docker inspect --format='{{.Config.Entrypoint}}' image)
With a default ENTRYPOINT of sh -c, both should work.
With any other ENTRYPOINT, it might expect to be in the right folder before doing anything.
Also, a docker logs <container> would be helpful to see what the first try emitted as output.
As the OP Clerk comments, it is from a golang Dockerfile/image, which means 'hello' will refer to a compiled hello executable in $GOPATH/bin, or to an executable built with go build in the src folder.
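A minimal way to see what -w itself does, using a stock ubuntu image purely for illustration:

$ docker run --rm ubuntu pwd
/
$ docker run --rm -w /go/src ubuntu pwd
/go/src

-w only changes the directory the command starts in; whether the binary is actually found still depends on the path you give and on the image's ENTRYPOINT.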
