I am trying to pull a Docker image but have to use Singularity. How can I do this? Here is the script I am running:
cp -rp ~/adversarial-policies/ $SLURM_TMPDIR
cd adversarial-policies/
singularity pull docker://humancompatibleai/adversarial_policies:latest
singularity run -it --env MUJOCO_KEY=~/.mujoco/mjkey.txt ./adversarial_policies-latest.simg
source ./modelfreevenv/bin/activate
python -m modelfree.multi.train with paper --path $SLURM_TMPDIR --data-path $SLURM_TMPDIR
cp $SLURM_TMPDIR/job-output.txt /network/tmp1/gomrokma/
cp $SLURM_TMPDIR/error.txt /network/tmp1/gomrokma/
The errors I get are:
ERROR: Unknown option: --build-arg
ERROR: Unknown option: -it
Any help would be appreciated. I am new to using Singularity containers instead of Docker.
Singularity and Docker are both container platforms, but they are not a drop-in replacement for each other. I strongly recommend reading the documentation for the version of Singularity you're using; the latest version has a good section on using Docker and Singularity together.
If you are using Singularity v3 or newer, the file created by singularity pull will be named adversarial_policies_latest.sif, not adversarial_policies-latest.simg. If v2 is the only version available on your cluster, ask the admins to install v3: 2.6.1 is the only v2 release without known security issues, and it is no longer receiving updates.
As for singularity run ..., the -it Docker options force an interactive TTY session rather than running in the background. singularity exec and singularity run both always run in the foreground, so there is no equivalent option needed with Singularity. Passing environment variables is also handled differently: since the container runs as your user, your environment is passed through to it. You can either set export MUJOCO_KEY=~/.mujoco/mjkey.txt further up the script or set it just for that command: MUJOCO_KEY=~/.mujoco/mjkey.txt singularity run ./adversarial_policies_latest.sif.
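Putting that together, the container part of your script would become something like this (a sketch, assuming Singularity v3 and therefore the .sif filename):
singularity pull docker://humancompatibleai/adversarial_policies:latest
# singularity run always stays in the foreground, so no -it is needed.
# The environment variable is set inline instead of via --env.
MUJOCO_KEY=~/.mujoco/mjkey.txt singularity run ./adversarial_policies_latest.sif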
I'm using the node Docker image as a container for my build pipelines.
An issue I frequently run into is that a binary I expect to exist doesn't, and I have to wait for it to fail in the build pipeline. The zip command is one such example.
I can run the docker image on my local machine and ssh in to test commands.
Is there a way to summarise what commands are available for a given image?
You could look at the contents of /bin:
$ docker run --rm -it --entrypoint=ls node /bin
or /usr/local/bin:
$ docker run --rm -it --entrypoint=ls node /usr/local/bin
etc...
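If you are after one binary in particular, you can also test for it directly instead of scanning the listings; for example (assuming the image provides a POSIX shell):
$ docker run --rm --entrypoint=sh node -c 'command -v zip || echo "zip not found"'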
I've been using a Docker container to build the Chromium browser (building for Android on Debian 10). I've already created a Dockerfile that installs most of the packages I need.
Now, after building and running the container, I followed the instructions, which asked me to execute an install script (./build/install-build-deps-android.sh). In this script multiple apt install commands are executed.
My question now is: is there a way to install these packages without rebuilding the container? Downloading and building it took rather long, and rebuilding the container each time a new package is required seems suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages). And using apt will give:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx is just an example install).
I'm thankful for any hints, as I could only find guides that use the Dockerfile to install packages.
You can use docker commit:
Start your container: sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit the container's bash session
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you now run docker images, you will see NEW_IMAGE_NAME listed under your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME - this one will include your additional installations.
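For example, using the nginx install from the question (1a2b3c4d stands for the real container ID, which docker ps -a will show; note the apt update, which is what the "Unable to locate package" error above was missing):
$ sudo docker run -it IMAGE_NAME bash
root@1a2b3c4d:/# apt update && apt install -y nginx
root@1a2b3c4d:/# exit
$ sudo docker commit 1a2b3c4d NEW_IMAGE_NAME
$ sudo docker run -it NEW_IMAGE_NAME nginx -v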
Answer based on the following tutorial: How to commit changes to docker image
Thanks to @adnanmuttaleb and @David Maze (unfortunately, they only replied in comments, so I cannot accept their answers).
What I did was edit the Dockerfile for any later updates (which already happened), and use the exec command to install the needed dependencies from outside the container. Also remember to run
apt update
first, otherwise you cannot find anything...
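For example, the lsb_release that the install script complained about can be installed from outside a running container like this (CONTAINER_ID is a placeholder; on Debian the command ships in the lsb-release package):
docker exec CONTAINER_ID bash -c 'apt update && apt install -y lsb-release'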
A slight variation of the steps suggested by Arye that worked better for me:
Create a container from the image and access it in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify container as desired
Leave container: exit
List launched containers: docker ps -a and copy the ID of the container just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.
I would like to create a minimalist dev environment for occasional developers that only needs Docker.
The ecosystem would have:
code-server image to run Visual Studio Code
gcc image to build the code
git to push/commit the code
ubuntu with some modifications to run the code
I looked at docker-in-docker, which could be a solution:
Docker
code-server
docker run -it -v ... gcc make
docker run -it -v ... git git commit ...
docker run -it -v ... ubuntu ./program
But it seems perhaps a bit overkill. What would be the proper way to have a full, well-separated dev environment that only requires Docker to be installed on the host machine (Linux, Windows, macOS, Chromium OS)?
I suggest using a Dockerfile.
This file specifies a few steps used to build an image.
The first line of the file specifies a base image (in your case, I would use Ubuntu):
FROM ubuntu:latest
Then, you can e.g. copy files to the image or select commands to run:
RUN apt update && apt install -y gcc make
RUN apt install -y git
and so on.
At the end, you may want to specify the program that is run when you start the container:
CMD /bin/bash
Then you can build it with the command docker build -f Dockerfile -t devenv:latest . (note the trailing dot, which sets the build context). This builds a new image named devenv:latest (latest is the tag) from the file Dockerfile.
Then, you can create a container from the image using docker run devenv:latest.
If you want to work in the container interactively, create it using docker run -it devenv:latest.
If you want to, you can also use the code-server base image instead of ubuntu:latest.
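Putting the pieces together, a minimal Dockerfile for this environment might look like the sketch below (the package list is an assumption; adjust it to your tools):
FROM ubuntu:latest
# Install the compiler, build tool, and version control in a single layer.
RUN apt update && apt install -y gcc make git
# Start an interactive shell by default.
CMD ["/bin/bash"]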
I am following this guide to make an AWS Lambda package for a piece of Python code. The only difference is that I am pulling a python3.7 image like so:
docker run lambci/lambda:build-python3.7 aws --version
According to the documentation, I should be able to run uname to check that I am inside the Linux environment, but I am not inside this environment. I am unable to enter the Docker image after the pull completes. How do I enter the container once the image has been pulled?
You need to specify a command (it looks like that image does not have a default one). Also add -it as parameters; then it works:
docker run -it lambci/lambda:build-python3.7 bash
https://serverfault.com/questions/757210/no-command-specified-from-re-imported-docker-image-container
$ man docker
-i, --interactive Keep STDIN open even if not attached
-t, --tty Allocate a pseudo-TTY
I did not get why -i is needed, since the container is not run in detached mode, but it does not work without it. Explanations from experts are welcome in the comments.
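You can see what each flag contributes by trying the variants (behaviour as I understand it, with the image from the question):
$ docker run lambci/lambda:build-python3.7 bash     # stdin closed: bash reads EOF and exits immediately
$ docker run -i lambci/lambda:build-python3.7 bash  # stdin open, but no TTY, so no prompt or line editing
$ docker run -it lambci/lambda:build-python3.7 bash # a proper interactive shell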
I am running a Docker image from deepai/densecap on my Windows machine using Docker Toolbox. When I run the image using the Docker CLI and pass the arguments to the cp command as shown in the picture below, it says "docker cp" requires exactly 2 arguments. The various commands I tried to pass my image from the local file system to the container are:
docker cp C:\Users\piyush\Desktop\img1.jpg in1
docker cp densecap:C:\Users\piyush\Desktop\image1.jpg in1
docker cp C:\Users\piyush\Desktop\img1.jpg densecap:/shared/in1
I have just started using Docker. Any help will be highly appreciated. I am also posting the container log:
It would seem that on some versions of Docker, docker cp does not support parameter expansion...
For example:
WORKS (Docker version 19.03.4-ce, build 9013bf583a):
CTR_ID=$(docker ps -q -f name=containername)
docker cp patches $CTR_ID:/home/build
FAILS (Docker version 19.03.4-ce, build 9013bf583a):
BUILDHOME=/home/build
docker cp patches containeridliteral:$BUILDHOME
In your case, maybe the pwd is not expanding properly.
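One way to check what the shell actually passes to docker cp is to echo the command first (using the placeholders from the failing example above):
BUILDHOME=/home/build
# Print the command instead of running it, to inspect the expansion:
echo docker cp patches containeridliteral:"$BUILDHOME"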