I want to build a Singularity container from a Dockerfile.
I have pulled and run Docker images from Docker Hub with Singularity:
singularity pull docker://ubuntu:latest
I have also built an image from a Singularity recipe file:
singularity build cpp.sif singularity_file
But now I want to build a Singularity image from a Dockerfile. Does anyone know how to do it? Is it possible?
You cannot build a Singularity container directly from a Dockerfile, but you can do it in a two-step process:
docker build -t local/my_container:latest .
sudo singularity build my_container.sif docker-daemon://local/my_container:latest
Using docker://my_container looks for the container on Docker Hub. Using docker-daemon:// instead looks at your locally built Docker images. You can also use Bootstrap: docker-daemon in a Singularity definition file.
EDIT: Both singularity and apptainer now require an explicit tag name for the source docker container. Answer updated accordingly.
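For reference, a minimal definition file using the docker-daemon bootstrap might look like this (a sketch; local/my_container:latest is the tag built in the step above, and the runscript is just an example):

```
Bootstrap: docker-daemon
From: local/my_container:latest

%runscript
    exec "$@"
```

Build it with sudo singularity build my_container.sif my_container.def.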
You can transform a Dockerfile into a Singularity recipe, or vice versa, using Singularity Python. It offers some very helpful utilities; consider installing it if you plan to work with Singularity a lot.
pip3 install spython  # install spython if you do not have it
# print the converted recipe to the console
spython recipe Dockerfile
# or save it to a .def file
spython recipe Dockerfile &> Singularity.def
If you have problems with pip, you can download spython or pull a container as described in the Singularity Python install documentation, which also covers recipe conversion in more detail.
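The conversion also works in the other direction (spython detects the input recipe type), so you can go from a Singularity definition file back to a Dockerfile. A sketch, with example filenames:

```shell
# Singularity definition -> Dockerfile
spython recipe Singularity.def &> Dockerfile
```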
sudo singularity build ubuntu.sif docker://ubuntu:latest
builds it directly for me. I'm unsure if there was an update to Singularity for this purpose.
My goal is to build a Docker build image that can be used as a CI stage capable of building a multi-architecture image.
FROM public.ecr.aws/docker/library/docker:20.10.11-dind
# Add the buildx plugin to Docker
COPY --from=docker/buildx-bin:0.7.1 /buildx /usr/libexec/docker/cli-plugins/docker-buildx
# Create a buildx image builder that we'll then use within this container to build our multi-architecture images
RUN docker buildx create --platform linux/amd64,linux/arm64 --name=my-builder --use
The above builds the container I need, but does not include the arm64 emulator. This means that when I try to use it to build a multi-architecture image via a command like docker buildx build --platform=$SUPPORTED_ARCHITECTURES --build-arg PHP_VERSION=8.0.1 -t my-repo:latest ., I get the error:
error: failed to solve: process "/dev/.buildkit_qemu_emulator /bin/sh -c apt-get update && apt-get -y install -q ....
The solution is to run docker run --rm --privileged tonistiigi/binfmt --install arm64 as part of the CI steps, using the buildx container I previously built. However, I'd really like to understand why the emulator can't be installed in the container by adding something like this to the Dockerfile:
# Install arm emulator
COPY --from=tonistiigi/binfmt /usr/bin/binfmt /usr/bin/binfmt
RUN /usr/bin/binfmt --install arm64
I'd really like to understand why the emulator cannot seem to be installed in the container
Because when you perform a RUN command, the result is to capture the filesystem changes from that step and save them to a new layer in your image. But the qemu setup command isn't really modifying the filesystem; it's modifying the host kernel, which is why it needs --privileged to run. You'll see evidence of those kernel changes in /proc/sys/fs/binfmt_misc/ on the host after configuring qemu. It's not possible to specify that flag as part of the container build: all steps run from a Dockerfile are unprivileged, without access to the host devices or the ability to alter the host kernel.
The standard practice in CI systems is to configure the host in advance, and then run the docker build. In GitHub Actions, that's done with the setup-qemu-action before running the build step.
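In GitHub Actions that setup looks roughly like this (a sketch; the action versions and image tag are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Configures qemu/binfmt_misc on the runner host, outside of any docker build
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build multi-arch image
        run: docker buildx build --platform linux/amd64,linux/arm64 -t my-repo:latest .
```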
I am trying to pull a docker image but have to use singularity. How can I do this? Here is the script I am running.
cp -rp ~/adversarial-policies/ $SLURM_TMPDIR
cd adversarial-policies/
singularity pull docker://humancompatibleai/adversarial_policies:latest
singularity run -it --env MUJOCO_KEY=~/.mujoco/mjkey.txt ./adversarial_policies-latest.simg
source ./modelfreevenv/bin/activate
python -m modelfree.multi.train with paper --path $SLURM_TMPDIR --data-path $SLURM_TMPDIR
cp $SLURM_TMPDIR/job-output.txt /network/tmp1/gomrokma/
cp $SLURM_TMPDIR/error.txt /network/tmp1/gomrokma/
The errors I get are ERROR: Unknown option: --build-arg and ERROR: Unknown option: -it.
Any help would be appreciated. I am new to using Singularity containers instead of Docker.
Singularity and Docker are both container platforms, but they are not drop-in replacements for each other. I strongly recommend reading the documentation for the version of Singularity you're using; the latest version has a good section on using Docker and Singularity together.
If you are using Singularity v3 or newer, the file created by singularity pull will be named adversarial_policies_latest.sif, not adversarial_policies-latest.simg. If v2 is the only version available on your cluster, ask the admins to install v3: 2.6.1 is the only v2 release without security issues, and it is no longer receiving updates.
As for singularity run ..., the -it Docker options force an interactive tty session rather than running in the background. singularity exec and singularity run always run in the foreground, so there is no equivalent option needed with Singularity. Passing environment variables is also handled differently: since the container runs as your user, your environment is passed through to it. You can either set export MUJOCO_KEY=~/.mujoco/mjkey.txt further up the script or set it just for that command: MUJOCO_KEY=~/.mujoco/mjkey.txt singularity run ./adversarial_policies-latest.simg.
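If you prefer not to rely on plain environment passthrough, Singularity also recognizes variables with a SINGULARITYENV_ prefix, which are injected into the container's environment explicitly. A sketch, reusing the variable and image name from the script above:

```shell
# Set MUJOCO_KEY inside the container via the SINGULARITYENV_ prefix
export SINGULARITYENV_MUJOCO_KEY=~/.mujoco/mjkey.txt
singularity run ./adversarial_policies-latest.simg
```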
I am using Ubuntu 18.04
I have docker-ce installed
I have a file named Dockerfile
I don't have any other files
How can I start using this container?
First, you need to build an image from the Dockerfile. To do this:
Go to the directory containing the Dockerfile
Run (change <image_name> to some meaningful name): docker build -t <image_name> .
After the image is built, we can finally run it: docker run -it <image_name>
There are multiple options for how an image can be run, so I encourage you to read the docs.
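A few common variations of docker run are worth knowing early on (a sketch; <image_name> as above, my_app is a made-up container name):

```shell
# interactive shell inside the container
docker run -it <image_name> /bin/bash

# run detached (in the background) with a name, then inspect its logs
docker run -d --name my_app <image_name>
docker logs my_app

# mount a host directory and publish a port
docker run -it -v "$(pwd)":/data -p 8080:80 <image_name>
```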
I am creating a Dockerfile in which my remote repo is cloned and then built.
Can I map that output folder inside the Docker container to a local folder, so as to have the build result in it?
For something like this, I would not use docker build. Instead, create a Docker image that contains the necessary tools to build your project and use it as a "compiler". In the end, you want to be able to do:
$ docker run -v $(pwd):/output compiler
Building the project using a command has a lot of advantages over doing it during docker build:
You are able to use volumes to mount local directories into the Docker container
You can easily re-run single steps of your build process without having to re-run the whole build
You can build the project, and use the build output for another image (e.g. build Javascript project and put it in nginx image)
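To make the "compiler image" pattern concrete, a minimal sketch (the compiler image name, gcc:11 base image, and paths are just examples; adjust the build command to your project):

```shell
# Dockerfile for the "compiler" image: build tools only, no project source baked in
#   FROM gcc:11
#   WORKDIR /src
#   CMD ["sh", "-c", "make && cp -r build /output"]

docker build -t compiler .

# Mount the project source and an output directory, then run the build;
# the results land in ./out on the host
docker run --rm -v "$(pwd)":/src -v "$(pwd)/out":/output compiler
```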
Not "mapped" from the container as such; building and mounts don't really coexist (unless you use something like rocker to build). You can get a copy of the data from the built image, though.
Via tar.
docker run --rm IMAGE tar -cf - /clone | tar -xvf -
Or docker cp
CID=$(docker create IMAGE)
docker cp $CID:/clone ./
docker rm -f $CID
Or use a named volume, the data will be found in Mountpoint from inspect.
docker run --rm -v myclone:/clone IMAGE sleep 1
docker volume inspect myclone
I downloaded Docker and want to compile it from the source code:
[root@localhost docker-1.5.0]# make
mkdir bundles
docker build -t "docker" .
/bin/sh: docker: command not found
make: *** [build] Error 127
Per my understanding, if I want to compile Docker, I need to have Docker first. Is that right? If it is, where did the first Docker come from?
yum install docker-io or apt-get install docker-io will install the prebuilt docker-io package.
Then you can build Docker from source and either replace the existing docker binary or set your PATH to point to the new one.
You must have Docker to build Docker only because that's what the Docker team thought would be most convenient.
Of course, there is a way to compile the Docker source without having Docker installed on your machine; but then you would need all the compilation tools and dependencies installed locally.
So, the Docker team "dockerized" the compilation process. Namely, they used Docker itself, and what it is intended to do, also for the compilation of the Docker source.