I downloaded the Docker source and want to compile it:
[root@localhost docker-1.5.0]# make
mkdir bundles
docker build -t "docker" .
/bin/sh: docker: command not found
make: *** [build] Error 127
As I understand it, if I want to compile Docker, I first need Docker itself. Is that right? And if so, where did the first Docker come from?
yum install docker-io or apt-get install docker-io will install a prebuilt Docker (the package was named docker-io at the time).
Then you build Docker from source and either replace the existing docker binary or set your PATH to point to the new one.
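Concretely, the bootstrap looked roughly like this in the docker-1.5.0 era (a sketch; package names and service commands varied by distro):
# Install a prebuilt Docker first
sudo yum install docker-io        # or: sudo apt-get install docker-io
sudo service docker start
# Now the Makefile can drive the dockerized build
cd docker-1.5.0
make
# The freshly compiled binary lands under bundles/, e.g. bundles/1.5.0/binary/docker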
You must have Docker to build Docker only because that's what the Docker guys thought would be the most convenient.
Of course, there is a way to compile the Docker source without having Docker installed on your machine; but then you would need all the compilation tools and dependencies the build requires installed locally.
So the Docker team "dockerized" the compilation process: they used Docker itself, doing exactly what it is intended to do, to compile the Docker source.
My goal is to build a Docker build image that can be used as a CI stage that's capable of building a multi-architecture image.
FROM public.ecr.aws/docker/library/docker:20.10.11-dind
# Add the buildx plugin to Docker
COPY --from=docker/buildx-bin:0.7.1 /buildx /usr/libexec/docker/cli-plugins/docker-buildx
# Create a buildx image builder that we'll then use within this container to build our multi-architecture images
RUN docker buildx create --platform linux/amd64,linux/arm64 --name=my-builder --use
The Dockerfile above builds the image I need, but it does not include the arm64 emulator. This means that when I try to use it to build a multi-architecture image via a command like docker buildx build --platform=$SUPPORTED_ARCHITECTURES --build-arg PHP_VERSION=8.0.1 -t my-repo:latest ., I get the error:
error: failed to solve: process "/dev/.buildkit_qemu_emulator /bin/sh -c apt-get update && apt-get -y install -q ....
The solution is to run docker run --rm --privileged tonistiigi/binfmt --install arm64 as part of the CI steps, which uses the buildx container I previously built. However, I'd really like to understand why the emulator cannot seem to be installed in the container by adding something like this to the Dockerfile:
# Install arm emulator
COPY --from=tonistiigi/binfmt /usr/bin/binfmt /usr/bin/binfmt
RUN /usr/bin/binfmt --install arm64
I'd really like to understand why the emulator cannot seem to be installed in the container
Because when you perform a RUN command, the result is to capture the filesystem changes from that step and save them as a new layer in your image. But the qemu setup command isn't really modifying the filesystem; it's modifying the host kernel, which is why it needs --privileged to run. You'll see evidence of those kernel changes in /proc/sys/fs/binfmt_misc/ on the host after configuring qemu. It's not possible to pass that flag during an image build: all steps run in the Dockerfile are unprivileged, without access to the host's devices or the ability to alter the host kernel.
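For example, run the privileged setup on the host and the registered handlers become visible there; a sketch (the exact entry names vary with the qemu build):
docker run --rm --privileged tonistiigi/binfmt --install arm64
ls /proc/sys/fs/binfmt_misc/
# e.g. qemu-aarch64  register  status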
The standard practice in CI systems is to configure the host in advance, and then run the docker build. In GitHub Actions, that's done with the setup-qemu-action before running the build step.
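In GitHub Actions that wiring looks roughly like this (a sketch; the action version tags are assumptions, pin whatever is current):
steps:
  - uses: actions/checkout@v4
  - uses: docker/setup-qemu-action@v3      # registers qemu handlers on the runner's kernel
  - uses: docker/setup-buildx-action@v3
  - run: docker buildx build --platform linux/amd64,linux/arm64 -t my-repo:latest .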
I've been using a Docker container to build the Chromium browser (building for Android on Debian 10). I've already created a Dockerfile that installs most of the packages I need.
Now, after building and running the container, I followed the instructions, which ask me to execute an install script (./build/install-build-deps-android.sh). This script runs multiple apt install commands.
My question now is: is there a way to install these packages without rebuilding the container? Downloading and building took rather long, and rebuilding the container each time a new package is required seems suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages.) And using apt gives:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx is just an example install.)
I'm thankful for any hints, as I could only find guides that install packages via the Dockerfile.
You can use docker commit:
Start your container: sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit the container's bash
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you run now docker images, you will see NEW_IMAGE_NAME listed under your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME; this one will include your additional installations.
Answer based on the following tutorial: How to commit changes to docker image
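Put together, the whole cycle looks something like this (a sketch; IMAGE_NAME, NEW_IMAGE_NAME and the build-env name are placeholders, and sleep infinity is just one way to keep the container alive):
sudo docker run -d --name build-env IMAGE_NAME sleep infinity
sudo docker exec -it build-env bash
# (inside the container) apt update && apt install -y lsb-release
# (inside the container) exit
sudo docker commit build-env NEW_IMAGE_NAME
sudo docker run -it NEW_IMAGE_NAME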
Thanks to @adnanmuttaleb and @David Maze (unfortunately they only replied in comments, so I cannot accept their answers).
What I did was edit the Dockerfile anyway, for any later updates (which have already happened), and use the exec command to install the needed dependencies from outside the container. Also remember to run
apt update
first, otherwise apt cannot find anything...
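For example, a one-shot install from outside the container (a sketch; CONTAINER_ID and the package are placeholders):
docker exec -it CONTAINER_ID bash -c 'apt update && apt install -y lsb-release'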
A slight variation of the steps suggested by Arye that worked better for me:
Create a container from the image and access it in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify container as desired
Leave container: exit
List launched containers: docker ps -a and copy the ID of the container just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.
I would like to create a minimalist dev environment for occasional developers who only need Docker.
The ecosystem would have:
code-server image to run Visual Studio Code
gcc image to build the code
git to push/commit the code
ubuntu with some modifications to run the code
I looked at docker-in-docker, which could be a solution:
Docker
  code-server
    docker run -it -v ... gcc make
    docker run -it -v ... git git commit ...
    docker run -it -v ... ubuntu ./program
But that seems perhaps a bit overkill. What would be the proper way to have a full, well-separated dev environment that only requires Docker to be installed on the host machine (Linux, Windows, macOS, Chromium OS)?
I suggest using a Dockerfile.
This file specifies a few steps used to build an image.
The first line of the file specifies a base image (in your case, I would use Ubuntu):
FROM ubuntu:latest
Then, you can e.g. copy files to the image or select commands to run:
RUN apt-get update && apt-get install -y gcc make
RUN apt-get install -y git
and so on.
At the end, you may want to specify the program that is run when you start the container:
CMD /bin/bash
Then you can build the image with the command docker build -f Dockerfile -t devenv:latest . (the trailing dot is the build context). This builds a new image named devenv:latest (latest is the tag) from the file Dockerfile.
Then, you can create a container from the image using docker run devenv:latest.
If you want to work inside the container interactively, create it using docker run -it devenv:latest.
If you want to, you can also use the code-server base image instead of ubuntu:latest.
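Putting the pieces together, a consolidated Dockerfile sketch; the codercom/code-server base image and its coder user are assumptions, so check that image's documentation:
# Dev environment sketch: editor plus toolchain in one image
FROM codercom/code-server:latest
USER root
# compiler, build tool and version control for the occasional developer
RUN apt-get update && apt-get install -y gcc make git && rm -rf /var/lib/apt/lists/*
USER coder
# code-server listens on 8080 by default
EXPOSE 8080
Build it with docker build -t devenv:latest . and run it with docker run -it -p 8080:8080 -v "$PWD":/home/coder/project devenv:latest.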
I want to build a Singularity container from a Dockerfile.
I have pulled and run Docker images from Docker Hub with Singularity:
singularity pull docker://ubuntu:latest
I have also built an image from a Singularity recipe file:
singularity build cpp.sif singularity_file
But I want to build a Singularity image from a Dockerfile.
Does anyone know how to do it? Is it possible?
You cannot build a singularity container directly from a Dockerfile, but you can do it in a two-step process.
docker build -t local/my_container:latest .
sudo singularity build my_container.sif docker-daemon://local/my_container:latest
Using docker://my_container looks for the container on Docker Hub. When you use docker-daemon, it looks at your locally built docker containers. You can also use Bootstrap: docker-daemon in a Singularity definition file.
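For the definition-file route, a minimal sketch reusing the tag built above:
Bootstrap: docker-daemon
From: local/my_container:latest

%post
    # any extra setup on top of the docker image goes here
    echo "built from the local docker daemon"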
EDIT: Both singularity and apptainer now require an explicit tag name for the source docker container. Answer updated accordingly.
You can transform a Dockerfile into a Singularity recipe, or vice versa, using Singularity Python. Singularity Python offers some very helpful utilities; consider installing it if you plan to work with Singularity a lot.
pip3 install spython  # install spython from the command line if you do not have it
# print in the console
spython recipe Dockerfile
# save in the *.def file
spython recipe Dockerfile &> Singularity.def
If you have problems with pip, you can download spython or pull a container as described in the Singularity Python install docs. Find more about recipe conversion here.
sudo singularity build ubuntu.sif docker://ubuntu:latest
builds it directly from Docker Hub for me.
I'm unsure whether there was an update to Singularity for this purpose.
I'm trying to learn Docker, using Windows as the host OS, to create a container from the Rails image on Docker Hub.
I've created a Dockerfile with the content below and an empty Gemfile; however, I'm still getting the error "Could not locate Gemfile".
Dockerfile
FROM rails:4.2.6
The commands I used are the following (though I don't understand what they actually do):
ju.oliveira@br-54 MINGW64 /d/Juliano/ddoc
$ docker build -t ddoc .
Sending build context to Docker daemon 4.608 kB
Step 1 : FROM rails:4.2.6
---> 3fc52e59c752
Step 2 : MAINTAINER Juliano Nunes
---> Using cache
---> d3ab93260f0f
Successfully built d3ab93260f0f
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
Unable to find image 'ruby:2.1' locally
2.1: Pulling from library/ruby
fdd5d7827f33: Already exists
a3ed95caeb02: Pull complete
0f35d0fe50cc: Already exists
627b6479c8f7: Already exists
67c44324f4e3: Already exists
1429c50af3b7: Already exists
f4f9e6a0d68b: Pull complete
eada5eb51f5d: Pull complete
19aeb2fc6eae: Pull complete
Digest: sha256:efc655def76e69e7443aa0629846c2dd650a953298134a6f35ec32ecee444688
Status: Downloaded newer image for ruby:2.1
Could not locate Gemfile
So, my questions are:
Why can't it find the Gemfile if it's in the same directory as the Dockerfile?
What does the command docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install do?
How do I set a folder on my host file system to be synced with the container (I'm trying to create a development environment for Rails projects using Docker on Windows)?
I don't know if this makes any difference, but I'm running these commands from the bash shell launched via the "Docker Quickstart Terminal" shortcut. I think all it does is run the commands in a default VM, though I could create a new one (but I don't know if I should).
Thank you, and sorry for all these questions, but right now Docker seems very confusing to me.
You must mount a HOST directory that lives somewhere inside your HOME directory (e.g. c:/Users/john/*).
$PWD will give you a Unix-like path. If your shell is Cygwin-like, it will look like /cygdrive/c/Users/... or something funny. However, Docker and VirtualBox are Windows executables, so they expect a plain Windows path. Yet it seems Docker cannot accept a Windows path on the -v command line, so it is converted to /c/Users/.... The other people may be right: you may not be able to access a directory outside your home for some reason (though I wouldn't know why). To solve your problem, create a junction within your home that points to the path you want, then mount that path in your home.
>mklink /j \users\chloe\workspace\juliano \temp
Junction created for \users\chloe\workspace\juliano <<===>> \temp
>docker run -v /c/Users/Chloe/workspace/juliano:/app IMAGE-NAME ls
007.jpg
...
In your case that would be
mklink /j C:\Users\Juliano\project D:\Juliano\ddoc
docker run -v /c/Users/Juliano/project:/usr/src/app -w /usr/src/app ruby:2.1 bundle install
--rm removes the container automatically when it exits. -w sets the working directory. -v sets the volume mount, mapping the host path to the container path. ruby:2.1 uses the standard Docker Ruby 2.1 image. bundle install runs Bundler!