When I run a docker build command, I see the following:
[root@hadoop01 myjavadir]# docker build -t runhelloworld .
Sending build context to Docker daemon 4.096 kB
Sending build context to Docker daemon
Step 0 : FROM java
---> 3323938eb5a2
Step 1 : MAINTAINER priyanka priyanka.patil@subex.com
---> Running in 89fa73dbc2b8
---> 827afdfa3d71
Removing intermediate container 89fa73dbc2b8
Step 2 : COPY ./HelloWorld.java .
---> 9e547d78d08c
Removing intermediate container ff5b7c7a8122
Step 3 : RUN javac HelloWorld.java
---> Running in d52f3093d6a3
---> 86121aadfc67
Removing intermediate container d52f3093d6a3
Step 4 : CMD java HelloWorld
---> Running in 7b4fa1b8ed37
---> 6eadaac27986
Removing intermediate container 7b4fa1b8ed37
Successfully built 6eadaac27986
I want to understand the meaning of these container IDs, like 7b4fa1b8ed37.
What does it mean when the daemon says "Removing intermediate container d52f3093d6a3"?
The docker build process automates what happens in the "Creating your own images" section of the Docker docs.
In your case above:
The image ID we start with is 3323938eb5a2 (the ID of the java image).
From that image we run a container (once created, it has container ID 89fa73dbc2b8) to set the MAINTAINER metadata; Docker commits the changes, and the resulting layer ID is 827afdfa3d71.
Because we're finished with container 89fa73dbc2b8, it can be removed.
From the layer created by the MAINTAINER line, we create a new container to run the command COPY ./HelloWorld.java . (this one gets container ID ff5b7c7a8122); Docker commits the changes, and the resulting layer ID is 9e547d78d08c.
Because we're finished with container ff5b7c7a8122, it can be removed.
Repeat for steps 3 and 4.
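The loop above is essentially what you could do by hand with docker run, docker commit, and docker rm. A hedged sketch (image and tag names here are hypothetical; the commands only execute when a Docker daemon is reachable, otherwise the sketch just reports itself):

```shell
# Manual equivalent of one docker build step: run, commit, remove.
# "mylayer:demo" is a made-up tag; "java" stands in for the parent image.
if docker info >/dev/null 2>&1; then
  cid=$(docker run -d java true) \
    && docker commit "$cid" mylayer:demo \
    && docker rm "$cid" \
    || echo "commands failed (e.g. java image unavailable)"
else
  echo "no docker daemon; sketch only"
fi
```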
I am new to Docker. Excuse my ignorance on this topic.
I created a Dockerfile with the intent of running the Windows Steam application. This is my Dockerfile:
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ADD https://steamcdn-a.akamaihd.net/client/installer/SteamSetup.exe c:\SteamSetup.exe
RUN c:\SteamSetup.exe /S
ENTRYPOINT ["c:\Program Files (x86)\Steam\Steam.exe"]
I verified that in the Docker image, Steam is installed at c:\Program Files (x86)\Steam\Steam.exe. I attached to the container with a PowerShell entrypoint and was able to run "& c:\Program Files (x86)\Steam\Steam.exe". I cannot, however, get the Docker image to launch Steam on its own. I get the error below.
PS C:\Users\AJWHEELE\Desktop\dockers\steamOS> docker build -t ajwtech/windowstest .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM mcr.microsoft.com/windows/servercore:ltsc2019
---> 80e84fd280e2
Step 2/4 : ADD https://steamcdn-a.akamaihd.net/client/installer/SteamSetup.exe c:\SteamSetup.exe
Downloading [==================================================>] 1.574MB/1.574MB
---> Using cache
---> d39ad50d3754
Step 3/4 : RUN c:\SteamSetup.exe /S
---> Using cache
---> 33cdd5566dad
Step 4/4 : ENTRYPOINT ["c:\Program Files (x86)\Steam\Steam.exe"]
---> Running in 65027c59352a
Removing intermediate container 65027c59352a
---> e92095819109
Successfully built e92095819109
Successfully tagged ajwtech/windowstest:latest
PS C:\Users\AJWHEELE\Desktop\dockers\steamOS> docker run --rm -e DISPLAY=192.168.1.119:0 ajwtech/windowstest:latest
The filename, directory name, or volume label syntax is incorrect.
PS C:\Users\AJWHEELE\Desktop\dockers\steamOS>
Also, I am trying to get Steam to launch so that I can see the user interface. Currently I am on a Windows 10 machine trying to use VcXsrv.
Thanks,
Adam
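One likely cause of that error is the ENTRYPOINT line: the exec (JSON-array) form is parsed as JSON, where \P and \S are invalid escapes, so Docker falls back to treating the line as a shell-form command that cmd.exe cannot parse. The escape=` directive only changes the Dockerfile's line-continuation character; it does not affect JSON string parsing. A hedged sketch of the fix (not verified against this exact image):

```dockerfile
# Backslashes inside the JSON-array (exec) form must be doubled;
# forward slashes also work for Windows paths.
ENTRYPOINT ["c:\\Program Files (x86)\\Steam\\Steam.exe"]
```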
I am trying to create a Docker image and tag it at the same time, so that I can write a script that uses the -t option of the "docker build" command. Thus staff members who deploy new images do not need to type Docker commands; they simply run the script.
The problem I have is that the "docker build" command also starts the image. This causes the build to get 'stuck' when it reaches the point where the image runs: the image is supposed to run indefinitely (it is a service), so the build command never finishes, and the tag given in the "-t" part of the build command never gets applied to the new image.
So there is no way to identify new images, because none of them have tags. I can fix it by terminating the build command with Ctrl+C and then afterwards using the "docker tag" command. But that means I cannot put the build and tag commands in a bash script, because I would have to tag by image ID rather than by name, and the ID changes every time I run the docker build command.
I have tried the following:
Hitting Ctrl+C to terminate the application running inside the new image. This does end the currently running application, but it terminates the docker build command as well, so the image tag never gets applied.
Using "docker ps" in another terminal to find the currently running container and stopping it with "docker stop ID". This also stops the application/container, but it causes an error in the docker build command, which once again does not finish and does not apply the tag.
This is what I see after trying steps 1 or 2 above and running a "docker image list" command; neither the repository field nor the tag field is set:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> df355e74685b 6 minutes ago 493MB
openjdk latest e92ef2c3a3dd 12 days ago 470MB
openjdk 8 b84359d0cbce 3 weeks ago 488MB
portainer/portainer latest da2759008147 4 weeks ago 75.4MB
My docker build command :
sudo docker build -t slite/cloud-db-host -f slite/cloud/db/Dockerfile.Host.docker .
And here is my docker file:
FROM openjdk:8
LABEL maintainer="techss.co.za"
LABEL vendor="techss.co.za"
LABEL app="slite-db-host"
LABEL repository="slite"
COPY slite/cloud/db /slite/cloud/db
COPY slite/lib/java /slite/lib/java
EXPOSE 51173
WORKDIR .
RUN javac slite/cloud/db/*.java && javac slite/lib/java/*.java && java slite.cloud.db.SliteDBHost
ENTRYPOINT ["java","slite.cloud.db.SliteDBHost"]
Here is the output from docker build:
Sending build context to Docker daemon 13.43MB
Step 1/11 : FROM openjdk:8
---> b84359d0cbce
Step 2/11 : LABEL maintainer="techss.co.za"
---> Running in 3dc3f0fcea2c
Removing intermediate container 3dc3f0fcea2c
---> 0946737c1386
Step 3/11 : LABEL vendor="techss.co.za"
---> Running in c289dd741158
Removing intermediate container c289dd741158
---> 00d5a7f3d7e5
Step 4/11 : LABEL app="slite-db-host"
---> Running in 1d7e953bdf6f
Removing intermediate container 1d7e953bdf6f
---> 4540390e8bb5
Step 5/11 : LABEL repository="slite"
---> Running in c366a92becb5
Removing intermediate container c366a92becb5
---> c9be0ef5e6da
Step 6/11 : COPY slite/cloud/db /slite/cloud/db
---> f3efeb406aef
Step 7/11 : COPY slite/lib/java /slite/lib/java
---> 797bf7df8335
Step 8/11 : EXPOSE 51173
---> Running in 93389673e9cc
Removing intermediate container 93389673e9cc
---> abfb10413edf
Step 9/11 : WORKDIR .
---> Running in 77a67baa9be6
Removing intermediate container 77a67baa9be6
---> 7d313395f072
Step 10/11 : RUN javac slite/cloud/db/*.java && javac slite/lib/java/*.java && java slite.cloud.db.SliteDBHost
---> Running in 99edcf79d5f4
Sun Jul 07 18:47:02 UTC 2019 Listening on port 51173
It just hangs on the last line. I assume it is waiting for the application running inside the container to end, which will never happen because it is a service. So how do I force docker build to carry on even though the container is running, thus applying the needed tags? Or, which would be first prize, force docker build NOT to start the application but simply create the image?
Just replace RUN with CMD and it will not be run during the build:
CMD ["sh","-c","javac slite/cloud/db/*.java && javac slite/lib/java/*.java && java slite.cloud.db.SliteDBHost"]
Cheers
Any RUN instruction will be executed when the Docker image is built. I suspect your issue will be fixed if you change line 10 of your Dockerfile.
Before:
RUN javac slite/cloud/db/*.java && javac slite/lib/java/*.java && java slite.cloud.db.SliteDBHost
After:
RUN javac slite/cloud/db/*.java && javac slite/lib/java/*.java
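With the build-time java invocation removed, the sources are compiled during the build and the service is started only at docker run, via the ENTRYPOINT that is already in the Dockerfile. The tail of the file would then look like this (a sketch; paths as in the question):

```dockerfile
# Compile at build time; start the service only when the container runs.
RUN javac slite/cloud/db/*.java && javac slite/lib/java/*.java
ENTRYPOINT ["java","slite.cloud.db.SliteDBHost"]
```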
I am using the jenkins image to create a Docker container. For now I am just trying to create a new directory and copy a couple of files. The image build process runs fine, but when I start the container I cannot see the files or the directory.
Here is my Dockerfile:
FROM jenkins:2.46.1
MAINTAINER MandeepSinghGulati
USER jenkins
RUN mkdir /var/jenkins_home/aws
COPY aws/config /var/jenkins_home/aws/
COPY aws/credentials /var/jenkins_home/aws/
I found a similar question here, but it seems different because I am not creating the jenkins user; it already exists with home directory /var/jenkins_home/. I am not sure what I am doing wrong.
Here is how I am building my image and starting the container:
➜ jenkins_test docker build -t "test" .
Sending build context to Docker daemon 5.632 kB
Step 1/6 : FROM jenkins:2.46.1
---> 04c1dd56a3d8
Step 2/6 : MAINTAINER MandeepSinghGulati
---> Using cache
---> 7f76c0f7fc2d
Step 3/6 : USER jenkins
---> Running in 5dcbf4ef9f82
---> 6a64edc2d2cb
Removing intermediate container 5dcbf4ef9f82
Step 4/6 : RUN mkdir /var/jenkins_home/aws
---> Running in 1eb86a351beb
---> b42587697aec
Removing intermediate container 1eb86a351beb
Step 5/6 : COPY aws/config /var/jenkins_home/aws/
---> a9d9a28fd777
Removing intermediate container ca4a708edc6e
Step 6/6 : COPY aws/credentials /var/jenkins_home/aws/
---> 9f9ee5a603a1
Removing intermediate container 592ad0031f49
Successfully built 9f9ee5a603a1
➜ jenkins_test docker run -it -v $HOME/jenkins:/var/jenkins_home -p 8080:8080 --name=test-container test
If I run the command without the volume mount, I can see the copied files and the directory. With the volume mount I cannot, even if the directory on the host machine is empty. Is this the expected behaviour? How can I copy files over to a directory being used as a volume?
When you bind-mount a host directory onto /var/jenkins_home, it hides whatever the image has at that path, so your copied files are not visible. A named volume, by contrast, is populated from the image's content the first time it is mounted empty. Existing volumes can be mounted with
docker container run -v MY-VOLUME:/var/jenkins_home ...
Furthermore, the documentation of COPY states:
All new files and directories are created with a UID and GID of 0.
So COPY does not reflect your USER directive. This seems to be the second part of your problem.
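If file ownership inside /var/jenkins_home matters, COPY accepts a --chown flag (available since Docker 17.09) that avoids the default UID/GID of 0. A hedged sketch against the Dockerfile in the question:

```dockerfile
# --chown sets the owner at copy time instead of the default UID/GID 0.
COPY --chown=jenkins:jenkins aws/config /var/jenkins_home/aws/
COPY --chown=jenkins:jenkins aws/credentials /var/jenkins_home/aws/
```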
I would like to understand the execution steps involved in building Docker images using a Dockerfile. I have listed a couple of questions below. Please help me understand the build process.
Dockerfile content
#from base image
FROM ubuntu:14.04
#author name
MAINTAINER RAGHU
#commands to run in the container
RUN echo "hello Raghu"
RUN sleep 10
RUN echo "TASK COMPLETED"
Command used to build the image: docker build -t raghavendar/hands-on:2.0 .
Sending build context to Docker daemon 20.04 MB
Step 1 : FROM ubuntu:14.04
---> b1719e1db756
Step 2 : MAINTAINER RAGHU
---> Running in 532ed79e6d55
---> ea6184bb8ef5
Removing intermediate container 532ed79e6d55
Step 3 : RUN echo "hello Raghu"
---> Running in da327c9b871a
hello Raghu
---> f02ff92252e2
Removing intermediate container da327c9b871a
Step 4 : RUN sleep 10
---> Running in aa58dea59595
---> fe9e9648e969
Removing intermediate container aa58dea59595
Step 5 : RUN echo "TASK COMPLETED"
---> Running in 612adda45c52
TASK COMPLETED
---> 86c73954ea96
Removing intermediate container 612adda45c52
Successfully built 86c73954ea96
In step 2 :
Step 2 : MAINTAINER RAGHU
---> Running in 532ed79e6d55
Question 1: It indicates that it is running in the container with ID 532ed79e6d55, but from what Docker image is this container created?
---> ea6184bb8ef5
Question 2: What is this ID? Is it an image or a container?
Removing intermediate container 532ed79e6d55
Question 3: Is the final image formed from multiple layers saved from the intermediate containers?
Yes, Docker images are layered. When you build a new image, Docker does this for each instruction (RUN, COPY etc.) in your Dockerfile:
create a temporary container from the previous image layer (or the base FROM image for the first instruction);
run the Dockerfile instruction in the temporary "intermediate" container;
save the temporary container as a new image layer.
The final image layer is tagged with whatever you name the image - this will be clear if you run docker history raghavendar/hands-on:2.0, you'll see each layer and an abbreviation of the instruction that created it.
Your specific queries:
1) 532 is a temporary container created from image ID b17, which is your FROM image, ubuntu:14.04.
2) ea6 is the image layer created as the output of the instruction, i.e. from saving intermediate container 532.
3) yes. Docker calls this the Union File System and it's the main reason why images are so efficient.
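The cache lookup behind "Using cache" can be pictured as keying on the parent layer plus the instruction text: the same instruction on the same parent resolves to the same saved layer. A toy sketch of that idea (real Docker layer IDs are not computed this way, and layer_key is a made-up helper; the point is only that identical inputs map to the same key):

```shell
# Toy model of the build cache: derive a "layer key" from the
# parent layer's ID plus the instruction text.
parent="df2a0347c9d0"    # ID of the FROM image

layer_key() { printf '%s %s' "$1" "$2" | sha256sum | cut -c1-12; }

first=$(layer_key "$parent" "RUN touch /x")    # first build: compute and store
second=$(layer_key "$parent" "RUN touch /x")   # second build: same inputs
[ "$first" = "$second" ] && echo "Using cache"  # identical key -> layer reused
```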
docker build --rm=true
This is the default option, which makes it delete all intermediate images after a successful build.
Does it affect caching adversely? I think the cache relies on the intermediate images.
Why not try it and find out?
$ cat Dockerfile
FROM debian
RUN touch /x
RUN touch /y
$ docker build --rm .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM debian
---> df2a0347c9d0
Step 1 : RUN touch /x
---> Running in 2e5ff13506e5
---> fd4dd6845e31
Removing intermediate container 2e5ff13506e5
Step 2 : RUN touch /y
---> Running in b2a585989fa5
---> 0093f530941b
Removing intermediate container b2a585989fa5
Successfully built 0093f530941b
$ docker build --rm .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM debian
---> df2a0347c9d0
Step 1 : RUN touch /x
---> Using cache
---> fd4dd6845e31
Step 2 : RUN touch /y
---> Using cache
---> 0093f530941b
Successfully built 0093f530941b
So no, the cache still works. As you pointed out, --rm is actually on by default (you would have to run --rm=false to turn it off), but it refers to the intermediate containers not the intermediate images. These are the containers that Docker ran your build commands in to create the images. In some cases you might want to keep those containers around for debugging, but normally the images are enough. In the above output, we can see the containers 2e5ff13506e5 and b2a585989fa5, which are deleted, but also the images fd4dd6845e31 and 0093f530941b which are kept.
You can't delete the intermediate images as they are needed by the final image (an image is the last layer plus all ancestor layers).