I am working on a single-node Kubernetes cluster built with kubeadm. During development I create a new Docker image, but the image is deleted immediately, without my permission, by the Kubernetes garbage collection. How do I control this?
Environment:
kubeadm version: v1.17.2
kubelet version: v1.17.2
docker version: 19.03.5
Ubuntu 18.04 desktop
Linux kernel version: 4.15.0-74-generic
I created an image with the docker build command on the master node, and confirmed that the image was deleted immediately with docker container ls -a. If I run Docker only, the images are not deleted. So I guess the removal was caused by the Kubernetes garbage collection. – MASH 3 hours ago
Honestly, I doubt that your recently built Docker image could've been deleted by the Kubernetes garbage collector.
I think you are confusing two concepts: image and stopped container. If you want to check your local images, you should use the docker image ls command, not docker container ls -a. The latter doesn't say anything about available images and doesn't prove that any image was deleted.
This is totally normal behaviour for Docker. Please look at this example from the Docker docs:
We build a new Docker image using the following commands:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
# build an image using the current directory as context, and a Dockerfile passed through stdin
docker build -t myimage:latest -f- . <<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
After successful build:
Sending build context to Docker daemon 2.103kB
Step 1/3 : FROM busybox
---> 020584afccce
Step 2/3 : COPY somefile.txt .
---> 216f8119a0e6
Step 3/3 : RUN cat /somefile.txt
---> Running in 90cbaa24838c
Removing intermediate container 90cbaa24838c
---> b1e6c2284368
Successfully built b1e6c2284368
Successfully tagged myimage:latest
Then we run:
$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
As you can see there's nothing there and it's totally ok.
But when you run the docker image ls command instead, you'll see our recently built image:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
myimage latest b1e6c2284368 10 seconds ago 1.22MB
Related
I am a beginner with Docker and have created a docker-compose.yml file. Everything is running well, but I want to move the container generated by "docker-compose up" to another machine. How can I save/export the container that is running the services of my docker-compose.yml and move it to another machine?
Thanks
To commit a running container to an image with a new tag, you can use this command:
$ docker commit <containerID> new_image_name:tag
To save new_image_name:tag to a file, use:
$ docker save -o new_file_name.tar new_image_name:tag
Now you can move your docker-compose.yml and your new_file_name.tar to the same folder on the other machine.
On the other machine, where these files now are, run:
$ docker load --input new_file_name.tar
In docker-compose.yml, rewrite the image: section to point to your new_image_name:tag.
If you lost the name, use:
$ docker images
and continue with $ docker load ... from above.
The last step is to run:
$ docker-compose up -d
Created Docker container process is not showing using command docker ps --all
I have created only one Dockerfile in a directory; it is the only file inside that directory.
The Dockerfile contains the following lines:
# Use an existing docker image as a base
FROM alpine
# Download and install a dependency
RUN apk add --update redis
# Tell the image what to do when it starts
# as a container
CMD ["redis-server"]
After saving the file with the name "Dockerfile" (without any extension), I simply execute the build command docker build .
The logs are as follows:
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine
---> a24bb4013296
Step 2/3 : RUN apk add --update redis
---> Using cache
---> 218559ae3e9b
Step 3/3 : CMD ["redis-server"]
---> Using cache
---> fc607b62e266
Successfully built fc607b62e266
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
So, as you can see above, the image is built successfully, but still no container is displayed by the docker ps --all command.
I am using Windows 10 x64, and I can assure you that no antivirus or firewall is blocking Docker. Am I missing something or making a mistake here?
docker ps works like the unix command ps. It doesn't show all the docker images that could be run. It only shows the docker containers that are running (or with -a, have been run).
You've built a docker image but you haven't yet run it with docker run.
Try docker run --rm -it fc607b62e266 and, if you have everything set up right, it should start up your redis server. (--rm removes the container when it stops running, generally a good practice unless you want it to persist).
You'll see your new image - not a container, because it hasn't been instantiated yet - in the output of docker images. Add a -t flag to your docker build command to give the image a tag, and you'll see it under that name; otherwise, you can always use the image ref.
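For example, a build-and-run round trip might look like this (the tag myredis is just an illustration, not something from the question):
# build the image from the Dockerfile above and give it a readable tag
docker build -t myredis:latest .
# instantiate a container from it; only now will anything appear in docker ps
docker run --rm -d --name myredis-demo myredis:latest
docker ps        # shows the running redis container
docker image ls  # shows the image itself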
A container will only run while its underlying "root process" is running. When running a service like redis-server, this means the container will run until it is signaled (the docker service restarting, a system restart, or docker kill).
While it is running, more processes can be executed within the container with docker exec. Each of these is a separate process, but all will be terminated when the "root process" ends. The converse, of course, is not true - ending an exec'd process doesn't affect the "root process".
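A quick way to see this in practice (a sketch, assuming the image was tagged myredis as above):
# start the container; redis-server is its root process
docker run -d --name redis-demo myredis:latest
# run an extra process inside it, then exit that shell
docker exec -it redis-demo sh
# the container is still Up - only the exec'd shell ended
docker ps --filter name=redis-demo
# stopping the root process ends the container (and any exec'd processes with it)
docker stop redis-demo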
I'm trying to start a docker container with docker start my_container, but it is exiting immediately. It works fine on some machines, but not on others. Here's my process:
Pull an image via docker pull <repo>:latest
Create a container via docker create --name my_container <repo>:latest
Start the container via docker start my_container
When I check the running docker processes via docker ps -a, I see that the status of my_container is Exited (1) 2 seconds ago.
When I run docker logs my_container, the only output is:
standard_init_linux.go:190: exec user process caused "exec format error"
The underlying issue in my case was an architecture mismatch.
My Dockerfile was using an amd64 base image. I built an image from this Dockerfile and pushed it to a remote repository. I then pulled the image onto a device with arm32v7 architecture, created a container from the image, and tried to run the container.
A docker image built from the base image below will work on amd64 - it will not work on arm32v7.
FROM amd64/ros:kinetic-ros-core-xenial
A docker image built from the base image below will work on arm32v7 - it will not work on amd64.
FROM arm32v7/ros:kinetic-ros-core-xenial
A docker image built from a Dockerfile with the base image defined as below will default to the architecture of your current machine.
FROM ros:kinetic-ros-core-xenial
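If you are not sure which architecture an image was built for, you can inspect it before trying to run it (a sketch; myimage stands for whatever image you pulled):
# architecture of the host
uname -m      # e.g. x86_64 or armv7l
# architecture the image was built for
docker image inspect --format '{{.Os}}/{{.Architecture}}' myimage
# e.g. linux/amd64 - if this doesn't match the host, you get the "exec format error" above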
Based on Moving docker-compose containersets, I have loaded the images:
$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
br/irc latest 3203cf074c6b 23 hours ago 377MB
openjdk 8u131-jdk-alpine a2a00e606b82 5 days ago 101MB
nginx 1.13.3-alpine ba60b24dbad5 4 months ago 15.5MB
But now I want to run them as they would run with docker-compose, and I cannot find any example.
Here is the docker-compose.yml:
version: '3'
services:
  irc:
    build: irc
    hostname: irc
    image: br/irc:latest
    command: |
      -Djava.net.preferIPv4Stack=true
      -Djava.net.preferIPv4Addresses
      run-app
    volumes:
      - ./br/assets/br.properties:/opt/br/src/java/br.properties
  nginx:
    hostname: nginx
    image: nginx:1.13.3-alpine
    ports:
      - "80:80"
    links:
      - irc:irc
    volumes:
      - ./nginx/assets/default.conf:/etc/nginx/conf.d/default.conf
So how can I run the containers and attach to them to see if they are running, and in what order do I run these three images? I have just started with Docker, so I am not sure of the typical workflow (build, run, attach, etc.).
Even though I do have the docker-compose.yml file, since I have the built images from another host, can I run plain docker commands to run and execute the images, making sure that the local images are being referenced and not the ones from the Docker registry?
Thanks @tgogos, this does give me a general overview, but specifically I was looking for:
$ docker run -dit openjdk:8u131-jdk-alpine
then:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc6ceb8a82f8 openjdk:8u131-jdk-alpine "/bin/sh" 52 seconds ago Up 51 seconds vibrant_hodgkin
which shows it's running.
Second:
$ docker run -dit nginx:1.13.3-alpine
3437cf295f1c7f1c27bc27e46fd46f5649eda460fc839d2d6a2a1367f190cedc
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3437cf295f1c nginx:1.13.3-alpine "nginx -g 'daemon ..." 20 seconds ago Up 19 seconds 80/tcp vigilant_kare
cc6ceb8a82f8 openjdk:8u131-jdk-alpine "/bin/sh" 2 minutes ago Up 2 minutes vibrant_hodgkin
Then, finally:
[ec2-user@ip-10-193-206-13 DOCKERLOCAL]$ docker run -dit br/irc
9f72d331beb8dc8ccccee3ff56156202eb548d0fb70c5b5b28629ccee6332bb0
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f72d331beb8 br/irc "/opt/irc/grailsw" 8 seconds ago Up 7 seconds 8080/tcp cocky_fermi
3437cf295f1c nginx:1.13.3-alpine "nginx -g 'daemon ..." 56 seconds ago Up 55 seconds 80/tcp vigilant_kare
cc6ceb8a82f8 openjdk:8u131-jdk-alpine "/bin/sh" 2 minutes ago Up 2 minutes vibrant_hodgkin
All three UP !!!!
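For completeness, the ports, volumes and links from the compose file can also be passed to plain docker run; roughly something like this (an untested sketch, with paths and names taken from the compose file above):
# irc service (any command: lines from the compose file would go after the image name)
docker run -d --name irc --hostname irc \
  -v "$PWD/br/assets/br.properties:/opt/br/src/java/br.properties" \
  br/irc:latest
# nginx service with the port mapping, link and config volume
docker run -d --name nginx --hostname nginx \
  -p 80:80 \
  --link irc:irc \
  -v "$PWD/nginx/assets/default.conf:/etc/nginx/conf.d/default.conf" \
  nginx:1.13.3-alpine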
Your question is about docker-compose, but you also ask about run, build and attach, which makes me think I should try to help you with some basic information (which wasn't so easy for me to cope with a couple of months ago :-)
images
Images are the base from which containers are created. Docker pulls images from http://hub.docker.com and stores them on your host, to be used every time you create a new container. Changes in a container do not affect the base image.
To pull images from docker hub, use docker pull .... To build your own images start reading about Dockerfiles. A simple Dockerfile (in an abstract way) would look like this:
# base image
FROM ubuntu
# add your app's files into the image (a destination path is required; /app is just an example)
ADD my_super_web_app_files /app
# start serving requests when a container starts
CMD run_my_app.sh
To create the above image on your host, you use docker build ..., and this is a very good way to build your images, because you know the steps taken to create them.
If this procedure takes long, you might consider later to store the image in a docker registry like http://hub.docker.com, so that you can pull it from any other machine easily. I had to do this, when dealing with ffmpeg on a Raspberry Pi (the compilation took hours, I needed to pull the already created image, not build it from scratch again in every Raspberry).
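In practice that workflow is just a couple of commands (a sketch; myuser/myimage is a placeholder for your own repository name):
# build locally, tagging the image with your registry/repository name
docker build -t myuser/myimage:latest .
# push it to the registry (docker login first if the repository is private)
docker push myuser/myimage:latest
# on any other machine, just pull the ready-made image instead of rebuilding it
docker pull myuser/myimage:latest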
containers
Containers are based on images; you can have many different containers from the same image on the same host. docker run [image] creates a new container based on that image and starts it. Many people start out thinking containers are like mini-VMs. They are not!
Consider a container as a process. Every container has a CMD and, when started, executes it. If this command finishes or fails, the container stops and exits. A good example of this is nginx: go check the official Dockerfile; the command is:
CMD ["nginx"]
If you want to see the logs from the CMD, you can docker attach ... to your container. You can also docker stop ... a running container or docker start ... an already stopped one. You can "get inside" to type commands by:
docker exec -it [container_name] /bin/bash
This opens a new tty for you to type commands, while the CMD continues to run.
To read more about the above topics (I've only scratched the surface) I suggest you also read:
Is it possible to start a shell session in a running container (without ssh)
Docker - Enter Running Container with new TTY
How do you attach and detach from Docker's process?
Why docker container exits immediately
~jpetazzo: If you run SSHD in your Docker containers, you're doing it wrong!
docker-compose
After you feel comfortable with these, docker-compose will be your handy tool which will help you manipulate many containers with single line commands. For example:
docker-compose up
Builds, (re)creates, starts, and attaches to containers for a service.
Unless they are already running, this command also starts any linked services.
The docker-compose up command aggregates the output of each container (essentially running docker-compose logs -f). When the command exits, all containers are stopped. Running docker-compose up -d starts the containers in the background and leaves them running.
To run your docker-compose file you would have to execute:
docker-compose up -d
Then to see if your containers are running you would have to run:
docker ps
This command will display all the running containers.
Then you could use the exec command, which allows you to run a shell inside a running container:
docker-compose exec irc sh
More about docker-compose up here: https://docs.docker.com/compose/reference/up/
I've been using Dockerfiles so often that I've forgotten how to start up a new container without one.
I was reading https://docs.docker.com/engine/reference/commandline/start/ and of course it doesn't state how to start up a new one.
docker run -it ubuntu:16.04 bash
A Dockerfile describes a Docker image, not a container.
The container is an instance of this image.
If you want to run a container without building an image (which means without creating a Dockerfile), you need to use an existing image on the Docker Hub (link here).
N.B.: The Docker Hub is Docker's online repository; there are more repositories like Quay, Rancher and others.
For example, if you want to test this, you can use the hello-world image found on the Docker Hub: https://hub.docker.com/_/hello-world/.
According to the documentation, to run a simple hello-world container:
$ docker run hello-world
Source: https://hub.docker.com/_/hello-world/
If you don't have the image locally, Docker will automatically pull it
from the web. If you want to manually pull the image you can run the
following command:
$ docker pull hello-world
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Source: https://hub.docker.com/_/hello-world/
docker start is used to start a container which already exists and is in a stopped state.
If you want to start a new container use docker run instead. For information about docker run please see https://docs.docker.com/engine/reference/commandline/run/
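The difference in a nutshell (a sketch using the ubuntu image from the answer above; the container name mybox is just an example):
# docker run creates and starts a brand new container from an image
docker run -it --name mybox ubuntu:16.04 bash
# after you exit, that container still exists, it is just stopped
docker ps -a --filter name=mybox
# docker start restarts the existing container - it does not create a new one
docker start -ai mybox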