I need to delete an exposed port from an image, and for some reason I can't rebuild the image from the base image. How can I remove a port that the image exposes?
I built my image from a base image that exposes a port.
docker ps:
beddbd08c417 new:new "/bin/sh" 4 seconds ago Up 2 seconds 9010/tcp dazzling_pike
Dockerfile:
FROM gitlabregistry.isaco.ir/elyas/oicnode16
ENTRYPOINT [ "/bin/sh" ]
There are a few options for removing an exposed port:
Ignore it. It's documentation and doesn't affect container networking unless you run a specific command that uses this metadata.
Create a new base image without the exposed port. If the base image exposes ports that you don't want exposed in your image, then there may be functionality in the base image you don't want either. If you are changing configuration inherited from the base image, evaluate whether that change makes more sense to happen in the base image itself.
Mutate the image after it's been created. I'm not a huge fan of that for this scenario, but if neither of the above options works, you can export the image with docker save and change the JSON configuration. In my own project, regctl has an image mod command that does this for images in a registry (or you can import an image from docker save into an ocidir):
$ regctl image copy nginx:latest localhost:5000/library/nginx:latest
$ regctl image config localhost:5000/library/nginx:latest --format '{{jsonPretty .Config.ExposedPorts}}'
{
"80/tcp": {}
}
$ regctl image mod --expose-rm "80/tcp" localhost:5000/library/nginx:latest --create no-expose
localhost:5000/library/nginx:no-expose
$ regctl image config localhost:5000/library/nginx:no-expose --format '{{jsonPretty .Config.ExposedPorts}}'
null
There's no way to cancel an EXPOSE directive once it's been set, whether in the base image or earlier in the same image. Further EXPOSE lines will just expose more ports. (VOLUME and LABEL also add values to lists and work the same way.)
On the other hand, in modern Docker, "exposing a port" means almost nothing. Ports that are exposed but not published show up in docker ps output, and the unusual docker run -P option (capital P) publishes all exposed ports on arbitrary host ports, but that's about it.
So practically, I'd just ignore this; let your derived container include the base image's port in docker ps even if nothing is actually running there.
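To see that concretely, here's a minimal sketch (the image names demo-base and demo-child are invented for illustration) showing an exposed port surviving into a child image:
cat > Dockerfile.base <<'EOF'
FROM busybox
EXPOSE 9010
EOF
docker build -t demo-base -f Dockerfile.base .
cat > Dockerfile.child <<'EOF'
FROM demo-base
# no EXPOSE here, and no directive exists to undo one
EOF
docker build -t demo-child -f Dockerfile.child .
# the child image still reports the inherited port
docker inspect --format '{{json .Config.ExposedPorts}}' demo-child
# {"9010/tcp":{}}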
I'm running several containers on my host (Ubuntu Server).
I started a container with a command like this:
sudo docker run -d -p 5050:80 gitname/reponame
Then I ran sudo docker ps and it showed:
CONTAINER ID: e404ffa2bc6b
IMAGE: gitname/reponame
COMMAND: "dotnet run --server…"
CREATED: 14 seconds ago
STATUS: Up 12 seconds
PORTS: 5050/tcp, 0.0.0.0:5050->80/tcp
NAMES: reverent_mcnulty
A week later I ran sudo docker ps again, and the IMAGE column had changed: instead of the image name it now shows something like ba2486f19dc0.
I don't understand why.
This is a problem for me, because I stop containers with this command:
sudo docker stop $(sudo docker ps | awk '{ print $1,$2 }' | grep gitname/reponame | awk '{print $1 }')
It no longer works, because the image name has changed.
Every Docker image has a unique hex ID. These IDs will be different on different systems (even docker pull of the same image), but every image has exactly one ID.
Each image has some number of tags associated with it. This could be none, or multiple. docker tag will add a tag to an existing image; docker rmi will remove a tag, and also (if there are no other tags on the image, no other images using the image as a base, and no extant containers using the image) remove the image.
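A quick sketch of that reference counting (the tag mybusy:v1 is invented for illustration, and this assumes no containers are using the image):
docker pull busybox
docker tag busybox mybusy:v1    # a second tag pointing at the same image ID
docker rmi mybusy:v1            # removes only the tag; the image stays while busybox still points at it
docker rmi busybox              # removes the last tag, and with it the image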
It's possible to "steal" the tag from an existing image. The most obvious way to do this is with docker build:
cat >Dockerfile <<EOF
FROM busybox
COPY file.txt /
EOF
echo foo > file.txt
docker build -t foo .
docker images
# note the ID for foo:latest
echo bar > file.txt
docker build -t foo .
docker images
# note the old ID will show as foo:<none>
# note a different ID for foo:latest
An explicit docker tag can do this too. Images on Docker Hub and other repositories can change too (ubuntu:16.04 is routinely re-published with security updates), and so if you docker pull an image you already have it can cause the old image to "lose its name" in favor of a newer version of that image.
How does this interact with docker run and docker ps? docker run can remember the image tag an image was started with, but if the image no longer has that tag, it forgets that datum. That's why your docker ps output reverts to showing the hex ID of the image.
There are a couple of ways around this for your application:
If the image is one you're building yourself, always use an explicit version tag (could just be the current datestamp) both when building and running the image. Then you'll never overwrite an existing image's tag. ("Don't use :latest tags.")
Use docker run --name to keep track of which container is which, and filter based on that, not the image tag; see the sketch after this list. (#quentino's suggestion from comments.)
Don't explicitly docker pull in your workflow. If you don't have an image, docker run will automatically pull it for you. This will avoid existing images getting minor updates and also will avoid existing images losing their names.
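For example, a sketch of the --name approach from the second option (the container name reponame is just an illustration):
sudo docker run -d -p 5050:80 --name reponame gitname/reponame
# stop it by its stable container name instead of parsing docker ps output
sudo docker stop reponame
# or, if you prefer a ps-based pipeline, filter on the name
sudo docker ps --filter name=reponame -q | xargs sudo docker stop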
I would like to distribute some larger static files/assets as a Docker image, so that it is easy for users to pull those optional files the same way they pull the app itself. But I cannot really find a good way to expose files from one Docker image to another. Is there a way to mount a Docker image itself (or a directory in it) as a volume into another Docker container?
I know that there are volume plugins I could use, but I could not find any that do this or something similar.
It is possible to expose a directory of an image as a Docker volume, but not the full image. At least not in a pretty or simple way.
If you want to expose a directory from your image as a Docker volume, you can create a named volume:
docker volume create your_volume
docker run -d \
-it \
--name=yourcontainer \
-v your_volume:/dir_with_data_you_need \
your_docker_image
From this point on, your_volume is accessible and populated with the data from that directory of your_docker_image.
The reason you cannot mount the whole image as a volume is that Docker doesn't let you specify / as the source of a named volume. You'll get Cannot create container for service your-srv: invalid volume spec "/": invalid volume specification: '/' even if you try with docker-compose.
I don't know of any direct way.
You can use a folder on your host as a bridge to share things; this is an indirect way to achieve it.
docker run -d -v /any_of_your_host_folder:/your_assets_folder_in_your_image_container your_image
docker run -d -v /any_of_your_host_folder:/your_folder_of_your_new_container your_container_want_to_use_assets
For your_image, you need to add a CMD to the Dockerfile that copies the assets into your_assets_folder_in_your_image_container (the one you use as the volume; this works because CMD executes after the volume is mounted); a sketch is just below.
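A minimal sketch of such a Dockerfile, assuming the assets live under assets/ in the build context and reusing the folder name from the commands above:
FROM busybox
COPY assets/ /assets-src/
# the volume is mounted before CMD runs, so this copy lands in the shared folder
CMD ["cp", "-r", "/assets-src/.", "/your_assets_folder_in_your_image_container/"]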
This costs some time, but only the first time the assets container starts. After the container has started, the files have in fact been copied to the host folder and no longer have anything to do with the assets image, so you can delete the assets image and waste no space.
Your aim is just to make the assets easy for other people to use, so why not provide a script that automatically fetches the image -> starts the container (whose CMD copies the files to the volume) -> deletes the image/container. The assets are then already on the host, so people can use that host folder as a volume for whatever comes next.
Of course, if a container could directly use another image's resources, that would be better than this solution. Anyway, this can serve as a solution, although it is not perfect.
You can mount the Docker socket as a volume, which will allow you to start one of your Docker images from within your Docker container.
To do this, add the two following volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/usr/bin/docker:/usr/bin/docker"
If you need to share files between the containers, map the volume /tmp:/tmp when starting both containers.
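Putting that together with plain docker run (the image name your_image is illustrative), it might look like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
           -v /usr/bin/docker:/usr/bin/docker \
           -v /tmp:/tmp \
           your_image
Be aware that mounting the socket hands the container full control of the Docker daemon on the host.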
I'm pretty new to docker and I'm a bit puzzled by the difference between tagging (--tag) an image and assigning it a name (--name).
For example, I can see that if I build my custom image out of a Docker file, I can tag it with a name:
sudo docker build --tag=tomcat-admin .
sudo docker run -it tomcat-admin
Passing the name to docker inspect produces a result:
docker inspect tomcat-admin
However, it doesn't contain the same attributes as a "named" image:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' tomcat-admin
Template parsing error: template: :1:19: executing "" at <.NetworkSettings.IPA...>: map has no entry for key "NetworkSettings"
Can somebody shed some light on this?
Thanks!
I think you mixed up two concepts here, which causes the confusion. On the one hand, there is a Docker image, which you can think of as a blueprint for starting a container. On the other hand, there are containers, which are running instances based on an image.
When you run docker build -t tagname . you are creating an image and tagging it, usually in "name:tag" format. So for example, you build your image as
docker build -t myimage:1.0 .
which creates a new image that is named myimage with a version of 1.0. This is what you will see when you run docker images.
The --name parameter is then used when you create and start a new container based on your image. So for example, you run a new container using the following command:
docker run -it --name mycontainerinstance myimage
This creates a new container based on your image myimage. This container instance is named mycontainerinstance. You can see this when you run docker ps -a, which will list the container with its container name mycontainerinstance.
So to better understand the difference, have a look at the docs for building an image and running a container, specifying an image. When reading the docs you will notice which commands target an image and which are for containers. You will also see that some commands, like docker inspect, work for both images and containers.
Inspecting for a network address of course only works when you provide a container name, not an image. In your case, the container got a generated name, which you can see by running docker ps -a. When you provide this name to the docker inspect command, you will likely see the IP address you wanted.
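For example (the generated container name nostalgic_morse below is invented; run docker ps -a to find yours):
docker ps -a --format '{{.ID}}  {{.Image}}  {{.Names}}'
# 3f4e8da0ab12  tomcat-admin  nostalgic_morse
docker inspect --format '{{ .NetworkSettings.IPAddress }}' nostalgic_morse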
You tag an image
docker build --tag=tomcat-admin .
but you assign a name to a container
docker run -it tomcat-admin
You can assign multiple tags to images, e.g.
docker build --tag=tomcat-admin --tag=tomcat-admin:1.0 .
If you list images you get one line per tag, but they are related to the same image id:
docker images |grep tomcat
tomcat-admin 1.0 955395353827 11 minutes ago 188 MB
tomcat-admin latest 955395353827 11 minutes ago 188 MB
You can also tag images a second time, not just when you build them, so you can keep different image versions.
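For example (the 1.1 tag is just an illustration):
docker tag tomcat-admin:latest tomcat-admin:1.1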
When you run a container based on a specific image, you can assign it a name, so you can refer to it by that name instead of by its containerId.
Obviously you get different attributes when inspecting images and containers. I think it's clearer if you use different names for the image tag and the container name, e.g.
docker build --tag=tomcat-admin .
docker run -d -ti --name=tomcat-admin-container tomcat-admin
docker inspect tomcat-admin ==> You inspect the image
docker inspect tomcat-admin-container ==> You inspect the container
The confusing thing is that a tag consists of a name and a tag. In the documentation you can see that:
--tag , -t Name and optionally a tag in the ‘name:tag’ format
So if you omit the :tag part, you actually add a name for the image. That's it.
The difference between image names and container names is explained in the other answers.
I'm totally new to Docker so I appreciate your patience.
I'm looking for a way to deploy multiple containers from the same image; however, I need to pass a different config (file) to each.
Right now, my understanding is that once you build an image, that's what gets deployed. The problem for me is that I don't see the point in building multiple images of the same application when only the config differs between the containers.
If this is the norm, then I'll have to deal with it however if there's another way then please put me out of my misery! :)
Thanks!
I think looking at examples which are easy to understand could give you the best picture.
What you want to do is perfectly valid: an image should be everything you need to run, without the configuration.
To supply the configuration, you have a few options:
a) volume mounts
Use volumes and mount the file during container start: docker run -v $(pwd)/my.ini:/etc/mysql/my.ini percona (and similarly with docker-compose). Note the source must be an absolute path for a bind mount, hence $(pwd).
Be aware that you can repeat this as often as you like, mounting several configs into your container (that is, into the runtime version of the image).
You create those configs on the host before running the container, and you need to ship those files along with the container, which is the downside of this approach (portability).
b) entry-point based configuration (generation)
Most of the advanced Docker images provide a more complex, so-called entrypoint, which consumes ENV variables that you pass when starting the container to create the configuration(s) for you, like https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh
So when you run this image, you can do docker run -e MYSQL_DATABASE=myapp percona and this will start percona and create the database myapp for you.
This is all done by
adding the entry-point script here https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L65
do not forget to copy the script during image build https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L63
Then during container startup, your ENV variable will cause this to trigger: https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh#L91
Of course, you can do whatever you like with this. E.g. this configures a general Portus image: https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/registry-slim/11/docker-compose.yml
which has this entrypoint https://github.com/EugenMayer/docker-image-portus/blob/master/build/startup.sh
So you see, the entrypoint strategy is very common and very powerful, and I would suggest going this route whenever you can. A minimal sketch follows.
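Here is one possible shape of such a setup (the variable APP_GREETING, the config path, and the image name are all invented for illustration). The entrypoint script renders a config file from the environment, then hands off to the real command:
#!/bin/sh
# docker-entrypoint.sh: generate config from ENV, then exec the main process
set -e
echo "greeting = ${APP_GREETING:-hello}" > /etc/myapp.conf
exec "$@"
And the matching Dockerfile lines:
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["myapp"]
Running docker run -e APP_GREETING=hi myimage then generates the config at startup before myapp sees it.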
c) Derived images
Maybe for "completeness": the derived-image strategy. You have your base image called "myapp", and for installation X you create a new image:
FROM myapp
COPY my.ini /etc/mysql/my.ini
COPY application.yml /var/app/config/application.yml
And you call this image myapp:x. The obvious issue with this is that you end up having a lot of images; on the other side, compared to a), it's much more portable.
Hope that helps
Just run the same image as many times as needed. New containers will be created, and they can then be started and stopped, each one saving its own configuration. For your convenience, it is better to give each of your containers a name with --name.
For instance:
docker run --name MyContainer1 <same image id>
docker run --name MyContainer2 <same image id>
docker run --name MyContainer3 <same image id>
That's it.
$ docker ps
CONTAINER ID IMAGE CREATED STATUS NAMES
a7e789711e62 67759a80360c 12 hours ago Up 2 minutes MyContainer1
87ae9c5c3f84 67759a80360c 12 hours ago Up About a minute MyContainer2
c1524520d864 67759a80360c 12 hours ago Up About a minute MyContainer3
After that, your containers are created permanently, and you can start and stop them like VMs.
docker start MyContainer1
Each container runs with the same RO image but with a RW container-specific filesystem layer. The result is that each container can have its own files that are distinct from every other container.
You can pass in configuration on the CLI, as an environment variable, or as a unique volume mount. It's a very standard use case for Docker.
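For example, a sketch with one image and two configs (all names and paths invented):
docker run -d --name app1 -e APP_MODE=prod -v $PWD/conf1.yml:/etc/app/config.yml myimage
docker run -d --name app2 -e APP_MODE=dev  -v $PWD/conf2.yml:/etc/app/config.yml myimage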
What's the difference between a container and an image in Docker? In the Get started with Docker tutorial these terms are both used, but I do not understand the difference.
Can anybody please shed some light?
Images are frozen immutable snapshots of live containers. Containers are running (or stopped) instances of some image.
Start with the base image called 'ubuntu'. Let's run bash interactively within the ubuntu image and create a file. We'll use the -i and -t flags to give us an interactive bash shell.
$ docker run -i -t ubuntu /bin/bash
root@48cff2e9be75:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@48cff2e9be75:/# cat > foo
This is a really important file!!!!
root@48cff2e9be75:/# exit
Don't expect that file to stick around when you exit and restart the image. You're restarting from exactly the same defined state as you started in before, not where you left off.
$ docker run -i -t ubuntu /bin/bash
root@abf181be4379:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@abf181be4379:/# exit
But, the container, now no longer running, has state and can be saved (committed) to an image.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abf181be4379 ubuntu:14.04 /bin/bash 17 seconds ago Exited (0) 12 seconds ago elegant_ardinghelli
48cff2e9be75 ubuntu:14.04 /bin/bash About a minute ago Exited (0) 50 seconds ago determined_pare
...
Let's create an image from container ID 48cff2e9be75 where we created our file:
$ docker commit 48cff2e9be75 ubuntu-foo
d0e4ae9a911d0243e95556e229c8e0873b623eeed4c7816268db090dfdd149c2
Now, we have a new image with our really important file:
$ docker run ubuntu-foo /bin/cat foo
This is a really important file!!!!
Try the command docker images. You should see your new image ubuntu-foo listed along with the ubuntu standard image we started with.
An image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. Images are read-only.
https://docs.docker.com/glossary/?term=image
A container is an active (or inactive if exited) stateful instantiation of an image.
https://docs.docker.com/glossary/?term=container
Using an object-oriented programming analogy, the difference between a Docker image and a Docker container is the same as that of the difference between a class and an object. An object is the runtime instance of a class. Similarly, a container is the runtime instance of an image.
An object is created when its class is instantiated; similarly, a container is created from an image, and it can be running or stopped. The following example builds an Apache server image, runs the image, lists the images, and then lists the containers:
Create a Dockerfile with the following contents:
FROM httpd:2.4
Build the Apache server image:
sudo docker build -t my-apache2 .
Run the image:
sudo docker run -it --rm --name my-running-app my-apache2
List Docker images:
sudo docker images
List the running Docker containers:
docker ps
List all containers:
docker ps -a
List the latest created container:
docker ps -l
An image is basically an immutable template for creating a container. It's easier to understand the difference between an image and container by considering what happens to an image to turn it into a container.
The Docker engine takes the image and adds a read-write filesystem on top, then initialises various settings. These settings include network options (IP, port, etc.), name, ID, and any resource limits (CPU, memory). If the Docker engine has been asked to run the container it will also initialise a process inside it. A container can be stopped and restarted, in which case it will retain all settings and filesystem changes (but will lose anything in memory and all processes will be restarted). For this reason a stopped or exited container is not the same as an image.
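A small sketch of that distinction (the container name c1 is invented): a file written in a container survives a stop/start because it lives in that container's read-write layer, not in the image:
docker run --name c1 ubuntu sh -c 'echo kept > /data.txt'
docker start c1
docker diff c1                        # shows 'A /data.txt', a change relative to the image
docker run --rm ubuntu ls /data.txt   # fails: a fresh container starts from the unchanged image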
Dockerfile --(build)--> Docker image --(run)--> Docker container
A Dockerfile is where you (or a developer) write instructions to do something (e.g. install packages).
A Docker image is what you get when you build the Dockerfile.
A Docker container is what you get when you run your Docker image.
We can also get a Docker image from Docker Hub by pulling it, and then run it to get a container.
Images [like a VM]
Read-only template used to create containers
Built by you or other Docker users
Stored in Docker Hub or your local registry
Containers [like a running machine]
Isolated application platform
Contains everything needed to run your application
Based on images
In Docker, it all begins with an image. An image is every file that makes up just enough of the operating system to do what you need to do. Traditionally you'd install a whole operating system with everything for each application you run. With Docker you pare it way down, so that you have a little container with just enough of the operating system to do what you need to do, and you can have lots and lots of these running efficiently on one computer.
Use docker images to see the installed images and docker ps to see the running containers.
When you type docker run it takes the image, and makes it a living container with a running process. I tend to use:
docker run -ti <image>:<tag> bash
Lastly, images have their own set of ids and containers have their own set of ids - they don't overlap.
Containers are based on images. An image needs to be passed to the docker run command.
Example:
BusyBox image example (screenshot): http://i.stack.imgur.com/eK9dC.png
Here we specify an image called busybox. Docker does not have this image locally and pulls it from a public registry.
A registry is a catalog of Docker images that the Docker client can communicate with and download images from. Once the image is pulled, Docker starts a container and executes the echo hello world command.
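The command being described is roughly this (output sketched, not copied verbatim):
$ docker run busybox echo "hello world"
Unable to find image 'busybox:latest' locally
...
hello world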
Images: The filesystem and metadata needed to run containers. They can be thought of as an application packaging format that includes all of the dependencies to run the application, and default settings to execute that application. The metadata includes defaults for the command to run, environment variables, labels, and healthcheck command.
Containers: An instance of an isolated application. A container needs the image to define its initial state and uses the read-only filesystem from the image along with a container specific read-write filesystem. A running container is a wrapper around a running process, giving that process namespaces for things like filesystem, network, and PIDs.
When you execute a docker run command, you provide an image on the command line, along with any configurations, and docker returns a container based off of that image definition and configurations you provided.
References: to the docker engine, an image is just an image id. This is a unique immutable hash. A change to an image results in creating a new image id. However, you can have one or more references pointing to an image id, not unlike symbolic links. And these references can be updated to point to new image id's. Note that when you create a container, docker will resolve that reference at the time of container creation, so you cannot update the image of a running container. Instead, you create a new image, and create a new container based on that new image.
Layers: Digging a bit deeper, you have filesystem layers. Docker assembles images with a layered filesystem. Each layer is a read-only set of changes to the filesystem, and that layer is represented by a unique hash. Using these read-only layers, multiple images may extend another, and only the differences between those images need to be stored or transmitted over the network. When a Docker container is run, it receives a container specific read-write filesystem layer unique to that container, and all of the image layers are assembled with that using a union filesystem. A read is processed through each layer until the file is found, a deletion is found, or the file is not found in the bottom layer. A write performs a copy-on-write from the image read-only layer to the container specific read-write layer. And a deletion is recorded as a change to the container specific read-write layer. A common step in building images is to run a command in a temporary container based off the previous image filesystem state and save the resulting container specific layer as a layer in the new image.
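You can see an image's layers for yourself with docker history; each row corresponds to one read-only layer (nginx is just an example image):
docker image history nginx
The container-specific read-write layer is not listed there; it exists only on the container.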
Docker Images:
An image contains a list of commands and instructions on how to build and run a container. So basically an image contains all the data and metadata required to fire up a container (it is also called a blueprint). We can't launch a container without specifying an image.
$ docker images centos
This lists all the locally available versions (tags) of centos.
Docker Container:
Containers are launched from images, so we can say a container is a running instance of an image.
A container is a runtime construct, unlike an image, which is a build-time construct.
The official difference is that the container has a final writable layer, whereas the layers below belong to the image and are read-only. The intuitive difference is that a container is the instance virtualized by your Docker daemon, running your image within an isolated section of your kernel (a process hidden from you). The image, however, is static; it doesn't run, it is just a pile of layers (static files). If we relate this paradigm to object-oriented programming, the image is your class definition, whereas the container is the object spawned from that class, residing in memory.
I have written a tutorial to reinforce your intuition about Docker:
http://javagoogleappspot.blogspot.com/2018/07/docker-basics.html
An image is the photo taken with your phone.
A container is the phone.