How to create a Docker CentOS image that would only run on CentOS

I'm new to Docker and I'm still trying to learn the concepts.
On my CentOS machine I created a test image that includes a C-compiled executable. Based on my understanding of Docker, my intention and my expectation was for the image to run on CentOS machines only. Here is my Dockerfile:
FROM centos:7
WORKDIR /opt/MYAPPS
COPY my_hello .
CMD my_hello
The image builds and works fine on the CentOS machine where I created it.
Then I pushed this image to my repo and pulled it to another CentOS machine, and it works correctly there as well. So far so good.
As I mentioned I was expecting for this image to be limited to centos. In order to prove it I tried pulling it to other OSs, my ubuntu and my windows. To my surprise, it worked on both.
Obviously I'm missing something. Either I'm not grasping the concept of docker or I'm using the wrong "FROM" image.

As I mentioned I was expecting for this image to be limited to centos.
No: what you put in the image is merely the dependencies your executable needs to run, but in the end, everything resolves to system calls.
That means your image is limited only by the kernel of the host machine, since the running process makes system calls to that kernel.
(As illustrated in "The Fascinating World of Linux System Calls")
As long as the kernel (here Linux, even on Windows, through Hyper-V) supports your system calls, your program will run on those hosts.
See more at "How can Docker run distros with different kernels?".
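A quick way to see this for yourself (a hedged sketch, assuming Docker is installed on each host): print the kernel version from inside the container. The container reports the host's kernel, not a "CentOS kernel":
docker run --rm centos:7 uname -r
# On a CentOS 7 host this prints a 3.10.x kernel; on a recent Ubuntu host
# a 5.x kernel; on Docker Desktop for Windows, the kernel of its Linux VM.
# Same image, three different kernels: the image carries no kernel of its own.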


Using Docker and Cypress in Same Docker Image

Fair warning: I'm new to all of this, so there might be some mistakes in my thinking process.
I want to system test an application we are developing, and we ship this application via Docker, so that's what I want to test.
For GitLab CI, this means creating a Docker image which has Docker in Docker and Cypress, since that is what I'd like to use.
So just from checking the Docker docs I can see that Docker can be installed on a multitude of Linux distros, but not on Alpine. The official image however is Alpine.
The Cypress docs however show that Cypress cannot be installed on Alpine. Only the package managers "apt-get" and "yum" are supported, which correspond to Debian/Ubuntu and RHEL/Fedora, respectively.
So as far as I can tell, it's not possible to have both of these at once? Which would be absolutely baffling (but so is the package manager chaos I just learned about).
What I tried:
used the Docker image as a base and tried to install Cypress (does not work because there is no installation manual and the packages you need to install via apt-get don't exist for apk)
used the Cypress image as a base and tried to install Docker (does not work because the Cypress images don't work)
used another image and tried to install both (does not work because installing Docker inside the Docker container does not work, that's why they have the image provided)
used DinD with another distro (cruizba/ubuntu-dind, fails with " dockerd is not running after max time")
So... what am I missing? Is there any way to get to the point where I can use both Cypress and DinD in the same image?
There is an image named blackholegalaxy/cypress-dind which combines DinD and Cypress.
Sadly it's really old and there is no way to update Docker to the newest version easily.
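A possible way out (a sketch under stated assumptions, not a tested recipe): the cypress/base images are Debian-based, so apt-get works inside them, and in GitLab CI you typically only need the Docker client in your job image, because the docker:dind service provides the daemon. The tag below is an assumption; check Docker Hub for a current one:
FROM cypress/base:18.16.0   # assumed tag; cypress/base is Debian-based
RUN apt-get update && apt-get install -y ca-certificates curl \
 && install -m 0755 -d /etc/apt/keyrings \
 && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
 && echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list \
 && apt-get update && apt-get install -y docker-ce-cli   # client only; dockerd comes from the dind service
In .gitlab-ci.yml you would then declare services: - docker:dind and point DOCKER_HOST at it (typically tcp://docker:2375), so dockerd never has to start inside your own image, which sidesteps the "dockerd is not running" failures.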

Are there different images for different OS on docker hub

When I run the following command:
docker run mongo
It will download the mongo image and run it in a container.
I am running Linux in a VM.
My OS details are as follows:
NAME="CentOS Linux"
VERSION="7 (Core)"
If I am using a different OS / a Mac machine / Windows, how does Docker determine which image to pull? As I understand it, there is a single image on Docker Hub for mongo, or is it that we can specify a specific image to run based on our OS?
At the least, we need to take care of downloading a specific version of mongo when installing it on our local machine (when not using containers).
How is this taken care of by Docker?
Thanks.
The OS that you are running is for the most part irrelevant when it comes to pulling a Docker image. As long as you are running Docker on your host (and the versions of Docker differ a little between Windows, Mac, and Linux), you can pull any image you want. You can pull the same mongo image and run it in any operating system.
The image hides the host operating system, making it easy to build an image and deploy it on pretty much any machine.
Having said that you may be getting confused because image makers many times use different OS to build their applications. A quick example is people building application using an Ubuntu image but switching to an alpine based image for deployment because that is so much smaller. However, both images will run pretty much anywhere.
Perhaps you are confusing the terms OS and architecture?
The OS does not really matter because, as @camba1 mentioned, the Docker daemon handles all that stuff.
What matters, is architecture, because Linux can run on ARM, AMD64, etc.
So, the Docker daemon must know which image is good for current architecture.
Here is a good article regarding this question.
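You can check this yourself (assuming a reasonably recent Docker CLI, which ships the docker manifest command): a "multi-arch" image such as mongo is really a manifest list, and the daemon picks the entry matching your architecture:
docker manifest inspect mongo | grep '"architecture"'
# Expect several entries (e.g. amd64, arm64), each pointing to a
# different image digest; docker pull selects the right one for you.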

Run image in docker

I am new to Docker, and I have watched many videos and studied articles, from which I came to know what exactly Docker is.
But my question is:
Let's suppose I have three Docker images:
The first image, of "Application 1", is created in a Windows 7/8/10 environment.
The second image, of "Application 2", is created in CentOS.
The third image, of "Application 3", is created in Linux.
So, can I run all these three images simultaneously in a single environment (Windows or CentOS or Linux)?
Surely you can! That's the advantage of Docker. Docker runs images on any platform without worrying about what is inside the image. So on CentOS you can run an Ubuntu image and vice versa.
You can run any Linux Docker container image created recently on any Docker host running Linux. There are exceptions, though: for example, an image may rely on kernel features that an older host kernel does not provide. Windows apps do not run on Docker on Linux unless you are doing something like running them under Wine.
There are Windows-specific containers which only run on Windows hosts, but if you are using standard (non-Windows-exclusive) images, they run the same on all hosts.
One of the core ideas of Docker is that you should be able to run your service in the exact same environment (the container) on any system (the host), which works pretty well (with the exception of the Windows-specific containers!).
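A small demonstration of that idea (assuming Docker is installed): run two different distros side by side on the same host; each container sees its own userland but shares the host's kernel:
docker run --rm centos:7 cat /etc/os-release      # reports CentOS Linux 7
docker run --rm ubuntu:20.04 cat /etc/os-release  # reports Ubuntu 20.04
docker run --rm ubuntu:20.04 uname -r             # same kernel version as the host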

Docker Base Image - which flavour?

We're running a Java-based webshop on SLES 12.
Currently we're deciding whether we want to run the webshop as a Docker container in the future.
At least in our test environment we will host the webshop as a Docker container.
My question is now: how important is it to choose a base image for the Docker container that is as near to production as possible? Which means: is it necessary (or recommendable) to build the Docker container on a SLES (or openSUSE) base image, or is it OK to keep Debian as the base image?
What's the major difference between Debian and SUSE base images (except packaging tool, directory structure and base image size)?
How important is it to choose a base image for the Docker container that is as near to production as possible?
It is not just important. It is mandatory. Your Docker image has to be as near as possible to your prod environment if you intend to use Docker for development and not in prod.
Which means: is it necessary (or recommendable) to build the Docker container on a SLES (or openSUSE) base image, or is it OK to keep Debian as the base image?
If you are not running your project via Docker in prod, you need to be as near to prod as possible: if prod runs on Debian, use Debian as the base image. If you intend to run prod with Docker, it is better to keep the image as light as possible (all the more so if you intend to make it public). But keeping a Debian base is OK.
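To make the trade-off concrete, here are two alternative base images for the same Java runtime, as a hedged sketch (package names are the usual ones for these distros, but verify them for your versions):
# Option A: Debian base, apt-based, small and widely used
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y openjdk-11-jre-headless

# Option B: openSUSE base, zypper-based, nearer to a SLES production host
FROM opensuse/leap:15.4
RUN zypper --non-interactive install java-11-openjdk-headless
Functionally the webshop should not care which one it runs on; the differences you will actually feel are the package manager, the available package versions, and how closely the image tracks your SLES production patch level.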

What is the difference between a Docker image and a container?

When using Docker, we start with a base image. We boot it up, create changes and those changes are saved in layers forming another image.
So eventually I have an image for my PostgreSQL instance and an image for my web application, changes to which keep on being persisted.
What is a container?
An instance of an image is called a container. You have an image, which is a set of layers as you describe. If you start this image, you have a running container of this image. You can have many running containers of the same image.
You can see all your images with docker images whereas you can see your running containers with docker ps (and you can see all containers with docker ps -a).
So a running instance of an image is a container.
From my article on Automating Docker Deployments (archived):
Docker Images vs. Containers
In Dockerland, there are images and there are containers. The two are closely related, but distinct. For me, grasping this dichotomy has clarified Docker immensely.
What's an Image?
An image is an inert, immutable file that's essentially a snapshot of a container. Images are created with the build command, and they'll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Local images can be listed by running docker images:
REPOSITORY   TAG      IMAGE ID       CREATED        VIRTUAL SIZE
ubuntu       13.10    5e019ab7bf6d   2 months ago   180 MB
ubuntu       14.04    99ec81b80c55   2 months ago   266 MB
ubuntu       latest   99ec81b80c55   2 months ago   266 MB
ubuntu       trusty   99ec81b80c55   2 months ago   266 MB
<none>       <none>   4ab0d9120985   3 months ago   486.5 MB
Some things to note:
IMAGE ID is the first 12 characters of the true identifier for an image. You can create many tags of a given image, but their IDs will all be the same (as above).
VIRTUAL SIZE is virtual because it's adding up the sizes of all the distinct underlying layers. This means that the sum of all the values in that column is probably much larger than the disk space used by all of those images.
The value in the REPOSITORY column comes from the -t flag of the docker build command, or from docker tag-ing an existing image. You're free to tag images using a nomenclature that makes sense to you, but know that docker will use the tag as the registry location in a docker push or docker pull.
The full form of a tag is [REGISTRYHOST/][USERNAME/]NAME[:TAG]. For ubuntu above, REGISTRYHOST is inferred to be registry.hub.docker.com. So if you plan on storing your image called my-application in a registry at docker.example.com, you should tag that image docker.example.com/my-application.
The TAG column is just the [:TAG] part of the full tag. This is unfortunate terminology.
The latest tag is not magical, it's simply the default tag when you don't specify a tag.
You can have untagged images only identifiable by their IMAGE IDs. These will get the <none> TAG and REPOSITORY. It's easy to forget about them.
More information on images is available from the Docker documentation and glossary.
What's a container?
To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime object. Containers are hopefully why you're using Docker; they're lightweight and portable encapsulations of an environment in which to run applications.
View local running containers with docker ps:
CONTAINER ID   IMAGE                            COMMAND                CREATED        STATUS        PORTS                    NAMES
f2ff1af05450   samalba/docker-registry:latest   /bin/sh -c 'exec doc   4 months ago   Up 12 weeks   0.0.0.0:5000->5000/tcp   docker-registry
Here I'm running a dockerized version of the docker registry, so that I have a private place to store my images. Again, some things to note:
Like IMAGE ID, CONTAINER ID is the true identifier for the container. It has the same form, but it identifies a different kind of object.
docker ps only outputs running containers. You can view all containers (running or stopped) with docker ps -a.
NAMES can be used to identify a started container via the --name flag.
How to avoid image and container buildup
One of my early frustrations with Docker was the seemingly constant buildup of untagged images and stopped containers. On a handful of occasions this buildup resulted in maxed out hard drives slowing down my laptop or halting my automated build pipeline. Talk about "containers everywhere"!
We can remove all untagged images by combining docker rmi with the recent dangling=true query:
docker images -q --filter "dangling=true" | xargs docker rmi
Docker won't be able to remove images that are behind existing containers, so you may have to remove stopped containers with docker rm first:
docker rm `docker ps --no-trunc -aq`
These are known pain points with Docker and may be addressed in future releases. However, with a clear understanding of images and containers, these situations can be avoided with a couple of practices:
Always remove a useless, stopped container with docker rm [CONTAINER_ID].
Always remove the image behind a useless, stopped container with docker rmi [IMAGE_ID].
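On newer Docker versions (1.13 and later) the same cleanup is built in, so the pipelines above are no longer strictly necessary:
docker container prune   # remove all stopped containers
docker image prune       # remove dangling (untagged) images
docker system prune      # both of the above, plus unused networks and build cache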
While it's simplest to think of a container as a running image, this isn't quite accurate.
An image is really a template that can be turned into a container. To turn an image into a container, the Docker engine takes the image, adds a read-write filesystem on top and initialises various settings including network ports, container name, ID and resource limits. A running container has a currently executing process, but a container can also be stopped (or exited in Docker's terminology). An exited container is not the same as an image, as it can be restarted and will retain its settings and any filesystem changes.
Maybe explaining the whole workflow can help.
Everything starts with the Dockerfile. The Dockerfile is the source code of the image.
Once the Dockerfile is created, you build it to create the image of the container. The image is just the "compiled version" of the "source code" which is the Dockerfile.
Once you have the image of the container, you should redistribute it using the registry. The registry is like a Git repository -- you can push and pull images.
Next, you can use the image to run containers. A running container is very similar, in many aspects, to a virtual machine (but without the hypervisor).
Dockerfile → (Build) → Image → (Run) → Container.
Dockerfile: contains a set of Docker instructions that provisions your operating system the way you like and installs/configures all your software.
Image: compiled Dockerfile. Saves you time from rebuilding the Dockerfile every time you need to run a container. And it's a way to hide your provision code.
Container: the virtual operating system itself. You can ssh into it and run any commands you wish, as if it's a real environment. You can run 1000+ containers from the same Image.
Workflow
Here is the end-to-end workflow showing the various commands and their associated inputs and outputs. That should clarify the relationship between an image and a container.
+------------+   docker build  +-------+  docker run -dt   +-----------+  docker exec -it   +------+
| Dockerfile | --------------> | Image | ----------------> | Container | -----------------> | Bash |
+------------+                 +-------+                   +-----------+                    +------+
                                   ^
                                   | docker pull
                                   |
                              +----------+
                              | Registry |
                              +----------+
To list the images you could run, execute:
docker image ls
To list the containers you could execute commands on:
docker ps
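Putting the diagram into concrete commands (the names my-app and my-app-1 are just hypothetical placeholders):
docker build -t my-app .                # Dockerfile -> image
docker run -dt --name my-app-1 my-app   # image -> running container
docker exec -it my-app-1 bash           # open a shell inside the running container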
I couldn't understand the concept of image and layer in spite of reading all the questions here and then eventually stumbled upon this excellent documentation from Docker (duh!).
The example there is really the key to understand the whole concept. It is a lengthy post, so I am summarising the key points that need to be really grasped to get clarity.
Image: A Docker image is built up from a series of read-only layers
Layer: Each layer represents an instruction in the image’s Dockerfile.
Example: The below Dockerfile contains four commands, each of which creates a layer.
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
Importantly, each layer is only a set of differences from the layer before it.
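You can see those per-instruction layers with docker history once the image is built (the tag layered-demo is a made-up example):
docker build -t layered-demo .
docker history layered-demo   # one row per layer: the instruction that created it and its size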
Container: When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the "container layer". All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.
Hence, the major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Understanding images and containers from a size-on-disk perspective
To view the approximate size of a running container, you can use the docker ps -s command. You get size and virtual size as two of the outputs:
Size: the amount of data (on disk) that is used for the writable layer of each container
Virtual size: the amount of data used for the read-only image data used by the container. Multiple containers may share some or all read-only image data, so these values are not additive; i.e., you can't add up all the virtual sizes to calculate how much disk space the images use.
Another important concept is the copy-on-write strategy
If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified.
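You can watch copy-on-write in action with docker diff, which lists exactly what has accumulated in the writable container layer (the name cow-demo is a made-up example):
docker run -d --name cow-demo ubuntu:15.04 sleep 600   # throwaway container
docker exec cow-demo touch /tmp/newfile                # write into the container layer
docker diff cow-demo                                   # prints: A /tmp/newfile (A = added)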
I hope that helps someone else like me.
Simply said, if an image is a class, then a container is an instance of that class, a runtime object.
A container is just an executable binary that is to be run by the host OS under a set of restrictions that are preset using an application (e.g., Docker) that knows how to tell the OS which restrictions to apply.
The typical restrictions are process-isolation related, security related (like using SELinux protection) and system-resource related (memory, disk, CPU, and networking).
Until recently, only kernels in Unix-based systems supported the ability to run executables under strict restrictions. That's why most container talk today involves mostly Linux or other Unix distributions.
Docker is one of those applications that knows how to tell the OS (Linux mostly) what restrictions to run an executable under. The executable is contained in the Docker image, which is just a tarfile. That executable is usually a stripped-down version of a Linux distribution's User space (Ubuntu, CentOS, Debian, etc.) preconfigured to run one or more applications within.
Though most people use a Linux base as the executable, it can be any other binary application as long as the host OS's kernel can run it (see creating a simple base image using scratch). Whether the binary in the Docker image is an OS User space or simply an application, to the OS host it is just another process, a contained process ruled by preset OS boundaries.
Other applications that, like Docker, can tell the host OS which boundaries to apply to a process while it is running, include LXC, libvirt, and systemd. Docker used to use these applications to indirectly interact with the Linux OS, but now Docker interacts directly with Linux using its own library called "libcontainer".
So containers are just processes running in a restricted mode, similar to what chroot used to do.
IMO, what sets Docker apart from any other container technology is its repository (Docker Hub) and their management tools which makes working with containers extremely easy.
See Docker (software).
The core concept of Docker is to make it easy to create "machines" which in this case can be considered containers. The container aids in reusability, allowing you to create and drop containers with ease.
Images depict the state of a container at every point in time. So the basic workflow is:
create an image
start a container
make changes to the container
save the container back as an image
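That last step, saving a container back as an image, is done with docker commit; a quick hedged sketch (all names are placeholders):
docker run -it --name my-box ubuntu bash   # start a container, make some changes inside, then exit
docker commit my-box my-custom-image       # container -> new image
docker run -it my-custom-image bash        # instantiate the new image as a fresh container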
As many answers have pointed out: you build a Dockerfile to get an image, and you run an image to get a container.
However, the following steps helped me get a better feel for what a Docker image and a container are:
1) Build Dockerfile:
docker build -t my_image dir_with_dockerfile
2) Save the image to .tar file
docker save -o my_file.tar my_image_id
my_file.tar will store the image. Open it with tar -xvf my_file.tar, and you will get to see all the layers. If you dive deeper into each layer you can see what changes were added in each layer. (They should be pretty close to commands in the Dockerfile).
3) To take a look inside of a container, you can do:
sudo docker run -it my_image bash
and you can see that it is very much like an OS.
It may help to think of an image as a "snapshot" of a container.
You can make images from a container (new "snapshots"), and you can also start new containers from an image (instantiate the "snapshot"). For example, you can instantiate a new container from a base image, run some commands in the container, and then "snapshot" that as a new image. Then you can instantiate 100 containers from that new image.
Other things to consider:
An image is made of layers, and layers are snapshot "diffs"; when you push an image, only the "diff" is sent to the registry.
A Dockerfile defines some commands on top of a base image, that creates new layers ("diffs") that result in a new image ("snapshot").
Containers are always instantiated from images.
Image tags are not just tags. They are the image's "full name" ("repository:tag"). If the same image has multiple names, it shows multiple times when doing docker images.
An image is the equivalent of a class definition in OOP, and layers are the different methods and properties of that class.
A container is the actual instantiation of the image, just like an object is an instance of a class.
I think it is better to explain from the beginning.
Suppose you run the command docker run hello-world. What happens?
It calls the Docker CLI, which is responsible for taking Docker commands and transforming them into calls to the Docker server. As soon as the Docker server gets a command to run an image, it checks whether the image cache holds an image with such a name.
Suppose hello-world does not exist. The Docker server goes to Docker Hub (Docker Hub is just a free repository of images) and asks: hey Hub, do you have an image called hello-world?
The Hub responds: yes, I do. Then give it to me, please. And the download process starts. As soon as the Docker image is downloaded, the Docker server puts it in the image cache.
So before we explain what Docker images and Docker containers are, let's start with an introduction about the operation system on your computer and how it runs software.
When you run, for example, Chrome on your computer, it calls the operating system; the operating system itself calls the kernel and asks, hey, I want to run this program; and the kernel takes care of running the program's files from your hard disk.
Now imagine that you have two programs, Chrome and Node.js. Chrome requires Python version 2 to run and Node.js requires Python version 3 to run. If you only have installed Python v2 on your computer, only Chrome will be run.
To make both cases work, somehow you need to use an operating system feature known as namespacing. A namespace is a feature which gives you the opportunity to isolate processes, hard drive, network, users, hostnames and so on.
So, when we talk about an image we actually talk about a file system snapshot. An image is a physical file which contains directions and metadata to build a specific container. The container itself is an instance of an image; it isolates the hard drive using namespacing which is available only for this container. So a container is a process or set of processes which groups different resources assigned to it.
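Namespacing is a plain Linux kernel feature, so you can even try it without Docker; a minimal sketch using unshare from util-linux (requires root):
sudo unshare --pid --fork --mount-proc bash -c 'ps aux'
# Inside the new PID namespace, ps lists only this bash and ps itself:
# the process is isolated from the rest of the system's process table.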
A Docker image packs up the application and environment required by the application to run, and a container is a running instance of the image.
Images are the packing part of Docker, analogous to "source code" or a "program". Containers are the execution part of Docker, analogous to a "process".
In the question, only the "program" part is referred to and that's the image. The "running" part of Docker is the container. When a container is run and changes are made, it's as if the process makes a change in its own source code and saves it as the new image.
As in the programming aspect,
Image is source code.
When source code is compiled and built, it is called an application.
Similarly, when an instance is created from the image, it is called a "container".
I would like to fill the missing part here between docker images and containers. Docker uses a union file system (UFS) for containers, which allows multiple filesystems to be mounted in a hierarchy and to appear as a single filesystem. The filesystem from the image has been mounted as a read-only layer, and any changes to the running container are made to a read-write layer mounted on top of this. Because of this, Docker only has to look at the topmost read-write layer to find the changes made to the running system.
I would state it with the following analogy:
+-----------------------------+-------+-----------+
| Domain                      | Meta  | Concrete  |
+-----------------------------+-------+-----------+
| Docker                      | Image | Container |
| Object oriented programming | Class | Object    |
+-----------------------------+-------+-----------+
Docker Client, Server, Machine, Images, Hub, and Compose are all projects and pieces of software that come together to form a platform, an ecosystem around creating and running something called containers. Now, if you run the command docker run redis, the Docker CLI reaches out to something called Docker Hub and downloads a single file called an image.
Docker Image:
An image is a single file containing all the dependencies and all the configuration required to run one very specific program; for example, redis is the image you just downloaded (by running docker run redis), and it was built to run Redis.
This is a single file that gets stored on your hard drive, and at some point in time you can use this image to create something called a container.
A container is an instance of an image, and you can think of it as being like a running program with its own isolated set of hardware resources: its own little space of memory, its own little space of networking, and its own little space of hard drive as well.
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The above command starts the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating them over to something called the Docker server, which is in charge of the heavy lifting.
When we ran docker run hello-world, that meant we wanted to start up a new container using the image with the name hello-world. The hello-world image has a tiny little program inside of it whose sole job is to print out the message that you see in the terminal.
When we ran that command and it was issued over to the Docker server, a series of actions occurred very quickly in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy, a copy on your personal machine, of the hello-world image. So the Docker server looked into something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have been downloaded before.
So, because the image cache was empty, the Docker server decided to reach out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can download and run on your personal computer. The Docker server reached out to Docker Hub, downloaded the hello-world file, and stored it on your computer in the image cache, where it can be re-run at some point in the future very quickly, without having to be re-downloaded from Docker Hub.
After that, the Docker server used it to create an instance of a container, and we know that a container is an instance of an image whose sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it into memory, created a container out of it, and ran a single program inside it. And that single program's purpose was to print out the message that you see.
What a container is:
First of all an image is a blueprint for how to create a container.
A container is a process or a set of processes that have a grouping of resources specifically assigned to it. Any time we think about a container, we have some running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need, and a portion of each of these resources is made available to that singular process.
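Docker exposes that resource partitioning directly on the command line; for example, capping what a single container may use:
docker run --rm --memory=256m --cpus=0.5 hello-world
# This container may use at most 256 MB of RAM and half a CPU core;
# the kernel enforces these limits (via cgroups) on the contained process.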
An image is to a class as a container to an object.
A container is an instance of an image as an object is an instance of a class.
In Docker, an image is an immutable file that holds the source code and information needed for a Docker app to run. It can exist independently of a container.
Docker containers are virtualized environments created at runtime, and they require images to run. (The Docker website has a picture that illustrates this relationship.)
Just as an object is an instance of a class in an object-oriented programming language, so a Docker container is an instance of a Docker image.
For a dummy programming analogy, you can think of Docker as having an abstract ImageFactory which holds image factories that come from a store.
Then, once you want to create an app out of that ImageFactory, you will have a new container, and you can modify it as you want. DotNetImageFactory will be immutable, because it acts as an abstract factory class, where it only delivers the instances you desire.
IContainer newDotNetApp = ImageFactory.DotNetImageFactory.CreateNew(appOptions); // image -> new container instance
newDotNetApp.ChangeDescription("I am making changes on this instance");          // changes affect only this container
newDotNetApp.Run();                                                              // the factory (image) itself never changes
In short:
A container is a (virtual) division in a kernel which shares a common OS and runs an image (a Docker image).
A container is a self-sustainable application that will have packages and all the necessary dependencies together to run the code.
A Docker container is running an instance of an image. You can relate an image with a program and a container with a process :)
A Dockerfile is like a Bash script that produces a tarball (the Docker image).
Docker containers are like extracted copies of the tarball: you can have as many copies as you like in different folders (the containers).
An image is the blueprint from which containers (running instances) are built.
Long story short.
Docker Images:
The read-only file system and configuration used to create containers.
Docker Containers:
The major difference between a container and an image is the top writable layer. Containers are running instances of Docker images with top writable layer. Containers run the actual applications. A container includes an application and all of its dependencies. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Other important terms to notice:
Docker daemon:
The background service running on the host that manages the building, running and distributing Docker containers.
Docker client:
The command line tool that allows the user to interact with the Docker daemon.
Docker Store:
The Store is, among other things, a registry of Docker images. You can think of the registry as a directory of all available Docker images.
A picture from this blog post is worth a thousand words.
Summary:
Pull an image from Docker Hub or build one from a Dockerfile => gives a Docker image (not editable).
Run the image (docker run image_name:tag_name) => gives a running image, i.e. a container (editable).
An image is like a class and a container is like an object of that class, so you can have an infinite number of containers behaving like the image. A class is a blueprint which isn't doing anything on its own; you have to create instances of the object in your program to do anything meaningful. And so is the case with an image and a container: you define your image and then create containers running that image. It isn't exactly similar, because an object is an instance of a class, whereas a container is something like an empty, hollow place and you use the image to build up a running host with exactly what the image says.
An image, or a container image, is a file which contains your application code, application runtime, configurations and dependent libraries. The image basically wraps all of these into a single, secure, immutable unit. The appropriate docker command is used to build the image. The image has an image ID and an image tag; the tag is usually in the format <docker-user-name>/image-name:tag.
When you start running your application using the image, you actually start a container. So your container is a sandbox in which you run your image. The Docker software is used to manage both the image and the container.
An image is a secured package which contains your application artifact, libraries, configurations and application runtime. A container is the runtime representation of your image.
