I've noticed that the same Docker image built on different platforms (the OS the Docker engine runs on) turns out different. For example, I used to build a heavy Docker image on Travis CI (Ubuntu), then pull it to my local machine (macOS); when I rebuilt the image (no modifications) on my Mac, it simply reused the layers from the downloaded image. That's no longer the case: now it builds another image from scratch. Looking under docker images, I can see the tag has switched between the two images, one built on Ubuntu and one on macOS.
Did they change something? Are Docker images built on different platforms no longer compatible?
P.S. I'm using the same Docker version on both (docker-ce 17.06.2).
Unfortunately, I'm in an environment where I have to use CentOS 4.8 for business reasons.
I'd like to create a Docker image so I can manage this horribly old version of the OS.
I've seen that some users have published such images on Docker Hub, and I've confirmed that some of those images work normally.
So the question is: how did they create these images? Is it possible to create an image from the ISO itself, or by some other means, rather than deriving it from an official image distributed by Docker? (Official images only exist from CentOS 5 onwards.)
Through searching, I've found that Ubuntu images can be extracted from an ISO.
However, there is nothing about CentOS or the other RHEL clone OSes.
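For reference, the Ubuntu technique I found boils down to extracting a root filesystem and importing it as a base image; I assume something similar would apply here (the paths and tag below are just placeholders):

# extract the installed root filesystem (e.g. from the ISO or a minimal install), then:
cd /path/to/extracted-rootfs
tar -c . | docker import - centos:4.8
# quick sanity check
docker run --rm -it centos:4.8 cat /etc/redhat-release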
Thank you.
I deployed a Node.js app using Docker, and I don't know how to update the deployment after the app is updated.
Currently, I have to remove the old Docker container and image each time I update the app.
I'd expect not to have to remove the old image and container every time the app is updated.
You tagged this "production". The standard way I've done this is like so:
1. Develop locally without Docker. Make all of your unit tests pass. Build and run the container locally and run integration tests.
2. Build an "official" version of the container. Tag it with a time stamp, version number, or source control tag; but do not tag it with :latest or a branch name or anything else that would change over time.
3. docker push the built image to a registry.
4. On the production system, change your deployment configuration to reference the version tag you just built. In some order, docker run a container (or more) with the new image, and docker stop the container(s) with the old image. (A minimal shell sketch of steps 2-4 follows this list.)
5. When it all goes wrong, change your deployment configuration back to the previous version and redeploy. (...oops.) If the old versions of the images aren't still on the local system, they can be fetched from the registry.
6. As needed, docker rm old containers and docker rmi old images.
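A minimal shell sketch of steps 2-4, with a made-up registry host, image name, and tag:

docker build -t registry.example.com/myapp:20180901 .
docker push registry.example.com/myapp:20180901
# on the production host
docker pull registry.example.com/myapp:20180901
docker stop myapp && docker rm myapp
# port mapping is just an example
docker run -d --name myapp -p 3000:3000 registry.example.com/myapp:20180901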
Typically much of this can be automated. A continuous integration system can build software, run tests, and push built artifacts to a registry; cluster managers like Kubernetes or Docker Swarm are good at keeping some number of copies of some version of a container running somewhere and managing the version upgrade process for you. (Kubernetes Deployments in particular will start a copy of the new image before starting to shut down old ones; Kubernetes Services provide load balancers to make this work.)
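With Kubernetes, for instance, the version-bump-and-rollback cycle from steps 4-5 might look roughly like this (the Deployment and container names here are hypothetical):

kubectl set image deployment/myapp myapp=registry.example.com/myapp:20180901
kubectl rollout status deployment/myapp
# if it all goes wrong
kubectl rollout undo deployment/myapp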
None of this is at all specific to Node. As far as the deployment system is concerned there aren't any .js files anywhere, only Docker images. (You don't copy your source files around separately from the images, or bind-mount a source tree over the image contents, and you definitely don't try to live-patch a running container.) After your unfortunate revert in step 5, you can run exactly the failing configuration in a non-production environment to see what went wrong.
But yes, fundamentally, you need to delete the old container with the old image and start a new container with the new image.
Copy the new version into your container with docker cp, then restart it with docker restart <name>.
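For example (the container name and paths are only illustrative):

docker cp ./build/. my-node-app:/usr/src/app
docker restart my-node-app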
https://blog.ubuntu.com/2018/07/09/minimal-ubuntu-released
says
The 29MB Docker image for Minimal Ubuntu 18.04 LTS serves as a highly efficient container...
...
On Dockerhub, the new Ubuntu 18.04 LTS image is now the new Minimal Ubuntu 18.04 image. Launching a Docker instance with docker run ubuntu:18.04 therefore launches a Docker instance with the latest Minimal Ubuntu.
I ran the exact command mentioned:
docker run ubuntu:18.04
Then I ran "docker images" which said:
REPOSITORY   TAG     IMAGE ID       CREATED      SIZE
ubuntu       18.04   16508e5c265d   5 days ago   84.1MB
Why does that output say 84.1MB? The Ubuntu web page I quoted says it should be 29MB.
Am I doing something wrong?
Am I measuring the size incorrectly?
How can I get an Ubuntu image that's only 29MB?
The article states that Docker Hub hosts a "standard" image, which is bigger than the cloud image. The cloud image is the new thing they introduced and it weighs 29MB while the standard image weighs 32MB.
The cloud image is not available on Docker Hub.
But still, where did the 84MB come from? What you download from the registry is a compressed image, which in this case weighs only 32MB. Once downloaded, it's decompressed into its usable format and stored locally on your machine, and that decompressed size is what docker images reports.
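You can check both numbers yourself: docker images shows the unpacked size on disk, while on newer Docker versions docker manifest inspect shows the compressed layer sizes that are actually downloaded from the registry:

docker images ubuntu:18.04
docker manifest inspect ubuntu:18.04 | grep size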
Meaning everything is in order. Where do you get that cloud image from? Well, I'd start by looking at:
[...] are available for use now in Amazon EC2, Google Compute Engine (GCE) [...]
If you'd like to use it with a private cloud, you can download the image from the Ubuntu Minimal Cloud Images page.
-edit-
Addressing your comment: those private cloud image sizes may vary. This is at least partially, if not mostly, due to differences between the various hypervisor stacks, as is hinted at in the article:
Cloud images also contain the optimised kernel for each cloud and supporting boot utilities.
--
Just as an update, these days (~three years later) the latest 18.04 image weighs 25MB in its compressed format, so the exact numbers from my original answer are no longer valid, but the point still stands.
I'm exploring using Docker so that we deploy new Docker images instead of pushing individual file changes, so that everything the application needs ships with each deployment, etc.
Question 1:
If I add a new application file, say 10 MB, to a Docker image, then when I deploy the new image (using the tools in Docker Toolbox), will this require shipping an entirely new image to my containers, or do Docker deployments just transfer the difference between the two, similar to git version control?
To put it another way: I looked at a list of Docker base images and saw a version of Ubuntu that is 188 MB. If I commit a new application to a Docker image built on this base image, will my Docker hosts need to pull the full 188 MB (which they are already running) plus the application, or is there a differential way of fetching only what has changed?
Supplementary Question
Am I correct in assuming that when using Docker, deploying images is the intended approach? That is, any new change should result in a new image deployment, so that images are treated as immutable? When I was using AWS we followed this approach with AMIs (Amazon Machine Images), but storing AMIs had low overhead; for Docker I don't know yet.
Or is it better practice to deploy Dockerfiles and have the new image be built on the Docker host itself?
Docker uses a layered union filesystem; only one copy of a layer will be pulled by a Docker engine and stored on its filesystem. When you build an image, Docker checks its layer cache to see whether the same parent layer and same command have already been used to build an existing layer, and if so, the cache is reused instead of building a new layer. Once any step in the build creates a new layer, all following steps will create new layers, so the order of your Dockerfile matters: put the frequently changing steps at the end of the Dockerfile so the earlier steps can stay cached.
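For example, a Dockerfile along these lines (the base image, packages, and paths are just placeholders) keeps the rarely changing steps first so their layers stay cached between builds:

FROM ubuntu:16.04
# rarely changing setup first, so these layers are cached between builds
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*
# frequently changing application files last
COPY app/ /opt/app/
CMD ["python3", "/opt/app/main.py"]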
Therefore, if you use a 200MB base image, have 50MB of additions, but only 10MB are new additions at the end of your Dockerfile, you'd push 250MB the first time to a docker engine, but only 10MB to an engine that already had a previous copy of that image, or 50MB to an engine that just had the 200MB base image.
The best practice with images is to build them once, push them to a registry (either self hosted using the registry image, cloud hosted by someone like AWS, or on Docker Hub), and then pull that image to each engine that needs to run it.
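A sketch of that build-once / push / pull flow, using the registry image for self-hosting (all names here are made up):

# run a private registry
docker run -d -p 5000:5000 --name registry registry:2
# tag and push the image you built once
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0
# on each engine that needs it (replace localhost with the registry's hostname)
docker pull localhost:5000/myapp:1.0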
For more details on the layered filesystem, see https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/
You can also put in a little extra work to create smaller images.
You can use Alpine or Busybox instead of the bigger Ubuntu, Debian or Bitnami (Debian light) images.
A smaller image is also more secure, since fewer tools are available inside it.
Some reading
http://blog.xebia.com/how-to-create-the-smallest-possible-docker-container-of-any-image/
https://www.dajobe.org/blog/2015/04/18/making-debian-docker-images-smaller/
You have two great tools for making smaller Docker images:
https://github.com/docker-slim/docker-slim
and
https://github.com/mvanholsteijn/strip-docker-image
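Basic docker-slim usage is roughly the following (the image name is just an example):

docker-slim build my-app:latest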
Some examples with docker-slim:
https://hub.docker.com/r/k3ck3c/grafana-xxl.slim/ shows a size of 357.3 MB before, and 18.73 MB after using docker-slim.
Or, for simh, https://hub.docker.com/r/k3ck3c/simh_bitnami.slim/ weighs 5.388 MB, while the original k3ck3c/simh_bitnami weighs 88.86 MB.
A popular netcat image, chilcano/netcat, is 135.2 MB, while a netcat based on Alpine is 7.812 MB, and one based on Busybox needs only 2 or 3 MB.
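As a sketch, an Alpine-based netcat image can be as small as a few lines (assuming the netcat-openbsd package provides what you need):

FROM alpine:3.7
RUN apk add --no-cache netcat-openbsd
ENTRYPOINT ["nc"]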
I have an app running on MongoDB, a Node.js API, a React front end, an Nginx proxy, etc. I have all of these set up as individual images, running locally (OSX) in separate linked containers, which I run with Docker Compose. In production, I have set up a single Ubuntu server on DigitalOcean for the moment, and expect to scale quickly to multiple servers as needed.
My question is what is the best way to handle the underlying Linux base image for each of these containers?
1) Should all of the Linux setup (apt-get installs, Node / Mongo installs, etc.) live on the Linux machine itself, outside of Docker, so that one could simply snapshot the machine, spin up a new server instance from it, and run the desired Docker containers when scaling quickly? Or
2) Should all of the Linux setup live in a 'base' Ubuntu image that the mongo, node, and nginx images build on top of? This seems to make each image grow significantly, since each would carry its own instance of Ubuntu plus all of the package dependencies needed to run mongo, node, and nginx. Or
3) Should each process (mongo, node, nginx) have its own separate Linux base image, since they each have separate dependencies? Again, each image would grow because each would carry its own instance of Ubuntu.
What is the proper way to handle this with Docker?
The answer is #2, but I suspect you may not fully understand the relationship between container and image.
How Docker uses images
First of all, a summary from the Docker docs:
Containers are created from images. An image is only downloaded and cached locally. Images are distributed via Registries.
Image layers
What makes Docker images different from virtual machine images is how they're built and stored. Again from the docs:
Each image consists of a series of layers. Docker makes use of union file systems to combine these layers into a single image. Union file systems allow files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.
One of the reasons Docker is so lightweight is because of these layers. When you change a Docker image—for example, update an application to a new version—a new layer gets built. Thus, rather than replacing the whole image or entirely rebuilding, as you may do with a virtual machine, only that layer is added or updated. Now you don’t need to distribute a whole new image, just the update, making distributing Docker images faster and simpler.
So, your mongo, node, and nginx images will be thin layers on top of a base image containing your basic Linux setup. That base image will only be downloaded once and will be re-used as a component layer by the other images.
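A sketch of what option #2 looks like in practice (the image names, packages, and paths here are all made up):

# base/Dockerfile -- built once and tagged, e.g. as myorg/base:1.0
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates curl && \
    rm -rf /var/lib/apt/lists/*

# node/Dockerfile -- a thin layer on top of the shared base
FROM myorg/base:1.0
RUN apt-get update && \
    apt-get install -y --no-install-recommends nodejs && \
    rm -rf /var/lib/apt/lists/*
COPY . /usr/src/app
CMD ["node", "/usr/src/app/server.js"]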