Is it bad practice to use mysql:latest as a Docker image?

Docker was built to make sure that code can run on any device without any issues of missing libraries, wrong versions, etc.
However, if you use the "latest" tag instead of a fixed version number, you don't actually know in advance which version of the image you'll end up with. Perhaps your code is compatible with the "latest" version at the time of writing, but it might not be in the future. Is it then bad practice to use this tag? Is it better practice to use a fixed version number?

Usually, yes, for the reasons you state: you generally want to avoid ...:latest tags here.
For databases in particular, there can be differences in the binary interfaces, and there definitely are differences in the on-disk storage. If you were expecting MySQL 5.7 but actually got MySQL 8.0 you might get some surprising behavior. If a MySQL 9.0 comes out and you get an automatic upgrade then your local installation might break.
A generic reason to avoid ...:latest is that Docker will use a local copy of an image with the right tag, without checking Docker Hub or another repository. So again in the hypothetical case where MySQL 9.0 comes out, but system A has an old mysql:latest that's actually 8.0, docker run mysql on an otherwise clean system B will get a different image, and that inconsistency can be problematic.
You will probably find Docker image tags for every specific patch release of the database, but in general only the most recent build gets security updates. I'd suggest that pinning to a minor version like mysql:8.0 is a good generic approach; you may want the more specific version mysql:8.0.29 in your production environment if you need to be extremely careful about upgrades.
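For illustration, a compose file pinned this way might look like the following (a minimal sketch; the service name, password, and volume name are placeholders, not taken from your setup):
version: '3.8'
services:
  db:
    image: mysql:8.0          # pinned to a minor version; still picks up 8.0.x patch rebuilds on pull
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: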
Some other software is less picky this way. I'd point at the Redis in-memory data store and the Nginx HTTP server as two things where their network interfaces and configuration have remained very stable. Probably nothing will go wrong if you use nginx:latest or redis:latest, even if different systems do wind up with different versions of the image.

Related

Should I set the Docker image version in docker-compose?

Imagine I have a docker-compose.yml for adding mongo as a container. Is it a good thing to pin a version on the image name, or let it be the latest by default?
version: '3.8'
services:
  mongo:
    image: mongo:4.0
    ports:
      - "27017:27017"
Actually, what are the pros and cons for an application in development and production releases?
image: mongo:4.0 VS image: mongo
Including a version number as you've done is good practice. I'd generally use a major-only image tag (mongo:4) or a major+minor tag (mongo:4.4) but not a super-specific version (mongo:4.4.10) unless you have automation to update it routinely.
Generally the Docker Hub images get rebuilt fairly routinely; but, within a given patch line, only the most-recent versions get patches. Say the debian:focal base image gets a security update. As of this writing, the mongo image has 4, 4.4, and 4.4.10 tags, so all of those get rebuilt, but e.g. 4.4.9 won't. So using a too-specific version could mean you don't get important updates.
Conversely, using latest means you just don't care what version you have. Your question mentions mongo:4.0 but mongo:latest is currently version 5.0.5; are there compatibility issues with that major-version upgrade?
The key rules here are:
If you already have some image:tag locally, launching a container will not pull it again, even if it's updated in the repository.
Minor-version tags like mongo:4.4 will continue to get updates as long as they are supported, but you may need to docker-compose pull to get updates.
Patch-version tags like mongo:4.4.9 will stop getting updates as soon as there's a newer patch version, even if you docker pull mongo:4.4.9.
Using a floating tag like ...:latest or a minor-version tag could mean different systems get different builds of the image, depending on what they have locally. (Your coworker could have a different mongo:latest than you; this is a bigger problem in cluster environments like Kubernetes.)
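As a concrete illustration of the pull behavior in the rules above, with a compose file like the one in the question (a hedged sketch; the service name is taken from the question):
docker-compose pull mongo   # fetch the current build of the pinned tag from the registry
docker-compose up -d mongo  # recreate the container only if the image actually changed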
I think it's a good thing to put a version on the image name; you can manage images more easily, but you have to be careful to apply updates regularly to avoid security holes.

Do Docker images change? How to ensure they do not?

One of the main benefits of Docker is reproducibility. One can specify exactly which programs and libraries get installed how and where, for example.
However, I'm trying to think this through and can't wrap my head around it. How I understand reproducibility is that if you request a certain tag, you will receive the same image with the same contents every time. However there are two issues with that:
Even if I try to specify a version as thoroughly as possible, for example python:3.8.3, I seem to have no guarantee that it points to a static non-changing image? A new version could be pushed to it at any time.
python:3.8.3 is a synonym for python:3.8.3-buster which refers to the Debian Buster OS image this is based on. So even if Python doesn't change, the underlying OS might have changes in some packages, is that correct? I looked at the official Dockerfile and it does not specify a specific version or build of Debian Buster.
If you depend on external Docker images, your own Docker image indeed has no guarantee of reproducibility. The solution is to import the python:3.8.3 image into your own Docker registry, ideally a registry that can prevent overwriting of tags (immutability), e.g. Harbor.
However, the reproducibility of your Docker image involves more than just the base image you import. E.g. if you install some pip packages, and one of those packages does not pin the version of a package it depends on, you still have no guarantee that rebuilding your Docker image leads to the same image. Hosting those Python packages in your own pip artifact repository is again the solution here.
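To illustrate that second point, a hedged sketch of pinning Python dependencies during the image build (the internal index URL and the idea of a fully pinned requirements.txt are assumptions, not from the question):
# Dockerfile
FROM python:3.8.3
# requirements.txt pins exact versions of every (transitive) dependency,
# e.g. generated with `pip freeze` or pip-compile
COPY requirements.txt .
RUN pip install --no-cache-dir \
      --index-url https://pypi.internal.example.com/simple \
      -r requirements.txt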
Addressing your individual concerns.
Even if I try to specify a version as thoroughly as possible, for example python:3.8.3, I seem to have no guarantee that it points to a static non-changing image? A new version could be pushed to it at any time.
I posted this in my comment on your question, but addressing it here as well. Large packages use semantic versioning. In order for trust to work, it has to be established. This method of versioning introduces trust and consistency to an otherwise (sometimes arbitrary) system.
The trust is that when they uploaded 3.8.3, it will remain as constant as possible for the future. If they added another patch, they will upload 3.8.4, if they added a feature, they will upload 3.9.0, and if they broke a feature, they would create 4.0.0. This ensures you, the user, that 3.8.3 will be the same, every time, everywhere.
Frameworks and operating systems often backport patches. PHP is known for this. If they find a security hole in v7 that was in v5, they will update all versions of v5 that had it. While all the v5 versions were updated from their original published versions, functionality remained constant. This is important, this is the trust.
So, unless you were "utilizing" that security hole to do what you needed to do, or relying on a bug, you should feel confident that 3.8.3 from Docker Hub can always be used.
NodeJS is a great example. They keep all their old deprecated versions available on Docker Hub for archival purposes.
I have been using named tags (NOT latest) from Docker Hub in all my projects for work and home, and I've never run into an issue after deployment where a project crashed because something changed "under my feet". In fact, just last week, I rebuilt and updated some code on an older version of NodeJS (from 4 years ago) which required a re-pull, and because it was a named version (not latest), it worked exactly as expected.
python:3.8.3 is a synonym for python:3.8.3-buster which refers to the Debian Buster OS image this is based on. So even if Python doesn't change, the underlying OS might have changes in some packages, is that correct? I looked at the official Dockerfile and it does not specify a specific version or build of Debian Buster.
Once a child image (python) is built off a parent image (buster), it is immutable. The exception is if the child image (python) was rebuilt at a later date and CHOOSES to use a different version of the parent image (buster). But this is considered bad-form, sneaky and undermines the PURPOSE of containers. I don't know any major package that does this.
This is like doing a git push --force on your repository after you changed around some commits. It's seriously bad practice.
The system is designed and built on trust, and in order for it to be used, adopted and grow, the trust must remain. Always check the older tags of any container you want to use, and be sure they allow their old deprecated tags to live on.
Thus, when you download python:3.8.3 today, or 2 years from now, it should function exactly the same.
For example, if you docker pull python:2.7.8, and then docker inspect python:2.7.8 you'll find that it is the same container that was created 5 years ago.
"Created": "2014-11-26T22:30:48.061850283Z",

Docker, update image or just use bind-mounts for website code?

I'm using Django but I guess the question is applicable to any web project.
In our case, there are two types of code: the first is Python code (run by Django), and the other is static files (html/js/css).
I could publish new image when there is a change in any of the code.
Or I could use bind mounts for the code. (For django, we could bind-mount the project root and static directory)
If I use bind mounts for code, I could just update the production machine (probably with git pull) when there's code change.
Then the Docker image would handle updates that are not strictly our own code changes (such as a library update, or new setup such as setting up Elasticsearch).
Does this approach imply any obvious drawback?
For security reasons it is advised to keep an operating system up to date with the latest security patches, but Docker images are meant to be released in an immutable fashion so that we can always reproduce production issues outside production; the OS inside the image will not update itself as security patches are released. This means we need to rebuild and deploy our Docker image frequently in order to stay on the safe side.
So I would prefer to release a new Docker image with my code and static files, because they are bound to change more often and thus require frequent releases; that keeps the OS more up to date in terms of security patches without needing to rebuild Docker images in production just for that purpose.
Note I assume here that you release new code or static files at least on a weekly basis; otherwise I still recommend updating the Docker images at least once a week in order to get the latest security patches for all the software being used.
Generally the more Docker-oriented solutions I've seen to this problem lean towards packaging the entire application in the Docker image. That especially includes application code.
I'd suggest three good reasons to do it this way:
If you have a reproducible path to docker build a self-contained image, anyone can build and reproduce it. That includes your developers, who can test a near-exact copy of the production system before it actually goes to production. If it's a Docker image, plus this code from this place, plus these static files from this other place, it's harder to be sure you've got a perfect setup matching what goes to production.
Some of the more advanced Docker-oriented tools (Kubernetes, Amazon ECS, Docker Swarm, Hashicorp Nomad, ...) make it fairly straightforward to deal with containers and images as first-class objects, but trickier to say "this image plus this glop of additional files".
If you're using a server automation tool (Ansible, Salt Stack, Chef, ...) to push your code out, then it's straightforward to also use those to push out the correct runtime environment. Using Docker to just package the runtime environment doesn't really give you much beyond a layer of complexity and some security risks. (You could use Packer or Vagrant with this tool set to simulate the deploy sequence in a VM for pre-production testing.)
You'll also see a sequence in many SO questions where a Dockerfile COPYs application code to some directory, and then a docker-compose.yml bind-mounts the current host directory over that same directory. In this setup the container environment reflects the developer's desktop environment and doesn't really test what's getting built into the Docker image.
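For illustration, that pattern often looks roughly like this (a hedged sketch with placeholder paths and service name):
# Dockerfile
COPY . /app

# docker-compose.yml
services:
  web:
    build: .
    volumes:
      - .:/app    # the bind mount hides whatever COPY put into the image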
("Static files" wind up in a gray zone between "is it the application or is it data?" Within the context of this question I'd lean towards packaging them into the image, especially if they come out of your normal build process. That especially includes the primary UI to the application you're running. If it's things like large image or video assets that you could reasonably host on a totally separate server, it may make more sense to serve those separately.)

How to simply use docker for deployment?

Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want any Kubernetes cluster or Docker Swarm to deploy hundreds of microservices. I just want to replace an existing deployment process with a container for better encapsulation and upgradability.
Then maybe upgrade this in the future if the need for more containers increases and manual handling no longer makes sense.
Essentially the direct app dependencies (language runtime and libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still be in the host system, as well as an entry router/load-balancer (simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using Gitlab-CI.
Tests are already run inside a Docker environment via Gitlab-CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There's no point in paying hundreds of dollars for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container (after it's been pushed to a registry by the CI) on the server could already do the job, though probably pretty unreliably compared to the current way.
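For what it's worth, a minimal sketch of what those remote commands might look like (registry, image name, ports, and container name are all placeholders):
# run on the server, e.g. via ssh from the CI job
docker pull registry.example.com/myapp:1.2.3
docker stop myapp && docker rm myapp
docker run -d --name myapp --restart unless-stopped -p 8000:8000 \
  registry.example.com/myapp:1.2.3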
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker Images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (Gitlab Container Registry, Harbor, etc...)
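On point 3, for illustration, hosting the basic open-source registry can be as simple as the following (a hedged sketch with no authentication configured; the application image name is a placeholder):
docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0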
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
Note: Also, Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying Container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a solve-all. If you go into Docker thinking it will be a solve-all, then you're setting yourself up for failure.

How to use Liberty 8.5.5.9 Docker

We believe the new WebSphere Liberty 16.0.0.2 has an important bug related to the JAX-RS 2.0 client, which prevents standard REST calls from deployed apps from working. The last version we know to be free of this bug is 8.5.5.9, but the Dockerfile of the official Docker image by IBM has already been updated to 16.0.0.2.
Even though we use Docker, I am no Docker geek. Is it possible to specify in the first line of my Dockerfile:
FROM websphere-liberty:webProfile7
that I want the image version that includes 8.5.5.9 and not the latest one? Which one would it be? (Other Docker images, like Solr, explain the different versions in their docs.)
If you look at the 'tags' tab on Docker Hub you will see that there are other historical tags still available, including websphere-liberty:8.5.5.9-webProfile7. Note that these images represent a snapshot in time, i.e. they are not rebuilt when new versions of the base Ubuntu image are created. The intention is that Liberty provides zero migration and therefore you should always be able to use the latest. You have obviously found the counter-example...
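In other words, pinning your Dockerfile's first line to that historical tag is enough (a minimal example using the tag named above):
FROM websphere-liberty:8.5.5.9-webProfile7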
