How to use the Liberty 8.5.5.9 Docker image

We believe the new WebSphere Liberty 16.0.0.2 has an important bug related to the JAX-RS 2.0 client, which prevents standard REST calls from deployed apps from working. The last version we know to be free of this bug is 8.5.5.9, but the Dockerfile of IBM's official image has already been updated to 16.0.0.2.
Even though we use Docker, I am no Docker geek. Is it possible to specify in the first line of my Dockerfile:
FROM websphere-liberty:webProfile7
that I want the image version that includes 8.5.5.9 rather than the latest one? Which tag would it be? (Other images, like Solr, explain the different versions in their docs.)

If you look at the 'Tags' tab on Docker Hub you will see that other historical tags are still available, including websphere-liberty:8.5.5.9-webProfile7. Note that these images represent a snapshot in time, i.e. they are not rebuilt when new versions of the base Ubuntu image are created. The intention is that Liberty provides zero migration, and therefore you should always be able to use the latest. You have obviously found the counter-example...
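In practice that means pinning the first line of the Dockerfile to the historical tag rather than the floating one, e.g.:

```dockerfile
# Pin the snapshot image that still ships Liberty 8.5.5.9,
# instead of the floating webProfile7 tag that now points at 16.0.0.2
FROM websphere-liberty:8.5.5.9-webProfile7
```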

Related

Do Docker images change? How to ensure they do not?

One of the main benefits of Docker is reproducibility. One can specify exactly which programs and libraries get installed how and where, for example.
However, I'm trying to think this through and can't wrap my head around it. As I understand it, reproducibility means that if you request a certain tag, you will receive the same image with the same contents every time. However, there are two issues with that:
Even if I try to specify a version as thoroughly as possible, for example python:3.8.3, I seem to have no guarantee that it points to a static non-changing image? A new version could be pushed to it at any time.
python:3.8.3 is a synonym for python:3.8.3-buster, which refers to the Debian Buster OS image it is based on. So even if Python doesn't change, the underlying OS might have changes in some packages, is that correct? I looked at the official Dockerfile and it does not specify a specific version or build of Debian Buster.
If you depend on external Docker images, your Docker image indeed has no guarantee of reproducibility. The solution is to import the python:3.8.3 image into your own Docker registry, ideally a registry that can prevent overwriting of tags (immutability), e.g. Harbor.
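The import step can be done with plain docker commands; the registry host name below is illustrative, not a real one:

```shell
# Pull the upstream tag, retag it for an internal registry, and push it there.
# From then on, builds reference the internal copy, which you control.
docker pull python:3.8.3
docker tag python:3.8.3 registry.example.com/mirror/python:3.8.3
docker push registry.example.com/mirror/python:3.8.3
```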
However, the reproducibility of your Docker image depends on more than just the base image you import. E.g. if you install some pip packages, and one of those packages does not pin the version of a package it depends on, you still have no guarantee that rebuilding your Docker image leads to the same image. Hosting those Python packages in your own artifact repository (e.g. Artifactory) is again the solution here.
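As a minimal sketch of pinning at build time (the package names and versions here are illustrative), exact == pins — or a lock file generated with pip freeze — keep the pip layer repeatable:

```dockerfile
FROM python:3.8.3
# Pin direct AND transitive dependencies to exact versions;
# otherwise a rebuild may silently pick up newer releases.
RUN pip install requests==2.24.0 urllib3==1.25.9
```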
Addressing your individual concerns.
Even if I try to specify a version as thoroughly as possible, for example python:3.8.3, I seem to have no guarantee that it points to a static non-changing image? A new version could be pushed to it at any time.
I posted this in my comment on your question, but addressing it here as well. Large packages use semantic versioning. In order for trust to work, it has to be established. This method of versioning introduces trust and consistency to an otherwise (sometimes arbitrary) system.
The trust is that when they uploaded 3.8.3, it will remain as constant as possible for the future. If they added another patch, they will upload 3.8.4, if they added a feature, they will upload 3.9.0, and if they broke a feature, they would create 4.0.0. This ensures you, the user, that 3.8.3 will be the same, every time, everywhere.
Frameworks and operating systems often backport patches. PHP is known for this. If they find a security hole in v7 that was in v5, they will update all versions of v5 that had it. While all the v5 versions were updated from their original published versions, functionality remained constant. This is important, this is the trust.
So, unless you were "utilizing" that security hole to do what you needed to do, or relying on a bug, you should feel confident that 3.8.3 from DockerHub should always be used.
NodeJS is a great example. They keep all their old deprecated versions available in Docker Hub for archival sake.
I have been using named tags (NOT latest) from Docker Hub in all my projects for work and home, and I've never run into an issue after deployment where a project crashed because something changed "under my feet". In fact, just last week I rebuilt and updated some code on an older version of NodeJS (from 4 years ago), which required a repull, and because it was a named version (not latest), it worked exactly as expected.
python:3.8.3 is a synonym for python:3.8.3-buster, which refers to the Debian Buster OS image it is based on. So even if Python doesn't change, the underlying OS might have changes in some packages, is that correct? I looked at the official Dockerfile and it does not specify a specific version or build of Debian Buster.
Once a child image (python) is built off a parent image (buster), it is immutable. The exception is if the child image (python) is rebuilt at a later date and CHOOSES to use a different version of the parent image (buster). But this is considered bad form, sneaky, and it undermines the PURPOSE of containers. I don't know of any major package that does this.
This is like doing a git push --force on your repository after you changed around some commits. It's seriously bad practice.
The system is designed and built on trust, and in order for it to be used, adopted and grow, the trust must remain. Always check the older tags of any container you want to use, and be sure they allow their old deprecated tags to live on.
Thus, when you download python:3.8.3 today, or 2 years from now, it should function exactly the same.
For example, if you docker pull python:2.7.8, and then docker inspect python:2.7.8 you'll find that it is the same container that was created 5 years ago.
"Created": "2014-11-26T22:30:48.061850283Z",

What are the best practices for storing images in a container registry?

I need different images for dev, stage, and prod environments. How should I store images on Docker Hub?
Should I use tags:
my_app:prod
my_app:dev
my_app:stage
or maybe include the env name in the image name like this:
my_app_dev
my_app_stage
my_app_prod
Tags are primarily meant for versioning, as the default tag latest implies. If you use it for other meaning without versioning info, like tagging environment as my_app:dev and my_app:prod, there's no strict rule to prohibit that, but it could cause problem for deployment of the containers.
Imagine you have a container defined in docker-compose.yml that specifies my_app:prod as image. It's fine when you're developing locally, but when you deploy to production with Docker Compose or an orchestration service like Kubernetes, depending on policy, the controller can choose to reuse images from its local cache instead of pulling from registry every time. Now you just completed a new version of the image, and pushed it to Docker Hub feeling assured. Too bad it's still under the same name and tag, so the controller considers it's the same and uses the cached image, causing your old version to be deployed.
It could be worse than that. Not all nodes or clusters are configured the same, some will pull the latest version from the registry while some don't. Your swarm or deployment now contains a mixed set of old and new container versions, producing erratic behavior at best.
Now you know better and push your new version as my_app:v2.0 and update the config. All controllers see the new version and pull it down to use for replacing and scaling containers. Everything is consistent.
A simple version number as tag may sound a bit too simple, as practically you could have many properties that you find useful to add to an image, to help with documentation or query maybe. Or you need a specific name and tag so you can push to a certain cloud provider. Luckily you don't have to sacrifice versioning to do that, as Docker allows you to apply as many tags as you like:
docker build -t my_app:latest -t my_app:v2.0 -t my_app:prod -t cloud_user/app_image_id:v2.0 .
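Note that docker push uploads one tag at a time, so each tag you want in the registry must be pushed explicitly (newer Docker clients also offer docker push --all-tags):

```shell
# Push each tag produced by the multi-tag build above.
docker push my_app:v2.0
docker push my_app:prod
docker push cloud_user/app_image_id:v2.0
```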

Can Nexus Repository Pro check container root privileges etc.?

I'm looking into Nexus Repository Pro to be used as a Docker container image registry.
As I understand it, it can do vulnerability scanning, but can it also check whether a container runs as the root user?
Is it possible to validate with such a rule?
Is it also possible to do a version check, e.g. whether a container's base image has updates?
NXRM doesn't do anything but store the images and provide them on request.
If you are using a Docker proxy, you can search to see if new images are available via the CLI but there is nothing in NXRM that will automatically (automagically) relay this for you. It is basically an interim service between you and the proxied location (often docker hub).
FYI, vulnerability scanning is done by the sister application, Lifecycle. There are aspects of it that work with OSS as well. This doesn't answer your question, but since you made a statement in the description that isn't fully accurate, I thought you (or others) might be interested.

How can I retrieve an older image for Docker instead of latest?

What command(s) do I have to run to retrieve an older image of a software offered in Docker?
I have problems with the latest image of localstack, so I thought I could try older versions to see what happens. However, I saw a comment in an issue mentioning editing the YAML file this way:
image: localstack/localstack:0.9
but no other info... (it was not the point of the issue, so it's understandable).
I've been looking around and saw many posts about getting the latest image (i.e. docker update ...), but nothing that would allow me to go back in time, except for images that I happen to already have.
Just the change above had absolutely no effect. I'm wondering how I can get Docker to download an older image so I can run that older one instead of the latest. I'm also wondering how to find a list of available tags for a given image, to make sure I use a version that actually exists.
You should look for existing tags on the desired repo's Docker Hub page.
If you are using docker-compose the correct way to do it is:
image: <image>:<tag>
If no such tag is found, then it is not available or does not exist.
Here are the tags available on the localstack Docker Hub page.
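One detail worth adding, because it explains why "the change above had absolutely no effect": editing the tag in the YAML file is not enough while an old container is still running. A sketch of the full sequence, assuming a docker-compose setup:

```shell
# Pull the pinned tag explicitly, then recreate the service
# so the running container actually uses the older image.
docker pull localstack/localstack:0.9
docker-compose up -d --force-recreate
```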

How to update software inside a docker container?

I am very new to Docker and currently trying to get my head around whether there is any best-practice guide to updating software that runs inside a Docker container in a very large distributed environment. I have already found a couple of posts about updating a MySQL database in Docker, etc. That gives a good hint for any software that stores data, but what if you want to update other parts of your own software package, or services that are distributed and used by several other Docker images through docker-compose?
Is there someone with real life experience doing that in such an environment who can help me or other newbies to understand the best practices in docker if there are any.
Thanks for your help!
You never update software in a running container. You pull down a new version from the hub. If we assume you're using the latest tag (which is a bad idea; always pin your versions) of your image, and it's one of the official library images or a publicly available image that uses automated builds, you'll get the latest version of the container image when you pull it.
This assumes you've also separated the data out of your container, either as a host volume or using the data container pattern.
The container should be considered immutable; if you change its state, it's no longer a true version of the image.
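Concretely, with docker-compose the "update" is a redeploy rather than an in-place change, assuming the data lives in a named volume: bump the tag in docker-compose.yml, then:

```shell
# Fetch the new image versions and recreate the containers from them;
# named volumes survive the recreation, so data is kept.
docker-compose pull
docker-compose up -d
```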
