I want to create a Nexus 3 Docker image with a pre-defined configuration (a few repositories and dummy artifacts) for testing my library.
I can't call the Nexus API from the Dockerfile, because it requires a running Nexus instance.
I tried starting a Nexus 3 container, configuring it manually, and creating an image from the container:
docker commit ...
The new image is created, but when I start a new container from it, it doesn't contain any of the manual configuration I did before.
How can I customize the nexus 3 image?
If I understand correctly, you are trying to create a portable, standalone, customized Nexus 3 installation in a self-contained Docker image for testing/distribution purposes.
Doing this by extending the official nexus3 Docker image will not work. Have a look at their Dockerfile: it defines a volume for /nexus_data, and there is currently no way of removing this volume in a child image.
This means that when you start a container without any specific options, a new anonymous volume is created for each new container. This is why your committed image starts with blank data. The best you can do is to name the data volume when you start the container (option -v nexus_data:/nexus_data for docker run) so that the same volume is reused. But the data will still live in your local Docker installation, not in the image.
To do what you want, you need to build your own image without a data volume. You can start from the official Dockerfile above; just remove the VOLUME line. Then you can customize your container and commit it to an image that will contain the data.
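A rough sketch of that workflow, assuming you rebuild from the official Dockerfile sources (repository URL is the official sonatype repo; image and container names are illustrative):

```shell
# Rebuild the official image without its VOLUME, so committed data
# lands in image layers instead of an anonymous volume.
git clone https://github.com/sonatype/docker-nexus3.git
cd docker-nexus3
sed -i '/^VOLUME/d' Dockerfile          # drop the VOLUME line
docker build -t nexus3-novolume .
docker run -d --name nexus-custom -p 8081:8081 nexus3-novolume
# ... configure repos and upload dummy artifacts via the UI/API ...
docker commit nexus-custom my/nexus3-prefilled:1.0
```

The committed image then carries your repositories and artifacts with it, at the cost of baking the data into image layers.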
Related
I just started working with Docker this week and came across a 'Dockerfile'. I was reading up on what this file does, and the official documentation basically mentions that the FROM keyword is needed to specify the base image. These base images are pulled (downloaded) from Docker Hub.
Silly question - are base images always pulled from Docker Hub?
If so, and if I understand correctly, I am assuming that building an image from the Dockerfile is not done very often (only when a new image is needed), and once the image is created, the image is what's run all the time?
So the Dockerfile can then be migrated to whichever environment, and things can be set up all over again quickly?
Pardon the silly question; I am just trying to understand the overall flow and how the Dockerfile fits into things.
If the local Docker daemon on your host already has a copy of the container image specified by FROM in a Dockerfile (i.e. it's been docker pull'd), then it's cached and won't be re-pulled.
Container images include a tag (be wary of ever using latest); the image name, e.g. foo, combined with the tag (which defaults to latest if not specified), is the full name that's checked. I.e. if you have foo:v0.0.1 locally and FROM foo:v0.0.1, the local copy is used, but FROM foo:v0.0.2 will pull foo:v0.0.2.
There's an implicit docker.io prefix, i.e. docker.io/foo:v0.0.1, that identifies the registry being used (Docker Hub by default).
You could repeatedly docker build container images on the machines where the container is run, but this is inefficient; the more common mechanism is that, once a container image is built, it is pushed to a registry (e.g. DockerHub) and then pulled from there by whatever machines need it.
There are many container registries: DockerHub, Google Artifact Registry, Quay etc.
There are tools other than docker that can be used to interact with containers e.g. (Red Hat's) Podman.
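As an illustration of the implicit registry prefix, the two FROM lines below are equivalent (base image and tag chosen arbitrarily):

```dockerfile
# Fully qualified reference: registry / namespace / name : tag
FROM docker.io/library/alpine:3.19
# Short form -- the docker.io/library/ prefix is implied for official images:
# FROM alpine:3.19
```

Pinning an explicit tag like this also avoids the surprises that come with latest.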
I have a docker-compose environment setup like so:
Oracle
Filesystem
App
...
etc...
The filesystem container downloads the latest code from our repo and exposes its volume for other containers to mount. This works great, except that containers that need the code to do builds can't access it, since the volume isn't mounted until the containers are run.
I'd like to avoid checking out/downloading the code again, since the codebase is over 3 GB right now... hence trying to do something spiffier.
Is there a better way to do this?
As you mentioned, Docker volumes won't work here, since volumes are only mounted when the container starts.
The best solution for your situation is to use Docker multi-stage builds. The idea is to have one image that contains the code base, so that other images can access the code directly from it.
You basically have an image that is responsible for pulling the code:
FROM alpine/git
RUN git clone ...
You then build this image, either separately or as the first image in a Compose file.
Other images can then use this image like so:
FROM <base-image>
COPY --from=code-image /git/<code-repository> /code
This makes the code available to all the images, and it is only pulled once from the remote repo.
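In a Compose setup, one way to wire this together might look as follows (service names and paths are illustrative; note that depends_on controls start order, so depending on your Compose version you may need to build the code image first with docker compose build code):

```yaml
services:
  code:
    build: ./code        # Dockerfile that clones the repository
    image: code-image    # tag referenced by COPY --from=code-image
  app:
    build: ./app         # app/Dockerfile copies the code from code-image
    depends_on:
      - code
```

This keeps the 3 GB checkout in a single image layer that every build image reuses.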
I have 2 machines (separate hosts) running Docker, and I am using the same image on both machines. How do I keep the two images in sync? For example, suppose I make changes to the image on one of the hosts and want the changes to be reflected on the other host as well. I can commit the image and copy it over to the other host. Is there a more efficient way of doing this?
Some ways I can think of:
1. with a Docker registry
the workflow here is:
HOST A: docker commit, docker push
HOST B: docker pull
2. by saving the image to a .tar file
the workflow here is:
HOST A: docker save
HOST B: docker load
3. with a Dockerfile and by building the image again
the workflow here is:
provide a Dockerfile together with your code / required files
every time your code has changed and you want to make a release, use docker build to create a new image
on the hosts that should take the update, get the updated source code (e.g. by using version control software like Git), and then docker build the image
4. CI/CD pipeline
you can see a video here: docker.com/use-cases/cicd
Keep in mind that containers are considered ephemeral. This means that updating an image on another host will then require:
to stop and remove any old container (running with the outdated image)
to run a new one (with the updated image)
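A sketch of the registry workflow in option 1 (image, container, and repository names are illustrative):

```shell
# HOST A: snapshot the container and publish the image
docker commit mycontainer myrepo/myimage:v2
docker push myrepo/myimage:v2

# HOST B: take the update and replace the running container
docker pull myrepo/myimage:v2
docker stop myapp && docker rm myapp   # remove the container running the outdated image
docker run -d --name myapp myrepo/myimage:v2
```

Options 2 and 3 follow the same stop/remove/run pattern on HOST B; only the way the image arrives differs.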
I quote from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
You can docker push your image to a Docker registry and then docker pull the latest image from the other host.
For more information please look at this
Is it possible?
Here's my Nomad job file:
job "test-job" {
  ...
  group "test-group" {
    task "test-task" {
      driver = "docker"
      config {
        image = "<image-name>"
      }
      ...
    }
  }
}
I understand it is possible to COPY a file into a Docker image via docker build with a Dockerfile.
But I want to avoid the explicit creation of a new Docker image from the 'image-name' image.
I also understand it is possible to copy a file into a running Docker container derived from a Docker image.
But since I use Nomad to roll out Docker images and populate containers, it would be convenient for me if Nomad could copy the file (by creating a new Docker layer on the fly with the file copied in).
So I wonder if and how this is possible?
I'm not sure I understand the question...
If you are trying to get the Docker image onto the Nomad agent so Nomad can run it, Nomad will do that for you.
If you are trying to add files to a Docker image after it's been built, during the deploy, Nomad has a few options... sort of.
You can use volumes (https://www.nomadproject.io/docs/drivers/docker.html#volumes) and mount a directory with the file in it.
Alternatively, you can use an artifact (https://www.nomadproject.io/docs/job-specification/artifact.html) and put it in /local (Nomad creates this folder for you), which I believe is exposed to Docker containers by default.
Or, a different way, is to just have the Docker container go fetch the file(s) it needs on startup. A lot of people use consul-template for that, which is now built into Nomad (https://www.nomadproject.io/docs/job-specification/template.html).
None of these methods actually creates a new Docker image layer, however; there is no way to do that at deploy time. That should be done at build time.
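A sketch of the artifact approach, adapted to the job file above (the source URL and task name are illustrative):

```hcl
job "test-job" {
  group "test-group" {
    task "test-task" {
      driver = "docker"

      # Fetched before the task starts; lands in the task's local/
      # directory, visible inside the container at /local.
      artifact {
        source      = "https://example.com/config/app.conf"
        destination = "local/"
      }

      config {
        image = "<image-name>"
      }
    }
  }
}
```

The container then reads /local/app.conf at startup, with no new image layer involved.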
I'm working from my local laptop, preparing a Dockerfile that I want to use later for deployment on a server. The problem is that the server has only the Docker client/daemon, but no connectivity to the official Docker registry, nor does it provide its own image registry.
Is it possible to build my image locally, ship it to the server and run a container on it without going through the trouble of creating my own image registry?
You can save an image using docker save imagename, which creates a tar file, and then use docker load to recreate the image on the server from that tar file.
Don't confuse this with docker export, which creates a tar from a container. See Difference between save and export in Docker. As shown in that link, an exported container might be smaller because it flattens layers. If size matters, you might consider committing a container and exporting it right afterwards.
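A sketch of the whole transfer, assuming SSH access to the server (image, file, and host names are illustrative):

```shell
# On the laptop: build, then serialize the image with all its layers
docker build -t myimage:1.0 .
docker save myimage:1.0 | gzip > myimage-1.0.tar.gz
scp myimage-1.0.tar.gz user@server:/tmp/

# On the server: restore the image and run it
gunzip -c /tmp/myimage-1.0.tar.gz | docker load
docker run -d myimage:1.0
```

gzip is optional but usually worthwhile, since image tars compress well.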