Can I use a custom-built Docker image to create another image? - docker

I want to create an image A on one instance/server, but that instance has no internet connection.
Can I create an image B with all packages installed on my machine, push it to x.x.x.x, and then use it as the FROM base for image A?
It will look like:
FROM x.x.x.x/B:latest
RUN ***
ENTRYPOINT
Please suggest a correct solution for this problem.

Yes, you can.
First though, you say image A is on a server that has no internet connection. If that is true, then you can't access the built image B that you've pushed to x.x.x.x unless the x.x.x.x that you refer to is localhost.
To answer the question fully with the assumption that there's no internet:
Dockerfile B contains all the stuff you want in your base image. Build that. Then move the image to the internet-less server that you're building image A on. (To move the image, check out the docker export or docker save commands and/or google 'moving a docker image from one host to another'. My initial search led me here: https://blog.giantswarm.io/moving-docker-container-images-around/)
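A minimal sketch of that move using docker save and docker load; the file name imageB.tar and the scp destination are assumptions for illustration:
# On the machine with internet access: build image B and save it to a tarball
docker build -t imageB:latest .
docker save -o imageB.tar imageB:latest
# Copy the tarball to the offline server (scp is just one option)
scp imageB.tar user@x.x.x.x:/tmp/
# On the offline server: load the image into the local Docker daemon
docker load -i /tmp/imageB.tar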
(Note: for anyone who wants to do this and has an internet connection, you would push image B to a repo and then pull it straight from there in Dockerfile A, which skips the moving-from-host-to-host part.)
Then, just like you've written already, the Dockerfile for image A should have:
FROM imageB:latest
to pull from your first image. It's all pretty easy. Long story long, yes, you can build your own images and then build other images based off of that image.

Short answer - of course, you can. You can use any image, including your own, to build new images.
Long answer - consider a Docker multi-stage build for your purpose. This reduces the number of images and the space occupied by your Docker registry.
You can read more on https://docs.docker.com/develop/develop-images/multistage-build/
In short - you create a single Dockerfile where you define several images based on each other. If you don't need your base image outside of the derived one, this fits your case. The following example illustrates:
Dockerfile
# First create the base image and name it 'mybase'
FROM foo:latest AS mybase
RUN blahblah

# Derive the new image from 'mybase' - note it's still the same Dockerfile!
FROM mybase
RUN blahblahblah
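You build it like any other Dockerfile, and the last stage becomes the resulting image. The --target flag also lets you build just the named base stage if you ever need it on its own (the tags here are examples):
# Build the final (derived) image
docker build -t myimage:latest .
# Build only the 'mybase' stage, if you need it separately
docker build --target mybase -t mybase-only:latest .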

Related

Making sure the "platform" part of a 3rd party image from DockerHub is up-to-date

I would like to use other people's images from DockerHub. I trust the applications that the containers are for, and I trust the maintainers to update the DockerHub image if their application gets an update. I do not, however, trust them to docker build --pull every time the base image they use gets an update (for example when they use FROM debian:bullseye-slim in their Dockerfile).
What I want: to check continuously that everything (including the "platform", i.e. the Debian base image in this example) is up to date, without building everything myself (if possible).
My main problem: there seems to be no way to check the base image of a pulled Docker image (see Docker missing layer IDs in output and other Stack Exchange questions about the same thing). Therefore I cannot automatically and generally compare the layers of the image from my running container with the upstream base image (or even compare the date of the last update of the application image with that of the base image).
Is there a way to do what I described above? Or do I really have to
a) fully trust the maintainer of the DockerHub image to have their build pipeline in check or
b) build everything myself (in which case: why would there even exist a DockerHub instead of a DockerFile-Hub)?
If it's relevant: this is sort of an extension of this older question: https://serverfault.com/questions/611082/how-to-handle-security-updates-within-docker-containers
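One rough way to make the layer comparison described above concrete, using the digests that docker inspect does expose: if the current base image's layers are still a leading prefix of the application image's layers, the app image was built on that base version. The image names here are examples:
# Pull the current base image and the application image
docker pull debian:bullseye-slim
docker pull someuser/someapp:latest
# Print the layer digests of each; compare whether the base image's
# layers form a leading subset of the application image's layers
docker inspect --format '{{json .RootFS.Layers}}' debian:bullseye-slim
docker inspect --format '{{json .RootFS.Layers}}' someuser/someapp:latest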

Override a volume when building a docker image from another docker image

Sorry if the question is basic, but would it be possible to build a Docker image from another one, with a different volume in the new image? My use case is the following:
Start from the image library/odoo (cf. https://hub.docker.com/_/odoo/)
Upload folders into the volume /mnt/extra-addons
Build a new image, tag it, then put it in our internal image repo
How can we achieve that? I would like to avoid putting extra folders on the host filesystem.
thanks a lot
This approach seems to work best until the Docker development team adds the capability you are looking for.
Dockerfile
FROM percona:5.7.24 AS dbdata
MAINTAINER monkey@blackmirror.org
FROM centos:7
USER root
COPY --from=dbdata / /
Do whatever you want. This eliminates the VOLUME issue. Heck, maybe I'll write a tool to automatically do this :)
You have a few options, without involving the host OS that runs the container.
1. Make your own Dockerfile, inherit from the library/odoo Docker image using a FROM instruction, and COPY files into the /mnt/extra-addons directory (a sketch follows below). This still involves your host OS somewhat, but may be acceptable since you wouldn't necessarily be building the Docker image on the same host you were running it.
2. Make your own Dockerfile, as in (1), but use an entrypoint script to download the contents of /mnt/extra-addons at runtime. This would increase your container startup time since the download would need to take place before running your service, but no host directories would need to be involved.
Personally I would opt for (1) if your build pipeline supports it. That would bake the addons right into the image, so the image itself would be a complete, ready-to-go build artifact.
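For illustration, option (1) can be as small as this sketch, assuming your addons sit in a local ./extra-addons directory (the latest tag is also an assumption):
Dockerfile
FROM odoo:latest
# Bake the addons into the image at the path the odoo image expects
COPY ./extra-addons /mnt/extra-addons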

How can I see the Dockerfile for each docker image?

I have the following docker images.
$ docker images
REPOSITORY        TAG     IMAGE ID       CREATED         SIZE
hello-world       latest  48b5124b2768   2 months ago    1.84 kB
docker/whalesay   latest  6b362a9f73eb   22 months ago   247 MB
Is there a way I can see the Dockerfile of each docker image on my local system?
The answer at Where to see the Dockerfile for a docker image? does not help me because it does not exactly show the Dockerfile but the commands run to create the image. I want the Dockerfile itself.
As far as I know, no, you can't. Because a Dockerfile is used for building the image, it is not packed with the image itself. That means you have to reverse engineer it. You can use docker inspect on an image or container to get some insight into how it is configured. The layers of an image are also visible: since you pull them when you pull a specific image, they are no secret either.
However, you can usually see the Dockerfile in the repository of the image itself on Docker Hub. I can't say most repositories have Dockerfiles attached, but most of the ones I've seen do.
Different repository maintainers may opt for different ways to document their Dockerfiles. You can see a Dockerfile tab on the repository page if automated builds are set up. When multiple parallel versions are available (as for Ubuntu), maintainers usually put links to the Dockerfiles for the different versions in the description. If you take a look at https://hub.docker.com/_/ubuntu/, under "Supported tags" you can see links to a Dockerfile for each respective Ubuntu version.
When images are downloaded from Docker Hub, only the image itself is pulled onto your machine. If you want to see the Dockerfile, go to Docker Hub and search for the image in name:tag format (e.g. ubuntu:14.04); this will open the image page along with the Dockerfile details. Keep in mind that you can only see the Dockerfile if the owner of the image shared it; otherwise not. Most official images will not provide you with a Dockerfile.
Hope it helps!
You can also regenerate the Dockerfile from an image, or use the docker history <image name> command to see what is inside.
check this: Link to answer
TL;DR
So if you have a Docker image that was built from a Dockerfile, you can recover this information (everything except the original FROM command, which is important, I'll grant that. But you can often guess it, especially by entering the container and asking "What OS are you?"). However, the maker of the image could have manual steps that you'd never know about anyway, plus they COULD just export an image and re-import it, and there would be no intermediate images at that point.
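For example, against the hello-world image listed earlier (--no-trunc prevents the per-layer commands from being cut off):
docker history --no-trunc hello-world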
One approach could be to save the image to an image.tar file. Next, extract the file and check whether you can find a Dockerfile in any of the layer directories.
docker image save -o hello.tar hello-world
This will output a hello.tar file: hello.tar is the resulting archive and hello-world is the name of the image you are saving.
After that, extract the archive and explore the image layer directories. You may find a Dockerfile in one of them.
One thing to note, however: if the Dockerfile was excluded via .dockerignore when the image was built, you will not find it with this approach.
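A rough sketch of that extraction, assuming the classic image layout where each layer directory holds a layer.tar (newer Docker versions may lay the archive out as OCI blobs under blobs/sha256 instead):
# Extract the saved image and search every layer's contents
mkdir hello-extracted
tar -xf hello.tar -C hello-extracted
for layer in hello-extracted/*/layer.tar; do
  tar -tf "$layer" | grep -i dockerfile
done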

Docker: how to add a host entry to a generic image available in the Docker repository

A generic selenium/node-firefox Docker image is available in the Docker repository. I need to make changes to the image so that it has our test environment's host entries.
What would be the best approach to do this? Should I just take the source, make the changes, and build my own image?
In terms of maintainability, is it possible to do it in such a way that it always gets the base image and my changes are appended to it to make a new image? If so, how can this be done?
When you run a Docker container, there is an --add-host argument that lets you specify host entries you need to make available to the container. This is similar to updating the /etc/hosts file.
docker run --add-host myserver:192.168.0.100 the-image-name
You don't need to update the source image to accomplish this. If you need to perform customizations to a Docker image beyond what the runtime arguments give you, you can always derive your own Dockerfile from the image (although you should research best practices around deriving images and avoid deeply nested image chains).
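If runtime flags aren't enough, a derived Dockerfile can be as small as this sketch (the latest tag and the copied directory are illustrative assumptions; note that /etc/hosts itself is populated by Docker at runtime, so host entries still belong in --add-host rather than in the image):
Dockerfile
FROM selenium/node-firefox:latest
# Bake extra test configuration into the image
COPY test-config/ /opt/test-config/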
Here is a reference page.

Different images in containers

I want to create separate containers with a single service in each (more or less). I am using the php7-apache image, which seems to use a base image of debian:jessie plus PHP 7 and Apache. Since Apache and PHP are pretty intertwined in this case, I don't mind using this container.
I want to start adding other services in their own containers (git, for example) and was considering using a tiny base image like busybox or alpine for these containers to keep image size down.
That said, I have read that sharing the same base image across containers only incurs the one-time 'penalty' of downloading the base OS (debian:jessie), which is then cached, while using tiny OSes in other containers means downloading those OSes in addition to the base OS.
What is the best practice in this case? Should I use the same base image (debian jessie) for all the containers in this case?
You may want to create a base image from scratch.
From docker documentation
You can use Docker’s reserved, minimal image, scratch, as a starting point for building containers. Using the scratch “image” signals to the build process that you want the next command in the Dockerfile to be the first filesystem layer in your image.
While scratch appears in Docker’s repository on the hub, you can’t pull it, run it, or tag any image with the name scratch. Instead, you can refer to it in your Dockerfile. For example, to create a minimal container using scratch:
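The example on that docs page looks like this (it assumes a statically compiled hello binary sitting in your build context):
Dockerfile
FROM scratch
COPY hello /
CMD ["/hello"]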
This example creates the hello-world image used in the tutorials. If you want to test it out, you can clone the image repo.
