Get dockerfile / docker commands from docker image - docker

Is it possible to get back the docker commands which were run to produce a given docker image? Since each line of a Dockerfile should map to a single layer, it seems this would be possible, but I don't see anything in the docs.

docker history <image>
Does pretty much that.

Is it possible to get back the docker commands which were run to produce a given docker image?
No. Consider that you have commands like docker export / docker import, which allow you to flatten an image:
docker export <containerID> | docker import - <imagename>
The resulting image would be built from a container and include only one layer. Not even docker history would be able to give clues as to the original images and their Dockerfiles which were part of the original container.

You can use combinations of two docker commands to achieve what you want:
docker inspect <image>
and
docker history <image>
Or you can use this cool service to see how that image was generated; each layer corresponds to a command in the Dockerfile:
https://imagelayers.io/?images=java:latest,golang:latest,node:latest,python:latest,php:latest,ruby:latest

I guess it depends on where you got the image from.
In the case of these docker containers of mine from the Docker Hub, you can use the link on the right-hand side of the webpage to follow it to the GitHub repo containing the Dockerfile(s).
I do not think there is a command to "unassemble" a container / image and get back the instructions which made it.

For the images you create, image metadata (labels) can be used to store the Dockerfile:
https://docs.docker.com/engine/userguide/labels-custom-metadata/
The initial solution was proposed here: https://speakerdeck.com/garethr/managing-container-configuration-with-metadata
That approach of storing the Dockerfile is not very efficient: it requires the container to be started in order to extract the Dockerfile.
I personally use a different approach: encode the Dockerfile with Base64 and pass the encoded string as an external build argument to set an image label. This way you can read the content of the Dockerfile directly from the image using docker inspect.
You can find detailed example here: https://gist.github.com/lruslan/3dea3b3d52a66531b2a1
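A minimal sketch of that approach (the build-arg name DOCKERFILE_B64 and the label name dockerfile are placeholders for illustration, not taken from the gist):

```shell
# sample Dockerfile to encode (stand-in for your real one)
printf 'FROM alpine\nCMD ["sh"]\n' > Dockerfile

# encode it as a single Base64 line (-w0 disables line wrapping; GNU coreutils)
b64=$(base64 -w0 Dockerfile)

# decoding recovers the original content
printf '%s' "$b64" | base64 -d

# at build time you would then pass it in and store it as a label (not run here):
#   docker build --build-arg DOCKERFILE_B64="$b64" -t myimage .
# with the Dockerfile containing:
#   ARG DOCKERFILE_B64
#   LABEL dockerfile=$DOCKERFILE_B64
# and read it back later without starting a container:
#   docker inspect --format '{{ index .Config.Labels "dockerfile" }}' myimage | base64 -d
```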

How can I use the containers offered by webdevops/*?

I'm learning about Docker containers and found this repo with a lot of images and references. Can anyone help me understand how to use those images?
I know the docker run --rm command.
With Docker you first need a Docker image. A Docker image is a representation of an application that Docker can understand and run.
The most common ways to get one are to use docker pull or to generate yours with docker build.
You can check the images you have with docker images.
Once you have your image you can run it with docker run MyImage; this will create a container, which is a running application.
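For instance, with a minimal made-up Dockerfile like this in the current directory:

```dockerfile
# smallest useful image: start from Alpine and print a message
FROM alpine:3.19
CMD ["echo", "hello from my image"]
```

you would build it with docker build -t myimage . and then docker run --rm myimage would create the container, print the message, and remove the container afterwards (that is what the --rm flag does). Note that image tags must be lowercase.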

Is it possible to specify a custom Dockerfile for docker run?

I have searched high and low for an answer to this question. Perhaps it's not possible!
I have some Dockerfiles in directories such as dev/Dockerfile and live/Dockerfile. I cannot find a way to provide these custom Dockerfiles to docker run. docker build has the -f option, but I cannot find a corresponding option for docker run when working from the root of the app. At the moment I think I will write my npm/gulp script to simply change into those directories, but this is clearly not ideal.
Any ideas?
You can't - that's not how Docker works:
The Dockerfile is used with docker build to build an image
The resulting image is used with docker run to run a container
If you need to make changes at run-time, then you need to modify your base image so that it can take, e.g. a command-line option to docker run, or a configuration file as a mount.
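As a sketch of that last point, a single image can read its configuration at run time instead of baking the dev/live differences into separate Dockerfiles (the APP_ENV variable and paths here are made up for illustration):

```dockerfile
FROM alpine:3.19
# default environment; override per container with: docker run -e APP_ENV=live ...
ENV APP_ENV=dev
# the app would read /etc/app/config.yml, which you mount at run time:
#   docker run -v "$PWD/live/config.yml:/etc/app/config.yml" myimage
CMD ["sh", "-c", "echo running in $APP_ENV mode"]
```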

How can I find the images being pulled from the SHA1s?

When we run a docker container, if the relevant image is not in the local repo it is downloaded, but in a specific sequence, i.e. parent images etc.
If I don't know anything about the image, how can I find which images it is based on from the layers pulled, as displayed during a docker run?
The output of any docker run etc. only shows the SHA1s.
AFAIK you can't; there is no reverse function for a hash.
Docker first tries to get the image locally; when it's not available it fetches it from the registry. The default registry is Docker Hub.
When you don't specify a tag when running the container, i.e. docker run ubuntu instead of docker run ubuntu:16.04, the default latest is used. You'll have to visit the registry and check which version the latest tag is pointing to.
Usually on Docker Hub there is a link pointing to the GitHub repo where you can find the Dockerfile; in the Dockerfile you can see how the image is built, including the root image.
You can also get some extra info with docker image inspect image:tag, but you'll find more hashes in the layers.
Take a look at dockerfile-from-image:
"Similar to how the docker history command works, the dockerfile-from-image script is able to re-create the Dockerfile (approximately) that was used to generate an image using the metadata that Docker stores alongside each image layer."
With this, maybe you can get the source of the image.
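The core of what such a script does can be sketched in a few lines of shell: docker history records each layer's creating command, with metadata instructions prefixed by #(nop) and RUN steps wrapped in /bin/sh -c (the sample strings below are typical history entries, not taken from a real image):

```shell
# turn docker-history "CreatedBy" strings back into approximate Dockerfile lines:
# strip the "#(nop)" marker from metadata instructions (CMD, LABEL, ...)
# and rewrite plain "/bin/sh -c ..." entries as RUN steps
printf '%s\n' \
  '/bin/sh -c #(nop)  CMD ["bash"]' \
  '/bin/sh -c apt-get update && apt-get install -y curl' |
sed -e 's|^/bin/sh -c #(nop) *||' -e 's|^/bin/sh -c \(.*\)|RUN \1|'
```

This prints `CMD ["bash"]` and `RUN apt-get update && apt-get install -y curl`, which is the "approximate" reconstruction the quote above describes.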

How can I pull a Docker image in a Dockerfile?

I have a very simple system consisting of two containers, and I can successfully orchestrate them on my local machine with docker compose. I would like to put this system in a single VM in the cloud and allow others to easily do the same.
Because my preferred cloud provider provides easy access to a container OS, I would like to fit this system in a single container for easy distribution and deployment. I don't think I'm doing anything that runs into the known difficulties with this, so I was hoping to use a Docker-in-Docker setup and make a single composite image that runs docker compose to bring up my two containers, just like on my local machine.
But, when I try to add
RUN docker pull my/image1
RUN docker pull my/image2
to the composite Dockerfile that extends the Docker image, those commands fail upon build because the Docker daemon is not running.
What I'm trying to accomplish here is to pull the two sub-images into my composite image at build time to minimize startup time of the composite image. Is there a way to do that?
There is a way to do this, but it is probably a bad idea.
Use docker-machine to create a docker-machine instance.
Use docker-machine env to get the credentials for your newly created docker-machine instance. These will be a couple of environment variables.
Add something like ARG DOCKER_HOST="tcp://172.16.62.130:2376" for each of the credentials created in the previous step. Put them in your Dockerfile before the first RUN docker ....
After the last ARG but before the first RUN docker ..., add ENV DOCKER_HOST=${DOCKER_HOST} and likewise for the other credential variables.
This should enable the docker pull to work, but it does not really solve your problem because the pull happens on the docker-machine and does not get captured in the docker image.
To get your desired effect you would need to additionally have
RUN docker save ... to export the pulled images to tar archive files in the image.
Then you would have to add corresponding logic using docker load ... to import the tar archive files at startup.
The bottom line is that you can do this, but you probably should not. I don't think it will save you any time. It will probably cost you time.
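A rough sketch of what that composite image could look like (the dind base image tag, file names, and the startup wrapper are all assumptions, not a tested setup):

```dockerfile
FROM docker:dind
# archives produced beforehand on a machine with a running daemon:
#   docker save my/image1 -o image1.tar
#   docker save my/image2 -o image2.tar
COPY image1.tar image2.tar /preloaded/
# a startup wrapper script would then run, once the inner daemon is up:
#   docker load -i /preloaded/image1.tar
#   docker load -i /preloaded/image2.tar
```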

Where to see the Dockerfile for a docker image?

Is there a way to see the Dockerfile that generated an image I downloaded, to use as a template for my own docker images?
Use
docker history --no-trunc IMAGE_NAME_OR_ID
This will show all commands run in the image-building process, in reverse order. It's not exactly a Dockerfile, but you can find all the essential content.
