Dockerfile with entrypoint only from base image - docker

I have a very simple Dockerfile like the one below:
FROM my-base-image
COPY abc.properties /opt/conf/
Now my base image has an ENTRYPOINT (at the end of its Dockerfile), but this resulting image, as you can see, has none. Does this work, or does every Dockerfile need its own ENTRYPOINT/CMD? Also, what is the order of execution for the COPY instruction in the resulting image? Since this Dockerfile has no entrypoint, it will use the one from the base image, but will that be executed after the COPY instruction, or will the base image's entrypoint be executed first and the COPY instruction run when the container starts?
I'm just looking to understand the Docker concepts here.

Several of the Dockerfile directives (notably ENTRYPOINT and CMD, but also EXPOSE, LABEL, and MAINTAINER) just set metadata in the image; they don't really do anything themselves. Within a single Dockerfile this will work just fine:
FROM ubuntu:18.04
WORKDIR /app
# Just remembers this in the image metadata; doesn't actually run it
CMD ["/app/main.sh"]
# ...we should actually copy the file in too
COPY main.sh /app
When you have one Dockerfile built FROM another image it acts almost entirely like you ran all of the commands in the first Dockerfile, then all of the commands in the second Dockerfile. Since CMD and ENTRYPOINT just set metadata, the second image inherits this metadata.
Building and running an image are two separate steps. In the example you show, the COPY directive happens during the docker build step, and the base image's command doesn't take effect until the later docker run step. (This is also true in Docker Compose; a common question is about why Dockerfile steps can't connect to other containers declared in the Compose YAML file.)
There is one exception, and it's around ENTRYPOINT. If you have a base image that declares both an ENTRYPOINT and a CMD, and you redeclare the ENTRYPOINT in a derived image, it resets the CMD as well (this is noted at the very end of the ENTRYPOINT section of the Dockerfile reference). This usually isn't a practical problem.
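A minimal sketch of that exception, with hypothetical image and script names:

```dockerfile
# Base image: declares both an ENTRYPOINT and a CMD
FROM ubuntu:18.04
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["start-server"]
```

```dockerfile
# Derived image: redeclaring ENTRYPOINT resets the inherited CMD to empty
FROM my-base-image
ENTRYPOINT ["/other-entrypoint.sh"]
# If you still want a default argument, you must redeclare CMD here
CMD ["start-server"]
```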

When you build an image, the instructions from the base image's Dockerfile and your current Dockerfile are effectively combined; building an image does not run it. Since the entrypoint is declared in the base Dockerfile, that is what will be executed when you start the image in a container with docker run.
When you build your image, the COPY statement from your child Dockerfile will also be applied, and your image should build fine.
Run docker build and docker run and let us know.

Related

how ONBUILD docker instruction works

I have two Dockerfiles:
Dockerfile1
FROM centos:centos7
WORKDIR /root
ONBUILD COPY ./onbuilddemo.txt /tmp/onbuilddemo.txt
Dockerfile2
FROM onbuilddemo:latest
FROM adoptopenjdk/openjdk8:jre8u352-b05-ea-ubuntu-nightly
EXPOSE 8080
WORKDIR /root
CMD ["npm", "start"]
The image created out of dockerfile1 is onbuilddemo:latest
Now, when I run a container from the image built from Dockerfile2, I don't see the file (onbuilddemo.txt) created or available in the /tmp folder.
Can someone please help me figure out what I'm missing? Thanks.
You never used the onbuilddemo:latest image for anything, and if built with BuildKit, this first stage would be skipped entirely:
FROM onbuilddemo:latest
FROM adoptopenjdk/openjdk8:jre8u352-b05-ea-ubuntu-nightly
A multi-stage build is used to separate build dependencies from the runtime image. It does not merge multiple images together (there's no way to do this universally with arbitrary Linux filesystems; it would break a lot of use cases).
You need to remove the second FROM step, copy the file from the first stage to the second (using COPY --from), or add the ONBUILD definition to the other base image.
Note that ONBUILD tends to be a bad idea: it's hard to debug and is rarely documented in the places someone looks to understand the behavior of a build. If you can't run the steps in an entrypoint, consider templating the Dockerfile so that it's clear exactly what the build is doing.
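As a sketch of the second option, assuming onbuilddemo.txt is present in the build context so the ONBUILD trigger can fire in the first stage:

```dockerfile
FROM onbuilddemo:latest AS source
FROM adoptopenjdk/openjdk8:jre8u352-b05-ea-ubuntu-nightly
# Pull the file created by the ONBUILD trigger out of the first stage
COPY --from=source /tmp/onbuilddemo.txt /tmp/onbuilddemo.txt
EXPOSE 8080
WORKDIR /root
CMD ["npm", "start"]
```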

How does entrypoint.sh file run in a Docker?

I'm working with a Debian base image that comes with an entrypoint.sh file stored in /bin by default.
If I don't define ENTRYPOINT in a Dockerfile, and I do a docker run, I see that entrypoint.sh runs automatically executing a bunch of commands.
If I define an ENTRYPOINT in a Dockerfile, and I do a docker run, I see that entrypoint.sh never runs and the command defined in ENTRYPOINT takes precedence.
My question is, what triggers entrypoint.sh to run? Is this a default behavior of docker?
I solved the problem by looking at the base image: its ENTRYPOINT was what ran entrypoint.sh.
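If you run into this, one way to see what a base image will run is to inspect its metadata (the image name here is a placeholder):

```
docker inspect --format '{{json .Config.Entrypoint}}' my-base-image
```

This prints the ENTRYPOINT stored in the image's configuration, which is what docker run executes when you don't override it.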

Can we have a docker container without a shell?

Can we build a Docker container that doesn't have any shell in it? Is it possible to create a container without a shell?
Yes, you can create an image FROM scratch, which contains nothing at all, not even bash. It will only contain the binaries you copy in at build time; otherwise it is empty.
FROM scratch
COPY hello /
CMD ["/hello"]
You can use Docker’s reserved, minimal image, scratch, as a starting point for building containers. Using the scratch “image” signals to the build process that you want the next command in the Dockerfile to be the first filesystem layer in your image.
While scratch appears in Docker’s repository on the hub, you can’t pull it, run it, or tag any image with the name scratch. Instead, you can refer to it in your Dockerfile, as in the minimal example above.
Using this as a base image, you can create your own custom image; for example, if you want only the Node runtime and nothing more, you can try scratch-node:
FROM node as builder
WORKDIR /app
COPY package.json package-lock.json index.js ./
RUN npm install --prod
FROM astefanutti/scratch-node
COPY --from=builder /app /
ENTRYPOINT ["node", "index.js"]

Can i reference a Dockerfile in a Dockerfile?

I have a Dockerfile that creates the build image I want to use, located at ~/build/Dockerfile; then I use a standard image to deploy.
The image built from ~/build/Dockerfile is not published anywhere. I know I could simply copy-paste one Dockerfile into the other, but it would be better if I could simply reference it. So:
Is it possible to somehow reference the Dockerfile itself when deploying?
like so:
FROM [insert something that creates an image using ~/build/Dockerfile] as build-env
... build operations ....
FROM some-image
COPY --from=build-env /built .
ENTRYPOINT [blah]
This won't work but is there some other way to accomplish this?
No, you can't do it directly, because FROM requires an image, not a Dockerfile.
Change the COPY line to
COPY --from=step1 /built .
And write a script to build your image:
cd path1
docker build -t step1 .
cd path2
docker build -t final_image .
(If you don't want to hard-code step1 in the Dockerfile, replace it with a variable and pass it in with ARG.)
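A sketch of that variable approach (ARG before FROM is supported; the names here are illustrative):

```dockerfile
# Build with: docker build --build-arg BUILD_IMAGE=step1 -t final_image .
ARG BUILD_IMAGE=step1
FROM ${BUILD_IMAGE} AS build-env
FROM some-image
COPY --from=build-env /built .
```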
Generally things in Docker space like the docker run command and the FROM directive will use a local image if it exists; it doesn't need to be pushed to a repository. That means you can build your first image and refer to it in the later Dockerfile by name. (There's no way to refer to the other Dockerfile per se.)
Newer versions of Docker have an extended form of the Dockerfile COPY command that accepts a flag --from=<name|index>. If a build stage with the specified name can't be found, an image with the same name is used instead.
So if, ahead of time, you run
docker build -t build-env ~/build
then the exact syntax you show in your proposed Dockerfile will work
FROM some-image
COPY --from=build-env /built .
and it doesn't matter that the intermediate build image isn't actually pushed anywhere.

Which .sh will run first: base image or the .sh present in the Dockerfile?

Suppose I have a base image:
FROM ubuntu:trusty
.
.
COPY ./temp1.sh /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["start-service"]
I called the image on my Dockerfile
FROM letsdoit/baseimage
.
.
COPY ./temp2.sh /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["start-service"]
So what will be the order of execution?
Your settings for ENTRYPOINT and CMD overwrite the ones of the base image. This means the base script will not run at all.
Furthermore, you even have overwritten the /sbin/entrypoint.sh file with your own version. So the original script is not even available in the file system of the container.
When you launch the image, unless you change settings on the docker run command, it will run (only) /sbin/entrypoint.sh, and that will receive a single command-line argument start-service.
The CMD and ENTRYPOINT in the derived image totally overwrite the corresponding settings in the base image. A container only runs one command and that’s the most recent ENTRYPOINT (or the docker run --entrypoint if you manually set that).
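For example, the entrypoint can still be overridden at run time (the image name is illustrative):

```
# Ignore the image's ENTRYPOINT and run a shell instead
docker run --rm --entrypoint /bin/sh my-derived-image -c 'echo hello'
```

With --entrypoint, the arguments after the image name become the CMD passed to the new entrypoint.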
