Import Dockerfile from different local directory via FROM - docker

I want to create a multistage build process where each Dockerfile is nested inside its own local directory along with the dependencies that are ADDed in for that Dockerfile. Is there a way to import a Dockerfile from a different local directory with Docker's FROM command, so that I can create several stages in a build?
If not, would I be able to ADD the other stages' Dockerfiles into the current Dockerfile and then use FROM inside the Docker container, deleting each one after it is added and used in FROM?
Maybe I am thinking about multistage builds the wrong way.
Or can I simply run FROM {path/to/docker/locally}? This is not working for me.

If you use Docker 20.10+, you can do this:
# syntax = edrevo/dockerfile-plus
INCLUDE+ {path/to/docker/locally}
RUN ...
The INCLUDE+ instruction is provided by the syntax directive on the first line of the Dockerfile. You can read more about dockerfile-plus at https://github.com/edrevo/dockerfile-plus
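The syntax directive only takes effect when the build runs under BuildKit. If BuildKit isn't already your default builder, you may need to enable it for a single build like this (the image tag is just a placeholder):
DOCKER_BUILDKIT=1 docker build -t my-image .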

Is there a way to import a Dockerfile from a different directory locally, so that I can import it with Docker's FROM command to create several stages in a build?
No. The FROM instruction is used to import pre-built images only; it cannot be used to import a Dockerfile.
If not, would I be able to ADD the other stages' Dockerfiles into the current Dockerfile?
No. The ADD instruction can only copy files from inside the build context (which is usually the current working directory) into the image. You cannot ADD ../something /something.
Or can I simply run FROM {path/to/docker/locally}?
No. But one approach that will work for you is to build the other image first, say its name is first-image:latest, and then use the COPY instruction to copy files from that image:
COPY --from=first-image:latest /some/file /some/file
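Concretely, the workflow is to build the first image and then build the second one, whose Dockerfile contains the COPY --from line above (the ./first path and tags are placeholders for your own layout):
docker build -t first-image:latest ./first
docker build -t second-image:latest .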

The best solution I have found to this so far is to use Dockerfile stages.
Put all this in a single Dockerfile:
FROM php:8.1-fpm AS base
RUN apt-get update && apt-get install -y \
libzip-dev libonig-dev \
libwebp-dev # ...
RUN docker-php-ext-install zip mbstring pdo pdo_mysql gd bcmath sockets opcache soap gmp intl
# Worker
FROM base AS worker
RUN apt-get install -y supervisor
# CLI
FROM base AS cli
RUN apt-get install -yq git unzip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
And then you can use it in docker-compose.yml:
services:
  worker:
    build:
      context: php
      dockerfile: Dockerfile
      target: worker
  cli:
    build:
      context: php
      dockerfile: Dockerfile
      target: cli
This makes the containers build efficiently, and the base stage is built only once.
You can also use the base itself as a target.
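If you need an individual stage outside of docker-compose, the same stages can be built directly with docker build (the tags below are placeholders):
docker build --target worker -t myapp-worker ./php
docker build --target cli -t myapp-cli ./php
docker build --target base -t myapp-base ./php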

If I understand the question correctly, what you are trying to achieve is to build an image and then use the same image and build something (the next »stage«) on top of it.
Docker supports this concept, but not by importing another Dockerfile.
Dockerfiles and the directory they are in are meant to be self-contained, i.e. the only things you need to build the image are the contents of that directory. Therefore, you cannot load other Dockerfiles from other directories.
The way to go would be to build a base image, e.g. myappimage, using a Dockerfile at myappimage/Dockerfile. After you have built it, you can refer to this base image in another Dockerfile (e.g. mytestingimage/Dockerfile) using FROM myappimage to build an image (mytestingimage) on top of myappimage.
Then, mytestingimage is exactly like myappimage but with additional layers added by your ADD and other commands.
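A minimal sketch of that layout, assuming the directory names above (the ADDed path is just a placeholder):
# mytestingimage/Dockerfile
FROM myappimage
ADD test-fixtures /opt/test-fixtures
Then build the images in order:
docker build -t myappimage ./myappimage
docker build -t mytestingimage ./mytestingimage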

Related

How to extend/inherit/join from two separate Dockerfiles, multi-stage builds?

I have a deployment process which I currently achieve via docker-machine and docker-compose. (I have multiple services deployed which are interrelated: one a Django application, and another the resty-auto-ssl Docker image (ref: https://github.com/Valian/docker-nginx-auto-ssl).)
My docker-compose file is something like:
services:
  web:
  nginx:
  postgres:
(N.B. I'm not using postgres in production, that's merely as example).
What I need to do, is to essentially bundle all of this up into one built Docker image.
Each service references a different Dockerfile base, one for the Django application:
FROM python:3.7.2-slim
RUN apt-get update && apt-get -y install cron && apt-get -y install nano
ENV PYTHONUNBUFFERED 1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /usr/src/app
RUN ["chmod", "+x", "/usr/src/app/init.sh"]
And one for the valian/docker-nginx-auto-ssl image:
FROM valian/docker-nginx-auto-ssl
COPY nginx.conf /usr/local/openresty/nginx/conf/
I assume theoretically I could somehow join these two Dockerfiles into one? Would this be a case of utilising multi-stage Docker builds (https://docs.docker.com/v17.09/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds) to be used in a joined docker-compose service?
I don't believe you can join images; a Docker image is like a VM hard disk, and it would be like saying you want to join two hard disk images together. These images may even be different versions of Linux, and now even Windows. If you want one single image, you could build one yourself by starting off with a base image like Alpine Linux and then installing all the dependencies you want.
But the good news is that you can get the source for the images you use, so all the hard work of deciding what to put in your Dockerfile is done for you.
e.g. For the Python bit -> https://github.com/jfloff/alpine-python
And then for nginx-auto -> https://github.com/Valian/docker-nginx-auto-ssl
Because nginx-auto-ssl is based on alpine-fat, I would suggest using that one as the starting point, then take the details from both Dockerfiles and append them to each other.
Once you have created this image you can then use it again and again. So although it might be a pain setting up initially, it pays dividends later.
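A rough sketch of such an appended Dockerfile (the package names and paths below are assumptions, not taken from the Dockerfiles above):
FROM valian/docker-nginx-auto-ssl
# nginx config from the second Dockerfile
COPY nginx.conf /usr/local/openresty/nginx/conf/
# Django dependencies from the first Dockerfile, adapted to Alpine's apk
RUN apk add --no-cache python3 py3-pip
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
Note that you give up per-service process isolation by doing this; the single image has to run both nginx and the Django application.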

Docker image for built Golang binary

I have a Go application that I build into a binary and distribute as a Docker image.
Currently, I'm using ubuntu as my base image, but this causes an issue: if a user tries to use a timezone other than UTC or their local timezone, they get an error stating:
pod error: panic: open /usr/local/go/lib/time/zoneinfo.zip: no such file or directory
This error is caused because the LoadLocation function in Go's time package requires that file.
I can think of two ways to fix this issue:
Continue using the ubuntu base image, but in my Dockerfile add the command: RUN apt-get install -y tzdata
Use one of Golang's base images, e.g. golang:1.7.5-alpine.
What would be the recommended way? I'm not sure if I need to or should be using a Golang image since this is the container where the pre-built binary runs. My understanding is that Golang images are good for building the binary in the first place.
I prefer to use a multi-stage build. In the first stage you use a dedicated golang build container to install all the dependencies and build the application. In the second stage you copy the binary into an empty alpine container. This gives you all the required tooling and a minimal Docker image at the same time (in my case 6MB instead of 280MB).
Example of Dockerfile:
# build stage
FROM golang:1.8
ADD . /src
RUN set -x && \
cd /src && \
go get -t -v github.com/lisitsky/go-site-search-string && \
CGO_ENABLED=0 GOOS=linux go build -a -o goapp
# final stage
FROM alpine
WORKDIR /app
COPY --from=0 /src/goapp /app/
ENTRYPOINT /app/goapp
EXPOSE 8080
Since not all OS images have timezone data installed, this is what I did to make it work:
ADD https://github.com/golang/go/raw/master/lib/time/zoneinfo.zip /usr/local/go/lib/time/zoneinfo.zip
The full example of a multi-stage Docker image that adds timezone data is here
This is more of a vote, but apt-get is what we (my company's tech group) do in situations like this. It gives us complete control over the hierarchy of images, but this is assuming you may have future images based on this one.
You can use the system's tzdata. Or you can copy $GOROOT/lib/time/zoneinfo.zip into your image, which is a trimmed version of the system one.
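For example, in the final stage of the multi-stage Dockerfile above, you could copy the trimmed database out of the build stage. This is a sketch; the path assumes the golang base image's default GOROOT of /usr/local/go:
FROM alpine
WORKDIR /app
COPY --from=0 /src/goapp /app/
# copy Go's trimmed zoneinfo database so time.LoadLocation works without system tzdata
COPY --from=0 /usr/local/go/lib/time/zoneinfo.zip /usr/local/go/lib/time/zoneinfo.zip
ENTRYPOINT /app/goapp
EXPOSE 8080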

Best way to extend Dockerfile to pull deps/env vars from common/shared repositories

We have application repositories that depend on common shared repos. Our application repos contain Dockerfiles. Whenever the common/shared repos change and depend on other libraries or env vars, we want those common/shared repos to contain Dockerfiles as well, and the Dockerfile in the application repo will include them so that any dependency/env changes are pulled in from the common/shared repos.
After googling for "docker include another dockerfile" I found the GitHub issue https://github.com/docker/docker/issues/735, which is exactly what we are looking for, but the issue doesn't provide a clear solution. Is there a best way to achieve this as of now? Thanks
The simple answer is to create your own base image. Any common code goes in your base image, and the other Dockerfiles refer to this image in their FROM line.
Dockerfile for base image:
# pick an image to work off of
FROM debian:latest
# do your common stuff here
Then run docker build -t mybase:latest .
Now in the other images you want to create, they have the following Dockerfile:
FROM mybase:latest
# do your non-common stuff here
Here's a way, but it might not cover everything you're looking for. If the common repo has a shell script that installs dependencies and sets environment variables, then you can invoke that shell script while building the image, by copying it into the image and running it.
Say, for example, you have a file called env_script.sh in your common_repo that looks like this:
#!/bin/bash
apt install -y libpng-dev libfreetype6-dev pkg-config
pip install flask
export PYTHONPATH="${PYTHONPATH}:/usr/local/my_"
then the Dockerfile of your application would use it as below:
# Copy the shell script
COPY ./common_repo/env_script.sh /tmp/
# Run the shell script
RUN /tmp/env_script.sh
# Remove the temporary file when done
RUN rm /tmp/env_script.sh
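Note that an export inside a RUN step only affects that single build step; it does not persist into later layers or into the running container. If the shared variables need to survive into the final image, they have to be declared with ENV in the application's Dockerfile, for example (the value is just a placeholder):
ENV PYTHONPATH="${PYTHONPATH}:/usr/local/my_lib"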

Docker container with build output and no source

I have a build process that converts TypeScript into JavaScript, minifies and concatenates CSS files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
1. Create a Docker image with all the tools required to build your code; it should also be able to clone the code and build it. After the build, it has to copy the build output into a Docker volume, for example a volume mounted at /opt/webapp.
2. Launch a build container using the build image from step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
3. Launch an nginx container that uses the build container's shared volume, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
4. After building and shipping your built code to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if the Appserver container fails or stops, the build output is safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't need to install the required tools every time you launch a build container.
You could also build your code on the host machine, but building in a container gives you a fresh environment every time; building/compiling on the same host machine every time can lead to problems from stale source code, git clone errors, etc.
EDIT:
You can append :ro (read only) to the volume mount so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
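For example, the Appserver container above could mount the shared volume read-only like this:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name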
The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app/
CMD java -jar /app/result.jar
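Building this Dockerfile with a plain docker build produces an image containing only the final stage; the maven stage is used during the build but is not part of the shipped image (the tag below is a placeholder):
docker build -t myapp .
docker run --rm myapp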
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup step into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
&& git clone https://github.com/myproj \
&& cd myproj \
&& make \
&& make install \
&& cd .. \
&& apt-get remove -y tools && apt-get clean \
&& rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile to run your build process, then run cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker-compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"

How do I dockerize an existing application...the basics

I am using Windows and have boot2docker installed. I've downloaded images from Docker Hub and run basic commands. BUT
How do I take an existing application sitting on my local machine (let's just say it has one file, index.php, for simplicity), put it into a Docker image, and run it?
Imagine you have the following existing Python 2 application "hello.py" with the following content:
print "hello"
You have to do the following things to dockerize this application:
Create a folder where you'd like to store your Dockerfile in.
Create a file named "Dockerfile"
The Dockerfile consists of several parts which you have to define as described below:
Like a VM, an image has an operating system. In this example, I use ubuntu 16.04. Thus, the first part of the Dockerfile is:
FROM ubuntu:16.04
Imagine you have a fresh Ubuntu - VM, now you have to install some things to get your application working, right? This is done by the next part of the Dockerfile:
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
For Docker, you have to create a working directory now in the image. The commands that you want to execute later on to start your application will search for files (like in our case the python file) in this directory. Thus, the next part of the Dockerfile creates a directory and defines this as the working directory:
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
As a next step, you copy the content of the folder where the Dockerfile is stored into the image. In our example, the hello.py file is copied to the directory we created in the step above.
COPY . /usr/src/app
Finally, the following line executes the command "python hello.py" in your image:
CMD [ "python", "hello.py" ]
The complete Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "python", "hello.py" ]
Save the file and build the image by typing in the terminal:
$ docker build -t hello .
This will take some time. Afterwards, check that the image "hello", as we named it in the build command, has been built successfully:
$ docker images
Run the image:
docker run hello
The output should be "hello" in the terminal.
This is a first start. When you use Docker for web applications, you have to configure ports etc.
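For example (the image name and port numbers here are hypothetical), a web application listening on port 80 inside the container would declare EXPOSE 80 in its Dockerfile and be published on the host with:
docker run -p 8080:80 mywebapp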
Your index.php is not really an application. The application is your Apache or nginx or even PHP's own server.
Because Docker uses features not available in the Windows core, you are running it inside an actual virtual machine. The only purpose for that would be training or preparing images for your real server environment.
There are two main concepts you need to understand for Docker: Images and Containers.
An image is a template composed of layers. Each layer contains only the differences from the previous layer, plus some metadata. Each layer is in fact an image. You should always build your image from an existing base, using the FROM directive in the Dockerfile (reference docs at time of edit; Jan Vladimir Mostert's link is now a 404).
A container is an instance of an image that has run or is currently running. When creating a container (a.k.a. running an image), you can map an internal directory from it to the outside. If there are files in both locations, the external directory overrides the one inside the image, but those files are not lost. To recover them, you can commit a container to an image (preferably after stopping it), then launch a new container from the new image, without mapping that directory.
You'll need to build a Docker image first, using a Dockerfile. You'd probably set up Apache on it, tell the Dockerfile to copy your index.php file into Apache's web root, and expose a port.
See http://docs.docker.com/reference/builder/
See my other question for an example of a docker file:
Switching users inside Docker image to a non-root user (this is for copying over a .war file into tomcat, similar to copying a .php file into apache)
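As a rough sketch (the php:apache base image and paths are assumptions, not taken from the answers above), a minimal Dockerfile for a single index.php could look like this:
FROM php:apache
# copy the application file into Apache's default web root
COPY index.php /var/www/html/
EXPOSE 80
Build and run it with something like docker build -t myphpapp . followed by docker run -p 8080:80 myphpapp.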
First off, you need to choose a platform to run your application on (for instance, Ubuntu). Then install all the system tools/libraries necessary to run your application; this can be done through the Dockerfile. Then push the Dockerfile and app to Git or Bitbucket. Later, you can auto-build the image on Docker Hub from GitHub or Bitbucket. The later part of the tutorial here has more on that. If you know the basics, just fast forward to 50:00.
