Docker multistage doesn't call entrypoint

I have a grails app running in Docker, and I was trying to add the Apache Derby server to run in the same image using Docker multi stage. But when I add Derby, then the grails app doesn't run.
So I started with this:
$ cat build/docker/Dockerfile
FROM azul/zulu-openjdk:13.0.3
EXPOSE 8080
VOLUME ["/AppData/derby"]
WORKDIR /app
COPY holder-0.1.jar application.jar
COPY app-entrypoint.sh app-entrypoint.sh
RUN chmod +x app-entrypoint.sh
RUN apt-get update && apt-get install -y dos2unix && dos2unix app-entrypoint.sh
ENTRYPOINT ["/app/app-entrypoint.sh"]
So far, so good: this starts off Grails as a web server, and I can connect to the web app. But then I added Derby....
FROM azul/zulu-openjdk:13.0.3
EXPOSE 8080
VOLUME ["/AppData/derby"]
WORKDIR /app
COPY holder-0.1.jar application.jar
COPY app-entrypoint.sh app-entrypoint.sh
RUN chmod +x app-entrypoint.sh
RUN apt-get update && apt-get install -y dos2unix && dos2unix app-entrypoint.sh
ENTRYPOINT ["/app/app-entrypoint.sh"]
FROM datagrip/derby-server
WORKDIR /derby
Now when I start the container, Derby runs, but the grails app doesn't run at all. This is obvious from what is printed on the terminal, but I also logged in and did a ps aux to verify it.
Now I suppose I could look into creating my own startup script to start the Derby server, although this would seem to violate the independence of the two images' configurations.
Other people might say I should use two containers, but I was hoping to keep it simple; Derby is a very simple database, and I don't feel the need for that complexity here.
Am I just trying to push the concept of multi-stage Docker builds too far?
Is it actually normal at all for Docker containers to have more than one process start up? Will I have to fudge it and come up with my own entrypoint that starts the Derby server in the background before starting Grails in the foreground? Or is this all just wrong, and should I really be using multiple containers?

It is technically possible for a Docker container to run multiple processes, but the idiomatic concept is the opposite: one container, one process. Running the database in a separate container is certainly how it should be done.
Now the problem with your Dockerfile is that once you declare a second FROM, you effectively discard most of what you've done so far. You may copy files from a previous stage (this is normally used to pull in binaries built earlier), but Docker will not do that for you unless you explicitly state what to copy with COPY --from. Thus your actual entrypoint is the one declared in the datagrip/derby-server image.
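For illustration only, this is roughly what an explicit copy between stages looks like; only the last FROM produces the final image, and anything you want from an earlier stage has to be pulled in by hand (a sketch, not a recommendation to merge the two services):
# Stage 0: the Grails app image, used only as a source of files
FROM azul/zulu-openjdk:13.0.3 AS app
WORKDIR /app
COPY holder-0.1.jar application.jar

# Final stage: this is the only image the build produces
FROM datagrip/derby-server
WORKDIR /derby
# Nothing from the first stage exists here unless copied explicitly
COPY --from=app /app/application.jar /derby/application.jar
# The entrypoint is still the one defined by datagrip/derby-server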
I suggest you get started with docker-compose. It's a nice tool to run several containers without complications. With a file like this:
version: "3.0"
services:
app:
build:
context: .
database:
image: datagrip/derby-server
docker-compose will build an image for the app (if the Dockerfile is in the same directory, but this can be customised) and start the database as well. The database can be accessed from the application container simply as 'database' (it is a resolvable hostname). See this reference for more options.
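To bring both services up, build and start them together; inside the app container the Derby server is then reachable by its service name (a sketch; the database name and Derby's default port 1527 are assumptions):
# Build the app image and start both containers
docker-compose up --build

# From the Grails app, a Derby network JDBC URL would then look like:
#   jdbc:derby://database:1527/AppData;create=true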

Related

Docker: Best practices for installing dependencies - Dockerfile or ENTRYPOINT?

Being relatively new to Docker development, I've seen a few different ways that apps and dependencies are installed.
For example, in the official Wordpress image, the WP source is downloaded in the Dockerfile and extracted into /usr/src and then this is installed to /var/www/html in the entrypoint script.
Other images download and install the source in the Dockerfile, meaning the entrypoint just deals with config issues.
Either way the source scripts have to be updated if a new version of the source is available, so one way versus the other doesn't seem to make updating for a new version any more efficient.
What are the pros and cons of each approach? Is one recommended over the other for any specific sorts of setup?
Generally you should install application code and dependencies exclusively in the Dockerfile. The image entrypoint should never download or install anything.
This approach is simpler (you often don't need an ENTRYPOINT line at all) and more reproducible. You might run across some setups that run commands like npm install in their entrypoint script; this work will be repeated every time the container runs, and the container won't start up if the network is unreachable. Installing dependencies in the Dockerfile only happens once (and generally can be cached across image rebuilds) and makes the image self-contained.
The Docker Hub wordpress image is unusual in that the underlying Wordpress libraries, the custom PHP application, and the application data are all stored in the same directory tree, and it's typical to use a volume mount for that application tree. Its entrypoint script looks for a wp-includes/index.php file inside the application source tree, and if it's not there it copies it in. That's a particularly complex entrypoint script.
A generally useful pattern is to keep an application's data somewhere separate from the application source tree. If you're installing a framework, install it as a library using the host application's ordinary dependency system (for example, list it in a Node package.json file rather than trying to include it in a base image). This is good practice in general; in Docker it specifically lets you mount a volume on the data directory and not disturb the application.
For a typical Node application, for example, you might install the application and its dependencies in a Dockerfile, and not have an ENTRYPOINT declared at all:
FROM node:14
WORKDIR /app
# Install the dependencies
COPY package.json yarn.lock ./
RUN yarn install
# Install everything else
COPY . ./
# Point at some other data directory
RUN mkdir /data
ENV DATA_DIR=/data
# Application code can look at process.env.DATA_DIR
# Usual application metadata
EXPOSE 3000
CMD yarn start
...and then run this with a volume mounted for the data directory, leaving the application code intact:
docker build -t my-image .
docker volume create my-data
docker run -p 3000:3000 -d -v my-data:/data my-image
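The same run configuration can also be kept in a docker-compose.yml if you prefer (a sketch; the service and volume names are illustrative):
version: "3"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - my-data:/data
volumes:
  my-data: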

How to extend/inherit/join from two separate Dockerfiles, multi-stage builds?

I have a deployment process which I currently achieve via docker-machine and docker-compose. (I have multiple interrelated services deployed: one is a Django application, and another is the resty-auto-ssl Docker image; ref: https://github.com/Valian/docker-nginx-auto-ssl.)
My docker-compose file is something like:
services:
  web:
  nginx:
  postgres:
(N.B. I'm not using postgres in production; that's merely an example.)
What I need to do, is to essentially bundle all of this up into one built Docker image.
Each service references a different Dockerfile base, one for the Django application:
FROM python:3.7.2-slim
RUN apt-get update && apt-get -y install cron && apt-get -y install nano
ENV PYTHONUNBUFFERED 1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /usr/src/app
RUN ["chmod", "+x", "/usr/src/app/init.sh"]
And one for the valian/docker-nginx-auto-ssl image:
FROM valian/docker-nginx-auto-ssl
COPY nginx.conf /usr/local/openresty/nginx/conf/
I assume that theoretically I could somehow join these two Dockerfiles into one? Would this be a case of utilising multi-stage Docker builds (https://docs.docker.com/v17.09/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds), combined into a single docker-compose service?
I don't believe you can join images. A Docker image is like a VM hard disk; it would be like saying you want to join two hard disk images together. The images may even be based on different versions of Linux, or now even Windows. If you want a single image, you could build one yourself by starting from a base image like Alpine Linux and then installing all the dependencies you want.
But the good news is that you can get the source for the images you use in your Dockerfile, so all the hard work of deciding what to put in your image has been done for you.
eg. For the python bit -> https://github.com/jfloff/alpine-python
And then for nginx-auto -> https://github.com/Valian/docker-nginx-auto-ssl
Because nginx-auto-ssl is based on alpine-fat, I would suggest using that one as the base, and then take the details from both Dockerfiles and append them to each other.
Once you have created this image you can use it again and again. So although it might be a pain to set up initially, it pays dividends later.
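A rough sketch of what that combined Dockerfile might look like (assumptions: the valian/docker-nginx-auto-ssl base is Alpine-based, so packages come from apk, and a custom entrypoint or process manager would still be needed to run Django alongside openresty):
# Hypothetical merged Dockerfile: start from the nginx-auto-ssl base
# and layer the Django application's dependencies on top of it.
FROM valian/docker-nginx-auto-ssl

# Alpine base, so use apk instead of apt-get
RUN apk add --no-cache python3 py3-pip

ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/app
COPY ./requirements.txt ./requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /usr/src/app

COPY nginx.conf /usr/local/openresty/nginx/conf/
# A custom CMD/entrypoint (or supervisor) would still be required to start
# both the Django app and openresty in the same container.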

How to run any commands in docker volumes?

After a couple of days of testing and working with Docker (I am generally trying to migrate from Vagrant to Docker), I encountered a huge problem which I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
server:
build: .
volumes:
- ./:/var/www/dev
links:
- database_dev
- database_testing
- database_dev_2
- mail
- redis
ports:
- "80:8080"
tty: true
#the rest are only images of database redis and mailhog with ports
Dockerfile
example_1
FROM ubuntu:latest
LABEL Yamen Nassif
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile
example_2
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 files with just rooting to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now in example_1 the composer.json file/directory is not found, and in example_2 Apache says the root directory is not found (the file/directory in question is /var/www/dev).
I guess it's because it's a volume, and it won't be mounted until the container is fully up: if I launch the container without those commands (which would otherwise make the build fail), I can then log in to the container and execute them from the command line without any error.
How do I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
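If composer isn't present in the base image, one common way to get it (a sketch using the official composer image, not necessarily what this answer intended) is a multi-stage copy of the binary:
FROM php:7.2-cli
# Copy the composer binary from the official composer image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /usr/src/app
COPY . .
RUN composer install --no-interaction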
In both Dockerfiles, remember that each RUN command starts a new container from the image built so far, runs its command, and saves the result as a new layer. That means commands like RUN cd ... have no effect on later steps, and you can't start a service in the background in one RUN command and have it still be available later; it will be gone before the Dockerfile moves on to the next line.
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker and you shouldn't try to use them. Standard practice is to start the server process as a foreground process when the container launches via a default CMD directive. The flip side of this is that, since the server won't start until docker run time, your volume will be available at that point. I might RUN mkdir in the Dockerfile just to be sure it exists.
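Applied to the second Dockerfile, that means dropping the service apache2 restart line and starting Apache as the foreground process instead (a sketch, assuming the Ubuntu apache2 package; the site name dev is a guess for whatever the vhost files define):
COPY docker/config/vhosts /etc/apache2/sites-available/
# Hypothetical site name; adjust to match the actual vhost file
RUN a2ensite dev && a2dissite 000-default && mkdir -p /var/www/dev
# Run Apache in the foreground as the container's main process
CMD ["apachectl", "-D", "FOREGROUND"]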
The problem seems to be the execution order. At image build time /var/www/dev is available (whatever you put there is baked into the image), but when you start a container from that image, the container's /var/www/dev is overlaid by your local mount.
If you need no access from your host, you can simply skip the extra volume.
If you want to use it in other containers too, you could work with symlinks.

Best practice for running a non-trusted .NET Core application inside a Docker container

Let's say I want to run inside a docker container some third party .net core application I don't fully trust.
To simplify, let's assume that application is the simple Hello World console app generated by dotnet new. This is just the 2 files Program.cs and project.json.
Right now I have tried the following approach:
Copy that application into some folder of my host
Create a new container using the microsoft/dotnet image, mounting that folder as a volume, running a specific command for building and running the app:
$ docker run --rm -it --name dotnet \
-v /some/temp/folder/app:/app \
microsoft/dotnet:latest \
/bin/sh -c 'cd /app && dotnet restore && dotnet run'
I was also considering the idea of having a predefined dockerfile with microsoft/dotnet as the base image. It will basically embed the application code, set it as the working dir and run the restore, build and run commands.
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
ENTRYPOINT ["dotnet", "run"]
I could then copy the predefined dockerfile into the temp folder, build a new image just for that particular application and finally run a new container using that image.
Is the Dockerfile approach overkill for simple command line apps? What would be the best practice for running those untrusted applications? (It might be one I'm completely unaware of.)
EDIT
Since I will discard the container after it runs and the docker command will be generated by some application, I will probably stay with the first option of just mounting a volume.
I have also found this blog post where they built a similar sandbox environment and ended up following the same mounted-volume approach.
As far as I know, what happens in Docker stays in Docker.
When you mount a volume (-v) into the container, the process can alter the files in the folder you mounted, but only there. The process cannot follow symlinks out of the mounted folder or otherwise escape it; that is forbidden for obvious security reasons.
When you don't mount anything and copy the application code into the image, it's definitely isolated.
Exposing TCP/UDP ports is up to you, as is limiting memory/CPU consumption, and you can even cut the process off from the internet (e.g. by running the container with --network none).
Therefore, I don't think that using a Dockerfile is overkill, and I'd summarize it like this:
when you want to run it once, try it and forget it, use the command line (if you are OK with typing the nasty command). If you plan to use it more, create a Dockerfile. I don't see much room for declaring a "best practice" here; I consider it a question of personal preference.
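For the mounted-volume option, the same docker run command can be tightened up with standard resource-limit flags (a sketch; the limit values are illustrative, and note that cutting off the network would also break dotnet restore, so that only makes sense once dependencies are already restored):
docker run --rm -it --name dotnet \
  --memory 512m --cpus 1 \
  --pids-limit 100 \
  -v /some/temp/folder/app:/app \
  microsoft/dotnet:latest \
  /bin/sh -c 'cd /app && dotnet restore && dotnet run'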

How do I dockerize an existing application...the basics

I am using Windows and have boot2docker installed. I've downloaded images from Docker Hub and run basic commands. BUT
How do I take an existing application sitting on my local machine (let's just say it has one file, index.php, for simplicity), put it into a Docker image, and run it?
Imagine you have the following existing python2 application "hello.py" with the following content:
print "hello"
You have to do the following things to dockerize this application:
Create a folder where you'd like to store your Dockerfile in.
Create a file named "Dockerfile"
The Dockerfile consists of several parts which you have to define as described below:
Like a VM, an image has an operating system. In this example, I use ubuntu 16.04. Thus, the first part of the Dockerfile is:
FROM ubuntu:16.04
Imagine you have a fresh Ubuntu VM; now you have to install some things to get your application working, right? This is done by the next part of the Dockerfile:
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
For Docker, you have to create a working directory now in the image. The commands that you want to execute later on to start your application will search for files (like in our case the python file) in this directory. Thus, the next part of the Dockerfile creates a directory and defines this as the working directory:
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
As a next step, you copy the content of the folder where the Dockerfile is stored into the image. In our example, the hello.py file is copied to the directory we created in the step above.
COPY . /usr/src/app
Finally, the following line executes the command "python hello.py" in your image:
CMD [ "python", "hello.py" ]
The complete Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "python", "hello.py" ]
Save the file and build the image by typing in the terminal:
$ docker build -t hello .
This will take some time. Afterwards, check whether the image "hello" (the name we gave it with -t in the build command) has been built successfully:
$ docker images
Run the image:
docker run hello
The output should be "hello" in the terminal.
This is a first start. When you use Docker for web applications, you have to configure ports etc.
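For example, a web-serving variant of this image would add an EXPOSE line in the Dockerfile and publish the port at run time (a sketch; my-web-image and the port numbers are illustrative):
# Publish container port 80 on host port 8080
docker run -p 8080:80 my-web-image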
Your index.php is not really an application. The application is your Apache or nginx or even PHP's own server.
Because Docker uses features not available in the Windows core, you are running it inside an actual virtual machine. The only purpose for that would be training or preparing images for your real server environment.
There are two main concepts you need to understand for Docker: Images and Containers.
An image is a template composed of layers. Each layer contains only the differences from the previous layer, along with some metadata. Each layer is in fact an image itself. You should always build your image from an existing base, using the FROM directive in the Dockerfile (reference docs at time of edit; Jan Vladimir Mostert's link is now a 404).
A container is an instance of an image that has run or is currently running. When creating a container (a.k.a. running an image), you can map an internal directory of it to the outside. If there are files in both locations, the external directory overrides the one inside the image, but those files are not lost. To recover them, you can commit a container to an image (preferably after stopping it), then launch a new container from the new image without mapping that directory.
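A minimal sketch of that recover-by-commit flow (the container and image names are illustrative):
# Stop the container, then snapshot its filesystem into a new image
docker stop my-container
docker commit my-container my-image:recovered
# Start a fresh container from the snapshot, without the volume mapping
docker run -d my-image:recovered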
You'll need to build a Docker image first, using a Dockerfile. You'd probably set up Apache in it, tell the Dockerfile to copy your index.php file into Apache's document root, and expose a port.
See http://docs.docker.com/reference/builder/
See my other question for an example of a docker file:
Switching users inside Docker image to a non-root user (this is for copying over a .war file into tomcat, similar to copying a .php file into apache)
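For the single index.php case, a minimal Dockerfile along those lines (a sketch using the official php:apache image, which already serves /var/www/html on port 80) could be:
FROM php:7.4-apache
# The base image's Apache document root is /var/www/html
COPY index.php /var/www/html/
EXPOSE 80
Build it with docker build -t my-php-app . and run it with docker run -p 8080:80 my-php-app.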
First off, you need to choose a platform to run your application on (for instance, Ubuntu), then install all the system tools/libraries necessary to run it. This can be achieved with a Dockerfile. Then push the Dockerfile and the app to GitHub or Bitbucket. Later, you can set up automated builds on Docker Hub from GitHub or Bitbucket. The later part of this tutorial has more on that; if you know the basics, just fast-forward to 50:00.
