I am new to the Docker world. I have an existing Dockerfile which looks somewhat like the one below:
# Base image
FROM <OS_IMAGE>
# Install dependencies
RUN zypper --gpg-auto-import-keys ref -s && \
zypper -n install git net-tools libnuma1
# Create temp user
RUN useradd -ms /bin/bash userapp
# Create all the folders required for installation.
RUN mkdir -p /home/folder1/
RUN mkdir -p /home/folder2/
RUN sudo pip install --upgrade pip
RUN python3 code_which_takes_time.py
# Many more stuff below this.
code_which_takes_time.py takes a long time to run: it downloads a lot of material and then executes it.
The problem is that whenever we add more statements below RUN python3 code_which_takes_time.py, building the image re-executes this Python script unnecessarily every time.
So I would like to split this build into 2 Dockerfiles.
The first file would be built only once; it would contain the time-consuming stuff that only needs to run once while building an image.
The second one would be used to add any further statements, which would be added as more layers on top of the existing image.
With the current single file, running docker build -t "test" . executes my Python script every time. It's time consuming and I don't want to run it again and again.
My questions:
How can I split the Dockerfile as described above?
How can I build an image from these 2 Dockerfiles?
How do I run these 2 files?
As of now I do:
Build and run: docker build -t "test" . && docker run -it "test"
Just build: docker build -t "test" .
Just run: docker run -it "test"
One thing I can suggest after reading the scenario is that you want to split your workflow into two Dockerfiles, and as far as I know you can easily do that.
Keep your first Dockerfile, which builds an image with your Python script code_which_takes_time.py already executed, and tag that image as "root_image".
After that, when you want to add other tasks on top of "root_image" (more RUN instructions and so on), simply create a new Dockerfile that starts with FROM root_image and do the remaining work in it. Build and tag the result as "child_image"; the child image is then the one derived from "root_image".
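A minimal sketch of how that split could look, reusing the instructions from the question (the file names and the root_image/child_image tags are illustrative):
# Dockerfile.base -- the slow part, built rarely
FROM <OS_IMAGE>
RUN zypper --gpg-auto-import-keys ref -s && \
    zypper -n install git net-tools libnuma1
RUN useradd -ms /bin/bash userapp
RUN mkdir -p /home/folder1/ /home/folder2/
RUN pip install --upgrade pip
RUN python3 code_which_takes_time.py
# Dockerfile -- the fast-changing part, built on top of the cached base
FROM root_image
# Many more stuff below this.
Build the base once (or whenever the time-consuming step changes), then rebuild and run only the child image as often as you like:
docker build -t root_image -f Dockerfile.base .
docker build -t child_image -f Dockerfile .
docker run -it child_image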
Please see the command below:
docker build -t iansbasecontainer:v1 -f DotnetDebug.Dockerfile .
It creates one image, as expected.
DotnetDebug.Dockerfile looks like this:
FROM microsoft/aspnetcore:2.0 AS base
# Install the SSHD server
RUN apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& mkdir -p /run/sshd \
&& echo "root:Docker!" | chpasswd
# Copy settings files. See elsewhere to find them.
COPY sshd_config /etc/ssh/sshd_config
COPY authorized_keys root/.ssh/authorized_keys
# Install Visual Studio Remote Debugger
RUN apt-get install -y zip unzip
RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l ~/vsdbg
EXPOSE 2222
I then run this command:
docker build -t iansimageusesbasecontainer:v1 -f DebugASP.Dockerfile .
However, two images appear.
DebugASP.Dockerfile looks like this:
FROM iansbasecontainer:v1 AS base
WORKDIR /app
MAINTAINER Vladimir Vladimir@akopyan.me
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY ./DebugSample .
RUN dotnet restore
FROM build AS publish
RUN dotnet publish -c Debug -o /app
FROM base AS final
COPY --from=publish /app /app
COPY ./StartSSHAndApp.sh /app
EXPOSE 5000
CMD /app/StartSSHAndApp.sh
#If you wish to only have SSH running and start
#your service when you start debugging
#then use just the SSH server, you don't need the script
#CMD ["/usr/sbin/sshd", "-D"]
Why do two images appear? Please note I am relatively new to Docker so this may be a simple answer. I have spent the last few hours Googling it.
Also, why are the repository and tag set to <none>?
Why do two images appear?
As mentioned here:
When using multi-stage builds, each stage produces a new image. That image is stored in the local image cache and will be used on subsequent builds (as part of the caching mechanism). You can run each build-stage (and/or tag the stage, if desired).
Read more about multi-stage builds here.
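For example, assuming the stage names from DebugASP.Dockerfile above, a single stage can be built and tagged on its own with the --target flag (the tag name here is illustrative):
docker build --target publish -t iansimageusesbasecontainer:publish -f DebugASP.Dockerfile .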
Docker produces intermediate (aka <none>:<none>) images for each layer, which are later used for the final image. You can actually see them if you execute the docker images -a command.
But what you see here is called a dangling image. It happens because some intermediate image is no longer used by the final image. In the case of multi-stage builds, the images for previous stages are not used in the final image, so they become dangling.
Dangling images are useless and take up space, so it's recommended to get rid of them regularly (this is called pruning). You can do that with the command:
docker image prune
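To see what would be removed first, you can list the dangling images before pruning them:
docker images --filter "dangling=true"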
I have 2 different Dockerfiles (for the production and test environments). I want to combine them into a single multi-stage Dockerfile.
The first Dockerfile is below:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
The second Dockerfile is below:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss ./install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
Question: How can I create a multi-stage Dockerfile from these 2 Dockerfiles?
Thanks in advance.
I'm not sure that what you are trying to do requires a multi-stage build. Instead, what you may want to do is use a custom base image (for dev), which would be your first code block, and use that base image for production.
If the first image was tagged as shanmukh/wildfly:dev:
FROM shanmukh/wildfly:dev
USER jboss
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
This could be tagged as shanmukh/wildfly:prod.
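For example, assuming the first Dockerfile is saved as Dockerfile.dev and the one above as Dockerfile.prod (the file names are illustrative), the two images could be built like this:
docker build -t shanmukh/wildfly:dev -f Dockerfile.dev .
docker build -t shanmukh/wildfly:prod -f Dockerfile.prod .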
The reason I don't think you want a multi-stage build is that you mentioned you are trying to handle two environments (test and production).
Even if you did want to use a multi-stage build for production, from what I can see there is no reason to. If your initial build stage included installing build dependencies such as a compiler, copying code and building it, then it would be worthwhile, as the final image would not include the initial dependencies (such as the compiler and other potentially dangerous dev tools) or any unneeded build artifacts that were produced.
Hope this helps.
What is the right way to run multiple commands in one action?
For example:
I want to run a Python script as an action. Before running this script I need to install the packages from requirements.txt.
I can think of several options:
Create a Dockerfile with the command RUN pip install -r requirements.txt in it.
Use the python:3 image, and run the pip install -r requirements.txt in the entrypoint.sh file before running the arguments from args in main.workflow.
Use both pip install and python myscript.py as args.
Another example:
I want to run a script that exists in my repository, then compare 2 files (its output and a file that already exists).
This is a process that includes two commands, whereas in the first example, the pip install command can be considered a building command rather than a test command.
The question:
Can I create another Docker image/container for a second command, which will contain the output of the previous one?
I'm looking for guidelines on where to put each command: in the Dockerfile, in the entrypoint, or in args.
You can run multiple commands using a block scalar (the | character) on the run attribute. Check this out:
name: My Workflow
on: [push]
jobs:
  runMultipleCommands:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: |
          echo "An initial message"
          pip install -r requirements.txt
          echo "Another message or command"
          python myscript.py
          bash some-shell-script-file.sh -xe
      - run: echo "One last message"
In my tests, running a shell script like ./myscript.sh did not work, but running it as bash myscript.sh -xe worked as expected.
If you want to run this inside a Docker container, an option could be to run something like this in your run clause:
docker exec -it pseudoName /bin/bash -c "cd myproject; pip install -r requirements.txt;"
Regarding "create another Docker for another command, which will contain the output of the previous Docker", you could use multi-stage builds in your Dockerfile. Something like:
## First stage (named "builder")
## Will run your command (using apk add git as a sample) and store the result in the "output" file
FROM alpine:latest as builder
RUN apk add git > ./output.log
## Second stage
## Will copy the "output" file from first stage
FROM alpine:latest
COPY --from=builder ./output.log .
RUN cat output.log
# RUN your checks
CMD []
This way the apk add git output is saved to a file, and that file is copied into the second stage, which can then run any checks on the result.
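As a usage note (the tag is illustrative), building this Dockerfile runs both stages, and the RUN cat output.log in the second stage prints the captured output during the build:
docker build -t output-check .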
I am trying to add Glide to my Golang project but I'm not getting my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per @craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh: fresh: not found.
The end goal is to be able to mount a volume (for the live-reload) and be able to use it in docker-compose so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between CMD, RUN and ENTRYPOINT: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion installing packages and libraries can happen with RUN.
For starting your binary or commands I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Something like this might work; I didn't test this part (note that the exec form of CMD does not expand variables like $GOPATH, so the path is written out and each argument is a separate array element):
CMD ["/go/bin/fresh", "-c", "/go/src/runner.conf", "/go/src/main.go"]
I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to clone your code and build it. After the build, it has to copy the result into a Docker volume, for example a volume mounted at /opt/webapp.
Launch a build container using the build image from step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if an Appserver container fails or stops, the built code is still safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't need to install the required tools every time you launch a container.
You can also build your code on the host machine, but this approach is better if you want the code to be built in a fresh environment every time; if you build/compile on the same host machine every time, stale source code or git clone errors can cause problems.
EDIT:
You can append :ro (read-only) to the volume mount so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
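For example, the Appserver container from above could mount the shared volume read-only:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name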
The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
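As a usage note (the myapp tag is illustrative), a plain build runs every stage, but only the final stage ends up in the image you tag and ship:
docker build -t myapp .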
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
Or you merge your download, build, and cleanup steps into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
 && git clone https://github.com/myproj \
 && cd myproj \
 && make \
 && make install \
 && cd .. \
 && apt-get remove -y tools && apt-get clean \
 && rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard docker tools are not designed to do this, but you can use a third party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile to run your build process, then run cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker-compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"