I use boot2docker and want to build a simple Docker image from the following Dockerfile:
# Pull base image.
FROM elasticsearch
# Install Marvel plugin
RUN export ES_HOME=/usr/share/elasticsearch \
&& cd $ES_HOME \
&& bin/plugin -u file:///c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip -i elasticsearch/marvel/latest
The path /c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip exists and is accessible on the machine where I run the build.
The problem is that inside the build I get:
Failed: FileNotFoundException[/c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip (No such file or directory)].
I searched through the documentation, and the only solution I see is to use ADD/COPY to copy the file into the image first and then run the command that uses it.
I don't know exactly how docker build works, but is there a way to build without copying the file first?
A docker build runs inside Docker containers and has no access to the host filesystem. The only ways to get files into the build environment are the ADD or COPY instructions, or fetching them over the network (using, e.g., curl or wget).
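A minimal sketch of the COPY approach, assuming you place marvel-latest.zip next to the Dockerfile (i.e. inside the build context):
FROM elasticsearch
# marvel-latest.zip must live inside the build context, next to the Dockerfile
COPY marvel-latest.zip /tmp/marvel-latest.zip
RUN /usr/share/elasticsearch/bin/plugin -u file:///tmp/marvel-latest.zip -i elasticsearch/marvel/latest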
I am pretty new to Docker. I need to do the following tasks:
Run Jenkins instance in Docker
Configure it to auto-install JobDSL plugin on startup
I wrote this Dockerfile:
FROM java:8
EXPOSE 8080
ADD jenkins.war jenkins.war
ENTRYPOINT ["java","-jar","jenkins.war"]
and then I run docker run ...
But there is one problem: the container takes over the console, and I need the console to install the plugin. I tried to solve this by appending & to the command; it did not help. P.S. I can't use the jenkins image.
Jenkins uses a JENKINS_HOME directory where it stores config, jobs and plugins.
One way to achieve what you want is to put the plugins in this directory before running Jenkins.
If you use the official jenkins image, you can store that directory on a data volume and run Docker against it: docker run -v /your/data/volume:/var/jenkins_home jenkins/jenkins
If you don't want a data volume and would rather bake the plugins into the image, you can add something like this to your Dockerfile:
RUN mkdir -p ~/.jenkins/plugins && \
cd ~/.jenkins/plugins && \
wget http://your/plugins/plugins.jpi
Finally, you can mix the two approaches a little by creating a shell script that checks whether the plugins directory exists, downloads the plugin file if it does not, and then starts Jenkins. This shell script would be your image's entry point; a sketch follows below.
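A possible sketch of such an entrypoint script (the plugin URL and the jenkins.war location are assumptions based on the Dockerfiles in this answer):
#!/bin/sh
# Hypothetical entrypoint: fetch the plugin on first start, then launch Jenkins.
PLUGIN_DIR="$HOME/.jenkins/plugins"
if [ ! -f "$PLUGIN_DIR/job-dsl.jpi" ]; then
    mkdir -p "$PLUGIN_DIR"
    wget -O "$PLUGIN_DIR/job-dsl.jpi" https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
fi
exec java -jar jenkins.war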
NOTE: The file you need to download as a plugin is the .jpi file, not the .hpi.
For reference, here is an example:
FROM java:8
RUN wget https://updates.jenkins-ci.org/download/war/2.121.2/jenkins.war && \
mkdir -p ~/.jenkins/plugins && \
cd ~/.jenkins/plugins && \
wget https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
ENTRYPOINT ["java","-jar","jenkins.war"]
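To try it out (the image name is just an example; Jenkins listens on port 8080 by default):
docker build -t jenkins-jobdsl .
docker run -p 8080:8080 jenkins-jobdsl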
I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to build your code; it should be able to clone the code and build it. After the build, it has to copy the build output into a Docker volume, for example a volume named /opt/webapp.
Launch a build container from the image created in the previous step:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of this approach:
Your built code lives on the host machine, so if an Appserver container fails or stops, the code is safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't need to install the required tools every time you launch a container.
You could also build your code directly on the host machine, but building inside a container gives you a fresh environment every time; reusing the same host machine for every build can cause problems with stale source code, git clone errors, etc.
EDIT:
You can append :ro (read-only) to the volume mapping so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
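For example, mounting the volume read-only in the nginx container from the commands above:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name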
The latest version of Docker supports multi-stage builds, where build products can be copied from one container to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
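You build this like any other image; all stages run during the build, but only the final stage's contents end up in the tagged image:
docker build -t myapp .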
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup step into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
&& git clone https://github.com/myproj \
&& cd myproj \
&& make \
&& make install \
&& cd .. \
&& apt-get remove -y tools && apt-get clean \
&& rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard Docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub star counts as of when this answer was written)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile that runs your build process and then runs your cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker-compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"
I need hello.rpm when I build from Dockerfile, but this rpm file is not available online.
Right now I'm serving the file by firing up a temporary web server, but ideally I'd like to run a build command which makes this local file available inside the container.
Is this possible?
My build command:
docker build -t fredrik/helloworld:1.0 .
My Dockerfile:
FROM centos:6
RUN rpm -ivh hello.rpm
Why don't you COPY the file inside the container before executing RUN?
FROM centos:6
COPY hello.rpm /tmp/hello.rpm
RUN rpm -ivh /tmp/hello.rpm
This assumes that hello.rpm is next to your Dockerfile when you build it.
Otherwise, if an internet connection is not a limiting factor while you're working, just:
Upload the file to a cloud service such as Dropbox
Go to your Docker shell and wget https://www.cloudnameXYZ.com/filename
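A rough sketch of baking that into the build (the URL is a placeholder, and it assumes wget is installed into the base image first):
FROM centos:6
RUN yum install -y wget \
    && wget -O /tmp/hello.rpm https://www.cloudnameXYZ.com/hello.rpm \
    && rpm -ivh /tmp/hello.rpm \
    && rm /tmp/hello.rpm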
I've been reading through the Docker documentation and can't seem to work out if it's possible to create a custom command/directive. Basically, I need to make an HTTP request to an external service to retrieve some assets that need to be included in my container. Rather than referencing them using volumes, I want to effectively inject them into the container during the build process, a bit like dependency injection.
Assuming you are referring to downloading some files over HTTP (an HTTP GET), as in one of the examples in the question, you can try this:
RUN wget https://wordpress.org/plugins/about/readme.txt
or
RUN curl -O https://wordpress.org/plugins/about/readme.txt
An example Dockerfile with a download shell script:
PROJ-DIR
├── files
│   └── test.sh
└── Dockerfile
files/test.sh
#!/bin/sh
wget https://wordpress.org/plugins/about/readme.txt
Dockerfile
FROM centos:latest
COPY files/test.sh /opt/
RUN chmod u+x /opt/test.sh
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
RUN yum install -y wget
RUN /opt/test.sh
RUN rm /opt/test.sh
Build the image
docker build -t test_img .
I am using Windows and have boot2docker installed. I've downloaded images from Docker Hub and run basic commands. BUT
How do I take an existing application sitting on my local machine (let's say it has one file, index.php, for simplicity), put it into a Docker image and run it?
Imagine you have an existing Python 2 application, hello.py, with the following content:
print "hello"
You have to do the following things to dockerize this application:
Create a folder where you'd like to store your Dockerfile in.
Create a file named "Dockerfile"
The Dockerfile consists of several parts which you have to define as described below:
Like a VM, an image has an operating system. In this example, I use ubuntu 16.04. Thus, the first part of the Dockerfile is:
FROM ubuntu:16.04
Imagine you have a fresh Ubuntu - VM, now you have to install some things to get your application working, right? This is done by the next part of the Dockerfile:
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
For Docker, you have to create a working directory now in the image. The commands that you want to execute later on to start your application will search for files (like in our case the python file) in this directory. Thus, the next part of the Dockerfile creates a directory and defines this as the working directory:
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
As a next step, you copy the contents of the folder containing the Dockerfile into the image. In our example, the hello.py file is copied to the directory we created in the step above.
COPY . /usr/src/app
Finally, the following line executes the command "python hello.py" in your image:
CMD [ "python", "hello.py" ]
The complete Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "python", "hello.py" ]
Save the file and build the image by typing in the terminal:
$ docker build -t hello .
This will take some time. Afterwards, check that the image "hello" (the name we gave it with -t) was built successfully:
$ docker images
Run the image:
docker run hello
The output should be "hello" in the terminal.
This is a first start. When you use Docker for web applications, you have to configure ports etc.
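For example, a container whose web server listens on port 80 internally could be published to the host like this (the image name is illustrative):
docker run -p 8080:80 my-web-image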
Your index.php is not really an application. The application is your Apache or nginx, or even PHP's own built-in server.
Because Docker uses features not available in the Windows kernel, you are running it inside an actual virtual machine. The only purpose for that would be training or preparing images for your real server environment.
There are two main concepts you need to understand for Docker: Images and Containers.
An image is a template composed of layers. Each layer contains only the differences from the previous layer, plus some image metadata. Each layer is in fact itself an image. You should always make your image from an existing base, using the FROM directive in the Dockerfile (reference docs at time of edit; Jan Vladimir Mostert's link is now a 404).
A container is an instance of an image that has run or is currently running. When creating a container (a.k.a. running an image), you can map an internal directory from it to the outside. If there are files in both locations, the external directory overrides the one inside the image, but those files are not lost. To recover them, you can commit a container to an image (preferably after stopping it), then launch a new container from the new image without mapping that directory.
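A sketch of that recovery flow (the container and image names are illustrative):
docker stop mycontainer
docker commit mycontainer myimage:recovered
docker run -d myimage:recovered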
You'll need to build a Docker image first using a Dockerfile; you'd probably set up Apache in it, tell the Dockerfile to copy your index.php file into Apache's document root, and expose a port.
See http://docs.docker.com/reference/builder/
See my other question for an example of a docker file:
Switching users inside Docker image to a non-root user (this is for copying over a .war file into tomcat, similar to copying a .php file into apache)
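A minimal sketch of that idea, assuming the official php:apache base image (any Apache + PHP base would do):
FROM php:apache
# Apache's document root in this image is /var/www/html
COPY index.php /var/www/html/
EXPOSE 80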
First off, you need to choose a platform to run your application on (for instance, Ubuntu). Then install all the system tools/libraries necessary to run your application; this can be achieved with a Dockerfile. Then push the Dockerfile and the app to GitHub or Bitbucket. Later, you can set up an automated build on Docker Hub from GitHub or Bitbucket. The later part of the tutorial here has more on that; if you know the basics, just fast-forward to 50:00.