I've been reading through the Docker documentation and can't seem to work out if it's possible to create a custom command/directive. Basically I need to make an HTTP request to an external service to retrieve some assets that need to be included within my container. Rather than referencing them using volumes, I want to effectively inject them into the container during the build process, a bit like dependency injection.
Assuming you are referring to downloading some files over HTTP (HTTP GET), as described in the question, you can try this:
RUN wget https://wordpress.org/plugins/about/readme.txt
or
RUN curl https://wordpress.org/plugins/about/readme.txt
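Note that curl writes to stdout by default, so if you want the file to actually end up in the image you will likely need -o (or -O). A minimal sketch, with the target path chosen arbitrarily:
RUN curl -fSL -o /tmp/readme.txt https://wordpress.org/plugins/about/readme.txt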
An example Dockerfile with a download shell script:
PROJ-DIR
├── files
│   └── test.sh
└── Dockerfile
files/test.sh
#!/bin/sh
wget https://wordpress.org/plugins/about/readme.txt
Dockerfile
FROM centos:latest
COPY files/test.sh /opt/
RUN chmod u+x /opt/test.sh
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
RUN yum install -y wget
RUN /opt/test.sh
RUN rm /opt/test.sh
Build the image
docker build -t test_img .
I am pretty new to Docker. I need to do the following tasks:
Run Jenkins instance in Docker
Configure it to auto-install JobDSL plugin on startup
I wrote this Dockerfile:
FROM java:8
EXPOSE 8080
ADD jenkins.war jenkins.war
ENTRYPOINT ["java","-jar","jenkins.war"]
and then I run docker run ...
But there is one problem: I can't use the console, yet I have to use the console to install the plugin. I tried to solve this by adding & at the end; it did not help. P.S. I can't use the jenkins image.
Jenkins uses a JENKINS_HOME directory where it stores its config, jobs and plugins.
One way to achieve what you want is to place the plugins in this directory before running Jenkins.
If you use the official jenkins image, then you can use a data volume to store them and run docker with this data volume: docker run -v /your/data/volume:/var/jenkins_home jenkins/jenkins
If you don't want a data volume, and want an image with the plugins baked in, then you can add to your Dockerfile something like:
RUN mkdir -p ~/.jenkins/plugins && \
cd ~/.jenkins/plugins && \
wget http://your/plugins/plugins.jpi
Finally, you can mix both a little by creating a shell script that checks whether the plugins directory exists and, if not, fetches the plugin file, then starts Jenkins. This shell script would be your image entrypoint (a sketch of such a script follows the example below).
NOTE: The file you need to download as a plugin is the .jpi file, not the .hpi.
For reference, here is an example:
FROM java:8
RUN wget https://updates.jenkins-ci.org/download/war/2.121.2/jenkins.war && \
mkdir -p ~/.jenkins/plugins && \
cd ~/.jenkins/plugins && \
wget https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
ENTRYPOINT ["java","-jar","jenkins.war"]
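Following the mixed approach described above, a minimal sketch of such an entrypoint script could look like this; the plugin URL and paths reuse the ones from the example, and the jenkins.war location assumes it sits in the image root as above:
#!/bin/sh
# fetch the plugin only if the plugins directory is missing
PLUGIN_DIR="$HOME/.jenkins/plugins"
if [ ! -d "$PLUGIN_DIR" ]; then
    mkdir -p "$PLUGIN_DIR"
    wget -O "$PLUGIN_DIR/job-dsl.jpi" https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
fi
# start Jenkins in the foreground
exec java -jar jenkins.war
You would then wire it in with something like COPY entrypoint.sh /, RUN chmod +x /entrypoint.sh and ENTRYPOINT ["/entrypoint.sh"] in place of the java -jar line.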
I am trying to add Glide to my Golang project but I can't get my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per @craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh: fresh: not found.
The end goal is to be able to mount a volume (for the live-reload) and be able to use it in docker-compose so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know, you don't need to run glide update right after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the differences between CMD, RUN and ENTRYPOINT: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion, installing packages and libraries can happen with RUN.
For starting your binary or commands, I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
something like this might work (I didn't test this part):
CMD ["$GOPATH/bin/fresh", "-c", "/go/src/runner.conf", "/go/src/main.go"]
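Also note that the exec (JSON array) form does not go through a shell, so $GOPATH would not be expanded there. A shell-form sketch that sidesteps this, assuming fresh ends up on the PATH (as it does in the official golang images, where $GOPATH/bin is on the PATH):
CMD fresh -c /go/src/runner.conf /go/src/main.go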
I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to build your code, one that can clone the code and build it. After the build, it has to copy the build output into a Docker volume, for example a volume named /opt/webapp.
Launch a build container using the build image from step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, where your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your code to the Appserver, you can delete the BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if the Appserver container fails or stops, the code is safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't have to install the required tools every time you launch a container.
You can also build your code on the host machine, but if you want the code to be built in a fresh environment every time, this approach works well; building on the same host machine every time can lead to problems with stale source code, git clone errors, etc.
EDIT:
You can append :ro (read-only) to the volume so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
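For example, mounting the shared volume read-only in the nginx container is a small variation on the command above:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name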
The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app/result.jar
CMD java -jar /app/result.jar
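Building and running this works the same as with a single-stage Dockerfile, and only the final stage ends up in the tagged image (the image name here is just a placeholder):
docker build -t myapp .
docker run --rm myapp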
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container (a small sketch of this follows the example below).
You merge your download, build, and cleanup steps into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
  && git clone https://github.com/myproj \
  && cd myproj \
  && make \
  && make install \
  && cd .. \
  && apt-get purge -y tools && apt-get clean \
  && rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
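For the first option (building outside and copying the result in), a minimal sketch might look like the following; the artifact path and name, and the CMD, are hypothetical:
# (the compile step, e.g. make, runs on the host before docker build)
FROM mybase:latest
# ship only the compiled artifact, not the sources or build tools
COPY build/myproj /usr/local/bin/myproj
CMD ["/usr/local/bin/myproj"]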
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard Docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile that runs your build process, then runs the cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker-compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"
I use boot2docker and want to build a simple docker image with the Dockerfile:
# Pull base image.
FROM elasticsearch
# Install Marvel plugin
RUN export ES_HOME=/usr/share/elasticsearch \
  && cd $ES_HOME \
  && bin/plugin -u file:///c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip -i elasticsearch/marvel/latest
The path /c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip is present and accessible on the machine where I run the build.
The problem is that inside the build I get:
Failed: FileNotFoundException[/c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip (No such file or directory)].
I searched through the documentation and the only solution I see is to use ADD/COPY and copy first the file inside the image and then run the command that uses the file.
I don't know exactly how docker build works, but is there a way to build it without copying the file first?
A docker build runs inside a Docker container and has no access to the host filesystem. The only way to get files into the build environment is through the ADD or COPY instructions (or by fetching them over the network using, e.g., curl or wget).
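For this particular case, a sketch of the COPY approach could look like the following; the assumption is that the zip has been placed inside the build context (e.g. next to the Dockerfile), and the plugin command mirrors the one from the question:
FROM elasticsearch
# copy the plugin zip from the build context into the image
COPY marvel-latest.zip /tmp/marvel-latest.zip
RUN export ES_HOME=/usr/share/elasticsearch \
  && cd $ES_HOME \
  && bin/plugin -u file:///tmp/marvel-latest.zip -i elasticsearch/marvel/latest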
Working with Docker, I notice that almost everywhere the RUN command starts with an apt-get upgrade && apt-get install etc.
What if you don't have internet access and simply want to do a "dpkg -i ./deb-directory/*.deb" instead?
Well, I tried that and I keep failing. Any advice would be appreciated:
dpkg: error processing archive ./deb-directory/*.deb (--install):
cannot access archive: No such file or directory
Errors were encountered while processing: ./deb-directory/*.deb
INFO[0002] The command [/bin/sh -c dpkg -i ./deb-directory/*.deb] returned a non-zero code: 1
To clarify, yes, the directory "deb-directory" does exist. In fact it is in the same directory as the Dockerfile where I build.
This is perhaps a bug; I'll open a ticket on their GitHub to find out.
Edit: I did it here.
Edit2:
Someone answered a better way of doing this on the github issue.
* is a shell metacharacter. You need to invoke a shell for it to be expanded.
docker run somecontainer sh -c 'dpkg -i /debdir/*.deb'
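The same distinction applies inside a Dockerfile: the exec (JSON array) form of RUN bypasses the shell, so the wildcard stays literal, while the default shell form expands it, provided the files are actually present in the image. A small sketch, assuming the .deb files sit in deb-directory next to the Dockerfile:
COPY deb-directory /deb-directory
# shell form: /bin/sh -c expands the wildcard at build time
RUN dpkg -i /deb-directory/*.deb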
!!! Forget the following, but I leave it here to keep track of my reflection steps !!!
The problem comes from the * wildcard, which doesn't seem to work well with the docker run dpkg command. I tried your command inside a container (using an interactive shell) and it worked well. It looks like dpkg is trying to install the literal ./deb-directory/*.deb file, which doesn't exist, instead of installing all the .deb files contained there.
I just implemented a workaround: copy a .sh script into your container, chmod +x it, and then use it as your command.
(FYI, prefer using COPY instead of ADD when the file isn't remotely copied. Check the best practices for writing Dockerfiles for more info.)
This is my Dockerfile for example purpose:
FROM debian:latest
MAINTAINER Vrakfall <jeremy@artphotolaurent.be>
COPY install.sh /
#debdir is a directory
COPY debdir /debdir
RUN chmod +x /install.sh
CMD ["/install.sh"]
The install.sh (copied to the root directory) simply contains:
#!/bin/bash
dpkg -i /debdir/*.deb
And the following
docker build -t debiantest .
docker run debiantest
works well and installs all the packages contained in the /debdir directory.