I am pretty new to Docker. I need to do the following tasks:
Run Jenkins instance in Docker
Configure it to auto-install JobDSL plugin on startup
I wrote this Dockerfile:
FROM java:8
EXPOSE 8080
ADD jenkins.war jenkins.war
ENTRYPOINT ["java","-jar","jenkins.war"]
and then I run docker run ...
But there is one problem: Jenkins takes over the console, and I need the console to install the plugin. I tried to solve this by appending & at the end, but it did not help. P.S. I can't use the official jenkins image.
Jenkins uses a JENKINS_HOME directory where it stores its config, jobs and plugins.
One way to achieve what you want is to place the plugins in this directory before starting Jenkins.
If you use the official jenkins image, you can use a data volume to store that directory and point docker at it: docker run -v /your/data/volume:/var/jenkins_home jenkins/jenkins
If you don't want a data volume and instead want an image that already contains the plugins, you can add something like this to your Dockerfile:
RUN mkdir -p ~/.jenkins/plugins && \
    cd ~/.jenkins/plugins && \
    wget http://your/plugins/plugins.jpi
Finally, you can mix the two a little by creating a shell script that checks whether the plugins directory exists and, if not, downloads the plugin file, then starts Jenkins. This shell script would be your image's entry point.
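A minimal sketch of such an entrypoint script (assuming the war lives at /jenkins.war, as in your Dockerfile, and using the Job DSL download URL from the example further down):

#!/bin/sh
# entrypoint.sh - download the plugin(s) on first start, then run Jenkins
PLUGIN_DIR="$HOME/.jenkins/plugins"

if [ ! -d "$PLUGIN_DIR" ]; then
    mkdir -p "$PLUGIN_DIR"
    wget -O "$PLUGIN_DIR/job-dsl.jpi" \
        https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
fi

exec java -jar /jenkins.war

In the Dockerfile you would COPY this script in, make it executable, and use it as the ENTRYPOINT instead of calling java directly.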
NOTE: the file you need to download for the plugin is the .jpi file, not the .hpi.
As a reference, here is an example:
FROM java:8
RUN wget https://updates.jenkins-ci.org/download/war/2.121.2/jenkins.war && \
    mkdir -p ~/.jenkins/plugins && \
    cd ~/.jenkins/plugins && \
    wget https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
ENTRYPOINT ["java","-jar","jenkins.war"]
I have the below Dockerfile to create a nodejs server. It clones a repo that contains the script that starts the node servers but that script needs to be executable.
The first time I start this docker container, it works as expected. I know the chmod is doing something because without it, the script "does not exist", since it does not have the x permission.
However, if I then open a TTY into the container and inspect the file, the permissions have not changed. If the container is restarted, it gives an error that the file does not exist (x permission is missing). If I execute the chmod manually after the first run, I can restart the container without issues.
I have tried it as part of the combined RUN command (as below) and as a separate RUN command, both written normally and as RUN ["chmod", "755", "/opt/youtube4me/start_all.sh"], and both with 755 and +x. All of those ways work on the first start but do not permanently change the permissions.
I'm very new to docker and I think I'm missing some basic concept that explains that chmod is run in some sort of context (maybe?)
FROM node:10.19.0-alpine
RUN apk update && \
    apk add git && \
    git clone https://github.com/0502-crew/youtube4me.git /opt/youtube4me && \
    chmod 755 /opt/youtube4me/start_all.sh
COPY config/. /opt/youtube4me/
EXPOSE 45011
WORKDIR /opt/youtube4me
CMD ["/opt/youtube4me/start_all.sh"]
UPDATE 1
I run these commands with docker:
sudo docker build . -t 0502-crew/youtube4me
sudo docker container run -d --name youtube4me --network host 0502-crew/youtube4me
start_all.sh
#!/bin/sh
# Reset to master in case any force pushes were done
git reset --hard origin/master
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
The config folder from which I COPY just contains 2 .ts files that are in .gitignore and so need to be added manually:
config/api/src/config/Config.ts
config/client/src/config/Config.ts
UPDATE 2
If I replace the CMD line with CMD ["sh", "-c", "tail -f /dev/null"] then the x permissions remain on the sh file. It seems that executing the script removes the permissions...
As @dennisvandehoef pointed out, the fundamentals of my approach are off. Including a git clone in the Dockerfile will, for example, have the side effect that building the image a second time will not clone the repo again, even if files have changed.
However, I did find a solution within this approach, so I might as well share it for the sake of knowledge sharing:
I develop on Windows, so I don't need to set permissions on the script, but Linux does need them. It turns out you can still set the +x permission bit using git on Windows and commit it. When Docker clones the repo, chmod is then no longer needed at all.
git update-index --chmod=+x start_all.sh
If you then ls the files with this command
git ls-files --stage
you will see that all other files are 100644 but the sh file is 100755 (so permission 755).
Next just commit it
git commit -m"Made sh executable"
I had to delete my Docker images afterwards and rebuild the image to ensure the git clone was performed again.
I'll rewrite my Dockerfile at a later point, but for now it works as intended.
It is still a mystery to me why the x bit disappears when the chmod is part of the Dockerfile.
You are building a Dockerfile for the project you are pulling from git, and your start_all script includes a git reset --hard origin/master, which is bad practice since it means you cannot create versioned Docker images.
You also copy these 2 files to the wrong directory: with COPY config/. /opt/youtube4me/ you copy them directly to the root of your project, and not to the given locations config/api/src/config/ and config/client/src/config/.
I realize that fixing these problems will not fix the chmod problem itself, but it might make it go away for your specific use case.
Also, if these are secrets you are excluding from git, you should not add them to your Docker image while building either, since then (after you push the image) they would be public again. Therefore it is good practice to never add them at build time.
Did you try something like this:
Dockerfile
FROM node:10.19.0-alpine
RUN mkdir /opt/youtube4me
COPY . /opt/youtube4me/
WORKDIR /opt/youtube4me
RUN chmod 755 /opt/youtube4me/start_all.sh
EXPOSE 45011
CMD ["/opt/youtube4me/start_all.sh"]
.dockerignore
api/src/config/Config.ts
client/src/config/Config.ts
script
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
You then need to run docker run with 2 volumes to add the config to it. This is done with -v, for example: docker run -v $(pwd)/config/api/src/config:/opt/youtube4me/api/src/config -v $(pwd)/config/client/src/config:/opt/youtube4me/client/src/config 0502-crew/youtube4me
Nevertheless, I wonder why you are running both services in one Docker image. If this is just for local execution, you could also think about creating 2 separate Docker images and using docker-compose to spin them all up.
Addition 1:
I thought a bit about this, and it is also good practice to use environment variables to configure a Docker image instead of adding a file to it. You might want to switch to that as well. I advise reading this article to get a better understanding of the why and of other ways to do this.
Addition 2:
I created a pull request on your code that is an example of how it could look with docker-compose. It currently does not build because of the config options, but it will give you some more insights.
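Purely as an illustration of that direction (the service names, ports and variable names below are assumptions, not taken from your repository), a docker-compose file could look roughly like this:

version: "3"
services:
  api:
    build: ./api                     # hypothetical: one Dockerfile per service
    environment:
      - NODE_ENV=production
      - API_PORT=45011               # hypothetical variable replacing a value from Config.ts
    ports:
      - "45011:45011"
  client:
    build: ./client
    environment:
      - NODE_ENV=production
      - API_URL=http://localhost:45011   # hypothetical
    ports:
      - "8080:8080"                  # hypothetical client port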
I developed a few ROS packages and I want to put them in a Docker container because installing all the ROS packages all the time is tedious. Therefore I created a Dockerfile that uses a base ROS image, installed all the necessary dependencies, copied my workspace, built the workspace in the Docker container and sourced everything afterwards. You can find the Dockerfile below:
FROM ros:kinetic-ros-base
RUN apt-get update && apt-get install -y locales
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN apt-get update && apt-get install -y \
    && rm -rf /var/lib/apt/lists/*
COPY . /catkin_ws/src/
WORKDIR /catkin_ws
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; catkin_make'
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; source devel/setup.bash'
CMD ["roslaunch", "master_launch sim_perception.launch"]
The problem is: when I run the container with the docker run command, Docker doesn't seem to know that I sourced my new ROS workspace, so it cannot automatically launch my launch file. If I run the container with a bash shell instead ("docker run -it <image> bash"), I can source my workspace again and then roslaunch my .launch file.
So can someone tell me how to write my dockerfile correctly so I launch my .launch file automatically when I run the container? Thanks!
From Docker Docs
Each RUN instruction runs independently and won't affect the next instruction, so by the time your last line runs, none of the PATH or other environment changes from ROS are still in effect.
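A tiny illustration of this behaviour (FOO is just a made-up variable):

RUN export FOO=bar
RUN echo "FOO is '$FOO'"   # prints an empty value; the export from the previous RUN is gone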
You need to source .bashrc, or whichever environment files you need, first, in the same shell that runs your command.
You can wrap everything you want (the source commands and the roslaunch command) inside an sh file and then just run that file at the end.
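A minimal sketch of such a wrapper, using the workspace path and launch target from your Dockerfile (treat the exact paths as assumptions):

#!/bin/bash
# start.sh - source the ROS environments in the same shell, then launch
set -e
source /opt/ros/kinetic/setup.bash
source /catkin_ws/devel/setup.bash
exec roslaunch master_launch sim_perception.launch

Copy it into the image, make it executable, and point the CMD (or ENTRYPOINT) at it, e.g. COPY start.sh / followed by CMD ["/start.sh"].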
If you review the convention of ros_entrypoint.sh, you can see how best to source the workspace you would like inside the container. We're all so busy learning how to make Docker and ROS do the real things that it's easy to skip over some of the nuance of this interplay. This sucked forever for me; hope this is helpful for you.
I looked forever and found what seemed like only bad advice, and in the absence of an explicit standard or clear guidance I've settled on what seems like a sane approach that also allows you to control what launches at runtime with environment variables. I now consider this the right solution for my needs.
In the Dockerfile for the image, you want to set the start/launch behavior:
towards the end, use a COPY (or ADD) line to insert your own ros_entrypoint.sh (example included), set it as the ENTRYPOINT, and then add a CMD that runs something by default when the container starts.
Note: you'll (obviously?) need to rerun the docker build process for these changes to take effect.
The Dockerfile looks like this:
# all your other Dockerfile lines ^^
# .....
# towards the end
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Example ros_entrypoint.sh:
#!/bin/bash
set -e
# setup ros environment
if [ -z "${SETUP}" ]; then
    # basic ros environment
    source "/opt/ros/$ROS_DISTRO/setup.bash"
else
    # from an environment variable; should be an absolute path to the appropriate workspace's setup.bash
    source "$SETUP"
fi

exec "$@"
Used in this way, the container will automatically source either the basic ROS environment or, if you provide another workspace's setup.bash path in the SETUP environment variable, that workspace instead.
So a few ways to work with this:
From the command line prior to running docker
export SETUP=/absolute/path/to/the/setup.bash
docker run -it your-docker-image
From the command line (inline)
docker run --env SETUP=/absolute/path/to/the/setup.bash your-docker-image
From docker-compose
  service-name:
    network_mode: host
    environment:
      - SETUP=/absolute/path/to/the_workspace/devel/setup.bash # or whatever
    command: roslaunch package_name launchfile_that_needed_to_be_sourced.launch
    # command: /bin/bash # wake up and do something else
I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to build your code, one that can also clone the code and build it. After the build, it has to copy the built code into a Docker volume, for example a volume at /opt/webapp.
Launch a build container using the build image from step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to the Appserver, you can delete the BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if the Appserver container fails or stops, the code is safe on the host and you can launch a new container using it.
If you create a Docker image for building the code, you don't need to install the required tools each time you launch a container.
You can also build your code on the host machine, but if you want the code to be built in a fresh environment every time, this approach works well; building on the same host machine every time can leave older source code around and cause problems, git clone errors, etc.
EDIT:
You can append :ro (read-only) to the volume so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
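For example, the nginx container above could mount the volume read-only:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name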
The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to the next.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
Original answer:
Two common options:
As mentioned, you can build outside Docker and copy the compiled result into the container (a minimal sketch of this follows after the example below).
You can merge your download, build, and cleanup steps into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
    && git clone https://github.com/myproj \
    && cd myproj \
    && make \
    && make install \
    && cd .. \
    && apt-get remove -y tools && apt-get clean \
    && rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
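For the first option, a minimal sketch could be nothing more than copying the already-built output into an nginx image (the dist/ directory name is an assumption about your build output):

# run the build on the host (or in a separate build container) first, e.g. npm run build, producing dist/
FROM nginx:alpine
COPY dist/ /usr/share/nginx/html/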
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard Docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile to run your build process, then run the cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker compose file
  web:
    build: ../project_path
    environment:
      - NODE_ENV=production
    restart: always
    ports:
      - "4000"
I've been reading through the Docker documentation and can't seem to work out if it's possible to create a custom command/directive. Basically I need to make an HTTP request to an external service to retrieve some assets that need to be included within my container. Rather than referencing them using volumes, I want to effectively inject them into the container during the build process, a bit like dependency injection.
Assuming you are referring to downloading some files over HTTP (HTTP GET), as in one of the examples in the question, you can try this:
RUN wget https://wordpress.org/plugins/about/readme.txt
or
RUN curl -O https://wordpress.org/plugins/about/readme.txt
An example Dockerfile with a download shell script:
PROJ-DIR
- files
  - test.sh
- Dockerfile
files/test.sh
#!/bin/sh
wget https://wordpress.org/plugins/about/readme.txt
Dockerfile
FROM centos:latest
COPY files/test.sh /opt/
RUN chmod u+x /opt/test.sh
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
RUN yum install -y wget
RUN /opt/test.sh
RUN rm /opt/test.sh
Build the image
docker build -t test_img .
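To check that the file was baked into the image (with no WORKDIR set, the script should have downloaded readme.txt into /), you could run, for example:
docker run --rm test_img ls -l /readme.txt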
I use boot2docker and want to build a simple docker image with the Dockerfile:
# Pull base image.
FROM elasticsearch
# Install Marvel plugin
RUN export ES_HOME=/usr/share/elasticsearch \
&& cd $ES_HOME \
&& bin/plugin -u file:///c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip -i elasticsearch/marvel/latest
The path /c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip is present and accessible on the machine where I run docker build.
The problem is that inside the build I get:
Failed: FileNotFoundException[/c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip (No such file or directory)].
I searched through the documentation and the only solution I see is to use ADD/COPY to first copy the file into the image and then run the command that uses it.
I don't know exactly how docker build works, but is there a way to build the image without copying the file first?
The docker build process runs inside a Docker container and has no access to the host filesystem. The only way to get files into the build environment is through the ADD or COPY mechanism (or by fetching them over the network using, e.g., curl or wget).
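A minimal sketch of that approach, assuming you first copy the zip into the build context (next to the Dockerfile):

# Pull base image.
FROM elasticsearch
# Copy the plugin zip from the build context into the image, then install it
COPY marvel-latest.zip /tmp/marvel-latest.zip
RUN export ES_HOME=/usr/share/elasticsearch \
    && cd $ES_HOME \
    && bin/plugin -u file:///tmp/marvel-latest.zip -i elasticsearch/marvel/latest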