I am building a Docker image from a Dockerfile using this command:
docker build -it /root/Documents/myDockerfiles/tomcat/.
The Dockerfile looks as follows:
[root@srv01 ~]# cat /root/Documents/myDockerfiles/tomcat/Dockerfile
FROM tomcat:8.0.32-jre8
MAINTAINER "John Doe <johndoe@dough.com>"
RUN apt-get update && apt-get install -y git
RUN git clone https://myusername:mypassword@mygiturl/mygroup/myproject.git
RUN cd ./myproject/ && ./gradlew war
So it basically clones an existing Git repo, cds into the cloned directory, and runs the Gradle wrapper.
The problem is that the Gradle wrapper seems to get no connection to the outside world:
Step 5 : RUN cd ./myproject && ./gradlew war
---> Running in d01b57b9f932
Downloading https://services.gradle.org/distributions/gradle-2.3-bin.zip
(here it's stuck; gradlew can't communicate with the outside)
I think it must be a firewall or port issue: when I do the same thing from my local laptop it isn't stuck, but when I execute it on my VM in an extranet cloud service (whose name I can't mention) it stops right there.
So my questions are:
1.) Why is it stuck there?
2.) What can I do to prevent it from being stuck there?
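If the VM can only reach the internet through an HTTP proxy, one thing to try (the proxy host and port below are placeholders, not known values) is passing the proxy into the build, and also to the Gradle wrapper's JVM, since Java does not honour the http_proxy environment variable on its own:
docker build \
  --build-arg http_proxy=http://proxy.example.com:3128 \
  --build-arg https_proxy=http://proxy.example.com:3128 \
  /root/Documents/myDockerfiles/tomcat/.
and, in the Dockerfile, before the ./gradlew war step:
ENV GRADLE_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128"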
I've started using CircleCI (I'm a newbie) and I want to build a Docker image and push it to Docker Hub inside a CircleCI job.
The problem is the ADD statement in the Dockerfile; the error says:
ADD failed: stat /var/lib/docker/tmp/docker-builder814373370/app/build: no such file or directory
docker build works fine locally. The problem seems to be the 'remote environment' created by CircleCI to execute Docker commands inside a job (when the job itself runs inside a container). I tried multiple things to share my folder with the remote environment, but nothing has worked. I also tried to execute my job inside a 'machine' executor to get rid of the 'remote environment', but that gives me more errors.
I think I could achieve it by storing my project online in another job and then adding the folder over HTTPS inside the Dockerfile, but I'm pretty sure there is a faster way; I just don't see it.
Here is my Dockerfile:
FROM ubuntu:20.04
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -yq && apt-get -yq install nodejs npm && npm install serve -g
ADD app/build/ /app
EXPOSE 5000
CMD serve -s /app -l 5000
And here is my CircleCI job:
working_directory: ~/project/
docker:
  - image: circleci/buildpack-deps:stretch
steps:
  - checkout
  - setup_remote_docker
  - run:
      name: Build Docker image
      command: sudo docker build . -t $IMAGE_NAME:latest
I achieved it by storing artifacts in another job and then fetching the folder over HTTPS with curl and wget in a RUN statement of the Dockerfile.
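For reference, a rough sketch of that Dockerfile change (the artifact URL is a placeholder for wherever the earlier job stored the build):
FROM ubuntu:20.04
RUN apt-get update -yq && apt-get -yq install curl nodejs npm && npm install serve -g
# fetch the prebuilt app over HTTPS instead of ADDing it from the build context
RUN curl -fsSL -o /tmp/build.tar.gz https://example.com/artifacts/app-build.tar.gz \
    && mkdir -p /app \
    && tar -xzf /tmp/build.tar.gz -C /app \
    && rm /tmp/build.tar.gz
EXPOSE 5000
CMD serve -s /app -l 5000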
I have the Dockerfile below to create a Node.js server. It clones a repo that contains the script that starts the Node servers, but that script needs to be executable.
The first time I start this Docker container, it works as expected. I know the chmod is doing something, because without it the script "does not exist" since it lacks the x permission.
However, if I then open a TTY into the container and inspect the file, the permissions have not changed. If the container is restarted, it gives an error that the file does not exist (x permission is missing). If I execute the chmod manually after the first run, I can restart the container without issues.
I have tried it as part of the combined RUN command (as below) and as a separate RUN command, both written normally and as RUN ["chmod", "755", "/opt/youtube4me/start_all.sh"], and both with 755 and +x. All of those ways work on the first start but do not permanently change the permissions.
I'm very new to Docker and I think I'm missing some basic concept that explains that chmod is run in some sort of context (maybe?).
FROM node:10.19.0-alpine
RUN apk update && \
apk add git && \
git clone https://github.com/0502-crew/youtube4me.git /opt/youtube4me && \
chmod 755 /opt/youtube4me/start_all.sh
COPY config/. /opt/youtube4me/
EXPOSE 45011
WORKDIR /opt/youtube4me
CMD ["/opt/youtube4me/start_all.sh"]
UPDATE 1
I run these commands with docker:
sudo docker build . -t 0502-crew/youtube4me
sudo docker container run -d --name youtube4me --network host 0502-crew/youtube4me
start_all.sh
#!/bin/sh
# Reset to master in case any force pushes were done
git reset --hard origin/master
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
The config folder from which I COPY just contains two .ts files that are in .gitignore and therefore need to be added manually:
config/api/src/config/Config.ts
config/client/src/config/Config.ts
UPDATE 2
If I replace the CMD line with CMD ["sh", "-c", "tail -f /dev/null"] then the x permissions remain on the sh file. It seems that executing the script removes the permissions...
As @dennisvandehoef pointed out, the fundamentals of my approach are off. Including a git clone in the Dockerfile will, for example, have the side effect that building the image a second time will not clone the repo again, even if files have changed.
However, I did find a solution to this approach so I might as well share it for the sake of knowledge sharing:
I develop on Windows, where I don't need to set permissions on a script, but Linux does need them. I found out you can still set the +x permission bit using Git on Windows and commit it. When Docker clones the repo, chmod is then no longer needed at all.
git update-index --chmod=+x start_all.sh
If you then ls the files with this command
git ls-files --stage
you will see that all other files are 100644 but the sh file is 100755 (so permission 755).
Next just commit it
git commit -m"Made sh executable"
I had to delete my Docker images afterwards and rebuild to ensure the git clone was performed again.
I'll rewrite my Dockerfile at a later point, but for now it works as intended.
It is still a mystery to me that the x bit disappears when the chmod is part of the Dockerfile.
You are building a Dockerfile for the project you are pulling from Git, and your start_all script includes a git reset --hard origin/master, which is bad practice since you can no longer create versions of your Docker images.
You also copy these two files to the wrong directory: with COPY config/. /opt/youtube4me/ you copy them directly to the root of your project, not to their intended locations in api/src/config/ and client/src/config/.
I realize that fixing these problems will not fix the chmod problem itself, but it might make it go away for your specific use case.
Also, if the files you are excluding from Git are secrets, you should not add them to your Docker image at build time either, since after you push the image they would be exposed again. It is good practice never to bake secrets in while building.
Did you try something like this:
Dockerfile
FROM node:10.19.0-alpine
RUN mkdir /opt/youtube4me
COPY . /opt/youtube4me/
WORKDIR /opt/youtube4me
RUN chmod 755 /opt/youtube4me/start_all.sh
EXPOSE 45011
CMD ["/opt/youtube4me/start_all.sh"]
.dockerignore
api/src/config/Config.ts
client/src/config/Config.ts
script
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
You then need to run docker run with two volumes to add the config to it. This is done with -v, for example: docker run -v $(pwd)/config/api/src/config:/opt/youtube4me/api/src/config -v $(pwd)/config/client/src/config:/opt/youtube4me/client/src/config 0502-crew/youtube4me
Nevertheless, I wonder why you are running both services in one Docker image. If this is just for local execution, you could also think about creating two separate Docker images and using docker-compose to spin everything up.
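A minimal docker-compose sketch of that split (the build contexts and the client port are assumptions; only 45011 comes from the original Dockerfile):
version: "3"
services:
  api:
    build: ./api
    ports:
      - "45011:45011"
  client:
    build: ./client
    ports:
      - "3000:3000"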
Addition 1:
I thought about this a bit more: it is also good practice to use environment variables to configure a Docker image instead of adding a config file to it. You might want to switch to that as well. I advise reading this article to get a better understanding of why, and of other ways to do this.
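For instance, instead of baking Config.ts into the image, the values it contains could be passed at run time and read from process.env in the app (the variable names below are hypothetical):
docker container run -d --name youtube4me --network host \
  -e API_PORT=45011 \
  -e API_HOST=localhost \
  0502-crew/youtube4me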
Addition 2:
I created a pull request on your code as an example of how it could look with docker-compose. It currently does not build because of the config options, but it will give you some more insight.
This is my Dockerfile:
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
rubygems build-essential ruby-dev \
&& rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["gulp", "start:dev"]
When I build the image, the npm install command executes with little output and really quickly. I actually build it through docker-compose, which does have a volume mounted, and I cannot see the node_modules folder being created on my host. When I launch a container from this image, I can see there is no node_modules folder. I then execute npm install manually and things start working: it takes 2-3 minutes to install all the packages and the node_modules folder is indeed created.
What is happening here? What am I doing wrong? Why doesn't npm install work at build time, but then it works at run time?
The npm install should have worked based on your Dockerfile. You can see the created files if you run the image without a mounted volume (DIRNAME: where your docker-compose.yml is located):
docker run --rm -it DIRNAME_node ls -ahl /usr/src/app
With docker build, all data is stored in the image. So, it's intended that you don't see any files created on your host.
If you mount a volume (in Linux generally, and likewise in a Docker container), it overlays the directory, so you can't see the node_modules created in the build step.
I suggest you do your tests based on the Docker image itself and don't mount the volume. Then you have an immutable Docker image which is better for deployment.
Also, copying in the source and then running npm install means that whenever the source code changes, the cached npm install layer becomes invalid.
Instead, separate the steps/caches like so:
COPY package*.json ./
RUN npm install
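Putting it together, a sketch of the reordered Dockerfile from the question would look like:
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
    rubygems build-essential ruby-dev \
    && rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
WORKDIR /usr/src/app
# copy only the package manifests first so the npm install layer stays cached
# until package.json (or package-lock.json) changes
COPY package*.json ./
RUN npm install
# now copy the rest of the source; edits here no longer invalidate the npm install layer
COPY . /usr/src/app
CMD ["gulp", "start:dev"]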
On Windows 10 I was having the same issue reported in this question, and after some research I found another question with the steps necessary to solve the problem.
In short, the main problem was that during the install wizard I had selected the "Windows containers" option.
To solve the issue:
1) Switch to Linux containers: in the taskbar, right-click the Docker icon and click the corresponding option.
2) Disable "Experimental Features" for the command line: open Docker settings and click on Command Line.
3) Disable the experimental setting in the configuration file: in Docker settings, click on Docker Engine and verify that experimental is set to false.
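In the Docker Engine pane, that means the JSON configuration should contain something like:
{
  "experimental": false
}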
The question where I found the solution was related to another problem I was facing when trying to build Docker images: Unspecified error (0x80004005) while running a Docker build. Both problems were caused by the same thing: when installing Docker for the first time I had selected the "Windows containers" option.
Hope it helps. Cheers
I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to build your code; it should be able to clone the code and build it. After the build, copy the output into a Docker volume, for example /opt/webapp.
Launch a build container using the build image from step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if an Appserver container fails or stops, the code is still safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't need to install the required tools every time you launch a container.
You can also build your code directly on the host machine, but if you want the code to be built in a fresh environment every time, this approach is better; reusing the same host machine for every build can lead to problems from stale source code, git clone errors, and so on.
EDIT:
You can append :ro (read-only) to the volume so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
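For example, the Appserver command with the read-only flag becomes:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name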
The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup step into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
 && git clone https://github.com/myproj \
 && cd myproj \
 && make \
 && make install \
 && cd .. \
 && apt-get remove -y tools && apt-get clean \
 && rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard Docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile that runs your build process and then runs the cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker-compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"
I have a Dockerfile for a custom Jenkins master like so:
FROM jenkins
MAINTAINER me
USER root
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
RUN apt-get update \
&& apt-get install -y sudo \
&& apt-get install -y vim \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
# COPY plugins.txt /usr/share/jenkins/plugins.txt
# RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountStartup=100 --handlerCountMax=300"
RUN /usr/local/bin/install-plugins.sh git:2.6.0
Everything works fine until the RUN /usr/local/bin/install-plugins.sh git:2.6.0 line. I get an error installing the plugins:
Creating initial locks...
Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
Downloading plugin: git-plugin from https://updates.jenkins.io/download/plugins/git-plugin/2.6.0/git-plugin.hpi
Failed to download plugin: git or git-plugin
WAR bundled plugins:
Installed plugins:
*:
Some plugins failed to download!
Not downloaded: git
The command '/bin/sh -c /usr/local/bin/install-plugins.sh git:2.6.0' returned a non-zero code: 1
Am I doing something wrong or is this an issue with Jenkins/Docker?
For those who are pulling the Jenkins image from Docker Hub, don't pull:
docker pull jenkins
or
docker pull jenkinsci/jenkins
Instead, pull the latest version using:
docker pull jenkins/jenkins
This is the latest one according to https://jenkins.io/blog/2018/12/10/the-official-Docker-image/
Your Dockerfile works for me, installs all plugins and builds the image successfully:
Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
> git depends on workflow-scm-step:1.14.2,mailer:1.17,matrix-project:1.7.1,ssh-credentials:1.12,parameterized-trigger:2.4;resolution:=optional,scm-api:1.2,token-macro:1.11;resolution:=optional,promoted-builds:2.27;resolution:=optional,credentials:2.1.4,git-client:1.21.0
Downloading plugin: workflow-scm-step from https://updates.jenkins.io/download/plugins/workflow-scm-step/latest/workflow-scm-step.hpi
...
Removing intermediate container 4f895c203944
Successfully built 31d58d1f586f
Try docker build --no-cache in case there's an issue with one of the layers in your image cache, or set up an automated build on Docker Hub and build it on Docker's servers.
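For example (the tag is just a placeholder):
docker build --no-cache -t my-jenkins .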
I recall having problems installing with that script myself. Instead, I used the following:
RUN install-plugins.sh \
disable-failed-job \
disk-usage \
greenballs \
...
And hopefully it doesn't make a difference for this, but I have my plugin install inside of the root portion of my Dockerfile, before dropping back to running commands as USER jenkins.
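A minimal sketch of that ordering, assuming the same base image and an abridged plugin list:
FROM jenkins
USER root
# install plugins while still running as root
RUN install-plugins.sh \
    disable-failed-job \
    disk-usage \
    greenballs
# drop back to the jenkins user for everything that follows
USER jenkins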
Dockerfile
FROM jenkins/jenkins:latest
ENV CURL_OPTIONS -sSfLk
ENV JENKINS_OPTS --httpPort=-1
The curl timeouts for downloading plugins were insufficient in some cases; that was fixed in image 2.19.1, and the timeout is now configurable using CURL_CONNECTION_TIMEOUT and other options.
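So, with image 2.19.1 or later, you could for instance raise the timeout in the Dockerfile (the value 60 is just an illustration):
FROM jenkins/jenkins:latest
ENV CURL_CONNECTION_TIMEOUT 60
RUN /usr/local/bin/install-plugins.sh git:2.6.0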
I had the same problem on OS X.
In my case the problem was caused by a bad DNS configuration (obtained via DHCP). When I changed the DNS to Google's DNS 8.8.8.8, it all worked perfectly.
I encountered error messages such as:
Failed to resolve host name "ftp.icm.edu.pl". Perhaps you need to configure HTTP proxy
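If you would rather pin the DNS at the Docker level than change the host's network settings, the daemon configuration (daemon.json, or the Docker preferences UI) accepts a dns entry; a minimal sketch:
{
  "dns": ["8.8.8.8"]
}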
I had a very similar issue, and the solution for me was to specify the proxy within the Dockerfile prior to the plugin install. Below is the relevant snippet of my Dockerfile:
FROM jenkins:latest
MAINTAINER Jose Estrada
USER root
ENV JAVA_OPTS="--handlerCountStartup=100 --handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war -Dhttps.proxyHost=proxy-wsa.esl.cisco.com -Dhttps.proxyPort=80"
ENV http_proxy <PROXY Settings>
ENV https_proxy <PROXY Settings>
RUN /usr/local/bin/install-plugins.sh cisco-spark-notifier:latest
This could be a DNS issue. Please restart the Docker daemon and try again (sudo service docker restart).