I'm trying to build the Jenkins Docker image locally using the Jenkins Dockerfile, and I keep getting this error:
Step 17/34 : COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY failed: stat /var/lib/docker/tmp/docker-builder028619870/init.groovy: no such file or directory
Here's the Dockerfile that I am using.
And this is the build command I am using (the Dockerfile is in the PWD):
docker build -t jenkins-k8s .
As you can see in this GitHub repo, there is a file named init.groovy.
And in the Dockerfile there is a statement like
COPY init.groovy /SOME/PATH/IN/THE/CONTAINER
When you want to use this Dockerfile, you have to download init.groovy as well, and there are more COPY statements like this in the Dockerfile.
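If you only need that one file, a minimal sketch (assuming init.groovy sits at the repo root at the commit used below; the URL follows GitHub's standard raw.githubusercontent.com layout):
curl -fsSLO https://raw.githubusercontent.com/jenkinsci/docker/587b2856cd225bb152c4abeeaaa24934c75aa460/init.groovy
docker build -t jenkins-k8s .
Each of the other COPY sources would need fetching the same way, which is why cloning the whole repo (recommended below) is simpler.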
I don't know whether you need such a big Dockerfile for your needs.
For our needs we just use the official parent image from Docker Hub:
FROM jenkins/jenkins:2.73.3
USER root
ENV TZ=Europe/Berlin
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
USER jenkins
You can change the version and the timezone to suit your needs.
I recommend cloning the git repository before building the Dockerfile:
git clone https://github.com/jenkinsci/docker
cd docker
git checkout 587b2856cd225bb152c4abeeaaa24934c75aa460 # Switch to the version you mentioned in the question.
docker build -t jenkins-k8s .
By doing so, you are guaranteed to have all the files that the Dockerfile requires.
Related
When attempting to build a Go-based Docker image, the Docker executor runs into the following error:
. . .go: $GIT_REPO@v1.9.11: reading $GIT_REPO/go.mod at revision v0.0.07: unknown revision v0.0.07
at the following RUN instruction from the Dockerfile used:
RUN go build . . .
where GIT_REPO represents the private repo's full path, including owner and name.
The Docker executor encounters this error with go1.13.x and higher, but not with go1.12.x.
The vendor directory contains all required packages, and the tags are confirmed to be present.
Proper SSH keys were even added to the private Go common repo, and
git clone . . .
commands succeed outside of Docker image builds, yet the build still encounters the same error above.
EDIT:
Verify your remote repo in bitbucket.org actually has the v0.0.7 tag you're trying to build against.
While a local build may work if the Git tag exists locally, a Docker build pulls from the remote source and will fail with an error like go.mod at revision v0.0.7: unknown revision v0.0.7 if the tag does not exist remotely.
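A quick way to check from your clone:
git ls-remote --tags origin v0.0.7
If this prints nothing, the tag exists only locally.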
To push your local tags to the remote repo:
git push --tags
For more granular tag operations, you can push a single tag instead:
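git push origin v0.0.7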
Docker builds by default can only access public repos. Since you need access to a private repo, you need to provide a read-only SSH key to the Docker build process (keys should never be checked into the repo!).
It is critically important, however, that you do this in a multi-stage build, so you do not include your SSH keys in the final image.
This blog post walks through all the steps. But to include a working example:
To build the docker image:
SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)" \
docker build -t "myapp:v0.0.1" --build-arg SSH_PRIVATE_KEY .
And the Dockerfile, using a private repo hosted on bitbucket.org:
FROM golang:1.14.6 AS build
WORKDIR /bld
COPY *.go go.mod go.sum ./
ARG SSH_PRIVATE_KEY
# ***NEVER*** DO THIS IN A SINGLE-STAGE DOCKER BUILD (see below)
RUN \
mkdir -p ~/.ssh && \
umask 0077 && \
echo "${SSH_PRIVATE_KEY}" > ~/.ssh/id_rsa && \
git config --global url."git@bitbucket.org:".insteadOf https://bitbucket.org/ && \
ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
RUN \
go get && \
CGO_ENABLED=0 go build -o app
# final stage of multi-stage: will appropriately *NOT* include SSH keys
FROM scratch
COPY --from=build \
/etc/ssl /etc/ssl
COPY --from=build \
/bld/app /app/myapp
CMD ["/app/myapp"]
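To run the resulting image (the tag comes from the build command above):
docker run --rm myapp:v0.0.1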
There are two problems to solve here:
1. How to allow Docker to access local SSH keys safely?
2. How to tell Go not to use the public registry to fetch private packages?
Short Answers
Since Docker v18.09, there's a built-in solution to handle SSH authentication during the build phase (more). It's also much easier and safer, compared to passing build arguments, and eliminates the need for a multi-stage Docker build.
Go has a GOPRIVATE environment variable to identify private packages. (more)
Long Answer
Step-by-step:
1. Make sure ssh-agent is set up and knows the SSH key
GitHub has a quick guide on this subject, explaining the process for different operating systems. See Generating a new SSH key and adding it to the ssh-agent.
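A minimal sketch of the usual commands (the key filename is an assumption; use whichever key has access to your private repos):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519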
2. Enable BuildKit for Docker
Without BuildKit, docker build won't recognize the --ssh option.
From the Docker reference:
The easiest way, from a fresh install of Docker, is to set the DOCKER_BUILDKIT=1 environment variable when invoking the docker build command, such as:
$ DOCKER_BUILDKIT=1 docker build .
To enable Docker BuildKit by default, set the feature in the daemon configuration file /etc/docker/daemon.json to true and restart the daemon:
{ "features": { "buildkit": true } }
Docker Desktop users can manage daemon configurations via Preferences > Docker Engine.
3. Update the Dockerfile
3.1. Make sure Git uses SSH instead of HTTPS
Go tends to fetch public packages via HTTPS. You can adjust this behavior by updating the Git configuration:
RUN git config --global url."git@github.com:".insteadOf https://github.com/
You should probably do this on your local machine as well.
3.2. Request SSH access where it's required
Each RUN command that needs SSH access should be mounted with type=ssh. For example:
RUN --mount=type=ssh git clone ...
3.3. Make sure Go knows about your private packages
Update the GOPRIVATE variable:
RUN go env -w GOPRIVATE="github.com/your-org/private-repo"
Putting it all together in the following sample Dockerfile:
FROM golang:1.16.3-alpine3.13
RUN apk update
RUN apk add git openssh
RUN mkdir /app
ADD . /app
WORKDIR /app
# You can replace github.com with any other Git host
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Make sure Git uses SSH to fetch packages, not HTTPS
RUN git config --global url."git@github.com:".insteadOf https://github.com/
# Make sure Go knows which packages are private.
RUN go env -w GOPRIVATE="github.com/your-org/private-repo"
# GOPRIVATE is a comma-separated list of glob patterns.
# You can use a wildcard to match every repo in an organization,
# e.g.: GOPRIVATE="github.com/your-org/*"
# Mount the build command with type `ssh`.
RUN --mount=type=ssh go get && go build -o main .
CMD ["/app/main"]
4. Build the image with the --ssh option
With BuildKit enabled by default:
$ docker build --ssh default -t my-app:latest .
I've started using CircleCI (I'm a newbie) and I want to build a Docker image and push it to Docker Hub inside a CircleCI job.
The problem is the ADD statement in the Dockerfile; the error says:
ADD failed: stat /var/lib/docker/tmp/docker-builder814373370/app/build: no such file or directory
docker build works fine locally. The problem seems to be the 'remote environment' CircleCI creates to execute Docker commands inside a job (when the job itself runs inside a container). I tried multiple things to share my folder with the remote environment, but nothing has worked. I also tried to execute the job inside a 'machine' executor to get rid of the remote environment, but that gives me more errors.
I think I could achieve it by storing my project online in another job and then adding the folder over HTTPS inside the Dockerfile, but I'm pretty sure there is a faster way; I just don't see it.
Here is my Dockerfile:
FROM ubuntu:20.04
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -yq && apt-get -yq install nodejs npm && npm install serve -g
ADD app/build/ /app
EXPOSE 5000
CMD serve -s /app -l 5000
And my CircleCI job:
working_directory: ~/project/
docker:
  - image: circleci/buildpack-deps:stretch
steps:
  - checkout
  - setup_remote_docker
  - run:
      name: Build Docker image
      command: sudo docker build . -t $IMAGE_NAME:latest
I achieved it by storing artifacts in another job and then fetching the folder over HTTPS with curl and wget in a RUN statement of the Dockerfile.
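A minimal sketch of that RUN statement (the artifact URL and archive name are placeholders; substitute the URL CircleCI assigns to your stored artifact, and make sure curl is installed in the image, e.g. on the apt-get line above):
RUN curl -fsSL -o /tmp/build.tar.gz "https://example.com/artifacts/build.tar.gz" \
    && mkdir -p /app \
    && tar -xzf /tmp/build.tar.gz -C /app --strip-components=1 \
    && rm /tmp/build.tar.gz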
I am trying to build a docker image.
I want to pull files from gitlab and copy them to another directory.
Here is the relevant part of my Dockerfile:
ENV RALPH_LOCAL_DIR="/var/local/ralph"
ENV RALPH_IMAGE_TMP_DIR="/tmp"
RUN mkdir -p $RALPH_LOCAL_DIR
RUN cd $RALPH_LOCAL_DIR
WORKDIR $RALPH_LOCAL_DIR
RUN git clone <OMITTED_FOR_THIS_POST>
WORKDIR project-ralph/ralph
COPY README.md $RALPH_IMAGE_TMP_DIR/
I get this error:
COPY failed: stat /var/lib/docker/tmp/docker-builder094767244/README.md: no such file or directory
So the copying fails. But I can list the file in the container with ls: if I add RUN ls -la README.md, it finds the file. So why can't it copy the file?
COPY copies the file from the host system. You need to run git clone there, before you start running Docker operations. (This also simplifies the case where you need credentials of some sort to run git clone: getting an ssh private key into an image to run git clone, without leaking it into the final image, is kind of tricky, and you don't really want to be doing kind of tricky things with ssh keys.)
FROM ...
WORKDIR /var/local/ralph
COPY README.md .
And on the host, before building:
git clone <OMITTED_FOR_THIS_POST>
docker build -t ... .
If the file is already in the image, as in your existing Dockerfile, then you need to RUN cp or RUN mv to put it somewhere else.
RUN cp README.md /tmp
I use Docker in development and in production for a Laravel project. I have slightly different Dockerfiles for development and production. For example, I mount the local directory into the Docker container in the development environment, so that I don't need to run docker build for every change in the code.
Since the mounted directory is only available when running the container, I can't put commands like "composer install" or "npm install" in the Dockerfile for development.
Currently I am maintaining two Dockerfiles. Is there any way to do this with a single Dockerfile, deciding which commands to run at docker build time by passing parameters?
What I am trying to achieve is
In the Dockerfile:
...
IF PROD THEN RUN composer install
...
During docker build
docker build [PROD] -t mytag .
As a best practice you should aim to use one Dockerfile, to avoid unexpected differences between environments. However, you may have a use case where you cannot do that.
The Dockerfile syntax is not rich enough to support such a scenario; however, you can use shell scripts to achieve it.
Create a shell script, called install.sh, that does something like:
#!/bin/sh
if [ "${ENV}" = "DEV" ]; then
    composer install
else
    npm install
fi
In your Dockerfile, declare the build arg, then add the script and execute it during the build (the ARG value is exposed to RUN commands as an environment variable):
...
ARG ENV
COPY install.sh install.sh
RUN chmod u+x install.sh && ./install.sh
...
When building, pass a build arg to specify the environment, for example:
docker build --build-arg "ENV=PROD" ...
UPDATE (2020):
Since this was written 3 years ago, many things have changed (including my opinion about this topic). My suggested way of doing this is using one Dockerfile and using scripts. Please see @yamenk's answer.
ORIGINAL:
You can use two different Dockerfiles.
# ./Dockerfile (non production)
FROM foo/bar
MAINTAINER ...
# ....
And a second one:
# ./Dockerfile.production
FROM foo/bar
MAINTAINER ...
RUN composer install
While calling the build command, you can tell which file it should use:
$> docker build -t mytag .
$> docker build -t mytag-production -f Dockerfile.production .
You can use build args directly without providing an additional sh script. It might look a little messy, but it works.
The Dockerfile would look like this:
FROM alpine
ARG mode
RUN if [ "x$mode" = "xdev" ] ; then echo "Development" ; else echo "Production" ; fi
And the commands to test it are:
docker build -t app --build-arg mode=dev .
docker build -t app --build-arg mode=prod .
I have tried several approaches to this, including using docker-compose, a multi-stage build, passing an argument through a file, and the approaches used in the other answers. My company needed a good way to do this, and after trying these, here is my opinion.
The best method is to pass the arg on the command line. You can also pass it through VS Code, by right-clicking the Dockerfile and choosing Build Image, using code like this:
ARG BuildMode
RUN echo $BuildMode
RUN if [ "$BuildMode" = "debug" ] ; then apt-get update \
&& apt-get install -y --no-install-recommends \
unzip \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg ; fi
And in the build stage of the Dockerfile:
ARG BuildMode
ENV Environment=${BuildMode:-debug}
RUN dotnet build "debugging.csproj" -c $Environment -o /app
FROM build AS publish
RUN dotnet publish "debugging.csproj" -c $Environment -o /app
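To pass the argument on the command line instead of through VS Code (the image tag is an assumption):
docker build --build-arg BuildMode=debug -t myapp:debug .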
The best way to do it is with a .env file in your project.
You can define two variables, CONTEXTDIRECTORY and DOCKERFILENAME,
and create Dockerfile-dev and Dockerfile-prod.
This is an example of using it:
Docker Compose file:
services:
  serviceA:
    build:
      context: ${CONTEXTDIRECTORY:-./prod_context}
      dockerfile: ${DOCKERFILENAME:-./nginx/Dockerfile-prod}
.env file in the root of the project:
CONTEXTDIRECTORY=./
DOCKERFILENAME=Dockerfile-dev
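With the .env file present, Compose picks up the dev values; remove or rename the file and the prod defaults apply:
docker-compose build serviceA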
Be careful with the paths: a relative context is resolved from the directory containing the Compose file, and the dockerfile path is resolved relative to that context, not to the Compose directory.
The default values use prod, because if you forget to set the env variables, you can't accidentally build a dev version in production.
A solution with different Dockerfiles is more convenient than scripts; it's easier to change and maintain.
I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to build your code; it should clone the code, build it, and copy the build output into a Docker volume, for example a volume at /opt/webapp.
Launch a build container from the image in step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the build container's shared volume, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to Appserver, you can delete BuildContainer, because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if an Appserver container fails or stops, the code is safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't have to install the required tools each time you launch a container.
You can also build your code on the host machine, but if you want the code built in a fresh environment every time, this approach is good; reusing the same host machine for every build can cause problems with stale source code, git clone errors, and so on.
EDIT:
You can append :ro (read-only) to the volume mount so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
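Applied to the nginx command above, that looks like:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name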
The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup steps into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
    && git clone https://github.com/myproj \
    && cd myproj \
    && make \
    && make install \
    && cd .. \
    && apt-get remove -y tools && apt-get clean \
    && rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard Docker tools are not designed to do this, but you can use a third-party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile that runs your build process and then runs your cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your Docker Compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"