I have a simple Dockerfile based on eclipse-temurin:11.0.15_10-jre-focal which is just creating a user and copying a jar and some config files into the user's home directory:
FROM eclipse-temurin:11.0.15_10-jre-focal
RUN apt-get update -y && apt-get install -y vim-tiny iputils-ping && rm -rf /var/lib/apt/lists/*
ARG APP_USR=bulkload
RUN useradd --user-group --create-home --base-dir /opt --shell /bin/bash $APP_USR
USER $APP_USR
COPY --chown=$APP_USR:$APP_USR target/${project.artifactId}-${project.version}-all.jar src/test/resources/bulk-load-config.json /opt/$APP_USR/
COPY --chown=$APP_USR:$APP_USR src/test/resources/*.properties /opt/$APP_USR/config/
COPY --chown=$APP_USR:$APP_USR content /opt/$APP_USR/content/
CMD ["bash"]
When I build it on Windows ("DockerVersion": "20.10.14"), everything is as expected. When I build it using Azure DevOps pipeline ("DockerVersion": "20.10.11" on Linux), there are anomalies:
User's home directory is owned by root
All the files and directories copied via the COPY command are also owned by root (in spite of the --chown switch)
I don't understand this behavior. The useradd command is executed inside the container, so it shouldn't matter whether the build is done on Windows or Linux. Furthermore, I would expect the COPY commands to fail if the --chown couldn't be applied, but they didn't. I suppose I must be doing something wrong, but what?
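One way to narrow this down is to inspect ownership in the image each environment produces; a minimal check, assuming the image is tagged bulkload-app (a placeholder name):
docker run --rm bulkload-app ls -ln /opt/bulkload
ls -ln prints numeric IDs, so anything owned by root shows uid/gid 0 rather than the uid/gid that useradd assigned, which makes the two builds easy to compare.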
I've been trying to get this running for many, MANY hours. I've been scouring the Docker docs, GitHub repos and other resources, but I can't get it working for some reason.
My dockerfile:
FROM mattrayner/lamp:latest-1804
WORKDIR /app
RUN wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
    yes A | unzip /tmp/lwt.zip && \
    rm /tmp/lwt.zip && \
    mv connect_xampp.inc.php connect.inc.php
EXPOSE 80
CMD ["/run.sh"]
It builds normally without any errors, but when I run the image nothing appears in the /app directory and I just get the basic Welcome to LAMP page in my browser.
Though, if I run docker run -p "80:80" -it -v ${PWD}/app:/app mattrayner/lamp:latest-1804 /bin/bash, then cd /app and copy-paste
wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
    yes A | unzip /tmp/lwt.zip && \
    rm /tmp/lwt.zip && \
    mv connect_xampp.inc.php connect.inc.php
it still doesn't work, BUT if I exit and run the same docker run command again, it works.
The Docker LAMP instructions also state to do exactly as I have done:
FROM mattrayner/lamp:latest-1804
# Your custom commands
CMD ["/run.sh"]
As I followed these instructions, I thought everything would work nicely.
What's the catch here? It probably has something to do with the intermediate containers, but I can't comprehend it (I'm not a devops engineer or developer by trade, just a hobbyist).
That happens because you're doing this:
You download a file (wget ...) into the /app dir of your Docker image at build time.
After that, when the container starts, you overwrite that /app dir by mounting the volume with the content of your $PWD/app.
If you install something into a directory during docker build, don't mount a volume onto that same path.
If you need something at the same path, you can mount individual files, but not the whole dir, or the mount will override what you had in your Docker image when the container is created.
Alternatively, you can run the wget somewhere else, or download the file into your ${PWD}/app on the host and then mount it.
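For example, a single-file bind mount keeps the image's /app intact while still letting you override one file (the my-lwt tag is a placeholder for however you tagged your build):
docker run -p "80:80" -v ${PWD}/app/connect.inc.php:/app/connect.inc.php my-lwt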
So, I have built a beat with mage GenerateCustomBeat and it runs okay, except now I'm trying to containerize it. When I run the image I built, it complains that no customBeat.yml was found.
I have verified that the file exists in the folder by adding a RUN ls . line at the end of my Dockerfile.
The beat name is coletorbeat, so this name appears multiple times inside the Dockerfile.
Upon executing sudo docker run coletorbeat, I get the following error message:
Exiting: error loading config file: stat coletorbeat.yml: no such file or directory
If there were a way to specify the coletorbeat.yml file location when I execute the beat (in CMD), I think that would solve it, but I have not found how to do so yet.
I'll post the Dockerfile below. I know the code inside the beater folder works fine; I'm guessing I'm making some mistake in the containerization.
Dockerfile:
FROM ubuntu
MAINTAINER myNameHere
ARG ip="333.333.333.333"
ARG porta="4343"
ARG dataInicio="2020-01-07"
ARG dataFim="2020-01-07"
ARG tipoEquipamento="type"
ARG versao="2"
ARG nivel="0"
ARG instituicao="RJ"
ADD . .
RUN mkdir /etc/coletorbeat
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
RUN apt-get update && \
apt-get install -y wget git
RUN wget https://storage.googleapis.com/golang/go1.14.4.linux-amd64.tar.gz
RUN tar -zxvf go1.14.*.linux-amd64.tar.gz -C /usr/local
RUN mkdir /go
ENV GOROOT /usr/local/go
ENV GOPATH $HOME/go
ENV PATH $PATH:$GOROOT/bin:$GOPATH/bin
RUN echo $PATH
RUN go get -u -d github.com/magefile/mage
RUN cd $GOPATH/src/github.com/magefile/mage && \
go run bootstrap.go
RUN apt-get install -y python3-venv
RUN apt-get install -y build-essential
RUN cd /coletorbeat && chmod go-w coletorbeat.yml && ./coletorbeat setup
RUN cd /coletorbeat && ./coletorbeat test config -c /coletorbeat/coletorbeat.yml && ls .
CMD ./coletorbeat/coletorbeat -E 'coletorbeat.ip=${ip}'
You are adding the yml file into the /etc dir:
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
But then you run your commands in /coletorbeat without ever referencing the copy in /etc.
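If you want to use the copy under /etc instead, one option is to point the beat at it explicitly; a sketch, assuming libbeat's standard -c flag is given an absolute path here:
RUN cd /coletorbeat && ./coletorbeat test config -c /etc/coletorbeat/coletorbeat.yml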
On the CMD line in the Dockerfile, I added the command cd /mybeatfolder first, and it worked. Libbeat searches the current working directory for the config file by default, so changing to the right directory before executing my beat solved it.
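Applied to the Dockerfile above, the CMD would look something like this (shell form, so the cd takes effect before the beat starts):
CMD cd /coletorbeat && ./coletorbeat -E 'coletorbeat.ip=${ip}'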
I have a Go app. Some of its dependencies are in a private Github repo and another part of dependencies are local packages in my app folder. The app compiles and works on my computer without a problem (when I simply compile it without docker). I am using the below Dockerfile.
FROM ubuntu as intermediate
# install git
RUN apt-get update
RUN apt-get install -y git
RUN mkdir /root/.ssh/
COPY github_rsa.ppk /root/.ssh/github_rsa.ppk
RUN chmod 700 /root/.ssh/github_rsa.ppk
RUN eval $(ssh-agent) && \
ssh-add /root/.ssh/github_rsa.ppk && \
ssh-keyscan -H github.com >> /etc/ssh/ssh_known_hosts && \
git clone git@github.com:myusername/shared.git
FROM golang:latest
ENV GOPATH=/go
RUN echo $GOPATH
ADD . /go/src/SCMicroServer
WORKDIR /go/src/SCMicroServer
COPY --from=intermediate /shared /go/src/github.com/myusername/shared
RUN go get /go/src/SCMicroServer
RUN go install SCMicroServer
ENTRYPOINT /go/src/SCMicroServer
EXPOSE 8080
The first build stage, which handles the Git clone, works fine. The build runs until this line in the second stage: RUN go get /go/src/SCMicroServer. I receive this error at that step:
package SCMicroServer/controllers/package1: unrecognized import path "SCMicroServer/controllers/package1" (import path does not begin with hostname)
The command '/bin/sh -c go get /go/src/SCMicroServer' returned a non-zero code: 1
"SCMicroServer/controllers/package1" is one of the local packages in my app folder (or its subfolders) and I have many more in my local folder. I am setting GOPATH env variable in my Dockerfile, so I am not sure what I am missing.
I found the answer, it was not really Dockerfile problem, I referenced my package 2 times in 2 different way in my main file:
package1 "SCMicroServer/controllers/package1"
"SCMicroServer/controllers/package1"
After I removed the second one, I stopped receiving the error.
I did a basic search in the community and could not find a suitable answer, so I am asking here. Sorry if it was asked earlier.
Basically, I am working on a project where we change the code at regular intervals, so we need to rebuild the Docker image every time, and installing the dependencies from requirements.txt from scratch takes around 10 minutes on every build.
How can I make changes directly to a Docker image, and how do I configure the entrypoint (in the Dockerfile) so that it reflects changes in a pre-built Docker image?
You don't edit an image once it's been built. You always run docker build from the start; it always runs in a clean environment.
The flip side of this is that Docker caches built images. If you had image 01234567, ran RUN pip install -r requirements.txt, and got image 2468ace0 out, then the next time you run docker build it will see the same source image and the same command, skip doing the work, and jump directly to the output image. A COPY or ADD of files that have changed invalidates the cache for that step and all later steps.
So the standard pattern is
# arbitrary choice of language
FROM node:10
WORKDIR /app
# Copy in _only_ the requirements and package lock files
COPY package.json yarn.lock ./
# Install dependencies (once)
RUN yarn install
# Copy in the rest of the application and build it
COPY src/ src/
RUN yarn build
# Standard application metadata
EXPOSE 3000
CMD ["yarn", "start"]
If you only change something in your src tree, docker build will skip ahead to the COPY src/ src/ step, since the package.json and yarn.lock files haven't changed.
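A quick way to watch the cache at work (the myapp tag is arbitrary, and src/index.js is a hypothetical source file):
docker build -t myapp .   # first build: every step runs
docker build -t myapp .   # rebuild with no changes: every step comes from cache
touch src/index.js
docker build -t myapp .   # yarn install is still cached; only COPY src/ and later steps rerun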
In my case I was facing the same thing: after every minor change I was building the image again and again.
My old Dockerfile:
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
COPY . .
So what I did was create a base image first, like this (I left out the last line and did not copy my code):
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
and then built this image using
docker build -t my_base_img:latest -f base_dockerfile .
then the final Dockerfile
FROM my_base_img:latest
WORKDIR /app
COPY . .
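The application image is then built from this Dockerfile in the usual way (the tag is arbitrary):
docker build -t my_app:latest .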
And since, building FROM this image, I was not able to bring the container up because of issues with my copied Python code, I could edit the code inside the container to fix those issues; by this means I avoided the task of building images again and again.
When my code was fixed, I copied the changes from the container back to my code base and then, finally, created the final image.
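To pull a fix made inside a running container back into the code base, docker cp copies in either direction; the container id and paths here are placeholders:
docker cp <container-id>:/app/main.py ./main.py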
There are 4 steps:
Start the image you want to edit (e.g. docker run ...)
Modify the running container by shelling into it with docker exec -it <container-id> bash (you can get the container id with docker ps)
Make any modifications (install new things, make a directory or file)
In a new terminal tab/window run docker commit c7e6409a22bf my-new-image (substituting in the container id of the container you want to save)
An example
# Run an existing image
docker run -dt existing-image
# See that it's running
docker ps
# CONTAINER ID IMAGE COMMAND CREATED STATUS
# c7e6409a22bf existing-image "R" 6 minutes ago Up 6 minutes
# Shell into it
docker exec -it c7e6409a22bf bash
# Make a new directory for demonstration purposes
# (note that this is inside the existing image)
mkdir NEWDIRECTORY
# Open another terminal tab/window, and save the running container you modified
docker commit c7e6409a22bf my-new-image
# Inspect to ensure it saved correctly
docker image ls
# REPOSITORY TAG IMAGE ID CREATED SIZE
# existing-image latest a7dde5d84fe5 7 minutes ago 888MB
# my-new-image latest d57fd15d5a95 2 minutes ago 888MB
I'm trying to bundle my Jekyll blog as a docker container.
I found this Dockerfile which seems to suit my use case, but I wanted to be more hands-on, so I copied it directly into my repo:
FROM ruby:latest
MAINTAINER Peter Etelej <peter@etelej.com>
RUN apt-get -qq update && \
apt-get -qq install nodejs -y && \
gem install -q bundler
RUN mkdir -p /etc/jekyll && \
printf 'source "https://rubygems.org"\ngem "github-pages"\ngem "execjs"\ngem "rouge"' > /etc/jekyll/Gemfile && \
printf "\nBuilding required Ruby gems. Please wait..." && \
bundle install --gemfile /etc/jekyll/Gemfile --clean --quiet
RUN apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV BUNDLE_GEMFILE /etc/jekyll/Gemfile
EXPOSE 4000
ENTRYPOINT ["bundle", "exec"]
CMD ["jekyll", "serve","--host=0.0.0.0"]
When I run it I get an error
jekyll 3.4.3 | Error: No such file or directory @ rb_sysopen - /etc/modules-load.d/modules.conf
The host system has this file, but my assumption was that the container didn't have access to it, so I tried to add it in the Dockerfile:
ADD /etc/modules-load.d/modules.conf /etc/modules-load.d/modules.conf
I then run docker build and get the error:
lstat etc/modules-load.d/: no such file or directory
I don't understand why the container is looking for this file in the first place, but I'm even more confused by the fact that I can't add a file which is clearly there.
Docker builds run on the Docker host, not necessarily the client where you run the command, so all the files needed for the build are sent to the host in the build context. That context is most often the current directory, ., that you pass at the end of the docker build -t $image_name . command.
Everything you try to include in the image with a COPY or ADD is resolved relative to that build context, not the filesystem of your client or the host machine. So if you need a modules.conf, you'll first need to copy it into the directory with the Dockerfile, and then COPY the file from there.
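Concretely, something like this, run from the directory containing the Dockerfile:
cp /etc/modules-load.d/modules.conf .
and then in the Dockerfile:
COPY modules.conf /etc/modules-load.d/modules.conf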
As for why jekyll is looking for the file, I'm not familiar with jekyll, but it doesn't look promising for something running inside a container: kernel modules are specific to the host kernel, and containers are designed to be moved to different hosts with potentially different kernels.