ARG JENKINS_VERSION=lts-jdk11
FROM jenkins/jenkins:${JENKINS_VERSION}
COPY docker_files/jenkins-log.properties /etc/jenkins-log.properties
USER root
RUN apt-get update && apt-get install -y \
ca-certificates curl gnupg2 \
software-properties-common && \
mkdir -p /data1/jenkins /var/cache/jenkins/war && chown -R jenkins:jenkins /data1/jenkins /var/cache/jenkins
USER jenkins
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false -Duser.home=/data1/jenkins -Djenkins.model.Jenkins.slaveAgentPort=50000 -Djava.util.logging.config.file=/etc/jenkins-log.properties" \
JENKINS_HOME="/data1/jenkins" \
JENKINS_OPTS="--webroot=/var/cache/jenkins/war --httpPort=8081" \
JENKINS_SLAVE_AGENT_PORT="50000"
EXPOSE 8081
RUN jenkins-plugin-cli --latest false --plugins " \
ansicolor:1.0.1 \
ant:1.11 \
antisamy-markup-formatter:2.1 \
"
It results in the following error:
Unable to resolve plugin URL https://updates.jenkins.io/latest/.hpi, or download plugin to file: status code: 404, reason phrase: Not Found
Downloading from mirrors failed, falling back to https://archives.jenkins.io/
Unable to resolve plugin URL https://archives.jenkins.io/plugins/latest/.hpi, or download plugin to file: status code: 404, reason phrase: Not Found
Any help with this?
The jenkins-plugin-cli command is very strict about whitespace. I ran into the same issue when trying to use newlines to keep the plugin list readable. This works in my Dockerfiles:
...
RUN jenkins-plugin-cli --plugins \
"\
active-directory:2.29 \
antisamy-markup-formatter:155.v795fb_8702324 \
ws-cleanup:0.44 \
"
...
Alternatively, a text file can be used, similar to how the old plugin installer script worked:
FROM jenkins/jenkins:lts-jdk11
COPY --chown=jenkins:jenkins plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli -f /usr/share/jenkins/ref/plugins.txt
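For reference, plugins.txt is just one plugin per line in plugin-id:version form; the entries below reuse the plugins from the question, and the versions are only examples:
ansicolor:1.0.1
ant:1.11
antisamy-markup-formatter:2.1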
https://github.com/jenkinsci/docker/#preinstalling-plugins
Related
We were able to successfully add the deployment to an Azure DevOps agent pool and could execute pipelines on it by following the [Microsoft docs][1].
I used the Dockerfile below to install the software inside the container:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
libssl1.0 \
maven \
python \
python3 \
docker \
&& rm -rf /var/lib/apt/lists/*
RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
&& rm -rf /var/lib/apt/lists/*
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./vstsagent/ .
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
But now I am confused about the points below:
How do I set the Maven and Java home directories, along with Maven's custom settings.xml and custom Node and Gradle properties files, inside these AKS-based agents?
Even though I included Docker in the software to install within the container, it seems Docker is not getting installed. So how can we run Docker-related tasks in our pipelines, like the "build image" and "push image" tasks, within these AKS-based build agents?
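For the first point, here is a rough sketch of one common approach inside the same Dockerfile; the package names and paths are assumptions to adapt, not something from the original setup:
# Sketch only: JDK and Maven from apt (assumed package names; versions will vary)
RUN apt-get update && apt-get install -y --no-install-recommends \
openjdk-11-jdk \
maven \
&& rm -rf /var/lib/apt/lists/*
# Standard install locations for the Ubuntu 18.04 packages above
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
ENV MAVEN_HOME=/usr/share/maven
# Maven reads ~/.m2/settings.xml; Gradle reads ~/.gradle/gradle.properties
COPY ./settings.xml /root/.m2/settings.xml
COPY ./gradle.properties /root/.gradle/gradle.properties
For the second point, note that on Ubuntu the package named docker is not the Docker engine (the engine package is docker.io), and even with the CLI installed the agent container still needs access to a Docker daemon, for example a mounted /var/run/docker.sock, before "build image" and "push image" tasks can work.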
I have the following Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install curl \
iputils-ping \
apt-transport-https \
tar \
jq \
python && \
curl -sL https://deb.nodesource.com/setup_14.x | bash && \
apt-get install nodejs -yq && \
apt-get clean && apt-get autoremove
RUN npm install -g npm@latest
ARG GH_RUNNER_VERSION="2.283.3"
WORKDIR /actions-runner
RUN curl -o actions.tar.gz --location "https://github.com/actions/runner/releases/download/v${GH_RUNNER_VERSION}/actions-runner-linux-x64-${GH_RUNNER_VERSION}.tar.gz" && \
tar -zxf actions.tar.gz && \
rm -f actions.tar.gz && \
./bin/installdependencies.sh
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/actions-runner/entrypoint.sh"]
and the following step in the CI:
- name: Create DB
run: npm run dc-up
The output of that step is: npm: command not found.
I added the path using the method the docs suggest, by adding a new step:
- name: add npm to path
run: echo "/usr/bin/npm" >> $GITHUB_PATH
I've checked that node is on the path by printing the path in a separate step inside the CI, and the output is:
Run echo "$PATH"
/usr/bin/npm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
I know 100% that npm is installed in the Docker image, because when I run it locally and interact with it without the ENTRYPOINT, I'm able to print the npm version, and I checked that it is indeed in /usr/bin/npm. But still, inside the CI steps it can't find npm for some reason.
And it's not only npm: it's the same for every installation I've tried; I just picked npm as a showcase.
Does anyone have any idea what can be done?
I have a project running with Meteor and Node.js locally. The Meteor version is 2.4 and the Node.js version is 8.9.4; I have a .meteor/release file pinning the Meteor version to 2.2 so that Meteor and Node can work together.
(base) xxx$ meteor --version
Meteor 2.4
(base) xxx$ node -v
v8.9.4
It seems fine, so I deployed this project in a Docker container to the server. The first line I wrote in the Dockerfile is:
# node version dependent on meteor version
FROM node:8.9.4
After a successful deployment, the docker logs showed this error:
Waiting for mongodb server to start - sleeping
warn: --minUptime not set. Defaulting to: 1000ms
warn: --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
info: Forever processing file: /app/bundle/main.js
error: undefined
data: /app/bundle/main.js:34 - Meteor requires Node v12.0.0 or later.
data: /app/bundle/main.js:34 - error: Forever detected script exited with code: 1
I checked inside Docker; the Node version is 8.9.4:
(base) [xxx]$ docker exec -it -u root tblbuilder_meteor_1 /bin/bash -c 'node --version'
v8.9.4
So I assume it is the Meteor version. But first, I don't know how to check the Meteor version inside Docker. And second, why does this happen? I am sure the release file was up to date when the project folder was pushed.
With some great help, I now mostly understand it: locally I use Meteor 2.2, and the Dockerfile uses Node.js 8.9.4 alongside Meteor 2.2. So what's left is to modify the Dockerfile and change it from Node 8.9.4 to Node 12. Below is my Dockerfile; I tried to change it to node:12.22.2, but it kept giving me errors, and I spent a day trying to solve them. Currently I am stuck at the install-r-base part.
Is there a guide for changing Node 8 to Node 12?
# node version dependent on meteor version
FROM node:8.9.4
# I am going to use 12.22.2
#FROM node:12.22.2
# (even if copied as root you still need to change)
# https://github.com/moby/moby/issues/6119
COPY ./compose/meteor/entrypoint.sh /entrypoint.sh
COPY ./compose/meteor/run_app.sh /run_app.sh
COPY ./compose/meteor/r-cran.pgp /r-cran.pgp
COPY ./settings/settings.json /app/settings.json
COPY ./requirements.txt /requirements.txt
COPY ./r_requirements.sh /r_requirements.sh
# set locale to utf8: https://github.com/docker-library/docs/pull/703/files
# added [check-valid-until=no] & Acquire::Check-Valid-Until "false"; https://unix.stackexchange.com/questions/508724/failed-to-fetch-jessie-backports-repository
# Needs work to bring it up-to-date
RUN \
echo "deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/jessie-backports.list && \
sed -i '/deb http:\/\/deb.debian.org\/debian jessie-updates main/d' /etc/apt/sources.list && \
apt-get -o Acquire::Check-Valid-Until=false update && \
\
sh -c 'echo "deb [check-valid-until=no] http://cran.rstudio.com/bin/linux/debian jessie-cran35/" >> /etc/apt/sources.list' && \
apt-key add /r-cran.pgp && \
\
apt-get -o Acquire::Check-Valid-Until=false update && \
apt-get -o Acquire::Check-Valid-Until=false install -y locales && \
\
localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 && \
export LC_ALL=en_US.UTF-8 && \
export LANG=en_US.UTF-8 && \
export LANGUAGE=en_US.UTF-8
ENV LANG en_US.utf8
ENV LC_ALL en_US.UTF-8
# add rstudio debian install for R (requires version >3.3)
# https://cran.r-project.org/bin/linux/debian/
# install R from apt-get
# install python 3.6 from source :/
RUN apt install -y --force-yes r-base-core r-recommended r-base-html r-base-core
RUN apt-get install -y --force-yes wget bsdtar r-base r-base-dev && \
apt-get clean && \
\
wget https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tgz && \
tar zxf Python-3.6.5.tgz && \
cd ./Python-3.6.5 && \
./configure && \
make && \
make altinstall && \
cd .. && \
rm Python-3.6.5.tgz && \
rm -rf ./Python-3.6.5
# create paths and users
# change executable permissions
RUN npm install forever -g && \
\
mkdir -p /app/production && \
mkdir -p /app/logs && \
mkdir -p /app/crons && \
\
groupadd -r app && \
useradd -m -d /home/app -g app app && \
\
chmod +x /entrypoint.sh && \
chmod +x /run_app.sh && \
chmod +x /r_requirements.sh && \
chmod +x /requirements.txt && \
\
chown -R app:app /app && \
chown app:app /entrypoint.sh && \
chown app:app /run_app.sh && \
chown app:app /r_requirements.sh &&\
chown app:app /requirements.txt
USER app
# 1) install R packages
# 2) install python packages
RUN export "R_LIBS=/home/app/R_libs" && \
mkdir /home/app/R_libs && \
bash /r_requirements.sh && \
\
/usr/local/bin/pip3.6 install --user -r /requirements.txt
USER root
COPY ./compose/meteor/src/src.tar.gz /app/src.tar.gz
COPY ./src/private /app/src/private
RUN chown -R app:app /app
USER app
RUN cd /app && \
bsdtar -xzvf src.tar.gz && \
npm install --prefix /app/bundle/programs/server --production
ENTRYPOINT ["/entrypoint.sh"]
There are several misunderstandings in your tests:
Your Meteor version is 2.2, because that is the version inside your project;
To see the Node version of this Meteor project, see the answers that several people already gave you in Meteor Docker Node.js version is not match;
Usually we build the Meteor app, that is, transform it into a Node.js bundle; then, inside Docker, you don't need Meteor at all.
We need to see your Dockerfile to understand what process you use to build the Docker image.
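For context, here is a minimal sketch of that flow, assuming the standard meteor build output layout; the ../output path and the node:12.22.2 tag are placeholders (the error above says Meteor needs Node 12 or later):
# Build locally, outside Docker; --directory produces an unpacked bundle
meteor build ../output --directory --server-only
# Dockerfile for the bundle: plain Node 12, no Meteor inside the image
FROM node:12.22.2
COPY ./output/bundle /app/bundle
WORKDIR /app/bundle/programs/server
RUN npm install --production
WORKDIR /app/bundle
# MONGO_URL, ROOT_URL and PORT must be supplied at run time
CMD ["node", "main.js"]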
When I use curl --head to test my website, it returns the server information.
I followed this tutorial to hide the nginx server header.
But when I run the command yum install nginx-module-security-headers, it returns yum: not found.
I also tried apk add nginx-module-security-headers, and it shows that the package is missing.
I am using nginx:1.17.6-alpine as my base Docker image. Does anyone know how to hide the server header on this Alpine image?
I think I have an easier solution here: https://gist.github.com/hermanbanken/96f0ff298c162a522ddbba44cad31081. Big thanks to hermanbanken on GitHub for sharing this gist.
The idea is to create a multi-stage build with the nginx Alpine image as a base for compiling the module. This turns into the following Dockerfile:
ARG VERSION=alpine
FROM nginx:${VERSION} as builder
ENV MORE_HEADERS_VERSION=0.33
ENV MORE_HEADERS_GITREPO=openresty/headers-more-nginx-module
# Download sources
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz && \
wget "https://github.com/${MORE_HEADERS_GITREPO}/archive/v${MORE_HEADERS_VERSION}.tar.gz" -O extra_module.tar.gz
# For latest build deps, see https://github.com/nginxinc/docker-nginx/blob/master/mainline/alpine/Dockerfile
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl-dev \
pcre-dev \
zlib-dev \
linux-headers \
libxslt-dev \
gd-dev \
geoip-dev \
perl-dev \
libedit-dev \
mercurial \
bash \
alpine-sdk \
findutils
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN rm -rf /usr/src/nginx /usr/src/extra_module && mkdir -p /usr/src/nginx /usr/src/extra_module && \
tar -zxC /usr/src/nginx -f nginx.tar.gz && \
tar -xzC /usr/src/extra_module -f extra_module.tar.gz
WORKDIR /usr/src/nginx/nginx-${NGINX_VERSION}
# Reuse same cli arguments as the nginx:alpine image used to build
RUN CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') && \
sh -c "./configure --with-compat $CONFARGS --add-dynamic-module=/usr/src/extra_module/*" && make modules
# Production container starts here
FROM nginx:${VERSION}
COPY --from=builder /usr/src/nginx/nginx-${NGINX_VERSION}/objs/*_module.so /etc/nginx/modules/
.... skipped inserting config files and stuff ...
# Validate the config
RUN nginx -t
The Alpine repo probably doesn't have the ngx_security_headers module, but the mentioned tutorial also provides the option of using the Headers More module. You should be able to install this module in your Alpine distro using the command:
apk add nginx-mod-http-headers-more
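Once installed, and assuming the packaged module is binary-compatible with the nginx in your image (see the next answer for that caveat), it still has to be loaded and used in nginx.conf, roughly like this; the module path is where Alpine's package is expected to install it:
# Top of nginx.conf, outside the http block
load_module /usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so;
http {
    # Drop the Server header entirely (a headers-more directive)
    more_clear_headers Server;
}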
Hope it helps.
Source
I found an alternate solution. The reason it shows "binary not compatible" is that I had an nginx pre-installed under the target root, and it is not compatible with the headers-more module I was installing. That means I cannot simply install the third-party module from the Alpine package.
So I prepared a clean Alpine OS and followed the GitHub repository to build nginx from source with the additional module. The build output lands under the prefix path you specify.
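For reference, a rough sketch of such a source build; the versions and the /opt/nginx prefix are examples, and it assumes build dependencies such as gcc, make, pcre-dev, zlib-dev and openssl-dev are already installed:
wget http://nginx.org/download/nginx-1.17.6.tar.gz && tar -xzf nginx-1.17.6.tar.gz
wget https://github.com/openresty/headers-more-nginx-module/archive/v0.33.tar.gz && tar -xzf v0.33.tar.gz
cd nginx-1.17.6
# Compile the module statically; the result lands under the --prefix path
./configure --prefix=/opt/nginx --add-module=../headers-more-nginx-module-0.33
make && make install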
I have been trying for a while to copy files via ssh from a remote server (not GitHub) into the Docker image I want to build, but I can't connect to the host. Here is the Dockerfile up until the critical point:
FROM r-base:latest
### Install libs
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
openssh-server \
openssh-client \
libcurl4-gnutls-dev \
libcairo2-dev \
libxt-dev \
xtail \
wget \
libssl-dev \
libxml2 \
libxml2-dev \
libv8-dev \
curl \
gnupg \
git
COPY ./setup setup
RUN mv setup/.ssh ~/.ssh
RUN touch ~/.ssh/known_hosts
RUN chmod -R 400 ~/.ssh
RUN ssh-agent sh -c 'ssh-add /root/.ssh/id_rsa'
#RUN eval "$(ssh-agent -s)"
#RUN ssh-add -K ~/.ssh/id_rsa #This is commented out as it causes an error
RUN ssh-keyscan hostname > ~/.ssh/known_host
RUN ssh-keygen -R hostname
## THIS IS THE COMMAND WE NEED TO RUN...
RUN scp -r user@hostname:/path/to/folder ./
The owner of the folder is user. The id_rsa.pub was added to the authorized_keys file of the user user on the host, and ssh was restarted there. However, I get a failed-authentication error. I tried using my personal id_rsa, which works from the command line, but it also fails inside Docker. Is this a Docker issue, or is it solvable?
I finally managed to do it by generating a key with the command suggested in this post.
So to reproduce my case, locally:
cd setup/.ssh/
ssh-keygen -q -t rsa -N '' -f id_rsa
Then, on the server, add the id_rsa.pub contents to the authorized_keys file for the user user. (You can copy the contents to the clipboard using xclip: xclip -sel clip < setup/.ssh/id_rsa.pub.)
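Alternatively, if password authentication is still enabled on the server, ssh-copy-id can install the public key in one step (a convenience, not part of the original flow):
ssh-copy-id -i setup/.ssh/id_rsa.pub user@hostname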
Dockerfile:
FROM r-base:latest
### Install libs
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
openssh-server \
openssh-client \
libcurl4-gnutls-dev \
libcairo2-dev \
libxt-dev \
xtail \
wget \
libssl-dev \
libxml2 \
libxml2-dev \
libv8-dev \
curl \
gnupg \
git
COPY ./setup setup
# Move the copied keys into ~/.ssh (as in the question's Dockerfile) before fixing permissions
RUN mv setup/.ssh ~/.ssh
RUN chmod -R 600 ~/.ssh
RUN echo "IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
## THIS IS THE COMMAND WE NEED TO RUN...
RUN scp -r user@hostname:/path/to/folder ./
There’s no specific requirement that you must do everything inside your Dockerfile. Especially things that require remote ssh access are better done outside Docker: consider that anyone who gets your image later on can docker cp a valid ssh key out of it and potentially get access to your internal systems.
For Docker caching reasons, it’s also not a good idea to git clone or otherwise try to remotely retrieve your application from inside the Dockerfile. If you re-run docker build, and nothing else in your Dockerfile has changed, then Docker will skip over the scp step too, even if the remote content has changed.
My general recommendation would be to copy this content from outside the Dockerfile, then build it:
# Using whatever credentials are in your local ssh-agent
scp -r user@hostname:/path/to/stuff dist/
# Then your Dockerfile doesn’t need scp or credentials
docker build .
Your Dockerfile then doesn’t need a bunch of extra packages that are only relevant to this path: you should be able to remove sudo openssh-server openssh-client xtail curl gnupg git without actually affecting the single main process you’re trying to run inside your container.
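For illustration, here is the install step from the Dockerfile above with exactly those packages dropped; every package kept comes from the original list:
RUN apt-get update && apt-get install -y \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev \
libxt-dev \
wget \
libssl-dev \
libxml2 \
libxml2-dev \
libv8-dev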