I am working on continuous integration of a .NET project using Jenkins. So far, I have been able to set up a Jenkins job on Windows. But now I need to replicate all of this in Jenkins running as a Docker container. I am able to start Jenkins in Docker, using GitHub as the source repository, but when I try to build the project, it fails. My project uses ASP.NET Core, so I am assuming it should run on Linux as well (which is the OS of the Docker virtual machine).
What am I missing here? Any help is highly appreciated.
I'm working on a project with .NET Core and we started to use Jenkins in a Docker container. So far the only way I found was to create a custom Jenkins image. This is my Dockerfile:
FROM jenkins
USER root
# Work around https://github.com/dotnet/cli/issues/1582 until Docker releases a
# fix (https://github.com/docker/docker/issues/20818). This workaround allows
# the container to be run with the default seccomp Docker settings by avoiding
# the restart_syscall made by LTTng which causes a failed assertion.
ENV LTTNG_UST_REGISTER_TIMEOUT 0
# Install .NET CLI dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libc6 \
libcurl3 \
libgcc1 \
libgssapi-krb5-2 \
libicu52 \
liblttng-ust0 \
libssl1.0.0 \
libstdc++6 \
libunwind8 \
libuuid1 \
zlib1g \
&& rm -rf /var/lib/apt/lists/*
# Install .NET Core SDK
ENV DOTNET_SDK_VERSION 1.0.0-preview2-003131
ENV DOTNET_SDK_DOWNLOAD_URL https://dotnetcli.blob.core.windows.net/dotnet/preview/Binaries/$DOTNET_SDK_VERSION/dotnet-dev-debian-x64.$DOTNET_SDK_VERSION.tar.gz
RUN curl -SL $DOTNET_SDK_DOWNLOAD_URL --output dotnet.tar.gz \
&& mkdir -p /usr/share/dotnet \
&& tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
&& rm dotnet.tar.gz \
&& ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet
# Trigger the population of the local package cache
ENV NUGET_XMLDOC_MODE skip
RUN mkdir warmup \
&& cd warmup \
&& dotnet new \
&& cd .. \
&& rm -rf warmup \
&& rm -rf /tmp/NuGetScratch
USER jenkins
I'm still looking for a better way to do it.
The Jenkins task should invoke dotnet commands to build; MSBuild is not yet supported for dotnet.
Basically, it has to do something similar to what we do in KoreBuild (see the sketch after the command list):
dotnet restore
dotnet build / dotnet publish
dotnet test
etc
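For reference, a minimal sketch of what these steps could look like in an "Execute shell" build step of the Jenkins job (the project and test paths are placeholders, not from the original post):
# Hypothetical shell build step; adjust the paths to your solution layout
dotnet restore
dotnet build src/MyWebApp
dotnet test test/MyWebApp.Tests
dotnet publish src/MyWebApp -c Release -o ./publish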
We were able to successfully add the deployment to the Azure DevOps agent pool and could execute the pipeline on the agents by following the [Microsoft docs][1].
I used the Dockerfile below to install the software inside the container.
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
libssl1.0 \
maven \
python \
python3 \
docker \
&& rm -rf /var/lib/apt/lists/*
RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
&& rm -rf /var/lib/apt/lists/*
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./vstsagent/ .
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
But now I am confused about the points below:
How do I set the Maven and Java home directories, along with Maven's custom settings.xml and the Node and Gradle custom properties files, inside these AKS-based agents?
Even though I included Docker in the software to be installed within the container, it seems Docker is not getting installed. So how can we run Docker-related tasks in our pipelines, like the "build image" and "push image" tasks, within these AKS-based build agents?
I have a project running with Meteor and Node.js locally. The Meteor version is 2.4 and the Node.js version is 8.9.4; I have a meteor/release file that sets the Meteor version to 2.2 so that Meteor and Node can work together.
(base) xxx$ meteor --version
Meteor 2.4
(base) xxx$ node -v
v8.9.4
It seems fine, so I deploy this project in a Docker container to the server. The first line of the Dockerfile I wrote is:
# node version dependent on meteor version
FROM node:8.9.4
After it was successfully deployed, the docker logs show the errors below.
Waiting for mongodb server to start - sleeping
warn: --minUptime not set. Defaulting to: 1000ms
warn: --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
info: Forever processing file: /app/bundle/main.js
error: undefined
data: /app/bundle/main.js:34 - Meteor requires Node v12.0.0 or later.
data: /app/bundle/main.js:34 - error: Forever detected script exited with code: 1
I checked inside Docker; the Node version is 8.9.4:
(base) [xxx]$ docker exec -it -u root tblbuilder_meteor_1 /bin/bash -c 'node --version'
v8.9.4
So I assume it is the Meteor version. But first, I don't know how to check the Meteor version inside Docker. And second, why does this happen? I am sure the release file was updated when the project folder was pushed.
With some kind help, I now roughly understand it. Locally I use Meteor 2.2, and in the Dockerfile I use Node.js 8.9.4, which works with Meteor 2.2. So what is left is to modify the Dockerfile and change it from Node 8.9.4 to Node 12. Below is my Dockerfile; I tried to change it to Node 12.22.2, but it kept giving me errors, and I spent a day trying to solve them. Currently, I am stuck at the install r-base part.
Is there some guide for changing from Node 8 to Node 12?
# node version dependent on meteor version
FROM node:8.9.4
# I am going to use 12.22.2
#FROM node:12.22.2
# (even if copied as root you still need to change)
# https://github.com/moby/moby/issues/6119
COPY ./compose/meteor/entrypoint.sh /entrypoint.sh
COPY ./compose/meteor/run_app.sh /run_app.sh
COPY ./compose/meteor/r-cran.pgp /r-cran.pgp
COPY ./settings/settings.json /app/settings.json
COPY ./requirements.txt /requirements.txt
COPY ./r_requirements.sh /r_requirements.sh
# set locale to utf8: https://github.com/docker-library/docs/pull/703/files
# added [check-valid-until=no] & Acquire::Check-Valid-Until "false"; https://unix.stackexchange.com/questions/508724/failed-to-fetch-jessie-backports-repository
# Needs work to bring it up-to-date
RUN \
echo "deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/jessie-backports.list && \
sed -i '/deb http:\/\/deb.debian.org\/debian jessie-updates main/d' /etc/apt/sources.list && \
apt-get -o Acquire::Check-Valid-Until=false update && \
\
sh -c 'echo "deb [check-valid-until=no] http://cran.rstudio.com/bin/linux/debian jessie-cran35/" >> /etc/apt/sources.list' && \
apt-key add /r-cran.pgp && \
\
apt-get -o Acquire::Check-Valid-Until=false update && \
apt-get -o Acquire::Check-Valid-Until=false install -y locales && \
\
localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 && \
export LC_ALL=en_US.UTF-8 && \
export LANG=en_US.UTF-8 && \
export LANGUAGE=en_US.UTF-8
ENV LANG en_US.utf8
ENV LC_ALL en_US.UTF-8
# add rstudio debian install for R (requires version >3.3)
# https://cran.r-project.org/bin/linux/debian/
# install R from apt-get
# install python 3.6 from source :/
RUN apt install -y --force-yes r-base-core r-recommended r-base-html r-base-core
RUN apt-get install -y --force-yes wget bsdtar r-base r-base-dev && \
apt-get clean && \
\
wget https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tgz && \
tar zxf Python-3.6.5.tgz && \
cd ./Python-3.6.5 && \
./configure && \
make && \
make altinstall && \
cd .. && \
rm Python-3.6.5.tgz && \
rm -rf ./Python-3.6.5
# create paths and users
# change executable permissions
RUN npm install forever -g && \
\
mkdir -p /app/production && \
mkdir -p /app/logs && \
mkdir -p /app/crons && \
\
groupadd -r app && \
useradd -m -d /home/app -g app app && \
\
chmod +x /entrypoint.sh && \
chmod +x /run_app.sh && \
chmod +x /r_requirements.sh && \
chmod +x /requirements.txt && \
\
chown -R app:app /app && \
chown app:app /entrypoint.sh && \
chown app:app /run_app.sh && \
chown app:app /r_requirements.sh &&\
chown app:app /requirements.txt
USER app
# 1) install R packages
# 2) install python packages
RUN export "R_LIBS=/home/app/R_libs" && \
mkdir /home/app/R_libs && \
bash /r_requirements.sh && \
\
/usr/local/bin/pip3.6 install --user -r /requirements.txt
USER root
COPY ./compose/meteor/src/src.tar.gz /app/src.tar.gz
COPY ./src/private /app/src/private
RUN chown -R app:app /app
USER app
RUN cd /app && \
bsdtar -xzvf src.tar.gz && \
npm install --prefix /app/bundle/programs/server --production
ENTRYPOINT ["/entrypoint.sh"]
There are several misunderstandings in your tests:
Your Meteor version is 2.2, because that is the version set inside your project;
To see the Node version of this Meteor project, see the answers that several people already gave you in Meteor Docker Node.js version is not match;
Usually, we build the Meteor app, which means transforming it into a Node.js bundle; then, inside Docker, you don't need Meteor at all.
We would need to see your Dockerfile and understand the process you use to build the Docker image.
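To illustrate the usual flow (a sketch only; the output path is an assumption), the app is built into a plain Node bundle outside the image, and the container then only needs a matching Node version:
# Build the Meteor app into a Node.js bundle (run in the project folder, outside Docker)
meteor build ../output --directory --server-only
# Inside the image, install the bundle's server dependencies and start it with Node
cd ../output/bundle/programs/server && npm install --production
node ../../main.js   # requires MONGO_URL, ROOT_URL, etc. to be set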
When I use curl --head to test my website, it returns the server information.
I followed this tutorial to hide the nginx server header.
But when I run the command yum install nginx-module-security-headers, it returns yum: not found.
I also tried apk add nginx-module-security-headers, and it shows that the package is missing.
I am using nginx:1.17.6-alpine as my base Docker image. Does anyone know how to hide the server header with this Alpine image?
I think I have an easier solution here: https://gist.github.com/hermanbanken/96f0ff298c162a522ddbba44cad31081. Big thanks to hermanbanken on GitHub for sharing this gist.
The idea is to create a multi-stage build, with the nginx alpine image as the base for compiling the module. This turns into the following Dockerfile:
ARG VERSION=alpine
FROM nginx:${VERSION} as builder
ENV MORE_HEADERS_VERSION=0.33
ENV MORE_HEADERS_GITREPO=openresty/headers-more-nginx-module
# Download sources
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz && \
wget "https://github.com/${MORE_HEADERS_GITREPO}/archive/v${MORE_HEADERS_VERSION}.tar.gz" -O extra_module.tar.gz
# For latest build deps, see https://github.com/nginxinc/docker-nginx/blob/master/mainline/alpine/Dockerfile
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl-dev \
pcre-dev \
zlib-dev \
linux-headers \
libxslt-dev \
gd-dev \
geoip-dev \
perl-dev \
libedit-dev \
mercurial \
bash \
alpine-sdk \
findutils
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN rm -rf /usr/src/nginx /usr/src/extra_module && mkdir -p /usr/src/nginx /usr/src/extra_module && \
tar -zxC /usr/src/nginx -f nginx.tar.gz && \
tar -xzC /usr/src/extra_module -f extra_module.tar.gz
WORKDIR /usr/src/nginx/nginx-${NGINX_VERSION}
# Reuse same cli arguments as the nginx:alpine image used to build
RUN CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') && \
sh -c "./configure --with-compat $CONFARGS --add-dynamic-module=/usr/src/extra_module/*" && make modules
# Production container starts here
FROM nginx:${VERSION}
COPY --from=builder /usr/src/nginx/nginx-${NGINX_VERSION}/objs/*_module.so /etc/nginx/modules/
.... skipped inserting config files and stuff ...
# Validate the config
RUN nginx -t
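The compiled module still has to be loaded and used by nginx. A minimal sketch of that last part, which could go into the production stage (the directive placement and file names are assumptions based on the official image's default nginx.conf layout):
# Load the dynamic module at the very top of nginx.conf and strip the Server header
RUN { echo 'load_module /etc/nginx/modules/ngx_http_headers_more_filter_module.so;'; cat /etc/nginx/nginx.conf; } > /tmp/nginx.conf \
    && mv /tmp/nginx.conf /etc/nginx/nginx.conf \
    && printf 'more_clear_headers Server;\n' > /etc/nginx/conf.d/hide-server-header.conf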
The Alpine repo probably doesn't have the ngx_security_headers module, but the mentioned tutorial also provides the option of using the Headers More module. You should be able to install this module in your Alpine distro using the command:
apk add nginx-mod-http-headers-more
Hope it helps.
I found an alternate solution. The reason it shows "binary not compatible" is that I have an nginx pre-installed under the target route, and it is not compatible with the headers-more module I am using. That means I cannot simply install the third-party library from the Alpine package.
So I prepared a clean Alpine OS and followed the GitHub repository to build nginx from source with the additional feature. The build output ends up under the prefix path you specified.
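Roughly, that approach boils down to the following (a sketch only; the version numbers, prefix, and build dependencies such as gcc, make, pcre-dev and zlib-dev are assumptions):
# Build nginx from source with the headers-more module compiled in
wget http://nginx.org/download/nginx-1.17.6.tar.gz && tar -xzf nginx-1.17.6.tar.gz
wget https://github.com/openresty/headers-more-nginx-module/archive/v0.33.tar.gz && tar -xzf v0.33.tar.gz
cd nginx-1.17.6 && ./configure --prefix=/opt/nginx --add-module=../headers-more-nginx-module-0.33 && make && make install
# Binaries and config now live under the --prefix path, e.g. /opt/nginx/sbin/nginx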
I am a beginner with Docker and networking concepts. I have a dockerized application in my VirtualBox. I am connecting to MySQL on another computer.
I was able to connect to it, and then I was able to view the app from my VirtualBox using the --net=host option. From what I understand, this option maps the Docker container's network onto the VirtualBox machine's network, and that is why I was able to see the app in the browser in my VirtualBox.
Should I be changing anything in my Dockerfile to make sure the connections work? How would I test, after deployment, whether or not this works? My current Dockerfile contains just RUN commands to install the necessary software.
My confusion is this: if, say, I deploy this to Azure (I am a beginner at deployment too), what port/address should I map or expose to make sure the app also works after deployment? Currently it is a Flask app running on the 127.0.0.1 address. But should I be changing anything in the Dockerfile?
Dockerfile
FROM python:3.6.3
COPY . /app
WORKDIR /app
RUN apt update && \
apt install -y gcc g++ gfortran git patch wget && \
apt install -y vim-tiny && \
pip install --upgrade pip && \
pip install -r requirements.txt && \
cd / && \
wget http://download.redis.io/releases/redis-5.0.5.tar.gz && \
tar xzf redis-5.0.5.tar.gz && \
cd redis-5.0.5 && \
make && \
# make test && \
cd / && \
rm redis-5.0.5.tar.gz && \
cd / && \
wget https://www.coin-or.org/download/source/Ipopt/Ipopt-3.12.13.tgz && \
tar xvzf Ipopt-3.12.13.tgz && \
cd Ipopt-3.12.13/ThirdParty/Blas/ && \
./get.Blas && \
cd ../Lapack && \
./get.Lapack && \
cd ../Mumps && \
./get.Mumps && \
cd ../Metis && \
./get.Metis && \
cd ../../ && \
mkdir build && \
cd build && \
../configure && \
make -j 4 && \
make install && \
cd / && \
rm Ipopt-3.12.13.tgz
Currently I run
sudo docker run -it --net=host 5h71 /bin/bash
python app.py # inside the container
### which gives
Running on http://127.0.0.1:8050/
Debugger PIN: 052-293-642
* Serving Flask app "server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
Running on http://127.0.0.1:8050/
Debugger PIN: 671-290-156
Then I access 127.0.0.1:8050 to open the app in Chrome.
With respect to your first question:
How can I include my --net=host in my Dockerfile before I deploy my container?
You can't. Runtime options that impact your host (such as network attachments, volume mounting, etc) cannot be included in your Dockerfile. This is a security restriction: if I could trick you into running an image that somehow ran with --net=host by default, I could do all sorts of nasty things.
If you have an application that requires options like that in order to be useful, and you get tired of typing long docker run command lines (or you want it to be easier for someone else to use) you can use a tool like docker-compose and create a docker-compose.yml that specifies all the appropriate runtime options.
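For example, a minimal docker-compose.yml along those lines (the image name and port are placeholders, not taken from the question) might look like this:
version: "3"
services:
  app:
    image: myflaskapp        # whatever tag you built your image with
    network_mode: host       # equivalent of docker run --net=host
    # for a normal deployment, drop network_mode and publish the port instead:
    # ports:
    #   - "8050:8050"
    command: python app.py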
I have created a bunch of local deployment pipeline jobs. These jobs do things like remove an existing container, build a service locally, build a Docker image, run the container, etc. These are not CI/CD jobs, just small pipelines for deploying locally during dev.
What I want to do now is make this available to all our devs, so they can simply spin up a local instance of Jenkins that already contains the jobs.
My Dockerfile is reasonably straightforward...
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
# Docker
RUN apt-get update
RUN apt-get dist-upgrade -y
RUN apt-get install apt-transport-https ca-certificates -y
RUN sh -c "echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list"
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN apt-get update
RUN apt-cache policy docker-engine
RUN apt-get install docker-engine -y
# .NET Core CLI dependencies
RUN echo "deb [arch=amd64] http://llvm.org/apt/jessie/ llvm-toolchain-jessie-3.6 main" > /etc/apt/sources.list.d/llvm.list \
&& wget -q -O - http://llvm.org/apt/llvm-snapshot.gpg.key|apt-key add - \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
clang-3.5 \
libc6 \
libcurl3 \
libgcc1 \
libicu52 \
liblldb-3.6 \
liblttng-ust0 \
libssl1.0.0 \
libstdc++6 \
libtinfo5 \
libunwind8 \
libuuid1 \
zlib1g \
&& rm -rf /var/lib/apt/lists/*
#DotNetCore
RUN curl -sSL -o dotnet.tar.gz https://go.microsoft.com/fwlink/?linkid=847105
RUN mkdir -p /opt/dotnet && tar zxf dotnet.tar.gz -C /opt/dotnet
RUN ln -s /opt/dotnet/dotnet /usr/local/bin
# Minimal Jenkins Plugins
RUN /usr/local/bin/install-plugins.sh git matrix-auth workflow-aggregator docker-workflow blueocean credentials-binding
# Skip initial setup
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY LocallyDeployIdentityConfig.xml /var/jenkins_home/jobs/identity/config.xml
USER jenkins
What I thought I could do was simply copy a job config file into the /jobs/jobname folder and the job would appear. But not only does the job not appear, I now cannot manually create jobs either: I get a java.io.IOException "No such file or directory". Note that when I exec into the running container, the job and jobname directories exist and my config file is in there.
Any ideas?
For anyone who is interested: I found a better solution. I simply map the jobs folder to a folder on my host; that way I can put the created jobs into source control and edit and add them without having to build a new Docker image.
Sorted.
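In case it is useful, the run command for that approach looks roughly like this (the host path and image name are examples, not from the original post):
# Map the jobs folder to the host so job definitions survive rebuilds of the image
docker run -p 8080:8080 -p 50000:50000 \
  -v /home/me/jenkins_jobs:/var/jenkins_home/jobs \
  my-custom-jenkins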
Jobs that need to be bootstrapped when Jenkins starts can be copied to the /usr/share/jenkins/ref/jobs/ folder.
But keep in mind that if the jobs (or any other files) already exist in the Jenkins home folder, updates from the /usr/share/jenkins/ref/jobs/ folder won't have any effect unless you end the file names with .override.
https://github.com/jenkinsci/docker/blob/master/jenkins-support#L110
Dockerfile
# First time building of jenkins with the preconfigured job
COPY job_name/config.xml /usr/share/jenkins/ref/jobs/job_name/config.xml
# But if jobs need to be updated, suffix the file names with '.override'.
COPY job_name/config.xml.override /usr/share/jenkins/ref/jobs/job_name/config.xml.override
I maintain the jobs in a bootstrap folder together with configs etc.
To add a job (e.g. a seedjob) I need to add the following to the Dockerfile:
# copy seedjob
COPY bootstrap/seedjob.xml /usr/share/jenkins/ref/jobs/seedjob/config.xml