Dockerfile for creating a custom Jenkins image - docker

I created a container using jenkins/jenkins:lts-jdk11. As far as I know, a jenkins user should also be created with a home directory, but that isn't happening.
Below is the Dockerfile; am I doing anything wrong?
Dockerfile:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform .
COPY sencha .
COPY go .
COPY helm .
RUN chown -R jenkins:jenkins /var/jenkins_home
Built with:
docker build .
The image gets created and the container runs. I do see a jenkins user with UID 1000, but this user has no home directory, and moreover helm, go, sencha, and terraform are not installed.
I exec'd into the container to double-check whether terraform is installed:
# terraform --version shows command not found
# which terraform also shows no result.
Same output for go, sencha, and helm.
Any suggestions?

You need to install the binaries into the /usr/local/bin/ path, like this example:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform /usr/local/bin/terraform
Btw, the jenkins/jenkins:lts-jdk11 image is based on a Debian distribution, so you can also use the apt package manager to install your tools.
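For instance, a minimal sketch combining both approaches (assuming terraform is a static binary sitting in your build context; the golang package is just a placeholder for whatever apt packages you actually need):
FROM jenkins/jenkins:lts-jdk11
# Package installs need root; the base image defaults to the jenkins user
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends golang \
    && rm -rf /var/lib/apt/lists/*
# Put the pre-downloaded binary on the PATH and make it executable
COPY terraform /usr/local/bin/terraform
RUN chmod +x /usr/local/bin/terraform
# Drop back to the unprivileged user the image normally runs as
USER jenkins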

Related

Docker - dotnet build can't find file

I am trying to make my application work in a Linux container. It will eventually be deployed to Azure Container Instances. I have absolutely no experience with containers whatsoever, and I am getting lost in the documentation and examples.
I believe the first thing I need to do is create a Docker image for my project. I have installed Docker Desktop.
My project has this structure:
MyProject
MyProject.Core
MyProject.Api
MyProject.sln
Dockerfile
The contents of my Dockerfile are as follows.
#Use Ubuntu Linux as base
FROM ubuntu:22.10
#Install dotnet6
RUN apt-get update && apt-get install -y dotnet6
#Install LibreOffice
RUN apt-get -y install default-jre-headless libreoffice
#Copy the source code
WORKDIR /MyProject
COPY . ./
#Compile the application
RUN dotnet publish -c Release -o /compiled
#ENV PORT 80
#Expose port 80
EXPOSE 80
ENTRYPOINT ["dotnet", "/compiled/MyProject.Api.dll"]
#ToDo: Split build and deployment
Now when I try to build the image using command prompt I am using the following command
docker build - < Dockerfile
This all processes okay up until the dotnet publish command, where it errors saying
Specify a project or solution file
Now I have verified that this command works fine when run outside of the Dockerfile. I suspect something is wrong with the copy? Again, I have tried variations of paths for the WORKDIR, but I just can't figure out what is wrong.
Any advice is greatly appreciated.
Thank you SiHa in the comments for providing a solution.
I made the following change to my Dockerfile.
WORKDIR app
Then I use the following command to build.
docker build -t ImageName -f FileName .
The image now creates successfully, and I am able to run it in a container. The key difference is the trailing . in the build command: it sends the project directory as the build context, whereas the original docker build - < Dockerfile sent only the Dockerfile itself, so COPY . ./ had nothing to copy.
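On the ToDo note in the Dockerfile, a minimal sketch of splitting build and deployment into a multi-stage build (the project path MyProject.Api/MyProject.Api.csproj and the mcr.microsoft.com/dotnet image tags are assumptions; LibreOffice would have to be installed in the runtime stage if the app still needs it at runtime):
# Build stage: the SDK image compiles and publishes the app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . ./
RUN dotnet publish MyProject.Api/MyProject.Api.csproj -c Release -o /compiled
# Runtime stage: the smaller ASP.NET runtime image, no SDK included
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /compiled .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyProject.Api.dll"]
Built the same way, with the directory as the context: docker build -t myproject-api .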

How to use poetry file to build docker image?

I used an online tutorial (replit.com) to build a small Flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to Docker Hub and create a repo, unless you have done so already or can access a different container registry. I'll assume you're using the global hub and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install the missing dependencies, and define an entrypoint that will work. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to [1]:
docker run -p 8000:8000 shantanuo/my-first-flask:v1
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
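If it doesn't prompt and the push is denied instead, log in explicitly first:
docker login -u shantanuo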
[1] When running a server from a container, keep in mind to publish the port the server inside the container listens on. Also, never bind the server to localhost inside the container; bind to 0.0.0.0 instead, or it won't be reachable through the published port.
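In a Flask app that usually comes down to the run call at the bottom of main.py (a fragment sketch; it assumes the Flask object is named app, and the port must match the -p mapping above):
# main.py (fragment)
if __name__ == "__main__":
    # 0.0.0.0 makes the server reachable from outside the container;
    # the default 127.0.0.1 would make the published port useless
    app.run(host="0.0.0.0", port=8000)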
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
&& poetry install \
&& poetry config virtualenvs.create true
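Toggling virtualenvs.create off makes Poetry install the dependencies straight into the image's system interpreter; inside a container there is nothing else to isolate from. To round the stage off, something like this could follow (a sketch; main.py as the entrypoint is taken from the question above):
COPY . /app
WORKDIR /app
CMD ["python", "main.py"]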

How to set up openshift document in Docker container?

I'm confused: OpenShift offers a way to set up the documentation workstation locally with ascii_binder, and that part is fine, I can do it. But I want to set up openshift-docs in a Docker container, and every way I have tried has failed.
Here is my idea:
I ran asciibinder build in openshift-docs, which generated the _preview directory.
After that, I made an image based on nginx and copied all the files, including the _preview directory, into the image's /usr/share/nginx/html directory.
Once the image was built, I used docker run to start a container.
I entered the container and changed default.conf in /etc/nginx/conf.d so that the root became /usr/share/nginx/html/_preview/openshift-origin/latest.
After that, I restarted the container and entered it again.
I changed the current directory to /usr/share/nginx/html and ran asciibinder watch.
But when I view it in a browser, many resources such as JS and CSS are not found.
Is my idea right? If it's wrong, how can I set up openshift-docs in a Docker container?
My Dockerfile:
FROM nginx:1.13.0
MAINTAINER heshengbang "trulyheshengbang#gmail.com"
ENV REFRESHED_AT 2018-04-06
RUN apt-get -qq update \
    && apt-get -qq install vim ruby ruby-dev build-essential nodejs git
RUN gem install ascii_binder
COPY . /usr/share/nginx/html/
CMD ["nginx", "-g", "daemon off;"]
Use this:
https://github.com/openshift-s2i/s2i-asciibinder
They even supply an example of deploying:
https://github.com/openshift/openshift-docs.git
The README only shows s2i command-line usage to build a Docker image and run it, but to deploy in OpenShift you can run:
oc new-app openshift/asciibinder-018-centos7~https://github.com/openshift/openshift-docs.git
oc expose svc openshift-docs
You can deploy an asciibinder website on OpenShift with the following template: https://github.com/openshift/openshift-docs/blob/master/asciibinder-template.yml.
You can import this with
oc create -f https://raw.githubusercontent.com/openshift/openshift-docs/master/asciibinder-template.yml
Then deploy it from the web console.
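Alternatively, the template can be instantiated from the CLI instead of the web console (a sketch; template parameters are left at their defaults):
oc new-app -f https://raw.githubusercontent.com/openshift/openshift-docs/master/asciibinder-template.yml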
Make sure you have an assemble script similar to https://github.com/openshift/openshift-docs/blob/master/.s2i/bin/assemble in your project.

Enabling Project Functionality in a Docker Image of node_red

I have forked both the Node-RED git repo and the Node-RED Docker image repo, and I am trying to modify the settings.js file to enable the Projects functionality. The settings file that ends up in the Docker container does not seem to be my modified one. My aim is to use the Docker image in a Cloud Foundry environment.
https://github.com/andrewcgaitskellhs2/node-red-docker.git
https://github.com/andrewcgaitskellhs2/node-red.git
I am also trying to install git and ssh-keygen at Docker build time so that Projects can function. I have added these to the package.json files for both the Node-RED app and the image git repos.
If I need to start from scratch, please let me know what steps I need to take.
I would welcome guidance on this.
Thank you.
You should not be trying to install ssh-keygen and git via the package.json file.
You need to use the Node-RED Dockerfile as the base to build a new Docker image. In that Dockerfile, use apt-get to install them, and include an edited version of settings.js. Something like this:
FROM nodered/node-red-docker
RUN apt-get update && apt-get install -y git ssh-client
COPY settings.js /data
ENV FLOWS=flows.json
ENV NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules
CMD ["npm", "start", "--", "--userDir", "/data"]
Where settings.js is your edited version, placed in the same directory as the Dockerfile.
Edited following #knolleary's comment:
FROM nodered/node-red-docker
COPY settings.js /data
ENV FLOWS=flows.json
ENV NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules
CMD ["npm", "start", "--", "--userDir", "/data"]
It is not necessary to change the image. For persistence, you will mount a host directory into the container at /data, e.g. like this:
docker run --rm -d -v /my/node/red/data:/data -p 1880:1880 nodered/node-red
A settings.js file will get created in your data directory, here /my/node/red/data. Edit this file to enable projects, then restart the container.
It is also possible to place a settings.js file with projects enabled into the directory that you mount to /data inside the container before the first start.
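For reference, enabling projects means flipping one flag in the editorTheme section of that settings.js (a fragment sketch; the rest of the file ships with Node-RED):
// settings.js (fragment)
editorTheme: {
    projects: {
        // set to true to enable the Projects feature
        enabled: true
    }
},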

npm install doesn't work in Docker

This is my Dockerfile:
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
rubygems build-essential ruby-dev \
&& rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["gulp", "start:dev"]
When I build the image, the npm install command executes with little output and really quickly. I actually build it through docker-compose, which does have a volume mounted, and I cannot see the node_modules folder being created on my host. When I launch a container from this image, I can see there is no node_modules folder. I then execute npm install manually and things start working: it takes 2-3 minutes to install all the packages, and the node_modules folder is indeed created.
What is happening here? What am I doing wrong? Why doesn't npm install work at build time, but then it works at run time?
The npm install should have worked based on your Dockerfile. You can see the created files if you run the image without a mounted volume (DIRNAME: where your docker-compose.yml is located):
docker run --rm -it DIRNAME_node ls -ahl /usr/src/app
With docker build, all data is stored in the image. So, it's intended that you don't see any files created on your host.
If you mount a volume over a directory (generally in Linux, and so also in a Docker container), it hides the directory's existing contents. That's why you can't see the node_modules created in the build step.
I suggest you do your tests based on the Docker image itself and don't mount the volume. Then you have an immutable Docker image which is better for deployment.
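If you do want the bind mount for local development, a common workaround is an anonymous volume over node_modules, so the modules installed at build time aren't shadowed (a sketch, assuming a compose file shaped like the one implied in the question):
# docker-compose.yml (sketch)
services:
  web:
    build: .
    volumes:
      - .:/usr/src/app
      # anonymous volume: keeps the image's node_modules visible despite the bind mount
      - /usr/src/app/node_modules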
Also, copying in the whole source tree and then running npm install means that whenever any source file changes, the cached npm install layer is invalidated. Instead, separate the steps so the dependency layer is cached independently, like so:
COPY package*.json ./
RUN npm install
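Put together, a sketch of the reordered Dockerfile (same base and commands as above; only the split COPY is new):
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
    rubygems build-essential ruby-dev \
    && rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
WORKDIR /usr/src/app
# Dependency manifests first: this layer is reused until package.json changes
COPY package*.json ./
RUN npm install
# Source last: editing code no longer invalidates the npm install layer
COPY . .
CMD ["gulp", "start:dev"]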
On Windows 10 I was having the same issue reported in this question, and after some research I found a question with the steps needed to solve the problem.
In short, the main problem was that during the install wizard I had selected the option to use Windows containers.
To solve the issue:
1) Switch to Linux containers: on the taskbar, right-click the Docker icon and click "Switch to Linux containers".
2) Disable "Experimental Features" for the command line: open Docker settings and uncheck it under Command Line.
3) Disable the experimental setting in the configuration file: in Docker settings, click on Docker Engine and make sure experimental is set to false.
The question where I found the solution was related to another problem I was facing when trying to build Docker images: Unspecified error (0x80004005) while running a Docker build. Both problems came down to the same thing: when installing Docker for the first time, I had selected the option to use Windows containers.
Hope it helps. Cheers
