Unable to find Jenkins config files inside docker container - docker

I have used the Jenkins Docker image from Docker Hub (https://github.com/jenkinsci/docker):
FROM jenkins/jenkins:lts
USER root
ENV http_proxy http://bc-proxy-vip.de.pri.o2.com:8080
ENV https_proxy http://bc-proxy-vip.de.pri.o2.com:8080
RUN apt-get update
RUN apt-get install -y ldap-utils curl wget vim nano sudo
RUN adduser jenkins sudo
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
EXPOSE 8080
EXPOSE 50000
The docker build command executed successfully and the container also started successfully.
Docker build command :
docker build --no-cache -t myjenkins .
Docker container command :
docker run --net=host --name=my_jenkins -d -p 8080:8080 -p 50000:50000 myjenkins
Then I logged in to the container via docker run -it myjenkins bash, but I'm unable to find Jenkins config files like config.xml, jenkins.xml, etc.

I know this is an old issue, but I ran into this recently myself and found that when you run the containerized version of Jenkins, the configuration files are stored in:
/var/jenkins_home
A lot of people seem to be suggesting they're in /etc/sysconfig/jenkins for other Jenkins installs.
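For example, with the container from the question still running, you can list and read them directly (my_jenkins is the container name used above):
docker exec -it my_jenkins ls /var/jenkins_home
docker exec -it my_jenkins cat /var/jenkins_home/config.xml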

Related

See image generated in docker

I created a Dockerfile like:
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
Afterwards I do:
docker build --tag trial .
docker run -t -i trial /bin/bash
Then I run an executable that saves a .png file inside the container.
How can I visualize the image?
You can execute something inside the container.
To see all containers, run docker ps --all.
To execute something inside a container, run docker exec <container id> <command>.
Otherwise you can copy files from the container to the host with docker cp <container id>:/file-path ~/target/file-path
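For example, assuming the executable wrote its output to /UBIMET_Challenge/build/result.png (a hypothetical path and file name):
docker ps --all                 # find the id of the stopped 'trial' container
docker cp <container id>:/UBIMET_Challenge/build/result.png .
The copied file can then be opened with any image viewer on the host.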
Please mount a host volume (directory) onto the container directory where you are saving your images.
All of the images saved in that container directory will then be available in the mounted host directory, from where you can view them or copy them to another machine.
Please follow this:
docker run --rm -d -v host_volume_or_directory:container_volume_or_directory trial
docker exec -it container_name /bin/bash
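A concrete sketch with the image from the question (the host directory and the /data mount point are just examples; have your executable save the .png there):
docker run --rm -it -v "$PWD/output:/data" trial /bin/bash
# anything written to /data inside the container appears in ./output on the host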

Can't get Docker outside of Docker to work with Jenkins in ECS

I am trying to get the Jenkins Docker image deployed to ECS and have docker-compose work inside my pipeline.
I have hit wall after wall trying to get this Jenkins container launched and functioning. Most of the issues have been getting the docker command to work inside a pipeline (including getting the permissions/group right).
I've gotten to the point that the command works and uses the host docker socket (docker ps outputs the Jenkins container and the ECS agent) and docker-compose works (docker-compose --version succeeds), but when I try to run anything that involves files inside the pipeline, I get a "no such file or directory" error. This happens when I run docker-compose -f docker-compose.testing.yml up -d --build (it can't find the yml file), and also when I try to run a basic docker build: it can't find local files used in the COPY command (i.e. COPY . /app). I've tried changing the command to use ./file.yml and $PWD/file.yml and still get the same error.
Here is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl
RUN apt-get remove docker
RUN curl -sSL https://get.docker.com/ | sh
RUN curl -L --fail https://github.com/docker/compose/releases/download/1.21.2/run.sh -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN groupadd -g 497 dockerami \
    && usermod -aG dockerami jenkins
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
COPY jobs /app/jenkins/jobs
COPY jenkins.yml /var/jenkins_home/jenkins.yml
RUN xargs /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml
I also have Terraform building the task definition and binding /var/run/docker.sock from the host to the Jenkins container.
I'm hoping to get this working since I have liked Jenkins since we started using it about two years ago, and I've had these pipelines working with docker-compose in our non-containerized Jenkins install, but getting Jenkins containerized has so far been like pulling teeth. I would much prefer to get this working than to have to change my workflows right now to something like Concourse or Drone.
One issue in your Dockerfile is that you are copying a file into /var/jenkins_home, which will disappear: /var/jenkins_home is defined as a volume in the parent Jenkins image, and any files you copy into a volume after the volume has been declared are discarded. See https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
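A minimal sketch of one way around this, assuming the official image's /usr/share/jenkins/ref mechanism (files placed under ref/ are copied into /var/jenkins_home when the container first starts):
# copy the JCasC file into the ref directory instead of directly into the volume
COPY jenkins.yml /usr/share/jenkins/ref/jenkins.yml
# the entrypoint copies it to /var/jenkins_home/jenkins.yml at startup,
# so the existing CASC_JENKINS_CONFIG path keeps working
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml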

Running multiple commands after docker create

I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to:
1. Create the container with docker create (which returns the container ID).
2. Run the commands.
3. Copy the file out.
But I don't know how to get step 2 to work. docker exec only works on running containers...
If I understood your question correctly, all you need is docker "run, exec & cp" -
For example -
Create a container with a name (--name) using docker run -
$ docker run --name bang -dit alpine
Run a few commands using exec -
$ docker exec -it bang sh -c "ls -l"
Copy a file using docker cp -
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop -
$ docker stop bang
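Note that docker run -d prints the new container's ID, so you can also capture it in a shell variable instead of relying on a fixed name - a small sketch of the same flow:
$ cid=$(docker run -dit alpine)
$ docker exec "$cid" sh -c "ls -l"
$ docker cp "$cid":/etc/hosts ./
$ docker stop "$cid"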
All you really need is a Dockerfile; then build the image from it and run a container using the newly built image. For more information you can refer to
this
A "standard" Dockerfile might look something like the one below:
#Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu Software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
    rm -rf /var/lib/apt/lists/*
#Define the ENV variable
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
#Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure Services and Port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
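To try it out, build and run the image; the tag name here is just an example:
docker build -t my-nginx-php .
docker run -d -p 80:80 -p 443:443 my-nginx-php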

New docker Image from Running container doesn't hold updated values

I have used the Jenkins Docker image from Docker Hub (https://github.com/jenkinsci/docker):
FROM jenkins/jenkins:lts
USER root
ENV http_proxy http://bc-proxy-vip.de.pri.o2.com:8080
ENV https_proxy http://bc-proxy-vip.de.pri.o2.com:8080
RUN apt-get update
RUN apt-get install -y ldap-utils curl wget vim nano sudo
RUN adduser jenkins sudo
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
EXPOSE 8080
EXPOSE 50000
The docker build command executed successfully and the container also started successfully.
Docker build command :
docker build --no-cache -t myjenkins .
Docker container command :
docker run --net=host --name=my_jenkins -d -p 8080:8080 -p 50000:50000 myjenkins
Then I logged into the Jenkins GUI, created a new user and updated the plugins.
Then I created a new image using the docker commit command. The master image ID is c068f8d9a060; the newly created docker image ID is de0789b77703.
docker commit c052fd7a26b3 almjenkins:version1
root@vagrant-ubuntu-trusty:~/jenkins# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
almjenkins version1 de0789b77703 13 minutes ago 1.04GB
myjenkins latest c068f8d9a060 4 hours ago 1.03GB
I executed the docker run command to start Jenkins from my new image:
docker run --net=host --name=alm_jenkins -d -p 8080:8080 -p 50000:50000 almjenkins:version1
When I accessed the Jenkins GUI, I was unable to find the updates in the new image.
As described in the official docs for docker commit:
The commit operation will not include any data contained in volumes mounted inside the container.
The jenkins_home directory, which holds all the Jenkins configuration, is declared as a volume in the Dockerfile for Jenkins. Thus the commit command won't include any configuration (jobs, nodes, plugins, ...).
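You can confirm that the base image declares this volume:
docker inspect --format '{{ .Config.Volumes }}' jenkins/jenkins:lts
# should print something like: map[/var/jenkins_home:{}]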
The solution is to build a customized docker image that includes the configuration.
FROM jenkins/jenkins
COPY jobs /usr/share/jenkins/ref/jobs/
RUN /usr/local/bin/install-plugins.sh workflow-aggregator:2.5 ... # Install all the plugins that you need
You can extract the jobs folder from the old container and add it to the new one:
docker cp <container-name>:/var/jenkins_home/jobs jobs
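Putting it together, a possible sketch (the container name my_jenkins is taken from the question; the new image tag is just an example):
docker cp my_jenkins:/var/jenkins_home/jobs jobs
docker build -t almjenkins:version2 .
docker run -d -p 8080:8080 -p 50000:50000 almjenkins:version2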

How can I run two commands in CMD or ENTRYPOINT in Dockerfile

In the Dockerfile builder, ENTRYPOINT and CMD run only once, using /bin/sh -c behind the scenes.
Is there any simple solution to run two commands inside, without an extra script?
In my case, I want to set up docker in docker on a Jenkins slave node, so I pass docker.sock into the container, and I want to change its permissions so a normal user can use it; this has to be done before the sshd command.
The normal user is jenkins, which will log into the container via the ssh command.
$ docker run -d -v /var/run/docker.sock:/docker.sock larrycai/jenkins-slave
In the larrycai/jenkins-slave Dockerfile, I hope to run:
CMD chmod o+rw /docker.sock && /usr/sbin/sshd -D
Currently jenkins is given sudo permission, see larrycai/jenkins-slave
I run docker in docker on a Jenkins slave:
First: my slave knows how to run docker.
Second: I prepare a docker image that knows how to run docker in docker. See a fragment of the Dockerfile:
RUN echo 'deb [trusted=yes] http://myrepo:3142/get.docker.io/ubuntu docker main' > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy iptables ca-certificates lxc apt-transport-https lxc-docker
ADD src/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
VOLUME /var/lib/docker
Third: the Jenkins job running on this slave contains a .sh file with a set of commands to run over the app code, like:
export RAILS_ENV=test
# Bundle install
bundle install
# spec_no_rails
bundle exec rspec spec_no_rails -I spec_no_rails
bundle exec rake db:migrate:reset
bundle exec rake db:test:prepare
etc...
Fourth: a run-shell step in the job with something like this:
docker run --privileged -v /etc/localtime:/etc/localtime:ro -v `pwd`:/code myimagewhorundockerindocker /bin/bash -xec 'cd /code && ./myfile.sh'
--privileged is necessary to run docker in docker
-v /etc/localtime:/etc/localtime:ro synchronizes the host clock with the container clock
-v `pwd`:/code shares the Jenkins workspace (app code), previously cloned from VCS, as /code inside the container
Note: if you have service dependencies, you can use fig with a similar strategy.

Resources