Can't add jenkins-job-builder to the Jenkins Docker image

I'm new to Docker. I want to create a Docker container with Newman, Jenkins, and Jenkins Job Builder. Please help me.
I built a Docker image based on the official Jenkins image https://hub.docker.com/r/jenkins/jenkins, using a Dockerfile. The build was successful, and the Jenkins app also runs successfully.
After starting Jenkins I opened a shell in the container as root with
docker exec -u 0 -it jenkins bash
and tried to add a new job with Jenkins Job Builder:
jenkins-jobs --conf ./jenkins_jobs.ini update ./jobs.yaml
but I got bash: jenkins-jobs: command not found
Here is my Dockerfile:
FROM jenkins/jenkins
USER root
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get -y install nodejs
RUN npm install -g newman
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python get-pip.py
RUN pip install --user jenkins-job-builder
USER jenkins

When building your image, you get some warnings. This one in particular is the key:
WARNING: The script jenkins-jobs is installed in '/root/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Simply remove the --user flag from RUN pip install --user jenkins-job-builder and you're fine.
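In other words, either install system-wide so the script lands on PATH, or keep --user and extend PATH yourself. A minimal sketch of the two options for this Dockerfile:

```dockerfile
# Option 1: drop --user; the jenkins-jobs script then lands in /usr/local/bin, which is on PATH
RUN pip install jenkins-job-builder

# Option 2: keep --user (installs under /root/.local) and extend PATH instead
# RUN pip install --user jenkins-job-builder
# ENV PATH="/root/.local/bin:${PATH}"
```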

Related

Rootless VS Code (dockerized)?

Is there any way to install VS Code in a Docker container as a web-based editor that can run in rootless mode (no sudo in container entrypoint scripts, etc.)?
E.g. to run it in this scenario:
docker run -u 12345 --cap-drop=all repo/rootless-vscode
Here is an example of how it can be done with code-server.
Note that it needs root permissions to install the server, but runs it as newuser.
FROM ubuntu:22.04
RUN apt update
RUN apt install -y sudo curl
RUN curl -fsSL https://code-server.dev/install.sh | sh
RUN useradd -ms /bin/bash newuser
USER newuser
CMD [ "code-server", "--bind-addr", "0.0.0.0:8080" ]
For a more complete example, check out their code-server CI release Dockerfile.
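With that Dockerfile, the rootless invocation from the question should work along these lines (a sketch; port 8080 matches the --bind-addr in the CMD, and code-server may still need a writable home directory when run as an arbitrary UID):

```shell
docker build -t repo/rootless-vscode .
docker run -u 12345 --cap-drop=all -p 8080:8080 repo/rootless-vscode
```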

Docker exits after executing the command?

I need to compile gem5 with the environment inside Docker. This is not frequent, and once the compilation is done, I don't need the Docker environment anymore.
I have a Docker image named gerrie/gem5, and I want the following workflow:
Use this image to create a container, mount the local gem5 source code, compile and generate an executable (executables end up in the build directory by default), then exit the container and delete it. I also want to be able to see the compilation output, so that if the code goes wrong, I can fix it.
But I ran into some problems.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "scons build/X86/gem5.opt"
When I execute the above command, I am dropped into a shell inside the container, and the command to compile gem5 (scons build/X86/gem5.opt) is not executed. I thought it might be because of the -it option, but when I remove that option, I don't see any output at all.
I replaced the command with the following sentence.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "echo 'hello'"
But I still don't see any output.
When I went into the container and compiled it myself, the build directory was generated, but I found that I can't delete it from outside Docker.
What should I do? Thanks!
Here is my Dockerfile:
FROM matthewfeickert/docker-python3-ubuntu:latest
LABEL maintainer="Yujie YujieCui#pku.edu.cn"
USER root
# get dependencies
RUN set -x; \
sudo apt-get update \
&& DEBIAN_FRONTEND=noninteractive sudo apt-get install -y build-essential git-core m4 zlib1g zlib1g-dev libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev swig \
&& sudo -H python -m pip install scons==3.0.1 \
&& sudo -H python -m pip install six
RUN apt-get clean
# checkout repo with mercurial
# WORKDIR /usr/local/src
# RUN git clone https://github.com/gem5/gem5.git
# build it
WORKDIR /usr/local/src/gem5
ENTRYPOINT bash
When downloading gem5 (perhaps because the repository is so big), the clone kept failing with the error "fatal: unable to access 'https://github.com/gem5/gem5.git/': GnuTLS recv error (-110): The TLS connection was non-properly terminated."
So I commented out the RUN git clone https://github.com/gem5/gem5.git command.
The shell-form ENTRYPOINT bash is the problem: with a shell-form entrypoint, any command you pass to docker run is ignored, which is why you only ever got an interactive shell. You could make the entrypoint scons itself.
ENTRYPOINT ["scons"]
Or use the absolute path to the binary; I don't know where it will be installed, so you'll need to check:
ENTRYPOINT ["/usr/local/bin/scons"]
Then you can run
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 build/X86/gem5.opt
If the sole purpose of the image is to invoke scons, that would be fairly idiomatic.
Otherwise, remove the entrypoint. Either way, note that you don't need to wrap the command in bash -c.
If you have removed the entrypoint you can run it like this.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 scons build/X86/gem5.opt

docker compose inside docker in docker

What I have:
I am creating a Jenkins (Blue Ocean Pipeline) setup for CI/CD, using the Docker-in-Docker approach described in the Jenkins docs tutorial.
I have tested the setup, and it is working fine: I can build and run Docker images in the Jenkins container. Now I am trying to use docker-compose, but it says docker-compose: not found
Problem:
Unable to use docker-compose inside the (Jenkins) container.
What I want:
I want to be able to use docker-compose inside the container using the dind (Docker in Docker) approach.
Any help would be very much appreciated.
Here is my working solution:
FROM maven:3.6-jdk-8
USER root
RUN apt update -y
RUN apt install -y curl
# Install Docker
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz | tar xvz -C /tmp/ && mv /tmp/docker/docker /usr/bin/docker
# Install Docker Compose
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/bin/docker-compose \
    && chmod +x /usr/bin/docker-compose
# Here your customizations...
It seems docker-compose is not installed on that machine.
You can check whether docker-compose is installed with docker-compose --version. If it is not installed, you can install it in one of the following ways:
Using the apt package manager: sudo apt install -y docker-compose
OR
Using the Python package manager: sudo pip install docker-compose
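The check-then-install logic above can be sketched as a small shell snippet (the install command itself is left commented out, since the right method depends on your base image):

```shell
# Check whether docker-compose is already on PATH before picking an install method
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose --version
else
  echo "docker-compose not installed"
  # sudo apt install -y docker-compose   # or: sudo pip install docker-compose
fi
```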

Can't get Docker outside of Docker to work with Jenkins in ECS

I am trying to get the Jenkins Docker image deployed to ECS and have docker-compose work inside of my pipeline.
I have hit wall after wall trying to get this Jenkins container launched and functioning. Most of the issues have just been getting the docker command to work inside of a pipeline (including getting the permissions/group right).
I've gotten to the point where the docker command works and uses the host Docker socket (docker ps shows the Jenkins container and the ECS agent), and docker-compose works too (docker-compose --version succeeds). But when I try to run anything that involves files inside the pipeline, I get a "no such file or directory" error. This happens when I run docker-compose -f docker-compose.testing.yml up -d --build (it can't find the yml file), and also when I try a basic docker build: it can't find the local files used in the COPY command (i.e. COPY . /app). I've tried changing the path to ./file.yml and $PWD/file.yml and still get the same error.
Here is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
apt-get install -y --no-install-recommends curl
RUN apt-get remove docker
RUN curl -sSL https://get.docker.com/ | sh
RUN curl -L --fail https://github.com/docker/compose/releases/download/1.21.2/run.sh -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN groupadd -g 497 dockerami \
&& usermod -aG dockerami jenkins
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
COPY jobs /app/jenkins/jobs
COPY jenkins.yml /var/jenkins_home/jenkins.yml
RUN xargs /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml
I also have Terraform building the task definition and binding the /var/run/docker.sock from the host to the jenkins container.
I'm hoping to get this working, since I've liked Jenkins since we started using it about two years ago, and these pipelines worked with docker-compose in our non-containerized Jenkins install; getting Jenkins containerized, though, has so far been like pulling teeth. I would much rather get this working than have to change my workflows right now to something like Concourse or Drone.
One issue in your Dockerfile is that you are copying a file into /var/jenkins_home, which will disappear: /var/jenkins_home is declared as a volume in the parent Jenkins image, and any files you copy into a volume path after the volume has been declared are discarded; see https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
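A sketch of one way around this with the stock Jenkins image: files placed under /usr/share/jenkins/ref/ are copied into /var/jenkins_home by the image's entrypoint at container startup, so the file survives the volume declaration:

```dockerfile
# Copy into the ref directory rather than directly into the /var/jenkins_home volume;
# the Jenkins entrypoint copies ref/ contents into the volume when the container starts
COPY jenkins.yml /usr/share/jenkins/ref/jenkins.yml
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml
```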

How to write docker file to run a docker run command inside an image

I have a shell script which creates and runs Docker containers using the docker run command. I want to keep this script in a Docker image and run it from there. I know that we can't normally run Docker inside a container. Is it possible to write a Dockerfile that achieves this?
Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y vim-gnome curl
RUN curl -L https://raw.githubusercontent.com/xyz/abx/test/testing/testing_docker.sh -o testing_docker.sh
RUN chmod +x testing_docker.sh
CMD ["./testing_docker.sh"]
testing_docker.sh:
docker run -it docker info (sample command)
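The usual workaround is not to run a Docker daemon inside the container but to mount the host's Docker socket, so a docker CLI inside the container drives the host daemon, as the Docker-in-Docker questions above illustrate. A sketch (testing-image is a placeholder name for an image built from the Dockerfile above, which would also need the docker CLI installed):

```shell
docker run -v /var/run/docker.sock:/var/run/docker.sock testing-image
```

Note that containers started this way by the script are siblings of your container on the host, not children inside it.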
