I want to build a Jenkins Docker container with root permissions so that I can use apt-get to install Gradle.
I am using this command to run Jenkins on port 8080, but I also want to add Gradle as an environment variable:
docker run -p 8080:8080 -p 50000:50000 -v /var/jenkins_home:/var/jenkins_home jenkins
Or, what Dockerfile do I need to create, and what should it contain, so that Jenkins also starts running at 8080?
I am now able to log in to my Docker container as root, and apt-get can be used to install Gradle or anything else manually into the container.
Command I used to enter the container as root:
docker exec -u 0 -it mycontainer bash
Building an image that sets USER to root will make all interactive logins use root.
Dockerfile
FROM jenkins/jenkins
USER root
Then (substituting your container name or ID):
docker exec -it jenkins_jenkins_1 bash
root@9e8f16419754:/#
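Since the original goal was installing Gradle with apt-get, a possible next step is to bake it into the image instead of installing it by hand each time. This is only a sketch: the gradle apt package name and the /usr/share/gradle path are assumptions about the Debian-based jenkins/jenkins image, so verify them on your base image.

```dockerfile
FROM jenkins/jenkins
USER root
# Install Gradle from the distribution's repositories (package name assumed)
RUN apt-get update \
    && apt-get install -y gradle \
    && rm -rf /var/lib/apt/lists/*
# Expose Gradle's location as an environment variable (path assumed)
ENV GRADLE_HOME=/usr/share/gradle
# Drop back to the jenkins user so Jenkins itself does not run as root
USER jenkins
```

Build it with docker build -t jenkins-gradle . and run it with the same docker run flags as before.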
I'm using NVIDIA Docker on a Linux machine (Ubuntu 20.04). I've created a container named user1 using the nvidia/cuda:11.0-base image as follows:
docker run --gpus all --name user1 -dit nvidia/cuda:11.0-base /bin/bash
And, here is what I see if I run docker ps -a:
admin@my_desktop:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a365362840de nvidia/cuda:11.0-base "/bin/bash" 3 seconds ago Up 2 seconds user1
I want to access that container via ssh, using its unique IP address, from a totally different machine (other than my_desktop, which is the host). First of all, is it possible to grant each container a unique IP address? If so, how can I do it? Thanks in advance.
In case you want to access your container over ssh from an external VM, you need to do the following:
Install the ssh daemon in your container
Run the container and expose its ssh port
I would propose the following Dockerfile, which builds on nvidia/cuda:11.0-base and creates an image with the ssh daemon inside:
Dockerfile
# Instruction for Dockerfile to create a new image on top of the base image (nvidia/cuda:11.0-base)
FROM nvidia/cuda:11.0-base
ARG root_password
RUN apt-get update || echo "OK" && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo "root:${root_password}" | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image from the Dockerfile
docker image build --build-arg root_password=password --tag nvidia/cuda:11.0-base-ssh .
Create the container
docker container run -d -P --name ssh nvidia/cuda:11.0-base-ssh
Run docker ps (or docker port ssh 22) to see which host port was mapped to the container's ssh port
Finally, access the container
ssh -p 49157 root@<VM_IP>
EDIT: As David Maze correctly pointed out, you should be aware that the root password will be visible in the image history. Also, this approach replaces the original container process.
This process needs to be modified if it is to be adopted for production use. It serves as a starting point for someone who wishes to add ssh to their container.
This question already has an answer here:
Running nuxt js application in Docker
(1 answer)
Closed 2 years ago.
I'm running:
sudo docker run -d -p 9001:9001 --rm --name <cname> <img>
then I go to localhost:9001 in my browser, but there is no connection.
If I run:
sudo docker run -d --network=host --rm --name <cname> <img>
I can access the application at localhost:9001 from my browser.
Running the first command, I can verify the app is running properly inside docker by running:
sudo docker exec <cname> wget localhost:9001
which returns a page as expected.
If it is useful: the application is a standard nuxt.js app that listens on port 9001. The Dockerfile used to generate the image is below (I ran npm run build before docker image build):
FROM node:lts-alpine
WORKDIR /app/
COPY . /app/
EXPOSE 9001
ENTRYPOINT npm start
The Docker version I'm using is 19.03.8-ce. How would I fix this?
Try running docker without sudo. Using docker with sudo is not good practice and can cause a lot of trouble.
To use docker without sudo, you should add yourself to the "docker" group, as stated in the official documentation.
To create the docker group and add your user:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
Docker post-install documentation
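After logging back in, you can confirm the membership took effect; the exact group names are system-specific, but "docker" should appear in the list:

```shell
# List the group names of the current user; "docker" should be among them
id -nG
```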
I have a virtual machine hosting Oracle Linux where I've installed Docker and created containers using a docker-compose file. I placed the Jenkins volume under a shared folder, but when running docker-compose up I got the following error for Jenkins:
jenkins | touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
jenkins | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
jenkins exited with code 1
Here's the volumes declaration
volumes:
- "/media/sf_devops-workspaces/dev-tools/continuous-integration/jenkins:/var/jenkins_home"
The easy fix is to use the -u parameter. Keep in mind this will run Jenkins as the root user (uid=0):
docker run -u 0 -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
As haschibaschi stated, your user in the container has a different userid:groupid than the user on the host.
To get around this, start the container without the (problematic) volume mapping, then run bash in the container:
docker run -p 8080:8080 -p 50000:50000 -it jenkins /bin/bash
Once inside the container's shell run the id command and you'll get results like:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
Exit the container, go to the folder you are trying to map, and run:
chown -R 1000:1000 .
With the permissions now matching, you should be able to run the original docker command with the volume mapping.
The problem is that your user in the container has a different userid:groupid than the user on the host.
You have two possibilities:
You can ensure that the user in the container has the same userid:groupid as the user on the host that has access to the mounted volume. For this, adjust the user in the Dockerfile: create a user with the same userid:groupid and then switch to this user (https://docs.docker.com/engine/reference/builder/#user)
You can ensure that the user on the host has the same userid:groupid as the user in the container. For this, enter the container with docker exec -it <container-name> bash and look up the user id (id -u <username>) and group id (id -g <username>). Then change the ownership of the mounted volume to this userid:groupid.
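A quick way to compare the two sides, sketched with a throwaway directory (/tmp/jenkins_home_demo is a stand-in for your real mount path):

```shell
# Inspect the numeric owner of the host directory you plan to mount
mkdir -p /tmp/jenkins_home_demo
stat -c '%u:%g' /tmp/jenkins_home_demo
# Compare with the ids of the user that runs inside the container
# (the jenkins user in the official image is 1000:1000)
id -u
id -g
```

If the two don't match, chown the host directory to the container user's ids as described above.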
You may be running under SELinux. Running the container as privileged solved the issue for me:
sudo docker run --privileged -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
From https://docs.docker.com/engine/reference/commandline/run/#full-container-capabilities---privileged:
The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.
As an update to @Kiem's response: using $UID to ensure the container uses the same user id as the host, you can do this:
docker run -u $UID -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
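One caveat worth knowing: $UID is a shell variable set by bash and zsh, not an exported environment variable, so it is empty in plain sh scripts. A portable equivalent is $(id -u):

```shell
# $UID is provided by bash/zsh; id -u works in any POSIX shell
id -u
bash -c 'echo "$UID"'   # same value when run under bash
```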
I had a similar issue with Minikube/Kubernetes. I just added
securityContext:
  fsGroup: 1000
  runAsUser: 0
under deployment -> spec -> template -> spec
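For orientation, here is a minimal (hypothetical) Deployment showing where that block sits; the name, labels, and image tag are all placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      # pod-level securityContext: applies to all containers in the pod
      securityContext:
        fsGroup: 1000
        runAsUser: 0
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
```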
This error can be solved using the following commands.
Go to your Jenkins data mount path (/media in this case):
cd /media
Run the following command:
sudo chown -R ubuntu:ubuntu sf_devops-workspaces
Then restart the Jenkins docker container:
docker-compose restart jenkins
I had a similar issue on macOS. I had installed Jenkins using helm on Minikube/Kubernetes; after many attempts I fixed it by adding runAsUser: 0 (run as root) in the values.yaml I use to deploy Jenkins:
master:
  usePodSecurityContext: true
  runAsUser: 0
  fsGroup: 0
Just be careful, because this means all your commands will run as root.
Use this command:
$ chmod 757 /home/your-user/your-jenkins-data
First of all, you can verify your current user using the echo $USER command,
and after that you can specify the user in the Dockerfile as below (in my case the user is root):
[screenshot of the Dockerfile omitted]
I had the same issue; it was resolved after disabling SELinux.
Disabling SELinux is not recommended, so instead install a custom semodule and enable it.
That works; changing the permissions alone won't work on CentOS 7.
I have Airflow running on an EC2 instance, and I am scheduling some tasks that spin up a Docker container. How do I do that? Do I need to install Docker in my Airflow container? And what is the next step after that? I have a yaml file that I am using to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
RUN groupadd --gid 999 docker \
&& usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
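One caveat about the --gid 999 in the Dockerfile above: 999 is simply what the docker group happened to be on that host. For the mounted socket to be usable inside the container, the gid should match your host's docker group, which you can look up like this (the group may not exist on machines without Docker installed):

```shell
# Look up the gid of the host's docker group, if present
docker_gid=$(getent group docker | cut -d: -f3)
echo "${docker_gid:-docker group not found on this host}"
```

Use that value for the --gid argument when building the derived image.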
Finally resolved.
My EC2 setup is running Ubuntu Xenial 16.04 and uses a modified puckel/airflow docker image that runs airflow.
Things you will need to change in the Dockerfile:
Add USER root at the top of the Dockerfile:
USER root
Mounting the docker bin was not working for me, so I had to install the docker binary in my docker container:
Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet and copy it into the script directory next to the Dockerfile. This script starts the docker daemon inside the airflow container.
Install the magic wrapper
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add airflow as a user to the docker group so that airflow can run docker jobs:
RUN usermod -aG docker airflow
Switch to the airflow user:
USER airflow
In your docker-compose file (or in the command line arguments to docker run), mount the docker socket from the host into the airflow container:
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go !
You can spin up docker containers from your airflow docker container by attaching volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by docker; this depends on the system you are running Docker on. Just read the error messages you get when running a docker command inside the container; they will indicate what you need to attach.
Your airflow container will then have full access to Docker running on the host.
So if you launch docker containers, they will run on the host running the airflow container.
I want to run Jenkins in a Docker Container on Centos7.
I saw the official documentation of Jenkins:
First, pull the official jenkins image from Docker repository.
docker pull jenkins
Next, run a container using this image and map a data directory from the container to the host; e.g. in the example below, /var/jenkins_home from the container is mapped to the jenkins/ directory under the current path on the host. The container's port 8080 is also exposed to the host as 49001.
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
But when I try to run the docker container I get the following error:
/usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
Can someone tell me how to fix this problem?
The official Jenkins Docker image documentation says the following regarding volumes:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
This will store the Jenkins data in /your/home on the host. Ensure that /your/home is accessible by the jenkins user inside the container (uid 1000), or use the -u some_other_user parameter with docker run.
This information is also found in the Dockerfile.
So all you need to do is ensure that the directory $PWD/jenkins is owned by UID 1000:
mkdir jenkins
chown 1000 jenkins
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
The newest Jenkins documentation says to use Docker volumes.
Docker is a bit tricky here: with the -v option, a full path means a bind mount, while a bare name means a named volume.
docker run -d -p 49001:8080 -v jenkins-data:/var/jenkins_home -t jenkins
This command will create a docker volume named "jenkins-data" and you will no longer see the error.
Link to manage volumes:
https://docs.docker.com/storage/volumes/