How to specify the size of a Docker volume?

I am using Docker version 20.10.17, build 100c701, on Ubuntu 20.04.4.
I have successfully created a container from the jenkins/jenkins:lts image with its own volume, using the following command:
docker run -p 8082:8080 -p 50001:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
But after installing many plugins and running many jobs on Jenkins, I kept getting a notification in the Jenkins GUI that the storage is almost full (at roughly 388 MB).
1- What is the default size of a Docker volume? I couldn't find an answer anywhere.
2- I tried to specify the size of the volume (after deleting everything: image/container/volume) using driver_opts in a docker compose file.
The docker-compose.yml file:
version: '3'
services:
  jenkins:
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_home:/var/jenkins_home
    ports:
      - 8082:8080
      - 50001:50000
volumes:
  jenkins_home:
    driver_opts:
      o: "size=900m"
The Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get -y update && apt-get install -y lsb-release &&\
apt-get -y install apt-transport-https ca-certificates
RUN apt-get -y install curl && \
apt-get -y update && \
apt-get -y install python3.10
RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
https://download.docker.com/linux/debian/gpg
RUN echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get -y update && \
apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
USER jenkins
I got an error that the required device option is not specified.
I don't want temporary tmpfs storage, so I tried to specify a path on my machine instead; then I got an error that there is no such device.
What am I doing wrong? How should I proceed?
My final target is to create a Jenkins container that has a large volume size.

You could create a volume with a specified size using this command:
docker volume create --driver local \
    --opt type=tmpfs \
    --opt device=tmpfs \
    --opt o=size=100m,uid=1000 \
    foo
And use it when creating the container.
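For reference, a named volume on the default local driver has no size cap of its own; it can grow until the backing host filesystem is full, which is why no "default size" is documented. Note also that the command above creates a tmpfs-backed volume, which lives in memory and does not survive a host reboot. A minimal compose equivalent, assuming that trade-off is acceptable:
volumes:
  jenkins_home:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "size=900m,uid=1000"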

Related

Jenkins on docker not finding xml pytest report

I have a rootless Docker host, Jenkins on Docker, and a FastAPI app inside a container as well.
Jenkins Dockerfile:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
This is the docker run command:
docker run -d --name jenkins-docker --restart=on-failure -v jenkins_home:/var/jenkins_home -v /run/user/1000/docker.sock:/var/run/docker.sock -p 8080:8080 -p 5000:5000 jenkins-docker-image
Where -v /run/user/1000/docker.sock:/var/run/docker.sock is used so jenkins-docker can use the host's docker engine.
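As a quick sanity check (not from the original post): if the socket mount and its permissions are right, the Docker CLI installed inside the container should reach the host engine.
docker exec jenkins-docker docker version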
Then, for the tests I have a docker compose file:
services:
  app:
    volumes:
      - /home/jap/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result:/usr/src
    depends_on:
      - testdb
    ...
  testdb:
    image: postgres:14-alpine
    ...
volumes:
  test-result:
Here I am using the volume created on the host when I ran jenkins-docker-image. After running the Jenkins 'test' stage, I can see that a report.xml file was created inside both the host and jenkins-docker volumes.
Inside jenkins-docker
root@89b37f219be1:/var/jenkins_home/workspace/vlep-pipeline_main/test-result# ls
report.xml
Inside host
jap@jap:~/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result $ ls
report.xml
I then have the following steps in my Jenkinsfile:
steps {
    sh 'docker compose -p testing -f docker/testing.yml up -d'
    junit "/var/jenkins_home/workspace/vlep-pipeline_main/test-result/report.xml"
}
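For reference, the junit step resolves its argument as an Ant-style pattern relative to the workspace rather than as an absolute filesystem path, so a relative form (illustrative) would be:
junit 'test-result/report.xml'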
I also tried using the host path for the junit step, but either way I get this in the Jenkins logs:
Recording test results
No test report files were found. Configuration error?
What am I doing wrong?

"docker:19.03-dind" could not select device driver "nvidia" with capabilities: [[gpu]]

I got a K8s + DinD issue:
- launch a Kubernetes cluster
- start a main Docker image and a DinD image inside this cluster
- when running a job requesting a GPU, I got the error: could not select device driver "nvidia" with capabilities: [[gpu]]
Full error:
http://localhost:2375/v1.40/containers/long-hash-string/start: Internal Server Error ("could not select device driver "nvidia" with capabilities: [[gpu]]")
When I exec into the DinD container inside the K8s pod, nvidia-smi is not available.
After some debugging, it seems this is because the DinD image is missing the NVIDIA container toolkit. I had the same error when I ran the same job directly on my local laptop's Docker, and I fixed it there by installing nvidia-docker2: sudo apt-get install -y nvidia-docker2.
I'm thinking maybe I can try to install nvidia-docker2 in the DinD 19.03 image (docker:19.03-dind), but I'm not sure how to do it. With a multi-stage Docker build?
Thank you very much!
Update:
Pod spec:
spec:
  containers:
  - name: dind-daemon
    image: docker:19.03-dind
I got it working myself.
Referring to
https://github.com/NVIDIA/nvidia-docker/issues/375
https://github.com/Henderake/dind-nvidia-docker
First, I modified the ubuntu-dind image (https://github.com/billyteves/ubuntu-dind) to install nvidia-docker (i.e. added the instructions from the nvidia-docker site to the Dockerfile) and changed it to be based on nvidia/cuda:9.2-runtime-ubuntu16.04.
Then I created a pod with two containers: a frontend Ubuntu container and a privileged Docker daemon container as a sidecar. The sidecar's image is the modified one I mentioned above.
But since that post is three years old, I spent quite some time matching up dependency versions, repository migrations, etc.
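A minimal sketch of that two-container pod, with illustrative names; DOCKER_HOST points at the sidecar's port 2375, matching the EXPOSE in the Dockerfile below:
apiVersion: v1
kind: Pod
metadata:
  name: dind-nvidia-pod
spec:
  containers:
  - name: frontend
    image: ubuntu:20.04
    command: ["sleep", "infinity"]
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
  - name: dind-daemon
    image: brandsight/dind:nvidia-docker
    securityContext:
      privileged: true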
My modified version of the Dockerfile to build it:
ARG CUDA_IMAGE=nvidia/cuda:11.0.3-runtime-ubuntu20.04
FROM ${CUDA_IMAGE}
ARG DOCKER_CE_VERSION=5:18.09.1~3-0~ubuntu-xenial
RUN apt-get update -q && \
apt-get install -yq \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" && \
apt-get update -q && apt-get install -yq docker-ce docker-ce-cli containerd.io
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN set -eux; \
apt-get update -q && \
apt-get install -yq \
btrfs-progs \
e2fsprogs \
iptables \
xfsprogs \
xz-utils \
# pigz: https://github.com/moby/moby/pull/35697 (faster gzip implementation)
pigz \
# zfs \
wget
# set up subuid/subgid so that "--userns-remap=default" works out-of-the-box
RUN set -x \
&& addgroup --system dockremap \
&& adduser --system --ingroup dockremap dockremap \
&& echo 'dockremap:165536:65536' >> /etc/subuid \
&& echo 'dockremap:165536:65536' >> /etc/subgid
# https://github.com/docker/docker/tree/master/hack/dind
ENV DIND_COMMIT 37498f009d8bf25fbb6199e8ccd34bed84f2874b
RUN set -eux; \
wget -O /usr/local/bin/dind "https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind"; \
chmod +x /usr/local/bin/dind
##### Install nvidia docker #####
# Add the package repositories
RUN curl -fsSL https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add --no-tty -
RUN distribution=$(. /etc/os-release;echo $ID$VERSION_ID) && \
echo $distribution && \
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
tee /etc/apt/sources.list.d/nvidia-docker.list
RUN apt-get update -qq --fix-missing
RUN apt-get install -yq nvidia-docker2
RUN sed -i '2i \ \ \ \ "default-runtime": "nvidia",' /etc/docker/daemon.json
RUN mkdir -p /usr/local/bin/
COPY dockerd-entrypoint.sh /usr/local/bin/
RUN chmod 777 /usr/local/bin/dockerd-entrypoint.sh
RUN ln -s /usr/local/bin/dockerd-entrypoint.sh /
VOLUME /var/lib/docker
EXPOSE 2375
ENTRYPOINT ["dockerd-entrypoint.sh"]
#ENTRYPOINT ["/bin/sh", "/shared/dockerd-entrypoint.sh"]
CMD []
When I exec into the Docker-in-Docker container, I can successfully run nvidia-smi (which previously returned a "not found" error, after which any GPU-related docker run would fail).
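An illustrative check, with a placeholder pod name:
kubectl exec -it <pod-name> -c dind-daemon -- nvidia-smi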
You are welcome to pull my image at brandsight/dind:nvidia-docker.

Not able to access elasticsearch from docker container

Elasticsearch is running successfully in a Docker container, but I'm not able to access it from a browser. I mapped the ports correctly, but the problem is inside the container: Elasticsearch is bound to localhost
127.0.0.1:9200
Dockerfile
FROM ubuntu:16.04
MAINTAINER Rajesh Gurram
RUN apt-get update && \
apt-get install -y net-tools curl wget gnupg
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:webupd8team/java && \
apt-get update && \
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
apt-get install -y oracle-java8-installer && apt-get clean
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
RUN apt-get install -y apt-transport-https
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add - && \
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list && \
apt update && apt install -y elasticsearch
RUN sed -i 's/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /etc/elasticsearch/elasticsearch.yml
EXPOSE 9200 9300
Run the command below on the host machine; it will resolve the issue:
$ sysctl -w vm.max_map_count=262144
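To make the setting survive reboots, it can also be persisted (standard sysctl mechanics, not part of the original answer):
$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p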
If you want to use Docker to get an instance of Elasticsearch, you can read the following guide:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
You can also use the Docker images directly from Elastic, if Ubuntu is not a necessary base image:
https://www.docker.elastic.co/
If you want to upgrade to an ELK stack later on, I recommend a Docker volume for persistence purposes.
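A minimal sketch of that approach; the version tag and volume name are illustrative:
docker run -d --name elasticsearch \
    -p 9200:9200 -p 9300:9300 \
    -e "discovery.type=single-node" \
    -v esdata:/usr/share/elasticsearch/data \
    docker.elastic.co/elasticsearch/elasticsearch:6.8.23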

How to build docker image in Jenkins running into docker?

I just tried to build my test image for a Jenkins course and ran into this issue:
+ docker build -t nginx_lamp_app .
/var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: 2: /var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
But I've already configured the Docker socket in the docker-compose file for Jenkins, like this:
version: "2"
services:
jenkins:
image: "jenkins/jenkins:lts"
ports:
- "8080:8080"
restart: "always"
volumes:
- "/var/jenkins_home:/var/jenkins_home"
- "/var/run/docker.sock:/var/run/docker.sock"
But when I attach to the container, I also get "docker: not found" when I type the docker command...
And I've changed the socket's permissions to 777.
What could be wrong?
Thanks!
You are trying to achieve a Docker-in-Docker kind of thing. Mounting just the Docker socket will not make it work as you expect; you need to install the Docker binary into the image as well. You can do this either by extending your Jenkins image/Dockerfile, or by creating (docker commit) a new image after installing the Docker binary into the container, and then using that image for your CI/CD. Try to integrate the RUN statement below into the extended Dockerfile or the container to be committed (it should work on an Ubuntu-based Docker image):
RUN apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce
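Since the daemon itself keeps running on the host and is only reached through the mounted socket, installing just the client package is a lighter alternative (assuming the same Docker apt repository has been added, as in the block above):
RUN apt-get update && apt-get -y install docker-ce-cli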
Ref - https://github.com/jpetazzo/dind
PS: it isn't really recommended (http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/).
Adding to that, you shouldn't mount the host's Docker binary inside the container:
⚠️ Former versions of this post advised to bind-mount the docker
binary from the host to the container. This is not reliable anymore,
because the Docker Engine is no longer distributed as (almost) static
libraries.

Cannot execute ansible playbook via docker container

I'm executing a pipeline on Jenkins that runs inside a Docker container. This pipeline calls another docker-compose file that executes an Ansible playbook. The service that executes the playbook is called agent and is defined as follows:
agent:
  image: pjestrada/ansible
  links:
    - db
  environment:
    PROBE_HOST: "db"
    PROBE_PORT: "3306"
  command: ["probe.yml"]
This is the image it uses:
FROM ubuntu:trusty
MAINTAINER Pablo Estrada <pjestradac@gmail.com>
# Prevent dpkg errors
ENV TERM=xterm-256color
RUN sed -i "s/http:\/\/archive./http:\/\/nz.archive./g" /etc/apt/sources.list
#Install ansible
RUN apt-get update -qy && \
apt-get install -qy software-properties-common && \
apt-add-repository -y ppa:ansible/ansible && \
apt-get update -qy && \
apt-get install -qy ansible
# Copy baked in playbooks
COPY ansible /ansible
# Add volume for Ansible playbooks
VOLUME /ansible
WORKDIR /ansible
RUN chmod +x /
#Entrypoint
ENTRYPOINT ["ansible-playbook"]
CMD ["site.yml"]
My local machine is Ubuntu 16.04, and when I run docker-compose up agent the playbook executes successfully. However, when I'm inside the Jenkins container, I get this error from the same command:
Attaching to todobackend9dev_agent_1
agent_1 | ERROR! the playbook: site.yml does not appear to be a file
These are the image and compose file for my Jenkins container:
FROM jenkins:1.642.1
MAINTAINER Pablo Estrada <pjestradac@gmail.com>
# Suppress apt installation warnings
ENV DEBIAN_FRONTEND=noninteractive
# Change to root user
USER root
# Used to set the docker group ID
# Set to 497 by default, which is the group ID used by AWS Linux ECS Instance
ARG DOCKER_GID=497
# Create Docker Group with GID
# Set default value of 497 if DOCKER_GID set to blank string by Docker Compose
RUN groupadd -g ${DOCKER_GID:-497} docker
# Used to control Docker and Docker Compose versions installed
# NOTE: As of February 2016, AWS Linux ECS only supports Docker 1.9.1
ARG DOCKER_ENGINE=1.10.2
ARG DOCKER_COMPOSE=1.6.2
# Install base packages
RUN apt-get update -y && \
apt-get install apt-transport-https curl python-dev python-setuptools gcc make libssl-dev -y && \
easy_install pip
# Install Docker Engine
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | tee /etc/apt/sources.list.d/docker.list && \
apt-get update -y && \
apt-get purge lxc-docker* -y && \
apt-get install docker-engine=${DOCKER_ENGINE:-1.10.2}-0~trusty -y && \
usermod -aG docker jenkins && \
usermod -aG users jenkins
# Install Docker Compose
RUN pip install docker-compose==${DOCKER_COMPOSE:-1.6.2} && \
pip install ansible boto boto3
# Change to jenkins user
USER jenkins
# Add Jenkins plugins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Compose file:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
I mounted a volume in order to access the Docker socket from my Jenkins container. However, for some reason I'm not able to access the site.yml file I need for the playbook, even though the file is available outside the container.
Can anyone help me solve this issue?
How sure are you about that volume mount point and your paths?
- jenkins_home:/var/jenkins_home
Have you tried debugging via echo? If it can't find the site.yml, then paths are the most likely cause. You can use Jenkins replay on a job to iterate quickly and modify parts of the Jenkins code. That will let you run things like:
sh "pwd; ls -la"
I recommend adding the equivalent within your Docker container so you can check the paths. My guess is that the workspace isn't where you think it is, and you'll want to run docker with:
-v ${env.WORKSPACE}:/jenkins-workspace
and then within the container:
pushd /jenkins-workspace
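Put together, a hedged sketch of that debugging inside a pipeline stage (the mount path is illustrative; --entrypoint overrides the image's ansible-playbook entrypoint so a shell can inspect the paths):
steps {
    sh 'pwd; ls -la'
    sh "docker run --rm --entrypoint sh -v ${env.WORKSPACE}:/jenkins-workspace pjestrada/ansible -c 'cd /jenkins-workspace && ls -la'"
}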
