How to push files from docker Jenkins to local dir - docker

I am new to Jenkins and Docker. I wonder if there is a way to push files from the container to my local machine. I mounted a local dir into Docker, but it seems the files are only updated inside the container.
local dir: /home/xyz/
container dir: /var/jenkins_home/xyz
docker run \
--name jenkins \
--restart=on-failure \
--detach \
--network jenkins \
--env DOCKER_HOST=tcp://docker:2376 \
--env DOCKER_CERT_PATH=/certs/client \
--env DOCKER_TLS_VERIFY=1 \
--publish 8080:8080 \
--publish 50000:50000 \
--mount type=bind,source=/home/xyz/,target=/var/jenkins_home/xyz \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
myjenkins-blueocean:2.361.3-1

When you ran the Jenkins image with --volume jenkins-data:/var/jenkins_home you used a Docker volume. Docker manages the volume, and you can find the exact data location by inspecting the volumes of the container.
When you used --mount type=bind,source=/home/xyz/,target=/var/jenkins_home/xyz you mapped the folder /var/jenkins_home/xyz in the container to the folder /home/xyz/ on the Docker host using a bind mount. Changes to the Jenkins home data in the container are reflected on the host path. Create a new job and you will see its definition appear in the jobs folder.
You should use either Docker volumes or bind mounts, not both, for a single data folder.
If you want to copy data from the container to the host, use the docker cp command.
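For example, a quick sketch using the container name and paths from the command above:

# show where Docker stores the jenkins-data volume on the host (see the Mountpoint field)
docker volume inspect jenkins-data
# copy the contents of the xyz folder from the running container to the host
docker cp jenkins:/var/jenkins_home/xyz/. /home/xyz/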

Related

How should I mount a host httpd.conf file into an Apache httpd Docker container?

I'm spinning up a docker container using:
docker run -d \
--add-host=host.docker.internal:host-gateway \
--name=apache \
--restart always \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Europe/London \
-p 80:80 \
-v /share/CACHEDEV1_DATA/Container/apache/config/httpd.conf:/usr/local/apache2/conf/httpd.conf \
-v /share/CACHEDEV1_DATA/Container/apache/config/httpd-vhosts.conf:/usr/local/apache2/conf/extra/httpd-vhosts.conf \
httpd:latest
Unfortunately, the httpd.conf file within the container does not match the local file on the host. Interestingly, the httpd-vhosts.conf file within the container does match the local file on the host.
Build your own image based on httpd:latest, as described here:
FROM httpd:latest
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
Assuming you do not care about frequent configuration changes, which require restarting Apache anyway, it is better for performance reasons to have the config copied directly into the image.
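A possible build-and-run sequence, assuming the Dockerfile above sits next to my-httpd.conf (the my-httpd image tag is just an example name):

# build the image with the baked-in config, then run it
docker build -t my-httpd .
docker run -d --name apache -p 80:80 my-httpd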
For anyone else having the same issue, I've solved this by placing the whole conf directory on the host and mapping it in the docker run with:
-v /share/CACHEDEV1_DATA/Container/apache/conf:/usr/local/apache2/conf
First I had to run the container once and copy (docker cp) container_id:/usr/local/apache2/conf to the host.
No problems since then.
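Roughly, the whole sequence looks like this (a sketch reusing the paths from the question; apache-tmp is just a throwaway container name):

# start a temporary container and copy the default conf directory to the host
docker run -d --name apache-tmp httpd:latest
docker cp apache-tmp:/usr/local/apache2/conf /share/CACHEDEV1_DATA/Container/apache/
docker rm -f apache-tmp
# run the real container with the whole conf directory bind-mounted
docker run -d --name apache -p 80:80 \
  -v /share/CACHEDEV1_DATA/Container/apache/conf:/usr/local/apache2/conf \
  httpd:latest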

How to completely erase a Docker container of GitLab Server from machine?

While writing an automated deployment script for a self-hosted GitLab server I noticed that my uninstallation script does not (completely) delete the GitLab server settings, nor repositories. I would like the uninstaller to completely remove all traces of the previous GitLab server installation.
MWE
#!/bin/bash
uninstall_gitlab_server() {
  gitlab_container_id=$1
  sudo systemctl stop docker
  sudo docker stop gitlab/gitlab-ce:latest
  sudo docker rm gitlab/gitlab-ce:latest
  sudo docker rm -f "$gitlab_container_id"
}
uninstall_gitlab_server <some_gitlab_container_id>
Observed behaviour
When running the installation script again, the GitLab repositories are preserved, and the GitLab root user account password from the previous installation is also preserved.
Expected behaviour
I would expect the Docker container, and with it the GitLab server data, to be erased from the device. I would therefore expect the GitLab server to ask for a new root password and not to display the previously existing repositories.
Question
How can I completely remove the GitLab server that is installed with:
sudo docker run --detach \
--hostname $GITLAB_SERVER \
--publish $GITLAB_PORT_1 --publish $GITLAB_PORT_2 --publish $GITLAB_PORT_3 \
--name $GITLAB_NAME \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_ROOT_EMAIL=$GITLAB_ROOT_EMAIL -e GITLAB_ROOT_PASSWORD=$gitlab_server_password \
gitlab/gitlab-ce:latest
Stopping and removing the containers doesn't remove any host/Docker volumes you may have mounted/created.
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
You need to rm -rf $GITLAB_HOME
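A complete teardown could look roughly like this (a sketch using the variable names from the question; double-check $GITLAB_HOME before running rm -rf):

# stop and remove the container, then delete the bind-mounted config, logs and data
sudo docker rm -f $GITLAB_NAME
sudo rm -rf $GITLAB_HOME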

How can I use my saved volume data after restarting docker?

I have a Docker container based on Linux running on a Windows PC. I have pulled and installed the GitLab CI/CD image. Everything is running and I can log in to GitLab, but every time I restart the Docker container it is as if I lose all my data. I understand it overwrites the previous data saved inside the container, but I need a way to persist that data. From my understanding the only way is to point the volumes of the GitLab image to directories saved on my PC somehow. How do I do this, or something similar, so that I won't lose my data when Docker restarts?
The script I ran to instantiate the GitLab image is the following:
docker run -d --hostname gitlab.wproject.gr \
-p 4433:443 -p 80:80 -p 2223:22 \
--name gitlab-server1 \
--restart always \
--volume /storage/gitlab/config:/etc/gitlab \
--volume /storage/gitlab/logs:/var/log/gitlab \
--volume /storage/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
Try using relative paths for your volumes instead of absolute paths. If you use Docker Desktop on Windows, volume management doesn't always behave the same way as on Linux.
Test with:
mkdir gitlab
docker run -d --hostname gitlab.wproject.gr \
-p 4433:443 -p 80:80 -p 2223:22 \
--name gitlab-server1 \
--restart always \
--volume ./gitlab/config:/etc/gitlab \
--volume ./gitlab/logs:/var/log/gitlab \
--volume ./gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest

How to set heap memory in cassandra on docker

I am using the official Cassandra Docker image to set up my local env.
As part of this I want to limit the amount of memory Cassandra uses in my local deployment.
By default Cassandra has a predefined way of setting its memory.
I found references saying that I can use JVM_OPTS to set these values, but it does not seem to take hold.
I am looking for a way to set these values without creating my own Cassandra Docker image.
Docker command that is used to run container:
docker run -dit --name sdc-cs --env RELEASE="${RELEASE}" \
--env CS_PASSWORD="${CS_PASSWORD}" --env ENVNAME="${DEP_ENV}" \
--env HOST_IP=${IP} --env JVM_OPTS="-Xms1024m -Xmx1024m" \
--log-driver=json-file --log-opt max-size=100m \
--log-opt max-file=10 --ulimit memlock=-1:-1 --ulimit nofile=4096:100000 \
--volume /etc/localtime:/etc/localtime:ro \
--volume ${WORKSPACE}/data/CS:/var/lib/cassandra \
--volume ${WORKSPACE}/data/environments:/root/chef-solo/environments \
--publish 9042:9042 --publish 9160:9160 \
${PREFIX}/sdc-cassandra:${RELEASE} /bin/s
Any advice will be appreciated!
I am using docker-compose; in the docker-compose.yml file I set the following env variables, and it seems to work:
environment:
  - HEAP_NEWSIZE=128M
  - MAX_HEAP_SIZE=2048M
The entrypoint script starts Cassandra as usual, and during startup it executes the cassandra-env.sh script, which may set memory options if they aren't already set in the JVM_OPTS environment variable. So if you start the container with the corresponding memory options set via -e JVM_OPTS=..., it should work.
But in the long run it's better to submit config files via the /config mount point of the Docker image and put the memory options into the jvm.options file that is loaded by cassandra-env.sh.
P.S. Just tried it on my machine:
docker run --rm -e DS_LICENSE=accept store/datastax/dse-server:5.1.5
This gives me the following memory switches: -Xms1995M -Xmx1995M.
If I run it with:
docker run --rm -e DS_LICENSE=accept \
-e JVM_OPTS="-Xms1024M -Xmx1024M" store/datastax/dse-server:5.1.5
then it gives the correct -Xms1024M -Xmx1024M.
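Putting the jvm.options suggestion above into a rough sketch (this assumes the image reads configuration overrides from a /config bind mount, as the DSE image does; file names and paths may differ for other images):

# put the heap settings into a local jvm.options and mount the directory at /config
mkdir -p config
printf -- '-Xms1024M\n-Xmx1024M\n' > config/jvm.options
docker run --rm -e DS_LICENSE=accept \
  -v "$(pwd)/config:/config" store/datastax/dse-server:5.1.5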

How to mount a container directory to the host?

I built an image of my web project with all my dependencies in the image at /app. When running the container it starts blazing fast and I'm able to access the application instantly.
However, I built everything directly in the Dockerfile, so the host has nothing except the Dockerfile.
So I tried to retrieve the project files with docker run -v $(pwd):/app image_name, but it seems the folder gets overridden, because when trying to serve the public folder it can't be found anymore. If I just leave out the volume option, it starts fine.
Am I right in thinking the bind mount overrides my container folder?
Why does this work for the GitLab project? (https://docs.gitlab.com/omnibus/docker/README.html#prerequisites)
They have the whole project in the container and mount it on the host.
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
For named volumes, the image contents are copied from the image to the named volume upon volume creation. This named volume will have the contents of /bin copied to the volume:
docker run -ti -v busyboxbin:/bin busybox sh
Bind-mounted directories are mounted in place and override any image contents. So this example would fail (unless you already had a copy of the files in /tmp/empty):
docker run -ti -v /tmp/empty:/bin busybox sh
The gitlab container will populate the bind-mounted volume contents after the image has started. The logs are easy: new log files simply get written there. The data directory may need to be initialised by the app. The image probably comes with pre-canned config that it writes out and runs with if no files exist.
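If the goal is just to get the baked-in /app contents back onto the host, one simple option is to copy them out of a container instead of bind-mounting over them (a sketch; image_name is the image from the question and app-tmp is a throwaway name):

# create a container without starting it, copy /app to the host, then clean up
docker create --name app-tmp image_name
docker cp app-tmp:/app ./app
docker rm app-tmp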
