How to mount a container directory to the host? - docker

I built an image of my web project with all its dependencies inside the image at /app. When running the container it starts blazing fast and I can access the application instantly.
However, I build everything directly in the Dockerfile, so the host has nothing except the Dockerfile.
So I tried to retrieve the project files like so: docker run -v $(pwd):/app image_name, but it seems the folder is overridden, because when trying to serve the public folder it can't be found anymore. If I simply leave out the volume option, it starts fine.
Am I right in thinking the bind mount overrides my container folder?
Why does this work for the GitLab project? (https://docs.gitlab.com/omnibus/docker/README.html#prerequisites)
They keep the whole project in the container and mount it on the host:
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest

For named volumes, the image contents are copied into the named volume when the volume is first created. This named volume will have the contents of /bin copied into it:
docker run -ti -v busyboxbin:/bin busybox sh
Bind-mounted directories are mounted in place and override any image contents, so this example would fail (unless you already had a copy of the files in /tmp/empty):
docker run -ti -v /tmp/empty:/bin busybox sh
The GitLab container populates the bind-mounted directories after it starts. The logs are simply written out as they are produced, the data directory may need to be initialised by the app, and the image probably ships with pre-canned config to write out and run with if no files exist.
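If the goal is just to get the built files from the image onto the host without a mount, a minimal sketch (assuming your image is named image_name; app_tmp is just a throwaway container name) is to create a stopped container from the image and copy the directory out:
docker create --name app_tmp image_name
docker cp app_tmp:/app ./app
docker rm app_tmp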

How to push files from docker Jenkins to local dir

I am new to Jenkins and Docker. I wonder if there is a way to push files from the container to the local machine. I mounted a local dir into Docker, but it seems the files only get updated inside the container.
local dir: /home/xyz/
container dir: /var/jenkins_home/xyz
docker run \
--name jenkins \
--restart=on-failure \
--detach \
--network jenkins \
--env DOCKER_HOST=tcp://docker:2376 \
--env DOCKER_CERT_PATH=/certs/client \
--env DOCKER_TLS_VERIFY=1 \
--publish 8080:8080 \
--publish 50000:50000 \
--mount type=bind,source=/home/xyz/,target=/var/jenkins_home/xyz \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
myjenkins-blueocean:2.361.3-1
When you ran the Jenkins image using --volume jenkins-data:/var/jenkins_home you used a Docker volume. Docker manages the volume, and you can find the exact data location by inspecting the container's volumes.
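For example, to see where the jenkins-data volume lives on the host (volume name taken from the run command above):
docker volume inspect --format '{{ .Mountpoint }}' jenkins-data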
When you used --mount type=bind,source=/home/xyz/,target=/var/jenkins_home/xyz you mapped the folder /var/jenkins_home/xyz in the container to the folder /home/xyz/ on the Docker host using a bind mount. Changes to the Jenkins home data in the container are reflected on the host path. Create a new job and you will see its definition in the jobs folder.
You should use either a Docker volume or a bind mount, not both, for a single data folder.
If you want to copy data from the container to the host, use the docker cp command.
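For instance, a sketch of copying the jobs folder out of the container named jenkins (container name taken from the run command above) onto the host:
docker cp jenkins:/var/jenkins_home/jobs /home/xyz/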

Migrating A Docker Volume to Podman

I used to have a Docker volume for mariadb, which contained my database. As part of migration from Docker to Podman, I am trying to migrate the db volume as well. The way I tried this is as follows:
1- Copy the content of the named docker volume (/var/lib/docker/volumes/mydb_vol) to a new directory I want to use for Podman volumes (/opt/volumes/mydb_vol)
2- Run Podman run:
podman run --name mariadb-service -v /opt/volumes/mydb_vol:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host mariadb
This successfully creates a container and initializes the database with the given environment variables. The problem is that the database in the container is empty! I tried changing the host-mounted volume to /opt/volumes/mydb_vol/_data and the container volume to /var/lib/mysql, both simultaneously and one at a time. Neither worked.
As a matter of fact, when I podman exec -ti container_digest bash into the resulting container, I can see that the tables have been mounted successfully in the specified container directories, but the mysql shell says the database is empty!
Any idea how to properly migrate Docker volumes to Podman? Is this even possible?
I solved it by not treating the directory as a docker volume, but instead mounting it into the container:
podman run \
--name mariadb-service \
--mount type=bind,source=/opt/volumes/mydb_vol/data,destination=/var/lib/mysql \
-e MYSQL_USER=wordpress \
-e MYSQL_PASSWORD=mysecret \
-e MYSQL_DATABASE=wordpress \
mariadb
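An alternative sketch, if you prefer a proper Podman named volume instead of a bind mount (the volume name mydb_vol is reused from the question, and the Docker path assumes the default /var/lib/docker/volumes location):
podman volume create mydb_vol
cp -a /var/lib/docker/volumes/mydb_vol/_data/. "$(podman volume inspect --format '{{ .Mountpoint }}' mydb_vol)"
podman run --name mariadb-service -v mydb_vol:/var/lib/mysql -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress mariadb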

How to copy a file folder to docker?

I tried to use
docker cp <source> <container>:<path>
but it always says
Error: No such container:path: <Docker container ID>...
Can you tell me how to find the path of the Docker container? I think the Windows file path is correct.
If you want to do it in a Dockerfile you can use the COPY instruction. If you want to do it with the docker run command, you can do it the following way:
docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
or the equivalent with the -v shorthand:
docker run -d \
--name devtest \
-v myvol2:/app \
nginx:latest
Reference: https://docs.docker.com/storage/volumes/
In your case, since you want to mount a host directory into the container, you need the --mount form with type=bind and the host path as the source, as sketched below.
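A minimal sketch of that bind-mount form (/path/on/host is a placeholder for your actual directory):
docker run -d \
--name devtest \
--mount type=bind,source=/path/on/host,target=/app \
nginx:latest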
You can copy files/folders into a container using the COPY/ADD instructions in a Dockerfile:
COPY <host-source> <container-dest>
Or else you can mount volumes, at docker run time or in the docker-compose.yml file.
E.g. with docker run:
docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
E.g. in a docker-compose.yml file:
volumes:
- db-data:/var/lib/mysql/data
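For context, a minimal sketch of where that volumes entry sits in a compose file (the service name db and the mysql image are assumptions):
services:
  db:
    image: mysql
    volumes:
      - db-data:/var/lib/mysql/data
volumes:
  db-data: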
Volumes: https://docs.docker.com/storage/volumes/
docker-compose.yml: https://docs.docker.com/compose/compose-file/
To use docker cp, run it as shown:
docker cp ./path/to/local/folder/. ContainerName:/app/
You have to provide the container ID or name in place of ContainerName.
docker ps --filter status=running
The above command will show you the running containers, from which you can pick the target container name or ID (the first four or so characters of the ID are enough).
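Copying in the other direction, container to host, uses the same syntax with the arguments swapped:
docker cp ContainerName:/app/. ./path/to/local/folder/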

Mount volume to host

I am currently using Boot2Docker on Windows. Is it possible to mount root to host?
Say that I'm using an Ubuntu image and I would like to mount / to the host. How can I do so?
I've been looking around and trying:
docker run -v /c/Users/ubuntu:/ --name ubuntu -dt ubuntu
But I ended up with an error:
docker: Error response from daemon: Invalid bind mount spec "/c/Users/ubuntu:/": Invalid volume specification: destination can't be '/' in '/c/Users/ubuntu:/'.
If I understand correctly, you are trying to mount the root of a container as a volume? If that is the case, rather create a new directory inside the container and expose that one.
For example, in a Dockerfile:
RUN mkdir /something
VOLUME /something
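You can then mount your host folder onto that directory instead of / (a sketch reusing the path from the question):
docker run -v /c/Users/ubuntu:/something --name ubuntu -dt ubuntu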
As the Docker documentation says, the container directory must always be an absolute path such as /src/docs. The host-dir can either be an absolute path or a name value.
For more information read this: https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume and part "Mount a host directory as a data volume" should give you better understanding.
It's a problem with how you are specifying the path. See this example of mounting a local volume to be used by a container for MongoDB:
docker run --name container-name -v /Users/SKausha3/mongo/imageservicedb/data:/data -v /Users/SKausha3/mongo/imageservicedb/backup:/backup
c:/Users/SKausha3/mongo/imageservicedb/data is my local folder, but you have to remove the 'c:' from the path.
Since you can't mount "/", one option is to add a WORKDIR to your Dockerfile; that way all subsequent commands will be relative to that directory and you won't have to modify anything!
FROM python:latest
WORKDIR /myapp
COPY appfile.py appfile.py
In your Docker image, the appfile.py file will end up at /myapp/appfile.py.
You cannot mount the '/' root directory of a container, but you can mount each of the top-level folders in the root directory into its own Docker volume.
Create the volumes by running these commands one by one, or use a bash script (a loop version is sketched after the list):
docker volume create var
docker volume create usr
docker volume create tmp
docker volume create sys
docker volume create srv
docker volume create sbin
docker volume create run
docker volume create root
docker volume create proc
docker volume create opt
docker volume create mnt
docker volume create media
docker volume create libx32
docker volume create lib64
docker volume create lib32
docker volume create lib
docker volume create home
docker volume create etc
docker volume create dev
docker volume create boot
docker volume create bin
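Equivalently, as a short bash loop (same volume names as the list above):
for v in var usr tmp sys srv sbin run root proc opt mnt media libx32 lib64 lib32 lib home etc dev boot bin; do
  docker volume create "$v"
done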
Then run this command
docker run -it -d \
--name=ubuntu-container \
--mount source=var,destination=/var \
--mount source=usr,destination=/usr \
--mount source=tmp,destination=/tmp \
--mount source=sys,destination=/sys \
--mount source=srv,destination=/srv \
--mount source=sbin,destination=/sbin \
--mount source=run,destination=/run \
--mount source=root,destination=/root \
--mount source=opt,destination=/opt \
--mount source=mnt,destination=/mnt \
--mount source=media,destination=/media \
--mount source=libx32,destination=/libx32 \
--mount source=lib64,destination=/lib64 \
--mount source=lib32,destination=/lib32 \
--mount source=lib,destination=/lib \
--mount source=home,destination=/home \
--mount source=etc,destination=/etc \
--mount source=boot,destination=/boot \
--mount source=bin,destination=/bin \
ubuntu:latest

Docker: filesystem changes not exporting

TL;DR My docker save/export isn't working and I don't know why.
I'm using boot2docker for Mac.
I've created a Wordpress installation proof of concept, using BusyBox for both the MySQL data container and the main file system data container. I created these containers using:
> docker run -v /var/lib/mysql --name=wp_datastore -d busybox
> docker run -v /var/www/html --name=http_root -d busybox
Running docker ps -a shows two containers, both based on busybox:latest. So far so good. Then I create the Wordpress and MySQL containers, pointing to their respective data containers:
>docker run \
--name mysql_db \
-e MYSQL_ROOT_PASSWORD=somepassword \
--volumes-from wp_datastore \
-d mysql
>docker run \
--name=wp_site \
--link=mysql_db:mysql \
-p 80:80 \
--volumes-from http_root \
-d wordpress
I go to my URL (the boot2docker IP) and there's a brand new Wordpress application. I go ahead and set up the Wordpress site by adding a theme and some images. I then docker inspect http_root and sure enough the filesystem changes are all there.
I then commit the changed containers:
>docker commit http_root evilnode/http_root:dev
>docker commit wp_datastore evilnode/wp_datastore:dev
I verify that my new images are there. Then I save the images:
> docker save -o ~/tmp/http_root.tar evilnode/http_root:dev
> docker save -o ~/tmp/wp_datastore.tar evilnode/wp_datastore:dev
I verify that the tar files are there as well. So far, so good.
Here is where I get a bit confused. I'm not entirely sure if I need to, but I also export the containers:
> docker export http_root > ~/tmp/http_root_snapshot.tar
> docker export wp_datastore > ~/tmp/wp_datastore_snapshot.tar
So I now have 4 tar files:
http_root.tar (saved image)
wp_datastore.tar (saved image)
http_root_snapshot.tar (exported container)
wp_datastore_snapshot.tar (exported container)
I SCP these tar files to another machine, then proceed to build as follows:
>docker load -i ~/tmp/wp_datastore.tar
>docker load -i ~/tmp/http_root.tar
The images evilnode/wp_datastore:dev and evilnode/http_root:dev are loaded.
>docker run -v /var/lib/mysql --name=wp_datastore -d evilnode/wp_datastore:dev
>docker run -v /var/www/html --name=http_root -d evilnode/http_root:dev
If I understand correctly, containers were just created based on my images.
Sure enough, the containers are there. However, if I docker inspect http_root, and go to the file location aliased by /var/www/html, the directory is completely empty. OK...
So then I think I need to import into the new containers since images don't contain file system changes. I do this:
>cat http_root_snapshot.tar | docker import - http_root
I understand this to mean that I am importing a file system delta from one container into another. However, when I go back to the location aliased by /var/www/html, I see the same empty directory.
How do I export the changes from these containers?
Volumes are not exported with the new image: docker save and docker export capture the container filesystem, but not the contents of volumes. The proper way to manage data in Docker is to use a data container and a command like docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata, or docker cp, to back up the data and transfer it around. See https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
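Applied to this question, a sketch of backing up the http_root data on the source machine (container name from the question; the --rm flag and tar options are assumptions):
> docker run --rm --volumes-from http_root -v $(pwd):/backup ubuntu tar cvf /backup/html.tar -C /var/www/html .
Then SCP html.tar to the other machine, recreate the http_root data container there as you already do, and restore into it:
> docker run --rm --volumes-from http_root -v $(pwd):/backup ubuntu tar xvf /backup/html.tar -C /var/www/html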
