What does "export DIR" mean - docker

I am following this tutorial to run mssql in Docker. First the user pulls the image:
docker pull microsoft/mssql-server-linux
Second, he does the following:
export DIR=/var/lib/mssql
sudo mkdir $DIR
Finally, he runs:
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v $DIR:/var/opt/mssql \
microsoft/mssql-server-linux
The author explains the second step as follows:
Create a directory on the host that will store data from the container and keep the value in an environment variable for convenience:
My question:
What does the author mean by that, and what happens if we don't create the directory?
I tried searching for different terms like the ones below:
docker container default path
docker file system
but I was not able to understand. Can someone shed some light on this?

So here is the thing. Consider the code below:
export DIR=/var/lib/mssql
sudo mkdir $DIR
I can rewrite it as
sudo mkdir /var/lib/mssql
But then I would also have to change my run command to:
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v /var/lib/mssql:/var/opt/mssql \
microsoft/mssql-server-linux
Now if you change the directory, you will have to update it in two places. That's why DIR was used.
If you remove the line below from your docker run command:
-v /var/lib/mssql:/var/opt/mssql \
then the data of your DB will be stored inside the container at /var/opt/mssql, in the container's writable layer. It will survive a stop/start of that same container, but the next time you remove the container and launch a new one, the DB will be blank.
That is why you map it to a directory on the host. When you restart the container or launch a new one, that directory's content is made available inside the container, and the DB keeps all the changes you made.
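You can see the difference for yourself with a quick sketch (the container name here is a placeholder, and the password value is too, since the original post leaves SA_PASSWORD blank):

# Without -v: data lives in the container's writable layer.
docker run -d --name mssql-test \
  -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<your-password>' \
  microsoft/mssql-server-linux
# ... connect and create a database ...
docker restart mssql-test   # writable layer kept, database still there
docker rm -f mssql-test     # writable layer deleted, database gone

# With -v: data lives in $DIR on the host and outlives any container.
docker run -d --name mssql-test \
  -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<your-password>' \
  -v $DIR:/var/opt/mssql \
  microsoft/mssql-server-linux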

Related

"rootless" docker gets permission denied, but account running docker does not - why?

I am running docker "rootless" according to this guide: https://docs.docker.com/engine/security/rootless/
The user which actually runs docker is svc_test.
When I try to start a docker container that has directory mounts which don't exist, the docker daemon (a.k.a. the svc_test user) attempts to mkdir these directories, but fails with:
docker: Error response from daemon: error while creating mount source path '/dir_path/dir_name': mkdir /dir_path/dir_name: permission denied.
When I (svc_test) then attempt to mkdir /dir_path/dir_name myself, I succeed without any issues.
What is going on here, and why does this happen?
Clearly I am missing something, but I can't trace what it is exactly.
Update 1:
This is the specific docker cmd I use to run the container:
docker run -d --restart unless-stopped \
--name questdb \
-e QDB_METRICS_ENABLED=TRUE \
--network="host" \
-v /my_mounted_volume/questdb:/questdb \
-v /my_mounted_volume/questdb/public:/questdb/public \
-v /my_mounted_volume/questdb/conf:/questdb/conf \
-v /my_mounted_volume/questdb/db:/questdb/db \
-v /my_mounted_volume/questdb/log:/questdb/log \
questdb/questdb:6.5.2 /usr/bin/env QDB_PACKAGE=docker /app/bin/java \
-m io.questdb/io.questdb.ServerMain \
-d /questdb \
-f
For clarity: my final goal is to be able to run the docker container in question as the same user that runs my docker daemon (the svc_test user). Hence how I stumbled on this problem.
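One thing the question itself suggests trying (a sketch, not a confirmed fix): since mkdir as svc_test succeeds, pre-create all the bind-mount source paths before docker run, so the rootless daemon never has to create them itself:

# as svc_test, create the mount sources up front (paths from the command above)
mkdir -p /my_mounted_volume/questdb/{public,conf,db,log}
# then start the container exactly as before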

owncloud - docker - location of data and mysql data directories on the host?

I'm installing the owncloud server on a macOS machine, so I have to use the docker image. The docker installation documentation says the install:
mounts the data and MySQL data directories on the host for persistent storage
But I cannot find the location.
The docker-compose.yml file mentions:
services:
owncloud:
volumes:
- files:/mnt/data
but this is not a path on my host, so obviously I'm missing something.
Thanks,
In your case, the location of your data is the named volume "files".
In the following case, the location of the data is "/var/mysql/data":
services:
owncloud:
volumes:
- /var/mysql/data:/mnt/data
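If you keep the named volume instead, you can ask Docker where it lives on disk. A sketch (the full volume name is usually prefixed with the compose project name, e.g. owncloud_files, so check with docker volume ls first):

docker volume ls                      # find the full volume name
docker volume inspect owncloud_files  # the "Mountpoint" field is the host path

Note that on Docker for Mac that mountpoint is inside the Docker VM, not directly on the macOS file system.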
Editing the OwnCloud docker-compose.yml file to add a bind mount point didn't work for me at all. It seemed to work only for the first container run and then on subsequent runs I was getting errors suggesting that the database couldn't start.
I ended up finding it a lot easier to have the database stored on the host, disabling redis and using a bind mount for the file storage so I could be assured that I'd always get a reliable state when the container got restarted/removed/etc.
export OWNCLOUD_VERSION=10.5.0
export OWNCLOUD_DOMAIN=localhost
export ADMIN_USERNAME=***
export ADMIN_PASSWORD=***
export HTTP_PORT=8080
docker run -d \
--net host \
--name owncloud \
-p ${HTTP_PORT}:8080 \
-e OWNCLOUD_DOMAIN=${OWNCLOUD_DOMAIN} \
-e OWNCLOUD_DB_TYPE=pgsql \
-e OWNCLOUD_DB_NAME=owncloud \
-e OWNCLOUD_DB_USERNAME=*** \
-e OWNCLOUD_DB_PASSWORD=*** \
-e OWNCLOUD_DB_HOST=localhost \
-e OWNCLOUD_ADMIN_USERNAME=${ADMIN_USERNAME} \
-e OWNCLOUD_ADMIN_PASSWORD=${ADMIN_PASSWORD} \
-e OWNCLOUD_REDIS_ENABLED=false \
-e OWNCLOUD_REDIS_HOST=redis \
-v /blah/data:/mnt/data:rw \
owncloud/server:${OWNCLOUD_VERSION}
This code is also nifty to simply place in your system startup file and bang - it all just works.

customizing docker-compose.yml for images from docker store

I'm new to docker and I'm currently experimenting with https://github.com/diginc/docker-pi-hole
It's pretty straightforward if I just imagine it as a light-weight VM. I've pulled the image using docker pull diginc/pi-hole and manually started it by doing:
docker run -d \
--name pi-hole \
-p 53:53/tcp \
-p 53:53/udp \
-p 8053:80 \
-e TZ=SG \
-v "/Users/me/pihole/:/etc/pihole/" \
-v "/Users/me/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.0.25" \
--restart=always \
diginc/pi-hole:alpine
Everything works well, but their documentation mentions using docker_run.sh.
I have no idea where/how to execute this. The authors also suggest using docker-compose, but after pulling the project I can't find the actual directory.
Where is the directory?
What's the typical way of customizing the compose.yml?
How do I run it after I've done my customization?
The docker_run.sh script is in the repository:
https://github.com/diginc/docker-pi-hole/blob/master/docker_run.sh
Just use it.
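If you'd rather go the docker-compose route, a minimal docker-compose.yml sketch mirroring the flags from your docker run command might look like this (paths, TZ and ServerIP are taken from your command, not from the project's official file):

version: '2'
services:
  pihole:
    image: diginc/pi-hole:alpine
    ports:
      - '53:53/tcp'
      - '53:53/udp'
      - '8053:80'
    environment:
      - TZ=SG
      - ServerIP=192.168.0.25
    volumes:
      - /Users/me/pihole/:/etc/pihole/
      - /Users/me/dnsmasq.d/:/etc/dnsmasq.d/
    restart: always

Save it in any directory you like and run docker-compose up -d from there.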

Using LOAD CSV to import a local file to Neo4j in a Docker container

So I've been trying to import an external CSV file into my graphdb.
My neo4j is stored in a Docker container.
I placed the file in NEO_HOME/import, as implied.
I called the LOAD CSV command with "file:///mycsv.csv" as an argument, and got the following in return:
Couldn't load the external resource at: file:/var/lib/neo4j/import/mycsv.csv
Since I'm running the Docker container on a Windows environment, I don't see where the /var directory should be; even when browsing the container itself via the Docker Quickstart Terminal, I still cannot find /var/lib...
Trying to change the .conf file to point to a different import directory didn't help either.
Has somebody run into this before?
You have to explicitly mount your import folder when invoking docker:
docker run -e NEO4J_AUTH=none -p 7474:7474 -p 7687:7687 -v $PWD/plugins:/plugins -v $PWD/import:/var/lib/neo4j/import neo4j:3.1.3-enterprise
When you run this command:
docker run \
--name testneo4j \
-p7474:7474 -p7687:7687 \
-d \
-v $HOME/neo4j/data:/data \
-v $HOME/neo4j/logs:/logs \
-v $HOME/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j/plugins:/plugins \
--env NEO4J_AUTH=neo4j/test \
neo4j:latest
The physical directory on Windows will probably be located in C:\Users\<your user>\neo4j, containing the subdirectories data, import, logs, and plugins (screenshot: https://i.stack.imgur.com/VuW46.png).
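With the import folder mounted this way, the LOAD CSV call from the question should resolve. A minimal usage sketch (the label and header name here are made up for illustration):

LOAD CSV WITH HEADERS FROM 'file:///mycsv.csv' AS row
CREATE (:Person {name: row.name});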

How to store my docker registry in the file system

I want to set up a private registry behind an nginx server. To do that I configured nginx with basic auth and started a docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case; it's just that the /home/example/registry directory is on the docker container's file system, not the docker host's file system.
If you run your container mounting one of your docker host directories as a volume in the container, you will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the docker host side.
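A quick way to verify, as a sketch: create the directory, start the container with the -v flag as above, push an image, and check that the data actually lands on the host:

mkdir -p /home/example/registry
# after pushing an image through the registry:
ls -R /home/example/registry   # the stored registry files should now be visible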
