Customizing docker-compose.yml for images from Docker Store

I'm new to Docker and I'm currently experimenting with https://github.com/diginc/docker-pi-hole
It's pretty straightforward if I just imagine it as a lightweight VM. I've pulled the image using docker pull diginc/pi-hole and manually started it by doing
docker run -d \
--name pi-hole \
-p 53:53/tcp \
-p 53:53/udp \
-p 8053:80 \
-e TZ=SG \
-v "/Users/me/pihole/:/etc/pihole/" \
-v "/Users/me/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="192.168.0.25" \
--restart=always \
diginc/pi-hole:alpine
Everything works well, but their documentation says to use docker_run.sh.
I have no idea where or how to execute this, and the authors also suggest using docker-compose, but after pulling the project I can't find the actual directory.
Where is the directory?
What's the typical way of customizing the compose.yml?
How do I run it after I've done my customization?

The docker_run.sh script is in the repository itself:
https://github.com/diginc/docker-pi-hole/blob/master/docker_run.sh
Just download it and run it with bash docker_run.sh.
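As for customizing and running a compose file: you don't need any directory from the project; you write docker-compose.yml yourself and run it from wherever you saved it. Below is a minimal sketch translating the docker run flags from the question into compose syntax (the values simply mirror your own command and are not taken from the project's docs):
version: '2'
services:
  pi-hole:
    image: diginc/pi-hole:alpine
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8053:80"
    environment:
      TZ: SG
      ServerIP: 192.168.0.25
    volumes:
      - /Users/me/pihole/:/etc/pihole/
      - /Users/me/dnsmasq.d/:/etc/dnsmasq.d/
    restart: always
Save it as docker-compose.yml in any directory, cd there, and start it with docker-compose up -d; docker-compose down stops and removes it.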

Related

docker: Error response from daemon: invalid volume specification

I'm currently following this tutorial to run a model on Docker that was built using the Google Cloud AutoML Vision:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial
I'm having trouble running the container, specifically running this command:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}
I have my environment variables set up right (did an echo $<env_var>). I do not have a /tmp/mounted_model/0001 directory on my local system. My model path is configured to be the model location on the cloud storage.
${YOUR_MODEL_PATH} must be a directory on the host on which you're running the container.
Your question suggests that you're using the Cloud Storage bucket path, but you cannot do that.
Reviewing the tutorial, I think the instructions are confusing.
You are told to:
gsutil cp \
${YOUR_MODEL_PATH} \
${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
So, your command should probably be:
sudo docker run \
--rm \
--interactive --tty \
--name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}
NB I added --interactive --tty to make debugging easier; it's optional
NB ${YOUR_LOCAL_MODEL_PATH} not ${YOUR_MODEL_PATH}
NB The command should not be -t ${CPU_DOCKER_GCR_PATH}; omit the -t
I've not run through this tutorial.
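For reference, the full sequence would then look something like this (a sketch; the bucket path and local directory here are placeholders, not values from the tutorial):
# Copy the exported model from Cloud Storage to a local directory first
export YOUR_MODEL_PATH=gs://your-bucket/path/to/saved_model.pb
export YOUR_LOCAL_MODEL_PATH=${HOME}/automl_model
mkdir -p ${YOUR_LOCAL_MODEL_PATH}
gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
# Then mount the *local* directory into the container
sudo docker run --rm --name=${CONTAINER_NAME} \
  --publish=${PORT}:8501 \
  --volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
  ${CPU_DOCKER_GCR_PATH}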

docker --volume format for Windows

I'm trying to take a shell script we use at work to set up our development environments and repurpose it to work in my Windows environment via Git Bash.
The way the containers are brought up in the shell script is as follows:
docker run \
--detach \
--name=server_container \
--publish 80:80 \
--volume=$PWD/var/www:/var/www \
--volume=$PWD/var/log/apache2:/var/log/apache2 \
--link=mysql_container:mysql_container \
--link=redis_container:redis_container \
web-server
When I run that as-is, it returns the following error message:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: invalid bind mount spec "/C/Users/username/var/docker/environments/development/scripts/var/log/apache2;C:\\Program Files\\Git\\var\\log\\apache2": invalid volume specification: '/C/Users/username/var/docker/environments/development/scripts/var/log/apache2;C:\Program Files\Git\var\log\apache2': invalid mount config for type "bind": invalid mount path: '\Program Files\Git\var\log\apache2' mount path must be absolute. See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I did a bunch of googling and documentation reading, but I'm a little overwhelmed by Docker, and I think I got it wrong. I tried setting up the container as follows:
docker run \
--detach \
--name=server_container \
--publish 80:80 \
--volume=/c/users/username/var/www:/var/www \
--volume=/c/users/username/var/log/apache2:/var/log/apache2 \
--link=mysql_container:mysql_container \
--link=redis_container:redis_container \
web-server
It still errors out with a similar error message. If I remove the :/var/www part it comes up, but it doesn't seem to map those directories properly; that is, it doesn't know that C:\users\username\var\www = /var/www.
I know I'm missing something painfully dumb here, but when I look at the documentation I just glaze over. Any help would be greatly appreciated.
For people using Docker on Windows 10, an extra / has to be included in the path:
docker run -it -v //c/Users/path/on/host:/app/path/in/docker/container command
(notice an extra / near c)
If you are using Git Bash and using pwd then use an extra / there as well:
docker run -p 3000:3000 -v /app/node_modules -v /$(pwd):/app 09b10e9fda85
(notice / before $(pwd))
Well, I answered my own question moments after I posted it.
This is the correct format.
docker run \
--detach \
--name=server_container \
--publish 80:80 \
--volume=//c/users/username/var/www://var/www \
--volume=//c/users/username/var/log/apache2://var/log/apache2 \
--link=mysql_container:mysql_container \
--link=redis_container:redis_container \
web-server
Should have kept googling a few minutes longer.
If you want to make the path relative, you can use pwd and variables. For example:
CURRENT_DIR=$(pwd)
docker run -v /"$CURRENT_DIR"/../../test/:/test alpine ls test
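Another option when running under Git Bash is to suppress MSYS path conversion for a single command, so docker receives the path you typed instead of a mangled one. This relies on the MSYS_NO_PATHCONV variable honored by Git for Windows (a sketch; with conversion off, the source path must be in a form your Docker daemon accepts, e.g. C:/-style):
MSYS_NO_PATHCONV=1 docker run \
  --detach \
  --name=server_container \
  --publish 80:80 \
  --volume=C:/users/username/var/www:/var/www \
  --volume=C:/users/username/var/log/apache2:/var/log/apache2 \
  web-server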

Run bitcoind with bitcoind.conf in docker

I know Docker, but not much about bitcoind.
Now I want to use this Docker image to start my own test environment:
The description tells me:
docker volume create --name=bitcoind-data
docker run -v bitcoind-data:/bitcoin --name=bitcoind-node -d \
-p 8333:8333 \
-p 127.0.0.1:8332:8332 \
kylemanna/bitcoind
Now I want to know how I can add my bitcoind.conf.
It isn't provided anywhere. Can I add it at container startup, or via docker exec?
The repository contains a documentation file dedicated to your issue: https://github.com/kylemanna/docker-bitcoind/blob/master/docs/config.md
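The short version: bitcoind reads its configuration from its data directory, which that image keeps inside the bitcoind-data volume, so you can copy your file into the volume before the first start. A sketch under those assumptions (the /bitcoin/.bitcoin path and the bitcoin.conf filename follow bitcoind's usual defaults; check the linked config.md for the image's exact expectations):
# Seed the named volume with your config using a throwaway container
docker volume create --name=bitcoind-data
docker run --rm -v bitcoind-data:/bitcoin -v "$PWD":/src alpine \
  sh -c 'mkdir -p /bitcoin/.bitcoin && cp /src/bitcoind.conf /bitcoin/.bitcoin/bitcoin.conf'
# Then start the node as before; it picks the config up from the volume
docker run -v bitcoind-data:/bitcoin --name=bitcoind-node -d \
  -p 8333:8333 \
  -p 127.0.0.1:8332:8332 \
  kylemanna/bitcoind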

What does docker export DIR mean

I am following this tutorial to run MSSQL in Docker. First, the user pulls the image:
docker pull microsoft/mssql-server-linux
Second, he does the below:
export DIR=/var/lib/mssql
sudo mkdir $DIR
Finally, he runs:
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v $DIR:/var/opt/mssql \
microsoft/mssql-server-linux
The author explains the second step as below:
Create a directory on the host that will store data from the container and keep the value in an environment variable for convenience:
My question:
What did the author mean by that, and what happens if we don't create the directory?
I tried searching for different terms like the below:
docker container default path
docker file system
but I was not able to understand. Can someone shed some light on this?
So here is the thing. Consider the code below:
export DIR=/var/lib/mssql
sudo mkdir $DIR
I can rewrite it as
sudo mkdir /var/lib/mssql
But then I will also have to change my docker run command to:
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v /var/lib/mssql:/var/opt/mssql \
microsoft/mssql-server-linux
Now if you change the directory, you will have to update two places. That's why DIR was used.
If you remove the line below from your docker run command
-v /var/lib/mssql:/var/opt/mssql \
then the data of your DB will be stored inside the container at /var/opt/mssql, and it will only exist for the lifetime of the container. The next time you recreate the container, the DB will be blank.
That is why you map it to an outside directory on the host. When you restart the container or launch a new one, that directory's content is made available inside the container, and the DB will have all the changes you made.
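A quick way to see this for yourself (a sketch following the example above):
# Files written to /var/opt/mssql inside the container land in $DIR on the host
sudo ls $DIR
# Remove the container entirely...
docker rm -f mssql
# ...then start a fresh one with the same -v mapping; it finds the old
# data files in $DIR, so the databases survive
docker run -d --name mssql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=' \
  -p 1433:1433 -v $DIR:/var/opt/mssql microsoft/mssql-server-linux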

How to pass docker options --mac-address, -v etc. in Kubernetes?

I have installed a 50-node Kubernetes cluster in our lab and am beginning to test it. The problem I am facing is that I cannot find a way to pass the docker options needed to run the docker container in Kubernetes. I have looked at kubectl as well as the GUI. An example docker run command line is below:
sudo docker run -it --mac-address=$MAC_ADDRESS \
-e DISPLAY=$DISPLAY -e UID=$UID -e XAUTHORITY=$XAUTHORITY \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-v /mnt/lab:/mnt/lab -v /mnt/stor01:/mnt/stor01 \
-v /mnt/stor02:/mnt/stor02 -v /mnt/stor03:/mnt/stor03 \
-v /mnt/scratch01:/mnt/scratch01 \
-v /mnt/scratch02:/mnt/scratch02 \
-v /mnt/scratch03:/mnt/scratch03 \
matlabpipeline $ARGS
My first question is whether we can pass these docker options or not? If there is a way to pass these options, how do I do this?
Thanks...
I looked into this as well and from the sounds of it this is an unsupported use case for Kubernetes. Applying a specific MAC address to a docker container seems to conflict with the overall design goal of easily bringing up replica instances. There are a few workarounds suggested on this Reddit thread. In particular the OP finally decides the following...
I ended up adding the NET_ADMIN capability and changing the MAC to an environment variable with "ip link" in my entrypoint.sh.
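For the parts Kubernetes does support, the -v mounts and -e variables map directly onto a pod spec, and the NET_ADMIN workaround from the quote maps onto a container securityContext. A hedged sketch (the pod and volume names are made up; the image and host paths come from the question, and the remaining /mnt/* mounts would follow the same hostPath pattern):
apiVersion: v1
kind: Pod
metadata:
  name: matlabpipeline
spec:
  containers:
  - name: matlabpipeline
    image: matlabpipeline
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]  # lets an entrypoint change the MAC via "ip link"
    env:
    - name: DISPLAY
      value: ":0"           # placeholder; set to your real DISPLAY value
    volumeMounts:
    - name: x11
      mountPath: /tmp/.X11-unix
      readOnly: true
    - name: lab
      mountPath: /mnt/lab
  volumes:
  - name: x11
    hostPath:
      path: /tmp/.X11-unix
  - name: lab
    hostPath:
      path: /mnt/lab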
