I created a Zeppelin Docker image on my local system, configured the Spark interpreter through the Maven repositories, and ran Zeppelin. It worked. But when I stop the container and run it again, the interpreter binding is gone. How do I solve this issue? I want to do the interpreter binding one time, so that whenever I stop the container and run it again, it keeps those interpreter bindings as they are.
You need 3 volumes for persisting configurations, notebooks and logs.
Note: If you added custom interpreters, you need an additional volume for your interpreter binaries.
docker volume create zeppelin-conf
docker volume create zeppelin-notebook
docker volume create zeppelin-logs
docker volume create zeppelin-interpreter
Run the container with the above volumes mounted.
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf -v zeppelin-notebook:/zeppelin/notebook -v zeppelin-logs:/zeppelin/logs -v zeppelin-interpreter:/zeppelin/interpreter apache/zeppelin:0.8.1
If you just want to persist configurations, you can use the following lines:
docker volume create zeppelin-conf
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf apache/zeppelin:0.8.1
Configurations: /zeppelin/conf
Notebooks: /zeppelin/notebook
Logs: /zeppelin/logs
Interpreters: /zeppelin/interpreter
Edit: The /zeppelin directory is the default home directory of the Docker image (see the Dockerfile). Therefore, you don't need to specify the ZEPPELIN_NOTEBOOK_DIR, ZEPPELIN_LOG_DIR or ZEPPELIN_INTERPRETER_DIR environment variables.
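If you prefer docker-compose over a long docker run command, here is an equivalent sketch (external: true tells compose to reuse the volumes created above instead of creating project-scoped ones):
version: '3.7'
services:
  zeppelin:
    image: apache/zeppelin:0.8.1
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - zeppelin-conf:/zeppelin/conf
      - zeppelin-notebook:/zeppelin/notebook
      - zeppelin-logs:/zeppelin/logs
      - zeppelin-interpreter:/zeppelin/interpreter
volumes:
  zeppelin-conf:
    external: true
  zeppelin-notebook:
    external: true
  zeppelin-logs:
    external: true
  zeppelin-interpreter:
    external: true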
Mounting a file into docker run is easy - just pass it to the --volume parameter. But in Zeppelin's case there are some parameters pre-configured in that file, so replacing it with an empty file is most likely not what you want to achieve. So I recommend first getting that file with its default content from a container, and then mounting over it in the next run. Please follow the step-by-step instructions:
First we prepare the default config for the next runs.
Run a default container temporarily:
sudo docker run -d --name zeppelin-test apache/zeppelin:0.8.1
And get default config from it:
mkdir -p conf
sudo docker exec zeppelin-test cat /zeppelin/conf/interpreter.json > conf/interpreter.json
Note 1: This container will not be used for real work, so most parameters are unimportant. It needs to be done only once, for setup!
Note 2: Because that config is populated on start, you unfortunately can't obtain it in a single run like: sudo docker run --rm apache/zeppelin:0.8.1 cat /zeppelin/conf/interpreter.json
Now we can use it as a bind-mount.
If you use the plain docker run method without docker-compose, add, among your other options: --volume $(pwd)/conf/interpreter.json:/zeppelin/conf/interpreter.json
But I recommend using docker-compose, where the option is placed under the volumes: key, like - ./conf/interpreter.json:/zeppelin/conf/interpreter.json. Full example:
version: '3.7'
services:
  zeppelin:
    image: apache/zeppelin:0.8.1
    ports:
      - "7077:7077"
      - "8080:8080"
    volumes:
      - ./logs:/logs
      - ./notebook:/notebook
      - ./conf/interpreter.json:/zeppelin/conf/interpreter.json
    environment:
      ZEPPELIN_NOTEBOOK_DIR: /notebook
      ZEPPELIN_LOG_DIR: /logs
And then just run from that directory:
docker-compose up -d
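To verify the bind-mounted config is actually in place, you can read it back from the running service (the service name zeppelin matches the compose file above):
docker-compose exec zeppelin cat /zeppelin/conf/interpreter.json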
Interpreter bindings are stored in conf/interpreter.json, so you need to use an external interpreter.json file.
Is there any proper way of restarting an entire docker compose stack from within one of its containers?
One workaround involves mounting the docker socket:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
and then use the Docker Engine SDKs (https://docs.docker.com/engine/api/sdk/examples/).
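For example, once the socket is mounted, even a plain HTTP call against the Engine API can restart a container (a minimal sketch; my-container is a placeholder name):
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/my-container/restart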
However, this solution only allows restarting the containers themselves. There seems to be no way to send compose commands, like docker compose restart, docker compose up, etc.
The only solution I've found to send docker compose commands is to open a terminal on the host from the container using ssh, like this: access host's ssh tunnel from docker container
This is partly related to How to run shell script on host from docker container? , but I'm actually looking for a more specific solution to only send docker compose commands.
I tried with this simple docker-compose.yml file
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 3000:80
Then I started a docker container using
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/work docker
Then, inside the container, I did
cd /work
docker-compose up -d
and it started the container up on the host.
Please note that you have an error in your socket mapping. It needs to be
- /var/run/docker.sock:/var/run/docker.sock
(you have a period instead of a slash at one point)
As mentioned by @BMitch in the comments, the compose project name was the reason why I wasn't able to run docker compose commands inside the running container.
By default the compose project name is set to the directory name, so if the docker-compose.yml is run from a host directory named folder1, then the commands inside the container should be run as:
docker-compose -p folder1 ...
So now, for example, restarting the stack works:
docker-compose -p folder1 restart
Just as a reference, a fixed project name for your compose stack can be set using name: ... as a top-level attribute of the .yml file, but this requires docker compose v2.3.3: Set $PROJECT_NAME in docker-compose file
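A minimal sketch of that top-level attribute, reusing the folder1 name from the example above:
name: folder1
services:
  nginx:
    image: nginx
    ports:
      - 3000:80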
In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping, - ./services:/var/www/html finds php as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were in the ./services directory on my host machine, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume in your run command - docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run, or pass all the options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v $(pwd)/services:/var/www/html -v $(pwd)/logs/php:/usr/local/etc/php-fpm.d/zz-log.conf php bash
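Note that docker-compose run starts a new container. If you want a shell inside the service that is already running as part of your stack, docker-compose exec is the more direct route:
docker-compose exec php bash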
I have an image with MySQL installed. I need to map the /var/lib/mysql directory to my host system. Below is what I see within that directory when I use the following command:
docker run --rm -it --env-file=envProxy --network mynetwork --name my_db_dev -p 3306:3306 my_db /bin/bash
Now when I try to mount a directory from my host ( Windows 10 ), by running another container from the same image, the mysql directory is blank.
docker run --rm -it --env-file=envProxy --network mynetwork -v D:/docker/data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
I also tried this, but neither works:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:\docker\data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
One thing that I see is that the mysql directory in the path is now owned by root, instead of mysql as in the previous case.
I want all the content from the existing container (the mysql directory) to be copied back to the host mount directory.
Is that possible, and how can it be achieved?
Same problem on Docker Desktop (2.0.0.3 (31259)). I got the solution from this issue.
I ensured the containers were stopped, opened the Docker settings, selected "Shared Drives", removed the tick on "C" and added it again. Docker asked for the Windows account credentials and I entered the new ones. After that, and after starting the containers, the mounted volumes were fine. Problem solved.
You might fix the problem even more simply by just resetting the credentials in the Docker settings.
If you need to get files from the container onto the host, it is better to use the docker cp command: https://docs.docker.com/engine/reference/commandline/cp/
It will look like:
docker cp my_db_dev1:/var/lib/mysql d:\docker\data
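docker cp also works in the other direction, so you could seed the container from the host the same way (a sketch only - stop MySQL before writing into its data directory; the trailing /. copies the directory's contents rather than the directory itself):
docker cp d:/docker/data/. my_db_dev1:/var/lib/mysql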
UPD
Actually I want to persist the database files across other containers,
so I wanted use volumes
In this case you have to:
Start using docker-compose to orchestrate the containers.
In docker-compose.yml you create a volume, which is shared between all containers. Something like:
docker-compose.yml
version: '3'
services:
  db1:
    image: whatever
    volumes:
      - myvol:/data
  db2:
    image: whatever2
    volumes:
      - myvol:/data
volumes:
  myvol:
Description: https://docs.docker.com/compose/compose-file/#volume-configuration-reference
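Keep in mind that compose prefixes named volumes with the project name (the directory name by default), so on the host the volume will show up as something like myproj_myvol; you can check it with (myproj is a placeholder):
docker volume inspect myproj_myvol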
Write Windows paths with a backslash '\', and it is recommended to use variables to specify the path. On Linux, on the other hand, use a slash '/'. For example:
docker run -it -v %userprofile%\work\myproj\some-data:/var/data <image>
First create a folder structure like below,
C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG
C:\Users\rajit\MYSQL_DATA\DATA_DIR
then adjust the command like below:
docker pull mysql:8.0
docker run --name mysql-docker -v C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG:/etc/mysql/conf.d --env="MYSQL_ROOT_PASSWORD=root" --env="MYSQL_PASSWORD=root" --env="MYSQL_DATABASE=test_db" -v C:\Users\rajit\MYSQL_DATA\DATA_DIR:/var/lib/mysql -d -p 3306:3306 mysql:8.0 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
Try turning off your antivirus program or firewall. Then click on "reset credentials" under Settings / Shared Drives.
That worked for me.
Best regards.
I'm developing something that needs Prometheus to persist its data between restarts. Having followed the instructions
$ docker volume create a-new-volume
$ docker run \
--publish 9090:9090 \
--volume a-new-volume:/prometheus-data \
--volume "$(pwd)"/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
I have a valid prometheus.yml in the right directory on the host machine and it's being read by Prometheus from within the container. I'm just scraping a couple of HTTP endpoints for testing purposes at the moment.
But when I restart the container it's empty, no data from the previous run. What am I missing from my docker run ... command to persist the data into the a-new-volume volume?
Use the default data dir, which is /prometheus. To do that, use this line instead of what you have in your command:
...
--volume a-new-volume:/prometheus \
...
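Putting it together, the full command from the question becomes:
docker run \
  --publish 9090:9090 \
  --volume a-new-volume:/prometheus \
  --volume "$(pwd)"/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus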
Found here: https://github.com/prometheus/prometheus/blob/master/Dockerfile
Surprisingly, this is not mentioned in the image docs.
I had the same issue today, but I was using a docker-compose file. So here is a wrap-up of what was in the comments of other answers and what worked for me, in case you are setting up the Prometheus docker via a yaml compose file...
First create a folder for the volume on the host machine, e.g.:
$ mkdir /tmp/prometheus
Then change the folder owner to nobody (uid 65534, the user the Prometheus container runs as); use sudo if needed:
$ chown 65534:65534 /tmp/prometheus
Then add volume to the yaml configuration file:
prometheus:
  image: prom/prometheus
  container_name: prometheus
  ports:
    - 9090:9090
  volumes:
    - /tmp/prometheus:/prometheus
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
That should do it.
I'm unsure if something obvious escapes me or if it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.
One of them is mysql, and it supports adding custom configuration files through volumes and running .sql files from a mounted directory.
But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?
Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (which should really only contain your binaries, not your config), but it satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
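A minimal sketch of such a script (the container name my-db-app is an assumption; adjust it to your stack):
#!/bin/sh
# pack the local sql directory and unpack it into the sql-files volume
tar -cC sql . | docker run --rm -i -v sql-files:/sql busybox /bin/sh -c "tar -xC /sql"
# bounce the db container so it picks up the new files
docker restart my-db-app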
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
Option D: With swarm mode, you can include files as configs in your image. This allows configuration files, that would normally need to be pushed to any node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files are added as a separate config. The docker-compose.yml would look like:
version: '3.4'
configs:
  sql_file_1:
    file: ./file_1.sql
services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
If you cannot use volumes (you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the command.
Example for nginx config in official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in an .env file; you can use Compose's extend feature or load it from the shell environment (wherever you fetched it from):
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
To get the original command of a container:
docker container inspect [container] | jq --raw-output '.[0].Config.Cmd'
(use .Config.Entrypoint instead if you need the entrypoint).
To investigate which file to modify this usually will work:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
I think you have to do this in a compose file:
volumes:
  - src/file:dest/path
As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn has AWS EFS underlying for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before v2, you can use the workaround using docker cp explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql