If I execute this cmd in a console:
docker run -it --rm --link rabbit --link elasticsearch -v "$PWD"/logstash:/config-dir logstash logstash -f /config-dir/logstash.conf
It runs fine. The ./logstash folder contains a logstash.conf.
But now I'm trying to put it in a docker-compose file, and the same setup doesn't work:
logstash:
  image: logstash:latest
  links:
    - "elasticsearch:elasticsearch"
    - "rabbit:rabbit"
  volumes:
    - $PWD/logstash:/config_dir
  command:
    - "-f /config_dir/logstash.conf"
But I cannot see the difference between the two commands. Can anyone help? Is the volume mounting done differently, or is it the command that doesn't work? The response from the logstash init is:
logstash_1 | {:timestamp=>"2016-07-06T15:43:06.663000+0000", :message=>"No config files found: / /config_dir/logstash.conf\nCan you make sure this path is a logstash config file?", :level=>:error}
rabbitmq_logstash_1 exited with code 1
Edit: I finally solved the problem by removing the command and using the default command of the original image, but I still don't understand the problem: how can the same command work when passed to docker, but not when passed through docker-compose?
Thanks in advance
Your config is probably not working because your version of docker-compose does not perform shell expansion when creating your container. That means docker-compose is trying to find the literal path $PWD/logstash instead of expanding $PWD to your present working directory. Later versions of docker-compose do allow for environment variable expansion.
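For reference, with a recent docker-compose release the variable form itself would work, since ${PWD} is expanded from the environment. A sketch (not needed if you use the relative-path fix below):

volumes:
  - ${PWD}/logstash:/config_dir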
docker-compose does allow relative paths, though, through the use of ./, which references the folder the compose file is in (not necessarily your pwd), so you just need to change your compose file to:
volumes:
  - ./logstash:/config_dir
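For completeness, a sketch of the whole corrected service. Note also that the list form of command passes the entire quoted string "-f /config_dir/logstash.conf" as a single argument; a plain string, which compose word-splits into separate arguments, avoids that:

logstash:
  image: logstash:latest
  links:
    - "elasticsearch:elasticsearch"
    - "rabbit:rabbit"
  volumes:
    - ./logstash:/config_dir
  command: -f /config_dir/logstash.conf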
Related
Is there any proper way of restarting an entire docker compose stack from within one of its containers?
One workaround involves mounting the docker socket:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
and then using the Docker Engine SDKs (https://docs.docker.com/engine/api/sdk/examples/).
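For example, with the socket mounted you can talk to the Engine API directly from inside a container; a minimal sketch, assuming curl is available and a container named web:

# restart the container "web" via the Engine API over the mounted socket
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/web/restart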
However, this solution only allows restarting the containers themselves. There seems to be no way to send compose commands, like docker compose restart, docker compose up, etc.
The only solution I've found to send docker compose commands is to open a terminal on the host from the container using ssh, like this: access host's ssh tunnel from docker container
This is partly related to How to run shell script on host from docker container?, but I'm actually looking for a more specific solution to only send docker compose commands.
I tried it with this simple docker-compose.yml file:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 3000:80
Then I started a docker container using
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/work docker
Then, inside the container, I did
cd /work
docker-compose up -d
and it started the container up on the host.
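Because the host's Docker socket is shared, the docker CLI inside the container talks to the host daemon, so you can verify the result from inside the container, for example:

# lists the nginx container started on the host
docker ps --filter name=nginx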
Please note that you have an error in your socket mapping. It needs to be
- /var/run/docker.sock:/var/run/docker.sock
(you have a period instead of a slash at one point)
As mentioned by @BMitch in the comments, the compose project name was the reason why I wasn't able to run docker compose commands inside the running container.
By default the compose project name is set to the directory name, so if the docker-compose.yml is run from a host directory named folder1, then the commands inside the container should be run as:
docker-compose -p folder1 ...
So now, for example, restarting the stack works:
docker-compose -p folder1 restart
Just as a reference: a fixed project name for your compose stack can be set using name: ... as a top-level attribute of the .yml file, but this requires Docker Compose v2.3.3 or later: Set $PROJECT_NAME in docker-compose file
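A minimal sketch of that top-level attribute (the project name myproject is just an example):

name: myproject
services:
  nginx:
    image: nginx

With this set, the project name no longer depends on the host directory the compose file is run from.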
I have created a Zeppelin docker image on my local system, configured the Spark interpreter through Maven repositories, and ran Zeppelin. It worked. But when I stop the container and run it again, the interpreter binding is gone. How do I solve this issue? I want to set up the interpreter binding one time, so that whenever I stop and restart the container, it keeps those interpreter bindings.
You need 3 volumes for persisting configurations, notebooks and logs.
Note: If you added custom interpreters, you need an additional volume for your interpreter binaries.
docker volume create zeppelin-conf
docker volume create zeppelin-notebook
docker volume create zeppelin-logs
docker volume create zeppelin-interpreter
Run the container with the above volumes mounted:
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf -v zeppelin-notebook:/zeppelin/notebook -v zeppelin-logs:/zeppelin/logs -v zeppelin-interpreter:/zeppelin/interpreter apache/zeppelin:0.8.1
If you just want to persist configurations, you can use the following lines:
docker volume create zeppelin-conf
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf apache/zeppelin:0.8.1
Configurations: /zeppelin/conf
Notebooks: /zeppelin/notebook
Logs: /zeppelin/logs
Interpreters: /zeppelin/interpreter
Edit: The /zeppelin directory is the default home directory of the docker image. See the Dockerfile. Therefore, you don't need to specify the ZEPPELIN_NOTEBOOK_DIR, ZEPPELIN_LOG_DIR, or ZEPPELIN_INTERPRETER_DIR environment variables.
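If you prefer docker-compose over a plain docker run, a sketch of an equivalent service using the same named volumes might look like this:

version: '3.7'
services:
  zeppelin:
    image: apache/zeppelin:0.8.1
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - zeppelin-conf:/zeppelin/conf
      - zeppelin-notebook:/zeppelin/notebook
      - zeppelin-logs:/zeppelin/logs
      - zeppelin-interpreter:/zeppelin/interpreter

volumes:
  zeppelin-conf:
  zeppelin-notebook:
  zeppelin-logs:
  zeppelin-interpreter: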
Mounting a file into docker run is easy: just pass it to the --volume parameter. But in Zeppelin's case some parameters come pre-configured in that file, so replacing it with an empty file is most likely not what you want to achieve. I therefore recommend first getting that file with its default content from the container, and then bind-mounting it on subsequent runs. Please follow these step-by-step instructions:
First, we prepare the default config for the next runs.
Run a default container temporarily:
sudo docker run -d --name zeppelin-test apache/zeppelin:0.8.1
And get the default config from it:
mkdir -p conf
sudo docker exec zeppelin-test cat /zeppelin/conf/interpreter.json > conf/interpreter.json
Note 1: This temporary container will not be used for real work, so most parameters are unimportant. This only needs to be done once, for setup!
Note 2: Because that config is populated on startup, unfortunately you can't obtain it in a single run like: sudo docker run --rm apache/zeppelin:0.8.1 cat /zeppelin/conf/interpreter.json
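Once you have copied the config out, the temporary container can be removed:

sudo docker rm -f zeppelin-test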
Now we can use it as a bind mount.
If you use the direct docker run method without docker-compose, add this option, among others: --volume $(pwd)/conf/interpreter.json:/zeppelin/conf/interpreter.json
But I recommend using docker-compose, where the option is placed under the volumes: key, like - ./conf/interpreter.json:/zeppelin/conf/interpreter.json. Full example:
version: '3.7'
services:
  zeppelin:
    image: apache/zeppelin:0.8.1
    ports:
      - "7077:7077"
      - "8080:8080"
    volumes:
      - ./logs:/logs
      - ./notebook:/notebook
      - ./conf/interpreter.json:/zeppelin/conf/interpreter.json
    environment:
      ZEPPELIN_NOTEBOOK_DIR: /notebook
      ZEPPELIN_LOG_DIR: /logs
And then just run from that directory:
docker-compose up -d
Interpreter bindings are stored in conf/interpreter.json, so you need to use an external interpreter.json file.
I work on Windows 10 and use Docker Toolbox.
When I run a container using the docker run command, I can mount a local filesystem folder on a container folder, like this:
docker run -ti --name local -p 80:80 -d -v /c/Users/name/htdocs:/app webdevops/php-apache-dev
But when I try to use docker-compose up with the following docker-compose.yml file, it doesn't work; the container doesn't see my local filesystem:
version: '3.6'
services:
  server:
    image: webdevops/php-apache-dev
    ports:
      - "80:80"
    volumes:
      - /c/Users/name/htdocs:/app
What might be causing this?
I think there is a small syntax issue in your docker-compose.yml file. If it is failing to see your local file system, perhaps you're passing it a path that does not exist. I presume that /c/Users/name/htdocs isn't the actual absolute directory on your host. Could you please share it? Perhaps you haven't escaped a non-letter character in the string?
I found the answer. My username actually consists of two words, and in my docker-compose.yml file I used the escaping backslash that is required in the docker run command.
It turns out it is not required in docker-compose.yml. The backslash was actually interpreted as another slash, which turned my User Name into User/Name.
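To illustrate with a hypothetical username User Name: on the docker run command line the space must be escaped or the path quoted, while in docker-compose.yml you simply quote the whole string and write the space literally:

# shell: docker run -v "/c/Users/User Name/htdocs:/app" ...
volumes:
  - "/c/Users/User Name/htdocs:/app"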
I have followed this guide https://hub.docker.com/r/iliyan/jenkins-ci-php/ to download the docker image with Jenkins.
When I start my container using the docker start CONTAINERNAME command, I can access Jenkins at localhost:8080.
The problem comes up when I change the Jenkins configuration and restart Jenkins using docker stop CONTAINERNAME and docker start CONTAINERNAME: my Jenkins doesn't contain any of my previous configuration changes.
How can I persist the Jenkins configuration?
You need to mount the Jenkins configuration as a volume; the -v flag will do just that for you. (You can ignore the --privileged flag in my example unless you plan on building docker images inside your Jenkins docker image.)
docker run --privileged --name='jenkins' -d -p 6999:8080 -p 50000:50000 -v /home/jan/jenkins:/var/jenkins_home jenkins:latest
The -v flag will mount your /var/jenkins_home outside your container in /home/jan/jenkins, maintaining it between rebuilds.
--name gives the container a fixed name to start / stop it by.
Then next time you want to run it, simply call
docker start jenkins
My understanding is that the init script
/sbin/tini -- /usr/local/bin/jenkins.sh
is resetting the Jenkins configuration on startup within the folder provided through the JENKINS_HOME env var, whether mounted outside the docker vm or not.
It is, however, possible to store the configuration on GitHub, using the Configure / "Configure System" / "SCM Sync configuration" / Git section.
See a possible detailed configuration here.
You can use this docker-compose file:
version: '3.1'
services:
  jenkins:
    image: jenkins:latest
    container_name: jenkins
    restart: always
    environment:
      TZ: GMT
    volumes:
      - ./jenkins_host:/var/jenkins_home
    ports:
      - 8080:8080
    tty: true
You only need to share the Jenkins volume ./jenkins_host:/var/jenkins_home with a host folder.
Besides the obvious, like run parameters that clear out the image (which you should disable), you can do a few things:
use docker commit and reuse the committed container (see the sketch at the end of this answer)
mount the part where you write to the local file system with docker volumes
my favorite: use the command:
docker container restart containername
Depending on your needs you can pick one.
I use the latter, for example, when testing Jenkins plugins, and it retains the data inside.
Source for the latter, which is also useful for updates:
https://jimkang.medium.com/how-to-start-a-new-jenkins-container-and-update-jenkins-with-docker-cf628aa495e9
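As a reference for the docker commit option, a minimal sketch (assuming a container named jenkins; note that docker commit snapshots the container's filesystem but not data stored in volumes, such as /var/jenkins_home):

# snapshot the running container as a new image
docker commit jenkins jenkins-with-my-config
# recreate a container from that snapshot later
docker run -d -p 8080:8080 --name jenkins2 jenkins-with-my-config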
I'm currently trying to use variable substitution in a docker-compose.yml file. This file contains the following:
jenkins:
  image: "jenkins:${JENKINS_VERSION}"
  external_links:
    - mongodb:mongo
  ports:
    - 8000:8080
When I try to start everything up, docker-compose shows a warning saying that the variable is not set. I suspect this is caused by the use of sudo to start docker-compose. My setup (a Jenkins docker container which has access to docker and docker-compose via volume mounts) currently requires the use of sudo. Would it be better to stop docker requiring sudo, or is there another way to fix this without changing the current setup?
sudo -E preserves the user environment when running the command. It should do what you want.
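For example (the version tag here is just an illustration):

export JENKINS_VERSION=2.60.3
sudo -E docker-compose up -d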