I am using the following Dockerfile to build a Solr image with Docker.
FROM solr:5.5
ENV SOLR_HOME=/opt/solr/server/solr/cores
RUN mkdir ${SOLR_HOME}
RUN chown -R solr:solr ${SOLR_HOME}
VOLUME ["${SOLR_HOME}"]
EXPOSE 8983
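The image used below (solr-test:latest) is built from this Dockerfile with a standard docker build, e.g. (assuming the Dockerfile is in the current directory):
docker build -t solr-test:latest .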
I try to run the following docker command to mount a host directory into the container:
docker run --restart=always -d --name solr-demo \
--privileged=true -p 8983:8983 \
-v /data/solr_demo:/opt/solr/server/solr/cores \
solr-test:latest
I am also copying the required solr.xml file into /data/solr_demo. When I run the docker run command, I get the following error:
stat: cannot stat ‘/opt/solr/server/solr/cores’: No such file or directory
42146d74b446ba4784fd197688e3210f294aad8755ae730cc559132720bcc35a
Error response from daemon: Container 42146d74b446ba4784fd197688e3210f294aad8755ae730cc559132720bcc35a is restarting, wait until the container is running
From your comment, it appears you're mounting a nonexistent directory for your volume. Try this command, which mounts /data/solr_demo1 instead of /data/solr_demo as your volume.
docker run --restart=always -d --name solr-demo \
--privileged=true -p 8983:8983 \
-v /data/solr_demo1:/opt/solr/server/solr/cores \
solr-test:latest
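If /data/solr_demo1 does not exist on the host yet, create it (and copy your solr.xml into it) before starting the container, for example:
sudo mkdir -p /data/solr_demo1
sudo cp solr.xml /data/solr_demo1/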
If it is really a user problem (it reminds me of an issue I had with Apache in a container), you should consider using gosu: https://github.com/tianon/gosu
It will let you run and switch users correctly and have a nice mapping from your local users to users inside the container.
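For illustration, a minimal gosu entrypoint sketch (the script itself is hypothetical; the user and path are taken from the question above):
#!/bin/sh
# entrypoint.sh: start as root, fix volume ownership, then drop privileges
set -e
chown -R solr:solr /opt/solr/server/solr/cores
exec gosu solr "$@"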
Hope it will be useful.
I am running docker "rootless" according to this guide: https://docs.docker.com/engine/security/rootless/
The user that actually runs docker is svc_test.
When I try to start a docker container that has directory mounts which don't exist, the docker daemon (i.e., the svc_test user) attempts to mkdir these directories, but fails with
docker: Error response from daemon: error while creating mount source path '/dir_path/dir_name': mkdir /dir_path/dir_name: permission denied.
When I (svc_test) then attempt to run mkdir /dir_path/dir_name, I succeed without any issues.
What is going on here and why does this happen?
Clearly I am missing something, but I can't trace what that is exactly.
Update 1:
This is the specific docker command I use to run the container:
docker run -d --restart unless-stopped \
--name questdb \
-e QDB_METRICS_ENABLED=TRUE \
--network="host" \
-v /my_mounted_volume/questdb:/questdb \
-v /my_mounted_volume/questdb/public:/questdb/public \
-v /my_mounted_volume/questdb/conf:/questdb/conf \
-v /my_mounted_volume/questdb/db:/questdb/db \
-v /my_mounted_volume/questdb/log:/questdb/log \
questdb/questdb:6.5.2 /usr/bin/env QDB_PACKAGE=docker /app/bin/java \
-m io.questdb/io.questdb.ServerMain \
-d /questdb \
-f
For clarity: my final goal is to be able to run the docker container in question from the same user from which I run my docker daemon (the svc_test user). That is how I stumbled on this problem.
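For now, pre-creating the mount source paths myself (which succeeds, as noted above) works around the failing mkdir; a sketch for the command above:
# run as svc_test before docker run, so the daemon finds the sources already present
mkdir -p /my_mounted_volume/questdb/{public,conf,db,log}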
I start a docker container with the following commands:
cd /root
docker run -it -d --privileged=true --name nginx nginx
rm -fr dockerdata
mkdir dockerdata
cd dockerdata
mkdir nginx
cd nginx
docker cp nginx:/usr/share/nginx/html .
docker cp nginx:/etc/nginx/nginx.conf .
docker cp nginx:/etc/nginx/conf.d ./conf
docker cp nginx:/var/log/nginx ./logs
docker rm -f nginx
cd /root
docker run -it -d -p 8020:80 --privileged=true --name nginx \
-v /root/dockerdata/nginx/html:/usr/share/nginx/html \
-v /root/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf \
-v /root/dockerdata/nginx/conf:/etc/nginx/conf.d \
-v /root/dockerdata/nginx/logs:/var/log/nginx \
nginx
"docker inspect nginx" is followings
HostConfig-Binds
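Reconstructed from the run command above, it would look like:
"Binds": [
    "/root/dockerdata/nginx/html:/usr/share/nginx/html",
    "/root/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf",
    "/root/dockerdata/nginx/conf:/etc/nginx/conf.d",
    "/root/dockerdata/nginx/logs:/var/log/nginx"
]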
The bound directories stay synchronized, but a directly bound file like "nginx.conf" does not: when I modify nginx.conf on the host, nginx.conf in the container does not change.
I want to know why this happens and how I can directly bind a single file between the host and the container.
why this happens
A bind mount binds the file by its inode. The nginx entrypoint executes https://github.com/nginxinc/docker-nginx/blob/ed42652f987141da65bab235b86a165b2c506cf5/stable/debian/30-tune-worker-processes.sh, which runs:
sed -i.bak
sed creates a new file and then moves it over the old one. The file's inode changes, so it is no longer the mounted inode.
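You can observe the inode change outside Docker; a quick demo (hypothetical file name):
echo test > nginx.conf
ls -i nginx.conf                           # note the inode number
sed -i.bak 's/test/changed/' nginx.conf
ls -i nginx.conf                           # a different inode, so a bind mount of the old inode is now stale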
how I can directly bind a single file
It is bound. Instead, you should re-read the nginx Docker image documentation on how to pass a custom config to it:
-v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro
(note the :ro suffix)
This skips the sed at https://github.com/nginxinc/docker-nginx/blob/ed42652f987141da65bab235b86a165b2c506cf5/stable/debian/30-tune-worker-processes.sh#L12 .
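Applied to the run command from the question, only the nginx.conf mount changes (a sketch, reusing the asker's paths):
docker run -it -d -p 8020:80 --privileged=true --name nginx \
 -v /root/dockerdata/nginx/html:/usr/share/nginx/html \
 -v /root/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
 -v /root/dockerdata/nginx/conf:/etc/nginx/conf.d \
 -v /root/dockerdata/nginx/logs:/var/log/nginx \
 nginx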
Recently I installed nextcloudpi in docker with
sudo docker run -d -p 4443:4443 -p 443:443 -p 80:80 -v /home/user/storage/nextcloud:/data --name nextcloudpi ownyourbits/nextcloudpi-armhf <ip-address-of-pi>
The folder /home/user/storage is mounted external storage (set up following this tutorial: https://www.techjunkie.com/build-nas-raspberry-pi-linux/).
It gives me the error:
Running nc-init
Setting up a clean Nextcloud instance... wait until message 'NC init done'
Setting up database...
Setting up Nextcloud...
Console has to be executed with the user that owns the file config/config.php
Current user: www-data
Owner of config.php: root
Try adding 'sudo -u root ' to the beginning of the command (without the single quotes)
If running with 'docker exec' try adding the option '-u root' to the docker command (without the single quotes)
I tried
sudo docker run -d -p 4443:4443 -p 443:443 -p 80:80 -v /home/user:/data --name nextcloudpi ownyourbits/nextcloudpi-armhf <ip-address-of-pi>
and everything works well as far as I can tell: I can load the nextcloudpi config UI as well as the Nextcloud GUI.
I have tried some chown and chmod on the /home/user/storage folder, without any success.
How can I use the external storage as the data directory of Nextcloud?
I installed and ran nginx on my Linux machine to understand the configuration, etc. After a while I decided to remove it safely by following this thread, in order to use it in Docker instead.
Following this documentation, I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
And I saw nginx's homepage at localhost.
Then I deleted all nginx images, stopped all containers, and also ran this command:
sudo docker system prune -a
But then I restarted my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I totally remove it?
Note: I'm new to Docker and I might have missed a lot of things. Let me know what extra docker commands I should run to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc.) under the folder /myhost/nginx/html
your nginx configuration at /myhost/nginx/nginx.conf
Solution
Map your files (as a volume) on the fly from outside the docker image via the docker CLI.
This is the command:
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
Copy your files into the docker image by building your own docker image via a Dockerfile.
This is your Dockerfile under /myhost/nginx:
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your docker image
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your docker image
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx
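If you also want your custom config baked into the image, the Dockerfile can copy it as well (assuming the /myhost/nginx/nginx.conf from the setup above):
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
# also bake the custom nginx configuration into the image
COPY ./nginx.conf /etc/nginx/nginx.conf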
I'm trying to backup my volume as described here in the docker documentation: https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes
I'm running the command with the path to the volume:
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/docker/volumes/MYCONTAINER_VOLUME
... and also trying with just the name of my volume
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar MYCONTAINER_VOLUME
but no matter what, I get an error like: tar: MYCONTAINER_VOLUME: Cannot stat: No such file or directory
This volume was created and linked to the container with docker-compose, and it's using the local driver.
When I run docker volume ls I get:
DRIVER VOLUME NAME
local MYCONTAINER_VOLUME
Can someone please tell me what I'm doing wrong with this?
I figured out what the issue was:
The last part of the command should be the path of the volume mounted in the CONTAINER, not the path of the volume on the HOST.
So basically, the formula for this command should be:
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/MY_BACKUP.tar /PATH/INSIDE/CONTAINER/TO/VOLUME/data
... and this will create MY_BACKUP.tar in the current directory of the HOST.
Also, make sure to STOP the container before archiving the volume if it's something like Postgres, as in my case.
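For example, for a Postgres container whose data volume is mounted at the official image's default /var/lib/postgresql/data (adjust the path to your setup):
# stop the container so the files are quiescent, archive the volume, then restart
docker stop MYCONTAINER
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu \
 tar cvf /backup/MY_BACKUP.tar /var/lib/postgresql/data
docker start MYCONTAINER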
Then, to restore the volume if you're using docker-compose (I had trouble with this too, since the documentation isn't specific to preexisting containers / volumes created this way):
1) STOP the container
2) Make sure MY_BACKUP.tar is in the root project directory of the HOST
3) run
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/MY_BACKUP.tar"
4) restart the container
Hope this helps someone, and I'm certainly open to any ideas to streamline this.
The documentation assumes your container has a volume associated with it.
Meaning: your container was started with a volume.
Example:
$ docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
At the very least, check whether you do have volumes created with:
docker volume ls
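If the volume is listed, docker volume inspect shows its details, including the mountpoint on the host:
docker volume inspect myvol2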