This service was starting before, but now I get a '/etc/Caddyfile: is a directory' error and the service exits. Any ideas?
$ docker-compose -f docker-compose-linux.yml up caddy
Starting server_applications_1 ... done
server_workspace_1 is up-to-date
server_php-fpm_1 is up-to-date
Starting server_caddy_1 ... done
Attaching to server_caddy_1
caddy_1 | http plugins loaded: git
caddy_1 | 2020/06/22 06:22:18 loading Caddyfile via flag: read /etc/Caddyfile: is a directory
server_caddy_1 exited with code 1
The caddy service Dockerfile:
FROM zuohuadong/caddy:alpine
MAINTAINER Huadong Zuo <admin@zuohuadong.cn>
ARG plugins="cors"
WORKDIR /var/www/platform/public
CMD ["/usr/bin/caddy", "-conf", "/etc/Caddyfile"]
The caddy service in the compose yml, where CADDY_CUSTOM_CADDYFILE=./caddy/Caddyfile:
volumes:
  - ${CADDY_CUSTOM_CADDYFILE}:/etc/Caddyfile
and the Caddyfile is there in the right directory:
...server/caddy$ ll
total 16
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 22 17:44 ./
drwxrwxr-x 11 ubuntu ubuntu 4096 Jun 22 15:55 ../
-rw-r--r-- 1 ubuntu ubuntu 1452 Jun 11 10:54 Caddyfile
-rw-r--r-- 1 ubuntu ubuntu 268 Jun 22 18:13 Dockerfile
Running docker-compose -f docker-compose-linux.yml up caddy again still fails with the exact same 'is a directory' error.
Can you try mounting the directory instead of the file in your docker-compose?
volumes:
  - ${CADDY_CUSTOM_CADDYFILE}:/etc
where CADDY_CUSTOM_CADDYFILE=full_path_to/caddy/
You will still be able to use:
CMD ["/usr/bin/caddy", "-conf", "/etc/Caddyfile"]
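For what it's worth, a common way to end up with "is a directory" on a file bind mount (an assumption here, since the logs don't show the root cause) is that the host path doesn't exist, or the variable is empty, when the container is first created: Docker then creates an empty directory at the bind source and mounts that. A quick pre-flight check, using the variable from the compose file above:

# Fails loudly if the variable is unset, then reports whether the bind
# source is a regular file (a directory here reproduces the Caddy error).
test -f "${CADDY_CUSTOM_CADDYFILE:?CADDY_CUSTOM_CADDYFILE is not set}" \
  && echo "OK: regular file" \
  || echo "Problem: not a regular file (missing or a directory)"

If a stray directory was already created at that path by an earlier run, remove it (and the old container) before bringing the service up again.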
Related
I'm running into an issue with macOS Ventura where all bind volumes (where I link a directory on my host machine to one in the container) created with docker-compose are empty. I've tested the same scripts on macOS 12.4 and 12.6, and they work as expected, giving me the same directory contents in the container as on the host, so it seems v13 changed something about permissions.
The docker-compose.yml file:
version: "3"
services:
  bash:
    image: ubuntu:latest
    stdin_open: true
    tty: true
    volumes:
      - ./:/app
    command: "/bin/bash"
So this should create a directory in the container called /app and link it to the host directory the compose file is in.
But when I start the container:
❯ docker-compose up --build
[+] Running 1/0
⠿ Container ruby-docker-bash-1 Created 0.0s
Attaching to ruby-docker-bash-1
And when I log in, the /app directory is empty:
❯ docker exec -it ruby-docker-bash-1 /bin/bash
root@9644de175d48:/# cd app/
root@9644de175d48:/app# ls -la
total 4
drwxr-xr-x 2 root root 40 Feb 1 09:59 .
drwxr-xr-x 1 root root 4096 Feb 1 09:39 ..
The total 4 is really weird here, as there are supposed to be 4 files in there, but they're not accessible:
root@9644de175d48:/app# cat Gemfile
cat: Gemfile: No such file or directory
These are the directory contents on the host:
❯ ls -la
.rw-r--r-- 2.7k paul 1 Feb 09:31 Dockerfile
.rw-r--r-- 3.9k paul 1 Feb 09:32 Gemfile
.rw-r--r-- 27k paul 1 Feb 09:32 Gemfile.lock
.rw-r--r-- 149 paul 1 Feb 10:00 docker-compose.yml
If anyone has any experience with what might be going wrong or how I can get past this absolute time-sink of an issue, I'd really appreciate it.
Thank you!
I figured it out. I use Colima on macOS, as there is no native Docker VM for macOS.
Then I found this comment on a Colima repo issue, https://github.com/abiosoft/colima/issues/500#issuecomment-1343103477, where a user had mentioned they weren't able to sync directories.
To fix volumes on macOS using Colima, I did the following:
colima delete # reset
colima start --mount-type 9p
This doesn't seem to be documented anywhere; I've been through the site and the README. But I did find this code inside the Colima repo:
validMountTypes := map[string]bool{"9p": true, "sshfs": true}
if util.MacOS13OrNewer() {
    validMountTypes["virtiofs"] = true
}
I'm on macOS 13, so it seems like the issue is with virtiofs and not with the older 9p mount type.
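To confirm the fix took effect, a quick round trip works (the container name comes from the compose output above; colima status is a standard subcommand, though exactly what it prints varies by version):

colima delete                 # reset the VM
colima start --mount-type 9p  # recreate it with the 9p mount type
colima status                 # sanity-check that the VM is running
docker-compose up -d
docker exec -it ruby-docker-bash-1 ls -la /app   # Gemfile etc. should now appear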
I am trying to run a single-node Elasticsearch instance on an HPC cluster. To do this, I am converting the Elasticsearch Docker container into a Singularity container. When I launch the container I get the following error:
$ singularity exec --overlay overlay.img elastic.sif /usr/share/elasticsearch/bin/elasticsearch
Could not create auto-configuration directory
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.server.cli.JvmOption.flagsFinal(JvmOption.java:113)
at org.elasticsearch.server.cli.JvmOption.findFinalOptions(JvmOption.java:80)
at org.elasticsearch.server.cli.MachineDependentHeap.determineHeapSettings(MachineDependentHeap.java:59)
at org.elasticsearch.server.cli.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:132)
at org.elasticsearch.server.cli.JvmOptionsParser.determineJvmOptions(JvmOptionsParser.java:90)
at org.elasticsearch.server.cli.ServerProcess.createProcess(ServerProcess.java:211)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:106)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:89)
at org.elasticsearch.server.cli.ServerCli.startServer(ServerCli.java:213)
at org.elasticsearch.server.cli.ServerCli.execute(ServerCli.java:90)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
If I understand correctly, Elasticsearch is trying to create a log file in /var/log/elasticsearch but does not have the correct permissions. So I created the following recipe to create the folders and set the permissions such that any process can write into the log directory. My recipe is the following:
Bootstrap: docker
From: elasticsearch:8.3.1
%files
    elasticsearch.yml /usr/share/elasticsearch/config/

%post
    mkdir -p /var/log/elasticsearch
    chown -R elasticsearch:elasticsearch /var/log/elasticsearch
    chmod -R 777 /var/log/elasticsearch
    mkdir -p /var/data/elasticsearch
    chown -R elasticsearch:elasticsearch /var/data/elasticsearch
    chmod -R 777 /var/data/elasticsearch
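For reference, a definition file like this is typically built with a command along these lines (the .def filename is an assumption):

sudo singularity build elastic.sif elastic.def
# or, unprivileged, on Singularity 3.3+:
singularity build --fakeroot elastic.sif elastic.def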
The elasticsearch.yml file has the following content:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.type: single-node
ingest.geoip.downloader.enabled: false
After building this recipe, the directory /var/log/elasticsearch seems to get created correctly:
$ singularity exec elastic.sif ls -alh /var/log/
total 569K
drwxr-xr-x 4 root root 162 Jul 8 14:43 .
drwxr-xr-x 12 root root 172 Jul 8 14:43 ..
-rw-r--r-- 1 root root 7.7K Jun 29 17:29 alternatives.log
drwxr-xr-x 2 root root 69 Jun 29 17:29 apt
-rw-r--r-- 1 root root 58K May 31 11:43 bootstrap.log
-rw-rw---- 1 root utmp 0 May 31 11:43 btmp
-rw-r--r-- 1 root root 187K Jun 29 17:30 dpkg.log
drwxrwxrwx 2 elasticsearch elasticsearch 3 Jul 8 14:43 elasticsearch
-rw-r--r-- 1 root root 32K Jun 29 17:30 faillog
-rw-rw-r-- 1 root utmp 286K Jun 29 17:30 lastlog
-rw-rw-r-- 1 root utmp 0 May 31 11:43 wtmp
But when I launch the container I get the permission denied error listed above.
What is missing here? What permissions is Elasticsearch expecting?
The following workaround seems to be working for me now:
When launching the Singularity container, the elasticsearch process is executed inside the container with the same UID as my own (that of the user launching the container with singularity exec). The Elasticsearch container, however, is configured to run Elasticsearch as a separate elasticsearch user that exists inside the container. The issue is that Singularity (unlike Docker) runs every process inside the container with my own UID rather than the elasticsearch UID, resulting in the error above.
To work around this, I created a base Ubuntu Singularity image and then installed Elasticsearch into the container following these installation instructions (https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html). Because the installation was performed with my system user and UID, the entire Elasticsearch installation belongs to my system user rather than a separate elasticsearch user. Then I can launch the Elasticsearch service inside the container.
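A minimal sketch of that approach, assuming an unprivileged sandbox build and the 8.3.1 tar.gz (the version, paths, and Ubuntu tag are illustrative, following the linked instructions):

# Build a writable sandbox (a plain directory owned by you) from Ubuntu.
singularity build --sandbox elastic/ docker://ubuntu:22.04

# Download the tar.gz on the host and unpack it into the sandbox tree;
# every file ends up owned by your own user, not root or elasticsearch.
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz
tar -xzf elasticsearch-8.3.1-linux-x86_64.tar.gz -C elastic/opt/

# Launch Elasticsearch; logs and data directories are writable by your UID.
singularity exec --writable elastic/ /opt/elasticsearch-8.3.1/bin/elasticsearch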
When I start Nexus 3 in a Docker container, I get the following error messages.
$ docker run --rm sonatype/nexus3:3.8.0
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to Permission denied
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (Permission denied)
Unable to update instance pid: Unable to create directory /nexus-data/instances
It indicates that there is a file permission issue.
I am using Red Hat Enterprise Linux 7.5 as the host machine and the most recent Docker version.
On another machine (Ubuntu) it works fine.
The issue occurs in the persistent volume (/nexus-data). However, I do not mount a specific volume; I let Docker use an anonymous one.
If I compare the volumes on both machines I can see the following permissions:
For Red Hat, where it is not working, the volume belongs to root.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 0
drwxr-xr-x. 2 root root 6 Mar 1 00:07 etc
drwxr-xr-x. 2 root root 6 Mar 1 00:07 log
drwxr-xr-x. 2 root root 6 Mar 1 00:07 tmp
On Ubuntu, where it is working, it belongs to nexus. nexus is also the default user in the container.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 12
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 etc
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 log
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 tmp
Changing the user with the -u flag is not an option.
I could solve it by deleting all local docker images: docker image prune -a
Afterwards it downloaded the image again and it worked.
This is strange because I also compared the fingerprints of the images and they were identical.
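A hedged guess as to why pruning helped: the broken permissions live in the volume rather than the image (which is why the image fingerprints matched), and recreating everything left the old anonymous volumes behind. Stale anonymous volumes can be cleaned up directly:

# List volumes, then remove the ones no longer used by any container.
docker volume ls
docker volume prune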
An example of docker-compose for Nexus:

version: "3"
services:
  # Nexus
  nexus:
    image: sonatype/nexus3:3.39.0
    expose:
      - "8081"
      - "8082"
      - "8083"
    ports:
      # UI
      - "8081:8081"
      # repositories http
      - "8082:8082"
      - "8083:8083"
      # repositories https
      #- "8182:8182"
      #- "8183:8183"
    environment:
      - VIRTUAL_PORT=8081
    volumes:
      - "./nexus/data/nexus-data:/nexus-data"
Set up the volume:
mkdir -p ./nexus/data/nexus-data
sudo chown -R 200 nexus/ # 200 because it's the UID of the nexus user inside the container
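If you want to double-check that UID for your image tag (200 is carried over from the comment above), the image can tell you; the exact output shape may differ:

docker run --rm sonatype/nexus3:3.39.0 id
# something like: uid=200(nexus) gid=200(nexus) groups=200(nexus)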
Start Nexus:
sudo docker-compose up -d
hf
You should give the correct rights to the folder backing the persistent volume:

chmod -R u+rwx <folder of /nexus-data volume>

Be careful: u+rwx only grants read, write, and execute rights to the owning user, so the folder must also be owned by the container's nexus UID (200, as above). Using a+rwx instead would grant those rights to all users; if you want more restricted rights, modify the command accordingly.
This question is a minimal failing version of this other one:
How to get contents generated by a docker container on the local fileystem
I have the following files:
./test
-rw-r--r-- 1 miqueladell staff 114 Jan 21 15:24 Dockerfile
-rw-r--r-- 1 miqueladell staff 90 Jan 21 15:23 docker-compose.yml
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 html
./test/html:
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
Dockerfile
FROM php:7.0.2-apache
RUN touch /var/www/html/file_generated_inside_the_container
VOLUME /var/www/html/
docker-compose.yml
test:
  image: test
  volumes:
    - ./html:/var/www/html/
After running a container built from the image defined in the Dockerfile, what I want to have is:
./html
-- file_from_local_filesystem
-- file_generated_inside_the_container
Instead of this I get the following:
Build the image:
$ docker build --no-cache -t test .
Sending build context to Docker daemon 4.096 kB
Step 1 : FROM php:7.0.2-apache
---> 2f16964f48ba
Step 2 : RUN touch /var/www/html/file_generated_inside_the_container
---> Running in b957cc9d7345
---> 5579d3a2d3b2
Removing intermediate container b957cc9d7345
Step 3 : VOLUME /var/www/html/
---> Running in 6722ddba76cc
---> 4408967d2a98
Removing intermediate container 6722ddba76cc
Successfully built 4408967d2a98
Run a container with the previous image:
$ docker-compose up -d
Creating test_test_1
List the files on the local machine filesystem:
$ ls -al html
total 0
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 .
drwxr-xr-x 5 miqueladell staff 170 Jan 21 14:20 ..
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
List the files from the container:
$ docker exec -i -t test_test_1 ls -alR /var/www/html
/var/www/html:
total 4
drwxr-xr-x 1 1000 staff 102 Jan 21 14:25 .
drwxr-xr-x 4 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 staff 0 Jan 21 14:22 file_from_local_filesystem
The volume from the local filesystem gets mounted on the container filesystem, replacing its contents.
This seems contrary to what I understand from the "Permissions and Ownership" section of this guide: Understanding volumes.
How could I get the desired output?
Thanks
EDIT: As is said in the accepted answer, I did not understand volumes when asking the question. Volumes, as mount points, replace the container content with the local filesystem content that is mounted.
The solution I needed was to use an ENTRYPOINT to run the necessary commands to initialize the contents of the mounted volume once the container is running.
The code that originated the question can be seen working here:
https://github.com/MiquelAdell/composed_wordpress/tree/1.0.0
This is from the guide you've pointed to:
This won’t happen if you specify a host directory for the volume
Volumes you share from other containers or from the host filesystem replace directories in the container.
If you need to add some files to a volume, you should do it after you start the container. You can use an ENTRYPOINT, for example, that does the touch and then runs your main process. A sketch of that approach follows.
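A minimal sketch of the entrypoint approach, reusing the file names from the question (the script name is illustrative; apache2-foreground is the default command of the php:apache images):

Dockerfile:

FROM php:7.0.2-apache
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["apache2-foreground"]

docker-entrypoint.sh:

#!/bin/sh
# Populate the mounted volume at runtime, after the bind mount is in place,
# then hand control to the main process (apache2-foreground by default).
touch /var/www/html/file_generated_inside_the_container
exec "$@"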
Yep, pretty sure it should be the full path:
docker-compose.yml
test:
  image: test
  volumes:
    - ./html:/var/www/html/
./html should be /path/to/html
Edit
Output after changing to full path and running test.sh:
$ docker exec -ti dockervolumetest_test_1 bash
root@c0bd7a722b63:/var/www/html# ls -la
total 8
drwxr-xr-x 2 1000 adm 4096 Jan 21 15:19 .
drwxr-xr-x 3 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 adm 0 Jan 21 15:19 file_from_local_filesystem
Edit 2
Sorry, I misunderstood the entire premise of the question :)
So you're trying to get file_generated_inside_the_container (which is created inside your Docker image only) mounted to some location on your host machine, like a "reverse mount".
This isn't possible to do with any docker command, but if all you're after is access to your VOLUME's files on the host, you can find them under the Docker root directory (normally /var/lib/docker). To find the exact location of the files, you can use docker inspect [container_id], or in the latest versions, the Docker API.
See cpuguy's answer in this github issue: https://github.com/docker/docker/issues/12853#issuecomment-123953258 for more details.
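For example, the mount sources can be read straight out of docker inspect with a Go template (the container name comes from the docker-compose run earlier in this thread):

docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' test_test_1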
Is it possible to mount a volume from a container into another container at a different path? E.g.:
contA exposes a volume /source
mounting it in another container: docker run --volumes-from contA -v /source/somedir:/etc/otherdir
I'm trying to use this with docker-compose and jwilder/nginx-proxy:
docker-compose.yml
myapp:
  build: .
  command: ./run.sh
  volumes:
    - /source

nginx:
  image: jwilder/nginx-proxy
  volumes_from:
    - myapp
  volumes:
    - /source/vhost.d:/etc/nginx/vhost.d:ro
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - myapp:myapp
When I try this, I can't see my files at /etc/nginx/vhost.d:
$ docker-compose run nginx bash
root@f200c1c476c7:/app# ls -l
total 32
-rw-r--r-- 1 root root 1076 Apr 9 22:10 Dockerfile
-rw-r--r-- 1 root root 1079 Apr 9 22:10 LICENSE
-rw-r--r-- 1 root root 129 Apr 9 22:10 Procfile
-rw-r--r-- 1 root root 8385 Apr 9 22:10 README.md
-rw-r--r-- 1 root root 5493 Apr 9 22:10 nginx.tmpl
root@f200c1c476c7:/app# ls -l /etc/nginx/vhost.d
total 0
root@f200c1c476c7:/app# ls -l /source/nginx/
total 8
-rw-r--r-- 1 1000 staff 957 Apr 24 07:17 dockerhost.me
It doesn't seem possible, considering that the syntax -v /host/path:/container/path is reserved for mounting a path from the host (and not from another container).
That leaves you with the option of adding, in your second container, a symbolic link from /etc/otherdir to /source/somedir (which will exist thanks to the --volumes-from contA directive).
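A sketch of that symlink workaround as a derived image (the image name is taken from the question; removing the existing directory first is needed because the base image already ships one at that path):

FROM jwilder/nginx-proxy
# Point the expected config directory at the shared volume, which appears
# at /source via volumes_from at container start.
RUN rm -rf /etc/nginx/vhost.d \
 && ln -s /source/vhost.d /etc/nginx/vhost.d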