Docker nginx won't start, saying my file is a directory (Windows) - docker

I'm trying to run my docker-compose file, but my mounted files keep showing up as directories:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    command: bash
    image: nginx:latest
    ports:
      - "8000:80"
    volumes:
      - ./../:/etc/nginx/conf.d/:ro
In the folder ../ I have a file nginx.site.conf.
When I start the container with docker-compose -f scripts/prod/docker-compose.yml run web bash, go to that directory, and try to cat nginx.site.conf, it doesn't work: it says it is a directory. I can also open this directory, but it's empty.
I expect it to be a file, yet somehow I can enter it as a directory:
root#e360b4930cca:/etc/nginx/conf.d# cd nginx.site.conf/
root#e360b4930cca:/etc/nginx/conf.d/nginx.site.conf# ls
How do I fix this? I run on Windows and haven't changed anything in my setup. I only have the default Windows protection enabled (no virus scanner), and it doesn't work even with App & Browser Control disabled.
What are the reasons Docker would turn my file into a directory?

Coming back after a computer restart, it worked again.
I think something in Docker itself, or perhaps a behind-the-scenes Windows update, caused this.
Even restarting Docker didn't help; I had to restart Windows.
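One hedged workaround when this happens: mount the single file explicitly instead of its parent directory. If Docker Desktop's Windows file sharing is broken, a single-file bind mount tends to fail loudly at startup rather than silently materializing an empty directory inside the container. This is a sketch based on the compose file in the question, assuming it still lives in scripts/prod/:

```yaml
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "8000:80"
    volumes:
      # Mount the one config file directly; a broken share then surfaces
      # as a mount error instead of an empty directory in the container.
      - ./../nginx.site.conf:/etc/nginx/conf.d/nginx.site.conf:ro
```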

Related

Dockerized Solr not updating configsets

I have a Dockerized Solr and I have to update a configset. The file I modified is under this path: solrhome/server/solr/configsets/myconfig/conf and is called DIH_ContentIndex.xml. After that, I deleted my Docker images and containers with these commands:
docker container stop <solr_container_id>
docker container rm <solr_container_id>
docker image rm <solr_img_id>
docker volume rm <solr_volume>
I rebuilt everything, but Solr is not picking up the changes, as I can see in the Files section of the admin UI. So I decided to add a new configset (call it newconfig) with my changes, at the same level as the other one. I redid everything and restarted, but nothing changed. Then I entered the container with docker exec -it --user root <solr_container_id> /bin/bash and changed the files manually (just to test). I stopped and restarted the container: still nothing. Even after deleting everything Docker-related again, I can still see my changes from inside the container. At this point, I think either I'm not deleting everything or I'm not placing my new config in the right directory. What else do I need to do for a clean build?
Here is the fragment of docker-compose I'm trying to launch, just in case this is the fault.
solr:
  container_name: "MySolr"
  build: ./solr
  restart: always
  hostname: solr
  ports:
    - "8983:8983"
  networks:
    - my-network
  volumes:
    - vol_solr:/opt/solr
  depends_on:
    - configdb
    - zookeeper
  environment:
    ZK_HOST: zookeeper:2181
Of course, everything else is running fine, so there is no error with the dependencies.
It is not a browser-cache problem. I already tried clearing the cache and using a different browser.
An update: the fresh-built image does contain my config, but I still can't select it from the frontend. Clearly, I'm placing my config files in the wrong path.
Thank you in advance!
Solved! All I had to do was:
Enter the Solr container (as root, to be able to install nano):
docker exec -it --user root <solr_container_id> /bin/bash
Copy my pre-existing config somewhere (next to bin for convenience) and modify the file DIH_ContentIndex.xml:
apt update
apt install nano
nano DIH_ContentIndex.xml
Go to solr/bin and upload the config to ZooKeeper, using this command:
solr zk upconfig -n config_name -d /path/to/config_folder
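For context on why rebuilding the image never helped: in the compose fragment above, all of /opt/solr lives on the named volume vol_solr, so whatever the first container wrote there shadows the files baked into any rebuilt image, and with ZK_HOST set the active configs live in ZooKeeper anyway (which is why upconfig fixed it). A hedged alternative is to bind-mount only the configset from the host so local edits are visible on restart; the host path mirrors the question, and the data path /var/solr is an assumption based on recent official Solr images:

```yaml
solr:
  build: ./solr
  restart: always
  ports:
    - "8983:8983"
  volumes:
    # Keep index data on the named volume, but don't cover the whole
    # install, so a rebuilt image's files are actually used.
    - vol_solr:/var/solr
    # Take the configset straight from the host for easy editing.
    - ./solrhome/server/solr/configsets/myconfig:/opt/solr/server/solr/configsets/myconfig:ro
  environment:
    ZK_HOST: zookeeper:2181
```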

When running docker-compose remotely, an error occurs with mounting volumes

I am trying to run a project with docker-compose against a remote server. Everything works, but as soon as I add the entry that mounts the volume, it fails with an error:
Error response from daemon: invalid mount config for type "bind": invalid mount path: 'C:/Users/user/Projects/my-raspberry-test' mount path must be absolute
To run I use tools from PhpStorm.
The docker-compose.yml file itself looks like this:
version: "3"
services:
  php:
    image: php:cli
    volumes:
      - ./:/var/www/html/
    working_dir: /var/www/html/
    ports:
      - 80:80
    command: php -S 0.0.0.0:80
I checked over SSH:
The daemon is running,
Docker works (on a similar Dockerfile with the same tasks),
docker-compose works (on the same file).
I also checked remote docker run from PhpStorm with this Dockerfile:
FROM php:cli
COPY . /var/www/html/
WORKDIR /var/www/html/
CMD php -S 0.0.0.0:80
It didn’t give an error and it worked.
OS on devices:
PC: Windows 10
Server: Fedora Server
Without the volume mount in docker-compose, everything starts. Has anyone faced a similar problem?
(php here is just an example.)
The path must be absolute on the remote host, and the project files themselves must already be there. That is, you need to upload the project to the remote host first.
I corrected it like this:
volumes:
  - /home/peter-alexeev/my-test:/var/www/html/
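The underlying behavior: docker-compose expands ./ against the client's working directory, so the Windows path C:/Users/user/Projects/my-raspberry-test is sent verbatim to the Linux daemon, which rejects it. A hedged sketch that keeps the compose file portable by putting the daemon-side path in a variable (the variable name is an assumption):

```yaml
version: "3"
services:
  php:
    image: php:cli
    volumes:
      # REMOTE_PROJECT_DIR must be an absolute path on the machine
      # running the Docker daemon, e.g. /home/peter-alexeev/my-test
      - ${REMOTE_PROJECT_DIR}:/var/www/html/
    working_dir: /var/www/html/
    ports:
      - "80:80"
    command: php -S 0.0.0.0:80
```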

Sharing data between docker containers without making data persistent

Let's say I have a docker-compose file with two containers:
version: "3"
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - myvolume:/var/www/html
  web:
    image: nginx:alpine
    volumes:
      - myvolume:/var/www/html
volumes:
  myvolume:
The app container contains the application code in the /var/www/html directory which gets updated with each version of the image, so I don't want this directory to be persistent.
Yet I need to share the data with the nginx container. If I use a volume or a host bind the data is persistent and doesn't get updated with a new version. Maybe there is a way to automatically delete a volume whenever I pull a new image? Or a way to share an anonymous volume?
I think it's better for you to use an anonymous volume, which is declared with only the container path (note that ./:/var/www/html would be a host bind mount, not an anonymous volume):
volumes:
  - /var/www/html
You would have to be willing to drop back to docker-compose file format version 2 and use data containers with the volumes_from directive.
That is equivalent to --volumes-from on a docker run command.
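A minimal sketch of that version-2 pattern, under the assumption that the app image ships the code at /var/www/html: web inherits app's anonymous volume, and the volume is recreated along with the containers rather than persisting as named storage.

```yaml
version: "2"
services:
  app:
    image: someimage:fpm-alpine
    volumes:
      - /var/www/html   # anonymous volume, recreated with the container
  web:
    image: nginx:alpine
    volumes_from:
      - app             # nginx sees the same /var/www/html
```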
This should work fine. The problem isn't with docker. You can use volumes to communicate in this way. If you run docker-compose up in a directory with the following compose file:
version: "3"
services:
  one:
    image: ubuntu
    command: sleep 100000
    volumes:
      - vol:/vol
  two:
    image: ubuntu
    command: sleep 100000
    volumes:
      - vol:/vol
volumes:
  vol:
Then, in a second terminal, run docker exec -it so_one_1 bash (you might have to run docker ps to find the exact name of the container; it can vary). You'll find yourself in a bash shell inside the container. Change to the /vol directory (cd /vol), run echo "wobble" > wibble.txt, then exit the shell (Ctrl-D).
In the same terminal you can then run docker exec -it so_two_1 bash (again, check the name). Just like last time, cd /vol and run ls -gAlFh: you'll see the wibble.txt file we created in the other container. You can even cat wibble.txt to see its contents. It'll be there.
So if the problem isn't Docker, what can it be? I think the problem is that nginx isn't seeing the changes on the filesystem. For that, I believe that setting expires -1; inside a location block in the config will effectively disable caching and may solve the problem (dev only).
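As a sketch of that suggestion, here is a minimal nginx server block; the listen port and root path are assumptions, not taken from the question:

```nginx
server {
    listen 80;
    root /var/www/html;

    location / {
        # Dev only: mark responses as already expired so clients
        # revalidate instead of serving stale cached copies.
        expires -1;
        add_header Cache-Control "no-store";
    }
}
```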

docker container does not work after restart

I have a VM, and inside it I am running an Elasticsearch Docker container started through docker-compose. It was working well. Then, after the power suddenly went out, I tried running the container again and hit an error that wasn't present before (error screenshot omitted).
The container then kept restarting. When I checked the file permissions (within the small window of time before the container restarts), the output looked wrong (screenshot omitted).
Here's my docker-compose.yml:
version: '2.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    hostname: elasticsearch
    restart: always
    user: root
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    env_file:
      - devopsfw-elk.env
What is actually happening here? I'm fairly new to Docker and Elasticsearch, and I'm very confused by the errors that are occurring.
The problem is that the file has been corrupted. Delete it and restart the container:
rm -i ./*elasticsearch.yml*
If you have trouble deleting it, read this:
https://superuser.com/questions/197605/delete-a-corrupt-file-in-linux
It looks like the file was owned by the root user and has been corrupted. To delete it, you need superuser access (sudo), so the correct command would be:
sudo rm -i ./*elasticsearch.yml*
After that, recreate the file and restart the container.
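A sketch of that recovery, assuming a minimal stand-in config (the two settings below are placeholders; restore your real elasticsearch.yml from version control if you have it):

```shell
# Remove the corrupted file (add sudo if it is root-owned).
rm -f ./elasticsearch.yml

# Recreate it before restarting: the compose file bind-mounts this exact
# path, and mounting a missing file would make Docker create a directory.
cat > ./elasticsearch.yml <<'EOF'
cluster.name: docker-cluster
network.host: 0.0.0.0
EOF

# docker-compose up -d elasticsearch   # then bring the service back up
```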

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read: very) new to Docker, so I'm experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've set this up in docker-compose because I plan on including other services like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful because I modified my Dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly I'm misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it was slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory on the host is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac, and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my Docker Hub; this is its source code, in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on Stack Overflow.
This shows files from the container on my machine (the host).
You can read more about Docker volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel, you should see an error about missing files at /var/www/site. So even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host, which is empty.
To confirm that the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with bash or sh). This gives you command-line access inside the container; from there you can run ls -a /var/www/site.
Furthermore, you can pre-stage ./site with a random test file (test.txt or whatever), run docker-compose up -d, then repeat the docker exec step above and check whether the staged test.txt file is now inside the container. That gives you definitive evidence that with bind mounts, the data on your host overwrites the data in the container.
With that being said, something like sharing a log directory will work: the volume path specified on the container is still overwritten, but the difference is that the container writes to that path; it doesn't rely on it for config files/app files.
Hope this helps.
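Another pattern worth mentioning as a hedged sketch (not from the answers above): build into a path that the bind mount does not cover, then copy the artifacts into the mounted directory at container start, so the initially empty ./site gets populated on first run. The /opt/site staging path and the cp-based CMD are assumptions for illustration:

```dockerfile
FROM composer:latest
RUN composer global require laravel/installer
WORKDIR /var/www
# Build into a path that is NOT shadowed by the bind mount.
RUN composer create-project --prefer-dist laravel/laravel /opt/site
# At startup, seed the (possibly empty) mounted directory, without
# overwriting files that already exist, then keep the container alive.
CMD ["sh", "-c", "cp -rn /opt/site/. /var/www/site/ && tail -f /dev/null"]
```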
