I run Docker on Windows 10 with this command:
docker run -d -v /c/Users/tsh/docker:/usr/share/nginx/html -p 80:80 nginx
Inside the Users/tsh/docker folder I have a simple index.html file:
<h1>Hello!</h1>
It works perfectly well: when I point my browser on Windows to the VirtualBox IP, I can see the web page with "Hello!" displayed.
But when I change the content of index.html to something like:
<h1>Hello from docker!</h1>
The web page still shows me the old "Hello!" text.
Is it possible to have the web page update whenever I change index.html?
Update:
docker run -it -v //c/Users/tsh/docker:/usr/share/nginx/html -p 80:80 nginx bash
root@ae5fc6b6126a:/# cd /usr/share/nginx/html
root@ae5fc6b6126a:/usr/share/nginx/html# cat index.html
<h1>Hello from docker!</h1>
root@ae5fc6b6126a:/usr/share/nginx/html#
The container sees the new data, <h1>Hello from docker!</h1>, but the page still shows the old "Hello!".
This problem appears to be related to VirtualBox caching. I also encountered this problem recently while editing CSS, and I was able to create a "workaround" by resetting the image in VirtualBox. But I call this a workaround only in a vague sense, since it is not very useful to have to completely reboot the boot2docker image each time you make an edit to an HTML doc.
There seem to be some issues with Windows paths. Please try the workaround suggested in the GitHub issue https://github.com/docker/docker/issues/12590
Use double leading slashes on the path:
docker run -d --name mynginx -v //c/Users/tsh/docker:/usr/share/nginx/html -p 80:80 nginx
You can debug your situation as follows:
First, name your container 'mynginx' using the updated run command above.
Then you can enter the container using the following command:
docker exec -it mynginx /bin/bash
Now you should be inside the container, and there you can verify the contents of the mounted file using:
cat /usr/share/nginx/html/index.html
If the file here shows your changes but your browser still shows the old file, that means the file is cached somewhere in the chain (nginx or the browser). If it is cached in the browser, you can check by opening an incognito window or pressing Ctrl + F5.
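You can also take the browser out of the equation and fetch the page directly from the Windows host; a minimal sketch, assuming the boot2docker VM is reachable at 192.168.99.100 (boot2docker ip or docker-machine ip will print the real address):
curl http://192.168.99.100/index.html
If curl returns the new content while the browser shows the old one, the caching is purely on the browser side.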
I had the same problem, but with Apache: VirtualBox on Windows and CentOS with httpd and PHP in Docker. The problem was fixed by changing an httpd.conf parameter:
#
# EnableMMAP and EnableSendfile: On systems that support it,
# memory-mapping or the sendfile syscall may be used to deliver
# files. This usually improves server performance, but must
# be turned off when serving from networked-mounted
# filesystems or if support for these functions is otherwise
# broken on your system.
# Defaults if commented: EnableMMAP On, EnableSendfile Off
#
#EnableMMAP off
EnableSendfile off
I set EnableSendfile to off because:
...but must be turned off when serving from networked-mounted filesystems...
Sending files still works fine. Hope this helps someone.
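If you would rather bake the setting into your image than edit the file by hand, here is a minimal Dockerfile sketch, assuming the official httpd:2.4 image whose main config lives at /usr/local/apache2/conf/httpd.conf (adjust the path for a CentOS-based image):
FROM httpd:2.4
# Append the override so Apache stops using sendfile for the mounted docroot
RUN echo "EnableSendfile off" >> /usr/local/apache2/conf/httpd.conf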
Related
I'm using Docker on Windows. Versions are engine: 20.10.14, desktop: 4.7.0. In my current directory, I have a Dockerfile (unimportant for now) and an index.html file.
I created an nginx Docker container with a bind mount to make these files available inside the container: docker container run -d --name nginx_cust -p 80:80 -v %cd%:/usr/share/nginx/html nginx.
When I access localhost:80, I don't see my index.html file reflected, and when I enter the running container with docker container exec -it nginx_cust bash and check the mounted directory, it's empty:
Inspecting the container, I see that the bind mount does look correct,
and I don't see anything in the container log about this. Any ideas why this is not working?
After a lot of playing around, I realized that this got fixed when I moved the input files to a different directory, one that was less deeply nested. I strongly suspect there was some long-filename constraint being violated silently.
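If you want to double-check what the engine actually mounted, something like this prints just the mount section of the inspect output (a sketch; the container name comes from the question, and quoting may differ under cmd.exe):
docker container inspect nginx_cust --format '{{json .Mounts}}'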
I am trying to build an nginx web server to share files among team members.
On Ubuntu 16.04, I am running the following command:
root@automation00-new:/home/test# docker run -d -p 8081:80 -v /var/www/apj/:/usr/share/nginx/html --name test-nginx nginx:latest
As shown below, Docker is able to mount the files successfully.
root@automation00-new:/home/test# docker exec ec795af0f1f2 ls /usr/share/nginx/html
Builds
Logs_for_perf_Testing
json.txt
ravi
root@automation00-new:/home/test#
But when I try to access the web server in the browser at http://1.1.1.1:8081, I see a '403 Forbidden' error.
But if I try http://1.1.1.1:8081/json.txt, I am able to view the json.txt contents in the browser.
I want to browse all the directories and files inside.
Any idea on how to fix this issue please?
Please give us more context in terms of your nginx configuration.
Are you using the default nginx.conf, or have you made modifications?
The solution should be to have nginx generate an index of all the relevant files.
Details are also here; you will need to modify your nginx.conf:
autoindex needs to be turned on for the location / (see the sketch below).
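For illustration, a minimal sketch of the relevant server block, assuming the stock nginx image layout (the exact file to edit, e.g. /etc/nginx/conf.d/default.conf, is an assumption):
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        autoindex on;   # generate a directory listing when there is no index file
    }
}
You could then mount that file over the default config, for example by adding -v /path/to/default.conf:/etc/nginx/conf.d/default.conf to the run command from the question.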
I'm new to Docker and currently following this tutorial:
Learn Docker in 12 minutes
I created the necessary files and I made it up to the point of displaying "Hello World!" on localhost:80.
Beyond that point, I tried to mount my folder directly into the container so I can update the index.php file to mimic the development environment, and then I get this error:
All I did was change the way the image is run so I can update the content of the index.php file and see the changes reflected in the webpage when I hit F5.
Currently using Docker for Windows on Windows 10 Pro
Docker for Windows is running
I followed every step scrupulously so I wouldn't fool myself, but it didn't work for me, it seems.
To answer Mornor's question, here is the result for docker ps
And here for docker logs [container-name]
And since I now better understand what happens under the hood, how do I go about solving the problem illustrated in the log?
Here is my Dockerfile
And the command I executed to run my image:
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/ hello-world
And so you see that the file exists:
The error is coming from Apache, which tries to show you the directory contents because there is no index file available. Either your Docker mapping is not working correctly, or your Apache does not have PHP support installed. You are accessing http://localhost; try http://localhost/index.php.
If you get the same error, the problem is with the mapping. If you get raw PHP code, the problem is missing PHP support in Apache.
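A quick way to run that check from a terminal instead of the browser (a sketch):
curl -i http://localhost/index.php   # 403/404 or a directory listing -> mapping problem; raw PHP source in the body -> missing PHP support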
I think you're wrongly mounting your index.php. What you could do to debug it is to first check whether index.php is indeed mounted within the container.
You could issue the following command :
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/ hello-world bash -c 'ls -lsh /var/www/html/'
(use sh instead of bash if it does not work). If you can indeed see an index.php, then congratulations, your file is correctly mounted, and the error is not coming from Docker, but from Apache.
If index.php is not there, then you have to check your Dockerfile. You mount src/, so check whether src/ is in the same directory as your Dockerfile.
Keep us updated :)
I know this answer is late, but the answer is very easy:
This happens when you use Docker with SELinux; be aware that the host has no knowledge of the container's SELinux policy.
By adding z:
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/:z hello-world
This will automatically do the chcon ... that you would otherwise need to do.
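For reference, :z does roughly the same relabeling that you could do by hand with chcon; a sketch (the SELinux type varies by distribution, so treat the label below as an assumption):
chcon -Rt svirt_sandbox_file_t /wmi/tutorials/docker/src/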
Check whether the html folder has the proper permissions; see the quick check below.
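From the host, for example (the path is taken from the question above):
ls -ld /wmi/tutorials/docker/src/           # the directory itself
ls -l /wmi/tutorials/docker/src/index.php   # the file Apache needs to read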
Thank you
I spent the weekend poring over the Docker docs and playing around with the toy applications and example projects. I'm now trying to write a super-simple web service of my own and run it from inside a container. In the container, I want my app (a Spring Boot app under the hood) -- called bootup -- to have the following directory structure:
/opt/
    bootup/
        bin/
            bootup.jar       ==> the app
        logs/
            bootup.log       ==> log file; GETS CREATED BY THE APP @ STARTUP
        config/
            application.yml  ==> app config file
            logback.groovy   ==> log config file
It's very important to note that when I run my app locally on my host machine - outside of Docker - everything works perfectly fine, including the creation of log files to my host's /opt/bootup/logs directory. The app endpoints serve up the correct content, etc. All is well and dandy.
So I created the following Dockerfile:
FROM openjdk:8
RUN mkdir /opt/bootup
RUN mkdir /opt/bootup/logs
RUN mkdir /opt/bootup/config
RUN mkdir /opt/bootup/bin
ADD build/libs/bootup.jar /opt/bootup/bin
ADD application.yml /opt/bootup/config
ADD logback.groovy /opt/bootup/config
WORKDIR /opt/bootup/bin
EXPOSE 9200
ENTRYPOINT java -Dspring.config=/opt/bootup/config -jar bootup.jar
I then build my image via:
docker build -t bootup .
I then run my container:
docker run -it -p 9200:9200 -d --name bootup bootup
I run docker ps:
CONTAINER ID IMAGE COMMAND ...
3f1492790397 bootup "/bin/sh -c 'java ..."
So far, so good!
My app should then be serving a simple web page at localhost:9200, so I open my browser to http://localhost:9200 and I get nothing.
When I use docker exec -it 3f1492790397 bash to "ssh" into my container, everything looks fine, except that the /opt/bootup/logs directory, which should have a bootup.log file in it (created at startup), is instead empty.
I tried using docker attach 3f1492790397 and then hitting http://localhost:9200 in my browser, to see if that would generate some standard output (my app logs both to /opt/bootup/logs/bootup.log and to the console), but that doesn't yield any output.
So I think what's happening is that my app (for some reason) doesn't have permission to create its own log file when the container starts up, which puts the app in a weird state or even prevents it from starting up altogether.
So I ask:
Is there a way to see what user my app is starting up as?; or
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
Thanks in advance!
I don't know why your app isn't working, but I can answer your questions:
Is there a way to see what user my app is starting up as?; or
A: Docker containers run as root unless otherwise specified.
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
A: Docker containers dump stdout/stderr to the Docker logs by default. There are two ways to see these: one is to run the container with the flag -it instead of -d to get an interactive session that will show the stdout from your container. The other is to use the docker logs <container_name> command on a running or stopped container.
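For example, using the container name from the question:
docker logs bootup      # dump everything the app has written to stdout/stderr so far
docker logs -f bootup   # follow the output live, which is the closest thing to tailing it during startup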
docker attach 3f1492790397
This doesn't do what you are hoping for. What you want is docker exec (probably docker exec -it bootup bash), which will give you a shell inside the container, letting you check for your log files or try hitting the app using curl from inside the container.
Why do I get no output?
Hard to say without the info from the earlier commands. Is your app listening on 0.0.0.0 or on localhost (your laptop browser will look like an external machine to the container)? Does your app require a supervisor process that isn't running? Does it require some other JAR files that are on the CLASSPATH on your laptop but not in the container? Are you running Docker using Docker Machine (in which case localhost is probably not the address of the Docker host)?
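A couple of quick checks along those lines, as a sketch (curl may not be present in the openjdk image, so you might have to install it or use wget instead):
docker logs bootup                                                # any Spring Boot stack trace on startup?
docker exec -it bootup bash -c 'curl -v http://localhost:9200/'   # does the app answer from inside the container?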
I've been wondering: if the linked container shuts down and starts again, does the container that is linked with it restore the --link connection?
Fire up 2 containers:
docker run -d --name fluentd millisami/fluentd
docker run -d --name railsapp --link fluentd:fluentd millisami/rails
Now if the fluentd container is stopped and restarted, will the railsapp container restore the link and the linked ENV vars automatically?
UPDATE:
As of Docker 1.3 (or probably even an earlier version, I'm not sure), the /etc/hosts file will be updated with the new IP of a linked container if it restarts. This means that if you access it via its name from the /etc/hosts entry, as opposed to the environment variable, your link will still work even after a restart.
Example:
When you start two containers, app_image and mysql, and link them like this:
$ docker run --name mysql mysql:5.6.20
$ docker run -d --link mysql:mysql app_image
you'll get an entry in your /etc/hosts within app_image:
# /etc/hosts example
172.17.0.12    mysql
and that IP will be refreshed in case mysql crashes and is restarted.
So don't use environment variables when referring to your linked container:
$ ping $MYSQL_PORT_3306_TCP_ADDR # --> don't use! won't work after the linked container restarts
$ ping mysql # --> instead, access it by the name as in /etc/hosts
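You can watch this happen with the containers from the question; a sketch using the fluentd/railsapp names from above:
$ docker exec railsapp cat /etc/hosts    # note the IP on the fluentd line
$ docker restart fluentd
$ docker exec railsapp cat /etc/hosts    # the fluentd entry should now show the container's new IP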
Old answer:
Nope, it won't. In a crashing-container scenario, links are as good as dead. I think it is pretty much an open problem, i.e., there are many candidate solutions, but none has yet been crowned the standard approach.
You might want to take a look at http://jasonwilder.com/blog/2014/07/15/docker-service-discovery/