I'm learning Docker and started testing a simple project that contains only index.php, but it doesn't work.
I run Docker in a VirtualBox (CentOS) VM on my Windows OS.
I have index.php
<?php
echo "Hello world";
I have a Dockerfile
FROM php:7.0-apache
COPY . /var/www/html
Then I build the image and start the container:
docker build -t php-app .
docker run php-app
When I start the container I see its IP:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.4.
Inside the VirtualBox VM I can see index.php using
curl 172.17.0.4/index.php
But on my Windows OS I type 192.168.1.194 (the VirtualBox VM's IP) into the browser and I don't see my index.php. Apparently the problem is with ports. What should I change to reach index.php in Docker from my Windows browser?
You need to publish the container's port to the host.
Run the container with this command:
docker run -p 80:80 php-app
This way the container's port 80, which is where your Apache instance is listening, will be bound to port 80 on the host (your VirtualBox VM in this case), and you should be able to reach it from outside.
You can read the documentation for all of docker run's command-line options on the official page here: https://docs.docker.com/engine/reference/run/#pid-equivalent
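For example, in the setup from the question (Docker inside a CentOS VM whose address is 192.168.1.194), verifying the mapping might look like this:
# inside the CentOS VM: publish the container's port 80 on the VM's port 80
docker run -d -p 80:80 php-app
# from Windows: point the browser (or curl) at the VM's address
curl http://192.168.1.194/index.php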
Related
I am trying to run the Docker host-networking demonstration using:
docker run --rm -d --network host --name my_nginx nginx
When inspecting the running container using Docker Desktop, it shows port 80 as not bound. Also, when navigating to http://localhost:80, I am not able to see the default nginx welcome page. I am only able to see an application when I manually bind ports to the host machine, i.e. -p 80:80. I did give myself a custom local IP address and DNS options (Windows 10). Do I need to modify my hosts file on my system?
I run a Docker container with docker run -it --network host ubuntu:latest bash, but when I start a server inside it (on port 3000, for example), I cannot open it from the main OS.
How can I start the container (without specifying expose or publish ports) so that I can bring up servers on different ports dynamically inside it, with those ports reachable from the outside? I want to create the container once, keep all changes in it, and come back to it via docker start ... and docker exec ...
Visit this link for the solution: Docker networking, specifically the section "The Host Driver".
You will find the following excerpt:
As the name suggests, host drivers use the networking provided by the host machine. And it removes network isolation between the container and the host machine where Docker is running. For example, If you run a container that binds to port 80 and uses host networking, the container’s application is available on port 80 on the host’s IP address. You can use the host network if you don’t want to rely on Docker’s networking but instead rely on the host machine networking.
One limitation with the host driver is that it doesn’t work on Docker desktop: you need a Linux host to use it. This article focuses on Docker desktop, but I’ll show you the commands required to work with the Linux host.
The following command will start an Ubuntu image and listen to port 80 on the host machine:
docker run -it --network host ubuntu:latest /bin/bash
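For example, on a Linux host (not Docker Desktop), the nginx command from the question above should then be reachable directly on the host's port 80, with no -p mapping at all:
docker run --rm -d --network host --name my_nginx nginx
curl http://localhost:80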
I am running a Docker Windows container on Windows 10 Anniversary Edition and am looking to set up IIS as a reverse proxy to the container. Per https://blogs.technet.microsoft.com/virtualization/2016/05/25/windows-nat-winnat-capabilities-and-limitations/ this seems to be impossible, because the internal NAT range cannot be referenced via localhost. That leaves a dynamically assigned IP address, which can only be discovered by running a docker inspect command after running the image. I am hoping there is a more efficient way that I am overlooking.
We also used fixed IP addresses for our containers, but we used another container running nginx to do the reverse proxying. The idea is that on our container host (Windows Server 2016) we install only Docker and nothing else. All configuration is done in the containers, so we can easily migrate to another host.
This is the Dockerfile for the nginx proxy
# Windows Server Core base image
FROM microsoft/windowsservercore
# Use PowerShell for RUN commands and stop on errors
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG NgInxVersion="1.13.5"
ENV NgInxZipUrl="http://nginx.org/download/nginx-${NgInxVersion}.zip"
# Download and extract nginx
RUN Invoke-WebRequest -Uri $env:NgInxZipUrl -UseBasicParsing -OutFile c:\\nginx.zip
RUN Expand-Archive -Path c:\\nginx.zip -DestinationPath c:\\nginx
WORKDIR "c:\\nginx\\nginx-${NgInxVersion}"
# Copy in the reverse proxy configuration
COPY ./nginx.conf conf\\nginx.conf
ENTRYPOINT powershell .\\nginx.exe
Notice that nginx.conf is copied into the nginx configuration folder. It contains the reverse proxy settings. Here's an extract from the http block for one of our sites:
server {
    listen       80;
    server_name  somesite.mydomain.com;

    location / {
        proxy_pass http://172.22.108.6/;
    }
}
The nginx container should be run with -p 80:80
When we add new containers we run a PowerShell script that updates nginx.conf and reloads nginx (this will be rare).
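The script itself isn't shown here, but a rough sketch of the idea with plain docker commands could look like the following (the nginx-proxy image/container name is only an assumption; the 1.13.5 path follows the NgInxVersion ARG in the Dockerfile above):
docker build -t nginx-proxy .
docker run -d --name nginx-proxy -p 80:80 nginx-proxy
# after editing nginx.conf locally, copy it in and ask nginx to reload
docker cp .\nginx.conf nginx-proxy:c:\nginx\nginx-1.13.5\conf\nginx.conf
docker exec nginx-proxy powershell -Command ".\nginx.exe -s reload"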
Example:
A user browses to http://somesite.mydomain.com
Our DNS points somesite.mydomain.com to the IP of our container host
Since port 80 is published to the nginx container, the request goes there
nginx proxies the request to 172.22.108.6
The user sees the web page running on the container with IP 172.22.108.6
We solved this problem by assigning each Windows container an IP address on the default subnet and exposing port 80 out of the container. This gave us a stable address to put into an ARR reverse proxy rule. For example, the following creates a container at the address 172.20.118.129 and then verifies that the container is running at the requested address.
PS C:\WINDOWS\system32> docker run -d --name myservice --ip=172.20.118.129 microsoft/iis:nanoserver
7d20d8a131805727868ddf85f7c1f455fa2489bb2607b250e694a9e530a61302
PS C:\WINDOWS\system32> docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" myservice
172.20.118.129
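As a quick sanity check (purely illustrative, run on the container host), the fixed address should answer on port 80 before any ARR rule is added:
PS C:\WINDOWS\system32> Invoke-WebRequest -Uri http://172.20.118.129/ -UseBasicParsing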
I'm using the ASP.NET Core image to create a new container.
I've developed a simple service which uses port 5000.
Now I've created a Dockerfile and built a container which exposes
EXPOSE 5000
Running this container with the command
docker run -it -p 8080:5000 <name>
or even
docker run -it -p 127.0.0.1:8080:5000 <name>
doesn't let me navigate to 127.0.0.1:8080. My browser says the site can't be reached.
P.S. I've checked the service without Docker - it works correctly.
UPD1
docker ps
shows my launched container with ports mapping information:
127.0.0.1:8080->5000/tcp
UPD2
This is the netstat output from the host:
tcp 0 0 localhost:5000 *:* LISTEN
lynx 127.0.0.1:5000 shows 200 OK
netstat -a on a client box doesn't show 8081 port or 5000
UPD3
I've just created a new container for Node.js using a public image.
I created a simple server with an exposed port; after running it, it works as expected.
So it actually looks like the problem is with that particular ASP.NET image.
Which OS are you using? If you're running on OSX or Windows you will need to use the IP of your boot2docker virtual machine, not 127.0.0.1.
docker-machine ip will show you the IP of your current host.
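For example (the address below is only illustrative; use whatever docker-machine ip prints for your machine):
$ docker-machine ip
192.168.99.100
$ curl http://192.168.99.100:8080/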
I've started using docker for dev, with the following setup:
Host machine - ubuntu server.
Docker container - webapp w/ tomcat server (using https).
As far as host-container access goes - everything works fine.
However, I can't manage to access the container's webapp from a remote machine (though still within the same network).
When running
docker port <container-id> 443
the output is as expected:
172.16.*.*:<random-port>
so Docker's port binding seems fine.
Any ideas?
Thanks!
I figured out what I missed, so here's a simple flow for accessing a Docker container's webapp from remote machines:
Step #1: Bind physical host ports (e.g. 22, 443, 80, ...) to the container's virtual ports.
possible syntax:
docker run -p 127.0.0.1:443:3444 -d <docker-image-name>
(see docker docs for port redirection with all options)
Step #2: Redirect the host's physical port to the container's allocated virtual port. Possible (Linux) syntax:
iptables -t nat -A PREROUTING -i <host-interface-device> -p tcp --dport <host-physical-port> -j REDIRECT --to-port <container-virtual-port>
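For instance, with the ports from step #1 and eth0 as an assumed host interface, the filled-in command would be:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3444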
That should cover the basic use case.
Good luck!
Correct me if I'm wrong, but as far as I'm aware the Docker host creates a private network for its containers which is inaccessible from the outside. That said, your best bet would probably be to access the container at {host_IP}:{mapped_port}.
If your container was built with a Dockerfile that has an EXPOSE statement, e.g. EXPOSE 443, then you can start the container with the -P option (as in "publish" or "public"). The port will be made available to connections from remote machines:
$ docker run -d -P mywebservice
If you didn't use a Dockerfile, or if it didn't have an EXPOSE statement (it should!), then you can also do an explicit port mapping:
$ docker run -d -p 80 mywebservice
In both cases, the result will be a publicly-accessible port:
$ docker ps
9bcb… mywebservice:latest … 0.0.0.0:49153->80/tcp …
Last but not least, you can force the port number if you need to:
$ docker run -d -p 8442:80 mywebservice
In that case, connecting to your Docker host IP address on port 8442 will reach the container.
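For example, from a remote machine (the address is illustrative; substitute your Docker host's IP):
$ curl http://192.0.2.10:8442/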
There are some alternatives for how to access Docker containers from an external device (on the same network); check out this post for more information: http://blog.nunes.io/2015/05/02/how-to-access-docker-containers-from-external-devices.html