Docker: cannot get access to server

Here is a little backstory. I implemented a couple of web APIs using a microservices architecture. I am trying to make my microservices accessible via HTTPS. The microservices are developed using .NET Core, so according to the Microsoft documentation, to enforce HTTPS I need to configure Kestrel. Here is how I did it:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps("cert.pfx", "pwd");
    });
})
To keep things simple, I use Kestrel by itself and skip the reverse proxy. I will certainly add Nginx as a reverse proxy later, but that is future work. I tested locally and it worked. Then I deployed it to Docker. Here is the docker-compose.override file:
version: '3.4'
services:
  dataservice:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "5000:80"
      - "5001:443"
In the Dockerfile, ports 5000 and 5001 are exposed. I built the project into images and ran it on Docker, using docker run -it --rm -p 5000:80 --name *name* *imagename*. Docker shows Now listening on: http://127.0.0.1:5000 and Now listening on: https://127.0.0.1:5001. Now the problem is that, leaving the HTTPS part aside, the APIs cannot even be accessed over HTTP. The browser just shows This page isn’t working 127.0.0.1 didn’t send any data. ERR_EMPTY_RESPONSE. I found a similar question here: Docker: cannot open port from container to host, which suggests the server should listen on 0.0.0.0. Though I do not fully understand the reason, I changed the Kestrel configuration to
options.Listen(IPAddress.Any, 5000);
built and ran the Docker image again, and Docker shows Now listening on: http://0.0.0.0:5000, but it still doesn't work. I also tried replacing the IP with localhost, to no avail. I did not use .UseHttpsRedirection(), so HTTPS should have nothing to do with the problem.
Am I missing any configuration or doing anything wrong? It would be really helpful if anyone could shed some light. Thank you in advance.

You should listen on ports 80 and 443 inside the container, i.e. options.Listen(IPAddress.Any, 80);, because this Docker declaration

ports:
  - "5000:80"

means that the container's port 80 (the port your code listens on) is published to the host as port 5000, and not the other way around.
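For example, a minimal sketch of the matching Kestrel setup (reusing the certificate path and password from the question):

.UseKestrel(options =>
{
    // bind all interfaces on the container-internal ports that compose maps out
    options.Listen(IPAddress.Any, 80);
    options.Listen(IPAddress.Any, 443, listenOptions =>
    {
        listenOptions.UseHttps("cert.pfx", "pwd");
    });
})

With this in place, the compose mappings "5000:80" and "5001:443" make the app reachable at http://localhost:5000 and https://localhost:5001 on the host.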

Related

Understanding network access to docker containers

I am currently learning Docker to be able to host a webpage in a container using nginx. The webpage accesses another container which runs Flask. I think I have already solved my problem, but I am not sure why my solution works and would be happy about some feedback.
After setting up the containers, I tried to access the webpage from a Firefox browser running on the host, which was successful. The browser reported CORS problems as soon as a service of the web page tried to contact Flask.
However, after some hours of trying to fix the problem, I used the Chrome browser, which responded with a different error message indicating that the Flask address couldn't be reached.
The containers are started with docker-compose using the following YAML:
version: "3.8"
services:
webbuilder:
build: /var/www/html/Dashboard/
container_name: webbuilder
ports:
- 4998:80
watcher:
build: /home/dev/watcher/
container_name: watcher
ports:
- 5001:5001
expose:
- 5001
volumes:
- /var/www/services/venv_worker:/var/www/services/venv_worker
- /var/www/services/Worker:/var/www/services/Worker
- /home/dev/watcher/conf.d:/etc/supervisor.conf.d/
command: ["/usr/bin/supervisord"]
webbuilder is the nginx server hosting the web page. watcher is the Flask server serving on 0.0.0.0:5001. I exposed this port and mapped it to the host for testing purposes.
I know that the containers generated with docker-compose are connected in a network and can be contacted using their container names instead of an actual ip address. I tested this with another network and it worked without problems.
The webpage running on webbuilder starts the service contacting watcher (where the flask server is). Because the container names can be resolved, the web page used the following address for http requests in my first approach:
export const environment = {
  production: true,
  apiURL: 'http://watcher:5001'
};
In this first attempt, there was no ports section in the docker-compose.yml, as I thought that the webpage inside the container could contact the watcher container directly. This led to the CORS error message described above.
In a desperate attempt to solve the problem, I replaced the container name in apiURL with the concrete IP address of the container and also mapped the Flask port 5001 to host port 5001.
Confusingly, this works. The following images show what happens in my opinion.
The first picture shows my initial understanding of the situation. As this did not work, I am sure it is wrong that the HTTP request is executed by webbuilder. Instead, webbuilder only serves the homepage, and the service is executed from the host, as depicted in image 2.
Is the situation described in image 2 correct? I think so, but it would be good if someone can confirm.
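For reference, here is a sketch of an equivalent form of the working setup that relies on the published port (localhost here stands in for the concrete host address; in my test I used the container's IP directly):

export const environment = {
  production: true,
  // the request originates from the browser on the host,
  // so it must target a host-reachable address and the published port,
  // not the container name
  apiURL: 'http://localhost:5001'
};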
Thanks in advance.

Bind incoming docker connection to specific hostname inside docker container

I'm trying to migrate some Webpack based projects to run inside docker containers and have some issues with configuring networking.
Our WebPack devServer is configured in the following way:
{
  host: 'dev.ng.com',
  port: 4000,
  compress: true,
  disableHostCheck: true
}
In the /etc/hosts file we have the following record:
127.0.0.1 dev.ng.com
and everything works fine.
When I ran it inside Docker I was getting an EADDRNOTAVAIL error until I added the following lines to my docker-compose.yml:
extra_hosts:
  - "dev.ng.com:127.0.0.1"
But now the app inside the Docker container is not available from the host.
The relevant docker-compose.yml part is following:
gui-client:
  image: "gui-client"
  ports:
    - "4000:4000"
  extra_hosts:
    - "dev.ng.com:127.0.0.1"
If I change host: 'dev.ng.com' to host: '0.0.0.0' in my Webpack config, it works fine, but I would prefer not to change the Webpack config and to run it as is.
My knowledge of Docker network internals is limited, but I guess that all inbound connections to the Docker container from the host should be redirected to dev.ng.com:4000, whereas now they are redirected to 0.0.0.0:4000. Can this be achieved?
Yes, 127.0.0.1 is normally reachable only from the local host. Containers behave as if they were virtual machines.
You need to configure the server to listen everywhere. So very likely "dev.ng.com:0.0.0.0" is what you want. Such things should be used carefully in normal circumstances, because usually we do not want to expose internal services to the internet. But here it only serves to make your configuration independent of the IP/netmask that Docker gives your container app.
Besides that, you need to forward the host's incoming connections to your container. This can be done with

ports:
  - "0.0.0.0:4000:4000"

in your docker-compose.yml.
Possibly you will also want to make port 4000 of the host reachable from the outside world; that can be done with your firewall rules.
In professional configurations there is typically some frontend (to provide encryption/security/load balancing), but if you only want to show your work to your boss, http://a.b.c.d:4000 is quite enough.
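Putting these pieces together, a minimal sketch of the adjusted compose service (service and image names from the question; the extra_hosts trick is the answer's suggestion):

gui-client:
  image: "gui-client"
  ports:
    # bind the host's port 4000 on all host interfaces to the container's 4000
    - "0.0.0.0:4000:4000"
  extra_hosts:
    # resolve the dev hostname to 0.0.0.0 inside the container, so Webpack's
    # host: 'dev.ng.com' ends up binding all interfaces without config changes
    - "dev.ng.com:0.0.0.0"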

Docker tutorials all bind to port 80, and fail on local and remote servers as port 80 is already in use

Trying to wrap my head around all these Docker tutorials, and there is really no explanation of what port 80 is all about. Just, "bind to port 80".
This is the 3rd Docker tutorial I've taken with the same error after running the sample Dockerfile:
Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
So, I've understood that port 80 is basically the default HTTP port, which is what would let my app run at example.com instead of example.com:80, for example. My web server and local machine complain that this port is in use. Of course it is; it's in use by default.
So, why are all these Docker tutorials binding to port 80? I bet they are doing it right and I am missing something... but I cannot find a clear solution or description.
Here is the tutorial I'm doing, DigitalOcean's How To Install WordPress With Docker Compose: https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-docker-compose
Sure enough, port 80 fails for me:
webserver:
  depends_on:
    - wordpress
  image: nginx:1.15.12-alpine
  container_name: webserver
  restart: unless-stopped
  ports:
    - "80:80"
  volumes:
    - wordpress:/var/www/html
    - ./nginx-conf:/etc/nginx/conf.d
    - certbot-etc:/etc/letsencrypt
  networks:
    - app-network
Changing this to the following throws no error, but it means the site can only be reached at http://example.com:90:

ports:
  - "90:80"
What am I missing here? Why are all of these definitions of port 80 failing locally on my Mac and on a remote Digital Ocean Ubuntu8.1 server?
Do you have something else running on port 80? You can try curl localhost:80 or lsof -i :80; you might have Apache or something else running there by default that you'd need to kill.
If you're using a Mac like me, sudo apachectl stop resolved this for me. Macs have a built-in web server, and mine was running by default, maybe due to some web-sharing feature enabled by default on the MacBook Pro.
example.com and example.com:80 are the same thing, by the way. Here, some application on your host is already listening on port 80; it has nothing to do with the container. Possibly you are running an nginx server on the host as well. Are you?
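For example, a quick way to find and free the port, per the answers above:

# see what is already listening on port 80
sudo lsof -i :80
# on macOS, stop the built-in Apache if it is the culprit
sudo apachectl stop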

Dockerizing a proxy server so I could connect to the container via vhost. Is it possible?

At the moment I have a Docker container that consists of a LAMP stack (PHP, Apache, MySQL). It is working fine on http://localhost:my_specified_port. However, I want to access it via http://some_domain.dev. My current solution is to run NGINX locally as a proxy and do:
server {
    listen 80;
    server_name some_domain.dev;
    location / {
        proxy_pass http://localhost:my_specified_port;
    }
}
It is working fine. However, I want to make it easier for my coworkers and dockerize the NGINX server with the same result. Is it possible? I can't find any solutions online. Thanks a lot.
EDIT: I forgot to mention that I want NGINX in Docker to work on a port other than 80, so that it would still work for people who have Apache installed locally.
90% solution
A starting point would be jwilder/nginx-proxy, a dockerized Nginx reverse proxy. It automates the procedure you described in the question by generating Nginx configuration based on other containers' env vars.
You also need a way to make your domain resolve to the Nginx reverse proxy. The most common way is to use /etc/hosts. The route would look like this:
Browser searches for domain
/etc/hosts says the domain resolves to ip of reverse proxy
Reverse proxy checks if the domain exists in its settings
Reverse proxy forwards the request to appropriate container
But because you want the Nginx reverse proxy to run on a port other than 80, you will still need to access the domain with a port. So this solution is not 100% of what you want, but it is still a good starting point for understanding how to mock domain names with a reverse proxy:
run docker run -d -p 8080:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
add env var VIRTUAL_HOST=some_domain.dev to your LAMP container (you can also use VIRTUAL_PORT={{any port number}} if your app inside the container runs on a port other than 80)
add 127.0.0.1 some_domain.dev to /etc/hosts
access http://some_domain.dev:8080
If you later want to add another containerized app with a different domain, you should:
add env var VIRTUAL_HOST=another_domain.dev to your new app's container
add 127.0.0.1 another_domain.dev to /etc/hosts
access http://another_domain.dev:8080
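For convenience, the same 90% setup expressed in compose form (a sketch; my_lamp_image stands in for your LAMP image, as in the compose file further below):

version: '3.3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      # the reverse proxy answers on host port 8080, as required by the EDIT
      - "8080:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  lamp:
    image: my_lamp_image
    environment:
      # nginx-proxy routes requests for this hostname to the container
      VIRTUAL_HOST: some_domain.dev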
100% solution
If you want to mock domain names in a maintainable way that can scale across a development team, you can use jwilder/nginx-proxy together with mitm-nginx-proxy-companion and a browser proxy extension. mitm-nginx-proxy-companion includes mitmproxy and a DNS server, which allows it to send "local domain" requests to the reverse proxy and the rest of the requests to the real internet.
After the setup, the whole route will look like this:
You try to access a domain in the browser.
The proxy extension forwards that request to mitmproxy instead of the "real" internet.
mitmproxy tries to resolve the domain name through the DNS server in the same container.
If the domain is not a "local" one, it will forward the request to the "real" internet.
But if the domain is a "local" one, it will forward the request to the reverse proxy.
The reverse proxy in its turn forwards the request to the appropriate container that includes the service we want to access.
With mitm-nginx-proxy-companion you do not need to use /etc/hosts, so if you tried the first solution, you should remove the entries you added earlier. I am giving the solution in docker-compose form for ease of writing/reading it:
version: '3.3'
services:
  lamp:
    environment:
      VIRTUAL_HOST: some_domain.dev
      VIRTUAL_PORT: 9999
    image: my_lamp_image
  another_app:
    environment:
      VIRTUAL_HOST: another_domain.dev
    image: my_app_image
  nginx-proxy:
    image: jwilder/nginx-proxy
    labels:
      - "mitmproxy.proxyVirtualHosts=true"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-mitm:
    dns:
      - 127.0.0.1
    image: artemkloko/mitm-nginx-proxy-companion
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Then you would need to:
Run docker-compose up
Add a proxy extension to your browser, with proxy address being 127.0.0.1:8080
Access http://some_domain.dev
Access http://another_domain.dev
Note: I am the author of mitm-nginx-proxy-companion

Docker compose not exposing port for application container

I have exposed port 80 in my application container's dockerfile.yml, as well as mapping "80:80" in my docker-compose.yml, but I only get "Connection refused" after I run docker-compose up and try an HTTP GET on port 80 at my docker-machine's IP address. My Docker Hub provided RethinkDB instance's admin panel gets mapped just fine through that same dockerfile.yml ("EXPOSE 8080") and docker-compose.yml (ports "8080:8080"), and when I start the application on my local development machine, port 80 gets exposed as expected.
What could be going wrong here? I would be very grateful for a quick insight from anyone with more docker experience!
So in my case, my service containers were both bound to localhost (127.0.0.1), and therefore the exposed ports were seemingly never picked up by my docker-compose port mapping. I configured my services to bind to 0.0.0.0 instead, and now they work flawlessly. Thank you @creack for pointing me in the right direction.
In my case I was using
docker-compose run app
Apparently, the docker-compose run command does not create any of the ports specified in the service configuration.
See https://docs.docker.com/compose/reference/run/
I started using
docker-compose create app
docker-compose start app
and the problem was solved.
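As a side note, the same reference page documents a flag that enables the service's ports for a one-off container, which may be a lighter alternative (I have not verified it in this setup):

docker-compose run --service-ports app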
In my case I found that the service I was trying to set up had all of its networks set to internal: true. It is strange that this didn't cause an issue when doing a docker stack deploy.
I have opened up https://github.com/docker/compose/issues/6534 to ask for a proper error message so it will be obvious for other people.
If you are using the same Dockerfile, make sure you also expose port 80 (EXPOSE 80); otherwise, your compose mapping 80:80 will not work.
Also make sure that your HTTP server listens on 0.0.0.0:80 and not on localhost or a different port.
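For instance, a minimal pairing of the two files (a sketch; the service name app and the build context are placeholders):

# Dockerfile
EXPOSE 80

# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "80:80"  # host:container; the server inside must bind 0.0.0.0:80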
