I am currently learning Docker in order to host a web page in a container using nginx. The web page accesses another container which runs Flask. I think I have already solved my problem, but I am not sure why my solution works and would be happy about some feedback.
After setting up the containers, I tried to access the web page from a Firefox browser running on the host, which was successful. However, the browser reported CORS problems as soon as a service of the web page tried to contact Flask.
After some hours of trying to fix the problem, I switched to the Chrome browser, which responded with a different error message indicating that the Flask address couldn't be reached at all.
The containers are started with docker-compose with the following yaml:
version: "3.8"
services:
  webbuilder:
    build: /var/www/html/Dashboard/
    container_name: webbuilder
    ports:
      - 4998:80
  watcher:
    build: /home/dev/watcher/
    container_name: watcher
    ports:
      - 5001:5001
    expose:
      - 5001
    volumes:
      - /var/www/services/venv_worker:/var/www/services/venv_worker
      - /var/www/services/Worker:/var/www/services/Worker
      - /home/dev/watcher/conf.d:/etc/supervisor.conf.d/
    command: ["/usr/bin/supervisord"]
webbuilder is the nginx server hosting the web page. watcher is the Flask server serving on 0.0.0.0:5001. I exposed this port and mapped it to the host for testing purposes.
I know that the containers generated with docker-compose are connected in a network and can be contacted using their container names instead of an actual ip address. I tested this with another network and it worked without problems.
The web page running on webbuilder starts the service that contacts watcher (where the Flask server is). Because the container names can be resolved, the web page used the following address for HTTP requests in my first approach:
export const environment = {
  production: true,
  apiURL: 'http://watcher:5001'
};
In this first attempt, there was no ports section in the docker-compose.yml, as I thought that the web page inside the container could contact the watcher container running Flask directly. This led to the CORS error described above.
In a desperate attempt to solve the problem, I replaced the container name in apiURL with the container's actual IP address and also mapped the Flask port 5001 to host port 5001.
Confusingly, this works. The following images show what I believe is happening.
The first picture shows my initial understanding of the situation. Since that did not work, I am now sure it is wrong to assume the HTTP request is executed by webbuilder. Instead, webbuilder only serves the homepage, while the service request is executed from the host, as depicted in image 2:
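If the browser on the host really is the one issuing the requests, the environment file has to use an address reachable from the host. A hypothetical revision (assuming the 5001:5001 mapping from the compose file stays in place; `localhost` here is my assumption, equivalent to the container IP used above but more portable):

```typescript
// environment.ts — hypothetical revision, assuming the browser on the host
// issues the requests: it must use the published host port, not the
// Docker-internal service name, which only resolves inside the network.
export const environment = {
  production: true,
  apiURL: 'http://localhost:5001'  // host port 5001 is mapped to watcher's 5001
};
```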
Is the situation described in image 2 correct? I think so, but it would be good if someone can confirm.
Thanks in advance.
Related
My setup: I have a Raspberry Pi at home, connected to my Fritzbox 6660 Cable over LAN. The Pi is running Docker with Portainer. While playing around and learning, I was able to deploy numerous different containers with different programs. Now I would like to be able to connect to those containers from outside my home network. In this example I will describe my problem with my Grafana container (but I tried other containers as well).
Currently running are Grafana, InfluxDB (to feed Grafana) and Nginx Proxy Manager.
I set up Nginx with the Docker Compose file from nginx's quick start page:
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
Once Nginx was running, I made sure that Grafana and Nginx were running on the same Docker network (nginx_default in this case).
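In compose terms, joining that existing network can look like this (a sketch; it assumes nginx_default was created by the proxy's compose project and is declared external here):

```yaml
services:
  grafana:
    image: grafana/grafana
    networks:
      - nginx_default
networks:
  nginx_default:
    external: true   # join the network created by the proxy's compose project
```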
For my custom domain I signed up for a DuckDNS account and created the domain "http://example.duckdns.org".
I used DuckDNS's install instructions to configure the DynDNS settings in my Fritzbox:
Update-URL: http://www.duckdns.org/update?domains=example&token=xxxxxxx-680f-4c66-a982-60d7e2f56911&ip=
Domain name: example.duckdns.org
Username: none (as stated on the DuckDNS install page)
Password: xxxxxxxx-680f-4c66-a982-60d7e2f56911
Don't worry, the "xxxxxx" is actually different in my case.
Further, I enabled port forwarding to the static IP address of my Raspberry on ports 80 and 443, since those are the ones Nginx needs.
Then I went to the Nginx Proxy Manager web page on port 81 and set up a proxy host like so:
Domain names: grafana.example.duckdns.org (I also tried without grafana at the beginning, same result)
Scheme: http
Forward Hostname: Raspberry Pi IP
Forward Port: 3000, because that's where I can reach Grafana
I also enabled "Block common exploits" and websockets support. I know I should enable SSL, but I won't for this example.
My Nginx now says this proxy host is online, but I still can't connect; the browser reports a timeout.
I have had this Raspberry Pi for two weeks now and have spent more than one of them just trying to figure out how to reach it over the web. I even tried Traefik at some point, but with no success either.
I have watched dozens of tutorials and reconstructed more than one documentation example, but every time those tutorials claim success and show their container web page from outside the home network, my browsers just give me "ERR_CONNECTION_TIMED_OUT".
I also tried No-IP and ddnss.
So if anyone has suggestions, I would highly appreciate them.
I am curious whether you could solve this problem, because I get a similar error and I have tried every possible IP combination in Nginx. I can reach the "Congratulations! You've successfully started the Nginx Proxy Manager." page from outside, but the redirection to the Docker container does not work.
Regards
I'm trying to migrate some Webpack-based projects to run inside Docker containers and have some issues with configuring networking.
Our Webpack devServer is configured in the following way:
{
  host: 'dev.ng.com',
  port: 4000,
  compress: true,
  disableHostCheck: true
}
In the /etc/hosts file we have the following record:
127.0.0.1 dev.ng.com
and everything works fine.
When I ran it inside Docker, I was getting an EADDRNOTAVAIL error until I added the following lines to my docker-compose.yml:
extra_hosts:
  - "dev.ng.com:127.0.0.1"
But now my app inside the Docker container is not available from the host.
The relevant docker-compose.yml part is following:
gui-client:
  image: "gui-client"
  ports:
    - "4000:4000"
  extra_hosts:
    - "dev.ng.com:127.0.0.1"
If I change host: 'dev.ng.com' to host: '0.0.0.0' in my Webpack config, it works fine, but I would prefer not to change the Webpack config and to run it as is.
My knowledge of Docker network internals is limited, but I guess that all inbound connections from the host to the Docker container should be redirected to dev.ng.com:4000, while they are currently redirected to 0.0.0.0:4000. Can this be achieved?
Yes, 127.0.0.1 is normally reachable only from localhost itself; in this respect, containers behave like virtual machines.
You need to configure the server to listen on all interfaces, so very likely "dev.ng.com:0.0.0.0" is what you want. Such a setting should be used carefully in normal circumstances, because usually we do not want to expose internal services to the internet. Here, however, it only serves to make your configuration independent of the IP/netmask that Docker assigns to your container app.
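In compose terms, the suggestion would look something like this (a sketch; only the extra_hosts entry changes relative to the original file):

```yaml
gui-client:
  image: "gui-client"
  ports:
    - "4000:4000"
  extra_hosts:
    - "dev.ng.com:0.0.0.0"   # dev.ng.com now resolves to 0.0.0.0, so Webpack binds all interfaces
```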
Besides that, you need to forward the host's incoming connections to your container. This can be done with a ports entry in your docker-compose.yml:
ports:
  - "0.0.0.0:4000:4000"
Possibly you will also want to make port 4000 of the host reachable from the external world; this can be done through your firewall rules.
In professional configurations there is typically some frontend in front (to provide encryption/security/load balancing), but if you only want to show your work to your boss, http://a.b.c.d:4000 is quite enough.
I have a gRPC service that I've implemented in TypeScript and that I'm testing interactively with BloomRPC. I've set it up (both client and server) with insecure connections to get things up and running. When I run the service locally (on port 3333), I'm able to interact with it perfectly using BloomRPC: make requests, get responses.
However, when I run the service in a Docker container and expose the same ports to the local machine, BloomRPC returns an error:
{
  "error": "2 UNKNOWN: Stream removed"
}
I've double-checked the ports, and they're open. I've enabled the additional gRPC debug logging and tried tracing the network connections. I see a network connection reach the service in Docker, but it terminates immediately. In tcpdump traces I can see the connection coming in, but no response going back out from my service.
I've found other references to 2 UNKNOWN: Stream removed, which appear to be primarily related to SSL/TLS setup, but as I'm connecting insecurely, I'm uncertain what's happening in this failure. I have also verified that the service is actively running and logging in the Docker container, and that it responds perfectly well to HTTP requests on another port from the same process.
I'm at a loss as to what's causing the error and how to further debug it. I'm running the container using docker-compose, alongside a Postgres database.
My docker-compose.yaml looks akin to:
services:
  sampleservice:
    image: myserviceimage
    environment:
      NODE_ENV: development
      GRPC_PORT: 3333
      HTTP_PORT: 8080
      GRPC_VERBOSITY: DEBUG
      GRPC_TRACE: all
    ports:
      - 8080:8080
      - 3333:3333
  db:
    image: postgres
    ports:
      - 5432:5432
Any suggestions on how I could further debug this, or that might explain what's happening so that I can run this service reliably within a container and interact with it from outside the container?
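For what it's worth, one common cause of exactly this symptom (an assumption on my part, not something the question confirms) is the gRPC server binding to 127.0.0.1 inside the container, so Docker's published port forwards to nothing. A tiny TypeScript sketch of the bind address the server should use:

```typescript
// Sketch: inside a container, a gRPC server should bind to all interfaces,
// not loopback. With @grpc/grpc-js, this address would be passed to
// server.bindAsync(...) together with ServerCredentials.createInsecure()
// for the insecure setup described above.
function containerBindAddress(port: number): string {
  // 127.0.0.1 would only accept connections originating inside the container
  return `0.0.0.0:${port}`;
}

console.log(containerBindAddress(3333)); // "0.0.0.0:3333"
```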
I have 3 containers: a lighttpd server serving static content (front) and two Flask servers handling the backend (back and model).
This is my docker-compose.yml
version: "3"
services:
  front:
    image: ecd3:latest
    ports:
      - 4200:80
    tty: true
    links:
      - "back"
    depends_on:
      - back
    networks:
      - mynet
  back:
    image: esd3:latest
    ports:
      - 5000:5000
    links:
      - "model"
    depends_on:
      - model
    networks:
      - mynet
  model:
    image: mok:latest
    ports:
      - 5001:5001
    networks:
      - mynet
networks:
  mynet:
I'm trying to send an HTTP request to my Flask server (back) from my frontend (front). I have bound the Flask server to 0.0.0.0 and even used the service name in the frontend (http://back:5000/endpoint).
Trying to curl the flask server inside the frontend container (curl back:5000) gives me this:
curl: (52) Empty reply from server
Pinging the flask server from inside the frontend container works. This means that the connection must have been established.
Why can't I connect to my flask server from my frontend?
We discovered several things in the comments. Firstly, that you had a proxy problem that prevented one container from using the API in another container.
Secondly, and critically, you discovered that the service names in your Docker Compose configuration file are made available in the virtual networking system set up by Docker. So, you can ping front from back and vice-versa. Importantly, it's worth noting that you can do this because they are on the same virtual network, mynet. If they were on different Docker networks, then by design the DNS names would not be available, and the virtual container IP addresses would not be reachable.
Incidentally, since you have all of your containers on the same network, and you have not changed any network settings, you could drop this network for now. In other words, you can remove the networks definition and the three container references to it, since they can just join the default network instead.
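A trimmed sketch of the same file without the custom network (everything then joins Compose's default network, where the service names still resolve):

```yaml
version: "3"
services:
  front:
    image: ecd3:latest
    ports:
      - 4200:80
    tty: true
    depends_on:
      - back
  back:
    image: esd3:latest
    ports:
      - 5000:5000
    depends_on:
      - model
  model:
    image: mok:latest
    ports:
      - 5001:5001
```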
Thirdly, you learned that Docker's virtual DNS entries are not made available on the host, so front and back are not resolvable there. Even if they were (e.g. if manual entries were made in the hosts file), those IPs would not work, since there is no direct network route from the host to the containers.
Instead, those containers are exposed by a Docker device that proxies connections from a custom localhost port down to those containers (4200, 5000 and 5001 in your case).
A good interim solution is to load your frontend at http://localhost:4200 and hardwire its API address as http://localhost:5000. You may run into some CORS issues with that, though, since browsers will see these as different servers.
Moreover, if you go live, you may have problems with mobile networks and corporate firewalls: you will probably want your frontend app to sit on port 443, but since the API is a separate server, you will either need a different IP address for it, so it can also go on 443, or you will need to use another port. A clean solution is to put a frontend proxy in front of both containers and expose only the proxy in Docker. The proxy then sends HTTP requests from the outside to the correct container, based on filtering criteria set by you. I recommend Traefik for this, but there are undoubtedly several other approaches.
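A minimal Traefik sketch of that idea (the router rules and ports are illustrative assumptions based on your compose file, not a drop-in config):

```yaml
services:
  proxy:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"          # only the proxy is published to the host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  front:
    image: ecd3:latest
    labels:
      - traefik.http.routers.front.rule=PathPrefix(`/`)
      - traefik.http.services.front.loadbalancer.server.port=80
  back:
    image: esd3:latest
    labels:
      - traefik.http.routers.back.rule=PathPrefix(`/api`)
      - traefik.http.services.back.loadbalancer.server.port=5000
```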
Here is a little backstory. I implemented a couple of web APIs using a microservices architecture, and I am trying to make the microservices accessible via HTTPS. They are developed in .NET Core, so according to Microsoft's documentation, to enforce HTTPS I need to configure Kestrel. This is how I did it:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps("cert.pfx", "pwd");
    });
})
To keep it simple, I use Kestrel by itself and skip the reverse proxy. I will certainly add Nginx as a reverse proxy, but that is future work. I tested locally and it worked. Then I deployed it onto Docker. Here is the docker-compose.override file:
version: '3.4'
services:
  dataservice:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "5000:80"
      - "5001:443"
In the Dockerfile, ports 5000 and 5001 are exposed. I built the project into images and ran it on Docker using docker run -it --rm -p 5000:80 --name *name* *imagename*. Docker shows Now listening on: http://127.0.0.1:5000 and Now listening on: https://127.0.0.1:5001. Now the problem is, leaving the HTTPS part aside, the APIs cannot even be accessed over HTTP. The browser just shows This page isn't working. 127.0.0.1 didn't send any data. ERR_EMPTY_RESPONSE. I found a similar question here: Docker: cannot open port from container to host. Somehow this is about the server needing to listen on 0.0.0.0. Though I do not fully understand the reason, I changed the Kestrel configuration to
options.Listen(IPAddress.Any, 5000);
built and ran the Docker image again, and Docker shows Now listening on: http://0.0.0.0:5000, but it still doesn't work. I also tried replacing the IP with localhost, to no avail. I did not use .UseHttpsRedirection(), so HTTPS should have nothing to do with the problem.
Am I missing any configuration or doing anything wrong? It would be really helpful if anyone could shed some light. Thank you in advance.
You should listen on ports 80 and 443 inside the container, i.e. options.Listen(IPAddress.Any, 80);, because this Docker declaration
ports:
  - "5000:80"
means that the container's port 80 (the port from your source code) is published on external port 5000, and not the other way around.