Bind incoming docker connection to specific hostname inside docker container

I'm trying to migrate some Webpack based projects to run inside docker containers and have some issues with configuring networking.
Our Webpack devServer is configured in the following way:
{
  host: 'dev.ng.com',
  port: 4000,
  compress: true,
  disableHostCheck: true
}
in /etc/hosts file we have the following record:
127.0.0.1 dev.ng.com
and everything works fine.
When I ran it inside docker I was getting an EADDRNOTAVAIL error until I added the following lines to my docker-compose.yml:
extra_hosts:
  - "dev.ng.com:127.0.0.1"
But now my app inside the docker app is not available from the host.
The relevant docker-compose.yml part is following:
gui-client:
  image: "gui-client"
  ports:
    - "4000:4000"
  extra_hosts:
    - "dev.ng.com:127.0.0.1"
If I change host: 'dev.ng.com' to host: '0.0.0.0' in my Webpack config it works fine, but I'd prefer not to change the Webpack config and to run it as is.
My knowledge of Docker networking internals is limited, but I guess that all inbound connections to the container from the host should be redirected to dev.ng.com:4000, while now they are redirected to 0.0.0.0:4000. Can this be achieved?

Yes, 127.0.0.1 is normally reachable only from localhost. In this respect containers behave much like virtual machines.
You need to configure the server to listen everywhere, so very likely "dev.ng.com:0.0.0.0" is what you want in extra_hosts. Such a setting should be used carefully in normal circumstances, because usually we do not want to expose internal services to the internet; here, though, it only serves to make your configuration independent of the IP/netmask that Docker assigns to your container.
Besides that, you need to forward incoming connections from the host to your container. This can be done with the following in your docker-compose.yml:
ports:
  - "0.0.0.0:4000:4000"
Possibly you will also want to make port 4000 of the host reachable from the outside world; that can be done via your firewall rules.
In professional configurations there is typically some frontend in between (to provide encryption/security/load balancing), but if you only want to show your work to your boss, http://a.b.c.d:4000 is quite enough.
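Putting the two pieces together, a minimal docker-compose.yml sketch for the gui-client service might look like this (the extra_hosts mapping is the one suggested above; the image name is taken from the question):

```yaml
gui-client:
  image: "gui-client"
  ports:
    # publish container port 4000 on all host interfaces
    - "0.0.0.0:4000:4000"
  extra_hosts:
    # make dev.ng.com resolve to 0.0.0.0 inside the container,
    # so the Webpack devServer binds to all interfaces
    - "dev.ng.com:0.0.0.0"
```

This keeps host: 'dev.ng.com' in the Webpack config unchanged while still letting the devServer accept connections arriving through Docker's port mapping.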

Related

disable IP forwarding if no port mapping definition in docker-compose.yml file

I am learning Docker networking. I created a simple docker-compose file that starts two Tomcat containers:
version: '3'
services:
  tomcat-server-1:
    container_name: tomcat-server-1
    image: .../apache-tomcat-10.0:1.0
    ports:
      - "8001:8080"
  tomcat-server-2:
    container_name: tomcat-server-2
    image: .../apache-tomcat-10.0:1.0
After I start the containers with docker-compose up, I can see that tomcat-server-1 responds on http://localhost:8001. At first glance, tomcat-server-2 is not available from localhost. That's great; this is what I need.
When I inspect the two running containers I can see that they use the following internal IPs:
tomcat-server-1: 172.18.0.2
tomcat-server-2: 172.18.0.3
I see that tomcat-server-1 is also available from the host machine via http://172.18.0.2:8080.
Then the following surprised me:
tomcat-server-2 is also available from the host machine via http://172.18.0.3:8080, even though no port mapping is defined for this container in the docker-compose.yml file.
What I would like to achieve is the following:
The two Tomcat servers must see each other on the internal Docker network via hostnames.
A Tomcat server must be available from the host machine ONLY if a port mapping is defined for it in the docker-compose file, e.g. "8001:8080".
If no port mapping is defined, the container must NOT be available, either from localhost or via its internal IP, e.g. 172.18.0.3.
I have tried different network configurations such as bridge, none, and host mode, with no success.
Of course, host mode cannot work because both Tomcat containers use the same internal port 8080. So, if I am correct, only bridge or none mode can be considered.
Is it possible to configure the Docker network this way?
It would be great to solve this via the docker-compose file alone, without any external docker, iptables, etc. manipulation.
Without additional firewalling setup, you can't prevent a Linux-native host from reaching the container-private IP addresses.
That having been said, the container-private IP addresses are of extremely limited use. You can't reach them from other hosts. If Docker is running in a Linux VM (as the Docker Desktop application provides on macOS or Windows), then the host outside the VM can't reach them either. In most cases I would recommend against looking up the container-private IP addresses at all, since they're not especially useful.
I wouldn't worry about this case too much. If your security requirements need you to prevent non-Docker host processes from contacting the containers, then you probably also have pretty strict controls over what's actually running on the host and who can log in; you shouldn't have unexpected host processes that might be trying to connect to the containers.
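As for the first requirement, the two Tomcat servers seeing each other by hostname: Compose's default network already provides DNS entries named after each service, so no extra configuration is needed. A minimal sketch (image names taken from the question, comments mine):

```yaml
version: '3'
services:
  tomcat-server-1:
    image: .../apache-tomcat-10.0:1.0
    ports:
      - "8001:8080"   # published to the host
  tomcat-server-2:
    image: .../apache-tomcat-10.0:1.0
    # no ports published to the host; still reachable from
    # tomcat-server-1 as http://tomcat-server-2:8080
```

Inside tomcat-server-1, the name tomcat-server-2 resolves to that container's address on the shared Compose network.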

Using Docker Compose to expose DDEV web container ports to host machine

I have configured a DDEV Drupal environment in which I need to run Pattern Lab for my theme. This environment will be used by students of mine who may not be well-versed in installing Node or Node dependency tools on their local computers (Pattern Lab requires Node). As a workaround, I am setting up Pattern Lab to run in DDEV's web container. The problem I am having is that since Pattern Lab is running in a DDEV container, I can't access it on my host computer. Has anyone done something similar to expose Docker ports to the host machine?
Under the hood, DDEV uses docker-compose to define and run the multiple containers that make up the local environment for a project. docker-compose supports defining multiple compose files to facilitate sharing Compose configurations between files and projects, and DDEV is designed to leverage this ability. Here are the steps I took to solve this issue:
Creating a new docker-compose*.yaml file:
Inside .ddev/ I created a file called docker-compose.patternlab.yaml. The second part of the file name (patternlab) can be anything you want; it makes sense to use a name related to the action, app, or service you are trying to implement.
I added the code below to expose the web container's port 3000 to the host's port 3000 (https) and 3001 (http):
# Override the web container's standard HTTP_EXPOSE and HTTPS_EXPOSE services
# to expose port 3000 of DDEV's web container.
version: '3.6'
services:
  web:
    # ports are a list of exposed *container* ports
    ports:
      - "3000"
    environment:
      - HTTP_EXPOSE=${DDEV_ROUTER_HTTP_PORT}:80,${DDEV_MAILHOG_PORT}:8025,3001:3000
      - HTTPS_EXPOSE=${DDEV_ROUTER_HTTPS_PORT}:80,${DDEV_MAILHOG_HTTPS_PORT}:8025,3000:3000
After this file is updated, save your changes and restart DDEV.
Now I can access Pattern Lab on my host computer by going to my site's URL and appending port 3000 or 3001, depending on the protocol, like this:
https://mysite.ddev.site:3000 or http://mysite.ddev.site:3001.
For more information on defining new services with docker compose, read the DDEV docs.
I hope this helps.

Setup Nginx reverse-proxy

I am absolutely new to the self-hosting community. I want to set up a home server that grants access to different applications. Using docker I got a WordPress and a Nextcloud app up and running. Now I want to add Bitwarden and make it accessible via vault.myhosting.xx. Later I want to add SSL via Let's Encrypt.
I am using jwilder/nginx-proxy, which makes it very easy to add a new virtual host by making minor changes to the docker-compose.yml of the specific app.
I wanted to do the same for Bitwarden (I had some bugs when editing the docker-compose.yml, see issue:188). The Bitwarden developer suggested adjusting the reverse-proxy instead, but I do not know how to do this. I tried to add a new virtual host in the reverse-proxy container, but I do not understand how to do the linking to the Bitwarden container.
A warning: English is not my native language, so I'm not sure I understand your problem exactly.
Roughly, this is how it works:
Any domain name is an alias for a real IP address.
Special name servers (DNS) keep these domain-to-IP pairs.
When a user enters an address, it is looked up via DNS.
A DNS server may be public, like Google DNS (8.8.8.8), or specific to your internet provider.
How Docker works:
Out of the box, Docker uses several network drivers:
bridge, host, null, overlay...
For example, with the bridge driver:
Your machine running Docker is the HOST.
Any container can publish its ports to host ports.
In docker-compose that looks like this:
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
This means that host port 8080 becomes an alias for port 80 of the container. Only one application at a time can listen on a given host port; if you try to add another, the container will fail to start, so use different ports. For ports below 1024 you may need root access.
Now that the app publishes a port, you can access it at an address like localhost:8080.
If you need access to this port from the internet, there are two more problems to solve:
1) Router NAT -> most routers have a setting to expose a host port to the internet (port forwarding). The exact way is vendor-specific, but not very hard to set up.
2) Dynamic IP -> this can be resolved with a service like http://GoDaddy.com or http://noip.com . You can also buy a domain name.
Also, read or watch material about "docker network".
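Coming back to the original question: jwilder/nginx-proxy routes requests by inspecting the VIRTUAL_HOST environment variable of containers that share a Docker network with it. A sketch of what the Bitwarden service could look like (the image name and network name here are assumptions, not taken from your setup):

```yaml
services:
  bitwarden:
    image: vaultwarden/server   # assumed image; use whichever Bitwarden image you deploy
    environment:
      # nginx-proxy picks this up and creates the virtual host
      - VIRTUAL_HOST=vault.myhosting.xx
    networks:
      - proxy-net   # must be a network the nginx-proxy container is also attached to

networks:
  proxy-net:
    external: true
```

No links: entries are needed; being on the same Docker network as the proxy is enough for nginx-proxy to discover the container.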

Is it possible to tie a domain to a docker container when building it?

Currently, in the company where I am working, they have a central development server that contains a LAMP environment. Each developer accesses the application as developer_username.domain.com. The application we're working on uses licenses; the licenses are generated under each domain and tied to that domain only, meaning I can't use another developer's license.
The following example will give you an idea:
developer_1.domain.com ==> license1
developer_2.domain.com ==> license2
developer_n.domain.com ==> licenseN
I am trying to dockerize this environment, at least having PHP and Apache in a container, and I was able to create everything I need and it works. Take a look at this docker-compose.yml:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reypm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
      - "9001:9001"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
That builds what I need, but the problem comes when I try to access the server: because I am using http://localhost, the license won't work and I won't be able to use the application.
The idea is to access it as developer_username.domain.com, so my question is: should this be done in the Dockerfile or in Docker Compose (i.e. at the image/container level, say by setting an ENV var), or is this a job for /etc/hosts on the host running Docker?
tl;dr
No! Docker doesn't do that for you.
Long answer:
What you want is a custom hostname on the machine hosting Docker mapped to a container in the Docker Compose network, right?
Let's take a step back and see how networking in docker works:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This network is not the same as your host network, and without explicitly published ports (for a specific container) you wouldn't have access to it. All publishing does is this:
The exposed port is accessible on the host and the ports are available to any client that can reach the host.
From now on you can put a reverse proxy (like nginx) or you can edit /etc/hosts to define how clients can access the host (i.e. Docker host, the machine running Docker compose).
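For example, each client machine could map the developer hostname to the Docker host's address with an /etc/hosts entry like the following (the IP here is hypothetical):

192.168.1.50    developer_1.domain.com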
The hostname is defined when you start the container, overriding anything you attempt to put inside the image. At a high level, I'd recommend doing this with a custom docker-compose.yml and a volume per developer, with everyone running an identical image. The docker-compose.yml can include the hostname and domain settings. Everything else that needs to be hostname-specific on the filesystem, including the license itself, should point to files on the volume. Lastly, include an entrypoint that does the right thing when a new hostname is started with a default or empty volume: populate it with the new hostname's data and prompt for the license.
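As a sketch of that recommendation, each developer could get a compose file like the one below, all using the same image but differing in hostname and volume (the service, hostname, and volume names here are illustrative assumptions):

```yaml
version: '2'
services:
  php-apache:
    image: reypm/php55-dev
    # hostname + domainname make the container see itself
    # as developer_1.domain.com
    hostname: developer_1
    domainname: domain.com
    ports:
      - "80:80"
    volumes:
      # per-developer volume holding hostname-specific files and the license
      - developer_1_data:/var/www

volumes:
  developer_1_data:
```

Starting a second developer's environment then only requires changing hostname and the volume name, while the image stays identical.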

Sporadic 503s from specified ports

I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers from the same port, and one Kibana 3 container exposed on port 80.
I therefore want to send requests on specified ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:
kibana:
  image: rancher/load-balancer-service
  ports:
    - 5602:5602
    - 5603:5603
    - 5604:5604
  links:
    - kibana3:kibana3
    - kibana4-logging:kibana4-logging
    - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
Everything works as expected, but I get sporadic 503's. When I go into the container and look at the haproxy.cfg I see:
frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
    bind *:5603
    mode http
    default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
    mode http
    timeout check 2000
    option httpchk GET /status HTTP/1.1
    server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
    server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
    server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601
The IPs listed are those of all three Kibana containers. The first server has a health check on it, but the others do not (kibana3/kibana4.1 don't have a status endpoint). My understanding of the docker-compose config is that there should be only one server per backend, but all three are listed. I assume this partly explains the sporadic 503s, and removing the extra servers manually and restarting the haproxy service does seem to solve the problem.
Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?
I posted on the Rancher forums as that was suggested from Rancher Labs on twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358
Someone from rancher posted a link to a github issue which was similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475
In summary, the load balancers rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed does work with my configuration, even if it is slightly inelegant.
labels:
  # Create a rule that forces all traffic to redis at port 3000 to have a hostname of bogus.com.
  # This eliminates any traffic from port 3000 being directed to redis.
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api at port 6379 to have a hostname of bogus.com.
  # This eliminates any traffic from port 6379 being directed to api.
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379
(^^ Copied from the Rancher GitHub issue; not my workaround.)
I'm going to see how easy it would be to route via port, and raise a PR/GitHub issue, as I think this is a valid use case for an LB in this scenario.
Make sure that you are using the port initially exposed by the docker container. For some reason, if you bind it to a different port, HAProxy fails to work. If you are using a container from Docker Hub that uses a port already taken on your system, you may have to rebuild that container to use a different port, routing it through a proxy like nginx.