Why doesn't cURL work from inside a Docker container?

I have two services in Docker (each service has its own docker-compose.yml, nginx + php-fpm).
Service #1 on port 48801.
Service #2 on port 48802.
My server IP is 99.99.99.44 (CentOS 8).
I make a cURL request (via PHP) from inside Service #1 to Service #2 (i.e. to 99.99.99.44:48802), but I get the following error:
Failed to connect to 99.99.99.44 port 48802 after 1017 ms: Host is unreachable
There is a problem with my server. I need help (or a direction to look in).
Some info:
On another server these services work fine.
A request from inside a container to port 80 of this server works fine.
A request from the host (not from inside a container) to custom port 48802 works fine.
All services are available from a browser (via the custom ports).
SELinux disabled.
Firewalld disabled.
My ip route result:
default via 99.99.99.1 dev eno1 proto static metric 100
99.99.99.1 dev eno1 proto static scope link metric 100
172.18.0.0/16 dev br-2f405adcc89d proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-19c596fe7618 proto kernel scope link src 172.19.0.1
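One direction worth checking, as a workaround rather than a confirmed fix: instead of looping back through the host's public IP, the two compose projects can share an external Docker network so Service #1 reaches Service #2 directly by container name. A minimal sketch, assuming a hypothetical network name shared-net and a hypothetical service name service2-nginx:

# create the shared network once on the host
docker network create shared-net

# in each project's docker-compose.yml, attach the default network to it
networks:
  default:
    external: true
    name: shared-net

With that in place, the PHP cURL target in Service #1 can be http://service2-nginx:80 instead of 99.99.99.44:48802.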

Related

How to channel docker container traffic through a host's network adapter

I have the following network routes on my host PC. I am using SoftEther VPN; it is set up on the adapter vpn_se.
$ ip route
default via 192.168.43.1 dev wlan0 proto dhcp metric 600
vpn.ip.adress via 192.168.43.1 dev wlan0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.30.0/24 dev vpn_se proto kernel scope link src 192.168.30.27
192.168.43.0/24 dev wlan0 proto kernel scope link src 192.168.43.103 metric 600
Now I want to route all traffic from a docker container through vpn_se, something like:
172.17.0.6 via 192.168.30.1 dev vpn_se
How can I achieve this?
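A sketch of one common approach (untested here; the table number 100 is arbitrary and the 192.168.30.1 gateway is an assumption): use policy routing so traffic sourced from the docker0 subnet takes a default route over vpn_se.

# dedicated routing table whose default route goes via the VPN gateway (assumed 192.168.30.1)
ip route add default via 192.168.30.1 dev vpn_se table 100
# send packets originating from the docker0 subnet to that table
ip rule add from 172.17.0.0/16 table 100
# flush cached routes so the rule takes effect immediately
ip route flush cache

Depending on the VPN setup, an iptables MASQUERADE rule on vpn_se may also be needed so the container traffic is NATed.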

Docker-compose "ports": listen on multiple IP addresses / IP range

Instead of listening on a single IP address, e.g. localhost:
ports:
- "127.0.0.1:80:80"
I want the container to listen only on a local network, e.g.:
ports:
- "10.0.0.0/16:80:80"
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.SERVICE.ports contains an invalid type, it should be a number, or an object
Is this possible?
I don't want to use things like swarm mode etc., yet.
If an IP range is not supported, can I at least use multiple IP addresses, like 10.0.0.2 and 10.0.0.3?
ERROR: for CONTAINER Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
ERROR: for SERVICE Cannot start service SERVICE: driver failed programming external connectivity on endpoint CONTAINER (...): Error starting userland proxy: listen tcp 10.0.0.3:80: bind: cannot assign requested address
Or is it not even supported to listen on 10.0.0.3?
The host machine is connected to 10.0.0.0/16:
> ifconfig
ens10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.0.0.2 netmask 255.255.255.255 broadcast 10.0.0.2
inet6 f**0::8**0:ff:f**9:b**7 prefixlen 64 scopeid 0x20<link>
ether **:00:00:**:**:** txqueuelen 1000 (Ethernet)
"Listening to" a single IP address is not quite the right way to put it; the service listens at (binds to) an IP address.
Let's say your VM has two network interfaces (ethernet cards):
Network 1 → subnet: 10.0.0.0/24 and IP 10.0.0.100
Network 2 → subnet: 10.0.1.0/24 and IP 10.0.1.200
If you set 127.0.0.1:80:80, that means your service listens on 127.0.0.1 (localhost), port 80.
If you want to access the service from the 10.0.0.0/24 subnet, you should set 10.0.0.100:80:80 and use http://10.0.0.100:80 to connect to your container from external hosts.
If you want to access the service from multiple networks simultaneously, you can bind the container port to multiple host IP:port combinations (the IP being the host interface the clients connect to):
ports:
- 10.0.0.100:80:80
- 10.0.1.200:80:80
- 127.0.0.1:80:80
And don't forget to open port 80 in the VM's firewall, if a firewall exists and restricts that network.
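For example, if firewalld happens to be the firewall in use (an assumption, not stated in the question), opening the port could look like:

firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload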
I think you misunderstood this field.
When you map 127.0.0.1:80:80 you map the host's 127.0.0.1 interface to your container.
In the case of 127.0.0.1, you can only access it from inside your host.
When you map 10.0.0.3:80:80 you map the host's 10.0.0.3 interface to your container, and every IP that can reach 10.0.0.3 will have access to your Docker container mapping.
But in any case, this field will not do any filtering of who accesses the container.
EDIT: After your modification I see that I misunderstood your question.
You want Docker to create a "bridge interface" so as not to share the IP of your host.
I don't think this is possible when using port mapping.
If you give Compose ports: (or docker run -p) an IP address, it must be a specific known IP address of a host interface, or 0.0.0.0 for "all interfaces". The Docker daemon gives this specific IP address to a bind(2) call, which takes an address and not a network, and follows the rules in ip(7) for IPv4.
With the output you show, you can only bind containers to 10.0.0.2. If you want to use other IP addresses on the same network, you also need to assign them to the host; see for example How can I (from CLI) assign multiple IP addresses to one interface? on Ask Ubuntu, and then you can bind a container to the newly-added address.
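A minimal sketch of that step, assuming ens10 is the interface attached to the 10.0.0.0/16 network (as in the ifconfig output above); note this form is not persistent across reboots:

# add a second address to the existing interface
ip addr add 10.0.0.3/16 dev ens10

After that, a mapping like 10.0.0.3:80:80 can bind successfully.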
If your system is on multiple physical networks, you can have any number of ports: so long as the host address and host port are unique. In particular you can have multiple ports: that all forward to the same container port.
ports:
# make this visible to the external load balancer on port 80
- '192.168.17.2:80:3000'
# also make this visible to the internal network on port 80
- '10.0.0.2:80:3000'
# and the management network but on port 3000
- '10.99.0.36:3000:3000'
Again, the host must already have these IP addresses in the ifconfig output.

Wireguard in a Docker container cannot connect to bridged containers' forwarded ports

I have the following setup:
Raspi with Docker and multiple containers connected to my router. Some containers are on a MACVLAN network and receive a regular IP address in my LAN (e.g. Pihole, Unbound, etc.); some are on bridged networks and expose certain ports (Portainer, nginx, etc.).
Router LAN (192.x.y.0/24)
|Raspi (192.x.y.5)
|Pihole (192.x.y.11)
|Webserver (192.x.y.20)
|Wireguard (192.x.y.13) - (VPN: 10.x.y.0/32, DNS 192.x.y.11) - (Allowed IPs: 192.x.y.0/24)
|
|Portainer (bridged - exposing 8000, 9000, 9443)
|NGINX (bridged - exposing 81, 80, 443)
When I connect a client through Wireguard,
I can access the internet (Pihole on 192.x.y.11 works as DNS - adblocking works)
I can access Piholes webUI on 192.x.y.11
I can access my webserver on 192.x.y.20
NOT working:
I cannot access the Portainer UI or NGINX UI on their respective forwarded IP:ports e.g. 192.x.y.5:81 for NGINX
What is missing in my config? I have found nothing that solves this issue - please help!
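For reference, a rough compose-style sketch of the two kinds of networks described above; the service names and the parent interface are assumptions, and the addresses simply mirror the masked values from the question:

services:
  wireguard:
    networks:
      lan_macvlan:
        ipv4_address: 192.x.y.13   # gets its own LAN address via macvlan
  portainer:
    ports:
      - "8000:8000"
      - "9000:9000"
      - "9443:9443"                # published on the host (192.x.y.5) via the default bridge

networks:
  lan_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0                 # assumed parent interface on the Raspi
    ipam:
      config:
        - subnet: 192.x.y.0/24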

Docker Nginx-Proxy Container used for Port 80 Forwarding to other containers based on Domain

I am trying to set up a Docker Nginx Proxy server to forward incoming requests to their corresponding Docker Container on 192.168.1.120 or to the Router's Web-Admin at 192.168.1.1
So right now I am in a bit of a pickle, but I need to set this up regardless. Here is my current setup:
Router 192.168.1.1 (Web Admin + Port Forwarding)
Server1 LAMP - (Router Forwards -> port 80 for LAMP Server)
Server2 Docker - (Router Forwards -> 20 SSH, 8080, 9000 Docker Admin)
So I have to configure the port forwarding through my Router's web interface, which is accessible on port 8080. The issue is that I have since moved to Florida, and I had stupidly added a port-forwarding rule on 8080 that forwards to the Shipyard Docker manager, where I eventually planned to install an Nginx-Proxy forwarding Docker container. I never got that forwarding container working, and I eventually switched to Portainer on port 9000, which I used because it was the only other port I had forwarded before I lost access to my Router's web interface, and thus lost the ability to forward new ports.
The downside is that I cannot access my Router's web interface. The upside is that I still have to implement an Nginx-Proxy port-forwarding container anyway, to set up dynamic port 80 forwarding to different Docker containers based on the URL.
So I want to move my LAMP server into a new Docker container, and then I will also have a few other Rails Docker containers - but I need to configure a Docker container to forward the app to different servers based on the port. I assume I need to have 2 containers running - one for port 80 forwarding and one for port 8080 forwarding - this is not a problem.
I have not been able to configure Nginx correctly: an incoming request to the domain name I have pointed at my server (my.domain.com below) needs to get forwarded to my router at 192.168.1.1. Any help / suggestions on how to configure my Nginx-Proxy Docker container to forward this correctly, or on what I should set up here to forward incoming requests to a web server dynamically based on the URL? I can install any Docker containers I need for this.
My current config (/etc/nginx/nginx.conf), running in an Nginx-Proxy Docker container on port 8080 (Google for the nginx-proxy Docker image):
# My Nginx Config to forward my.domain.com
http {
    resolver 127.0.0.1;
    access_log /var/logs/nginx/access.log;

    server {
        listen 8080;
        server_name my.domain.com;
        return 301 http://192.168.1.1:8080/$request_uri;
    }
}
I get these errors:
[error] 55#55: *2274 datacenter.URL.com could not be resolved (110: Operation timed out), client: 166.172.189.185, server: datacenter.URL.com, request: "GET / HTTP/1.1", host: "datacenter.URL.com:8080"
[error] 55#55: recv() failed (111: Connection refused) while resolving, resolver: 192.168.1.1:8080
EDIT: I just noticed that I can only have one Docker container running at a time for each port. So I need to figure out how to forward requests to different servers + ports based on the domain name. Each URL forwarding rule entry needs to be able to go to a different server, each running on a different port.
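For the domain-based part, a minimal nginx sketch under stated assumptions (the hostnames and the backend port 3000 are hypothetical placeholders, not taken from the actual setup): one server block per domain name, each proxying to a different backend host and port, all on the same listen port.

events {}

http {
    server {
        listen 80;
        server_name app1.my.domain.com;            # hypothetical app domain
        location / {
            proxy_pass http://192.168.1.120:3000;  # hypothetical container port on the Docker host
            proxy_set_header Host $host;
        }
    }
    server {
        listen 80;
        server_name router.my.domain.com;          # hypothetical domain for the router admin
        location / {
            proxy_pass http://192.168.1.1:8080;    # the Router's web interface
        }
    }
}

Using proxy_pass rather than a 301 redirect keeps the client talking only to the proxy, so the backends never need to be reachable from outside the LAN.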

CentOS 7 minimal install can't talk to the internet

Newbie trying to install/set up CentOS 7. I can ping other machines in the domain, but can't ping the gateway, google.com, etc. I get "destination host unreachable" for the gateway and "unknown host google.com" when pinging google.com.
Please advise.
/etc/sysconfig/network-scripts:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp4s0
UUID=c39e3407-a566-4586-8fb9-fd4e3bfc4617
DEVICE=enp4s0
ONBOOT=yes
IPADDR="192.168.192.150"
GATEWAY="208.67.254.41"
DNS1="8.8.8.8"
DNS2="8.8.4.4"
/etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
/etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=centos7
GATEWAY=208.67.254.41
Since it says "unknown host google.com", the machine is not able to route the request to the internet DNS server (8.8.8.8) to resolve Google's IP, and when you ping the gateway you get "destination host unreachable".
For one machine to connect to another, both should be on the same LAN; if the target is not on the LAN, there should be a machine on the LAN that acts as a gateway. In your case you have pointed the gateway at 208.67.254.41, which is obviously not on your LAN, so 208.67.254.41 must be reachable through some machine that is on the LAN. To do so, use the route command, which adds a routing entry to the machine's routing table:
route add -host <target-ip> gw <gateway-ip> dev <interface>
In your case the command goes like:
route add -host 208.67.254.41 gw <lan-gateway-ip> dev enp4s0
e.g.: route add -host 192.168.12.45 gw 192.168.12.1 dev eth0
Comment out the IPv6 entries if IPv6 is not used.
Make sure IP forwarding is turned on on the gateway machine, in /etc/sysctl.conf.
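For example, a typical sysctl setup on that gateway machine:

# /etc/sysctl.conf on the gateway machine
net.ipv4.ip_forward = 1

# apply without rebooting
sysctl -p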
Have you disabled Network Manager?
Command line:
service NetworkManager status
