OK... sorry to clog up this site with endless questions.
I have a .NET REST API that works in Docker (Windows container).
But the moment I try to connect to Postgres on my host, I am unable to connect. I get "unable to connect", "request timed out", "connection was actively refused"... I have modified my connection string over a thousand times trying to get this to work.
When I list my Docker networks I get:
C:\Windows\SysWOW64>docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
4c79ae3895aa   Default Switch   ics       local
40dd0975349e   nat              nat       local
90a25f9de905   none             null      local
When I inspect my container, it says it is using the nat network:
C:\Windows\SysWOW64>docker network inspect nat
[
    {
        "Name": "nat",
        "Id": "40dd0975349e1f4b334e5f7b93a3e8fb6aef864315ca884d8587c6fa7697dec5",
        "Created": "2020-07-08T15:02:17.5277779-06:00",
        "Scope": "local",
        "Driver": "nat",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "windows",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.96.0/20",
                    "Gateway": "172.22.96.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0d2dc2658a9948d84b01eaa9f5eb5a0e7815933f5af17e5abea17b82a796e1ec": {
                "Name": "***MyAPI***",
                "EndpointID": "3510dac9e5c5d49f8dce18986393e2855008980e311fb48ed8c4494c9328c353",
                "MacAddress": "00:15:5d:fc:4f:8e",
                "IPv4Address": "172.22.106.169/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.windowsshim.hnsid": "3007307C-49DC-4DB5-91C8-0E05DAC8E2B6",
            "com.docker.network.windowsshim.networkname": "nat"
        },
        "Labels": {}
    }
]
When I look at the network properties of my host I have:
Name: vEthernet (nat)
Description: Hyper-V Virtual Ethernet Adapter #2
Physical address (MAC): 00:15:5d:fc:43:56
Status: Operational
Maximum transmission unit: 1500
IPv4 address: 172.22.96.1/20
IPv6 address: fe80::d017:d598:692a:2e67%63/64
DNS servers: fec0:0:0:ffff::1%1, fec0:0:0:ffff::2%1, fec0:0:0:ffff::3%1
Connectivity (IPv4/IPv6): Disconnected
I am guessing that the nat network from docker network ls is linked to this Hyper-V adapter, since both have 172.22.96.1 as the IP address.
Connection string:
Server=172.22.96.1;Port=5433;Database=QuickTechAssetManager;Uid=QuickTech;Pwd=password;
So when I try to connect from the container to Postgres on the host, I get errors even though I can ping the IP address.
When I look at my hosts file, host.docker.internal is set to 10.0.0.47 (my Wi-Fi connection).
Is this "Disconnected" status part of my network problems?
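For reference, a connection string pointed at that hosts-file entry instead of the NAT gateway would be a sketch like this (it assumes Postgres is actually listening on the Wi-Fi address 10.0.0.47 and that pg_hba.conf allows connections from the container subnet):
Server=host.docker.internal;Port=5433;Database=QuickTechAssetManager;Uid=QuickTech;Pwd=password;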
I have posted a few questions on this and I get one answer and then nothing further.
I would absolutely love to have someone work with me for a bit to resolve this one - what should be minor - issue.
I have modified my pg_hba.conf file, I have done everything I can find...
I will give a phone number or email to anyone who wants to help me solve this. I have been killing myself for over a week and am getting nowhere. I am not even sure if this sort of request is allowed here, but I am desperate. I am three months into a project and can't get paid until I get this one minor problem solved.
here is the other question I asked a few days ago:
Docker container to connect to Postgres not in docker
rentedtrout@gmail.com for anyone who wants to work with me on this.
Please and thank you in advance.
Have you tried using the host network option?
docker run --network host "imagename"
This will allow the container to use the same network as the host, i.e. if you are able to connect to Postgres from the host, then you will be able to connect to it from the container as well (with the same IP address).
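For example, a minimal sketch, assuming your Docker engine supports host networking (the image name is a placeholder; the connection string can then target the host's own address):
docker run --network host myapi:latest   # myapi:latest is a placeholder image name
Server=localhost;Port=5433;Database=QuickTechAssetManager;Uid=QuickTech;Pwd=password;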
Related
I have a docker swarm on a cluster of machines, and my use case is deploying several standalone containers, which need to be connected and which have static IP configurations, so I created an overlay network to connect all the nodes of the swarm. I don't use/want to use anything related to docker SERVICES nor their replication in my docker swarm; it's not a real-world scenario, it's a test one.
The problem is that when I deploy a container to a certain host, a swarm load balancer is created with a certain IP address which is random, and I need it to be static because I can't change the configurations of the containers I want to deploy. I have already searched for how I can remove this load balancer, because as far as I'm concerned it's only used for external traffic coming into the swarm services/containers, and it is not useful for my use case.
A solution would be to deploy a dummy container, check which IP was assigned to the swarm load balancer on each node, and then adjust the configuration files of the containers I want to deploy, but this approach does not scale well and is a workaround for the actual problem. My problems started when, at random, my containers couldn't start, giving docker: Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded. I could not identify the reason this happened, and then inferred it was because these load balancers were using the same IP addresses I wanted to use for my containers.
My question is: how can I statically define the IP of these load balancers, or remove them completely on every node? Thank you for your time.
[Docker Swarm architecture diagram]
Here is the output of docker network inspect <my-overlay-network>:
"Name": "my-network",
"Id": "mo8rcf8ozr05qrnuqh64wamhs",
"Created": "2020-11-16T01:59:20.100290182Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.1.0/24",
"Gateway": "10.0.1.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"95b8e9c3ab5f9870987c4077ce264b96a810dad573a7fa2de485dd6f4b50f307": {
"Name": "unruffled_haslett",
"EndpointID": "422d83efd66ae36dd10ab0b1eb1a70763ccef6789352b06b8eb3ec8bca48410f",
"MacAddress": "02:42:0a:00:01:0c",
"IPv4Address": "10.0.1.12/24",
"IPv6Address": ""
},
"lb-my-network": {
"Name": "my-network-endpoint",
"EndpointID": "192ffaa13b7d7cfd36c4751f87c3d08dc65e66e97c0a134dfa302f55f77dcef3",
"MacAddress": "02:42:0a:00:01:08",
"IPv4Address": "10.0.1.8/24",
"IPv6Address": ""
}
`
I just used a wider subnet mask of /16 instead of /24, which allowed me to have more IP addresses and thus avoid collisions with the internal load balancers.
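For example, a sketch of recreating the overlay with the wider mask (the network name and the --attachable flag match the inspect output above; the exact subnet is illustrative):
docker network rm my-network
docker network create -d overlay --attachable --subnet 10.0.0.0/16 my-network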
I have an existing (MacVLAN) docker network called "VPN" to which I normally attach docker containers that I all want to run on a VPN. There are two single 'host' docker containers running openvpn, each with their own IP, and I attach other containers to these as I please.
I have recently moved, and my new router is at address 192.168.0.1. However, the old house's router had the gateway at 192.168.2.254, and the existing docker network had the subnet mask, the IP range and the gateway all configured for this.
If I run docker network inspect VPN it gives:
[
    {
        "Name": "VPN",
        "Id": [anidea],
        "Created": [sometimenottolongago],
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "IPRange": "192.168.2.128/28",
                    "Gateway": "192.168.2.254"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "enp5s0"
        },
        "Labels": {}
    }
]
There are two machines on the network, and I cannot access them currently. Both machines are containers to which other containers are attached.
I have tried:
Recreating the docker containers with new IP addresses on the subnet of the new home network. This doesn't work, as the docker network "VPN" allows only IPs in the old range.
Accessing the docker containers/machines at their old IPs. Then I get a timeout; possibly I need to set up some IP routing or something? This is where my knowledge (if any) starts getting cloudy.
I think the best option is to just update the docker network "VPN" to play nicely with the new gateway/router/home network; I would like to change the IPAM["Config"] parameters to reflect the new gateway and subnet. However, I can't find online how to do this (the only thing that comes up is how to change the default settings for the default docker network).
Long story short:
How do I change configuration/parameters of an existing docker network?
If, in the end, this is a bad way of doing things (for instance, if I can access the containers on the network as it currently is), I'm also open to ideas.
The host machine is running ubuntu-server 20.04.1 LTS.
Thanks in advance.
The simplest approach to this would be to delete the VPN network and create it anew with new parameters but the same name. If you use docker-compose up to recreate containers, include the networks section in the first container that you recreate.
First, run this to delete the existing network:
docker network rm VPN
Then add the macvlan network definition to yml of your first re-created container. Here is the networks section I used, adapted somewhat to your situation:
networks:
  VPN:
    driver: macvlan
    enable_ipv6: true  # if needed
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1
          ip_range: 192.168.0.8/30  # reserve some IP addresses for other machines
                                    # in that subnet - adjust as needed
        - subnet: xx:xx:xx:xx::/63     # put your IPv6 subnet here if needed
          gateway: xx:xx:xx:xx:xx::xx  # IPv6 (external) of your router
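If you are not using compose, a roughly equivalent one-off command would be the following sketch (IPv4 only; the parent interface enp5s0 comes from the inspect output above, and the subnet values mirror the yml):
docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.1 --ip-range=192.168.0.8/30 -o parent=enp5s0 VPN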
Alternatively, you could change your new router config to match the old one, and leave your macvlan VPN as is.
I am trying to restrict access to Windows docker containers to specific IP(s). It looks like this can easily be done with iptables on Linux containers, but I'm having a difficult time finding a proper solution for Windows Server containers. A similar thing to what I'm trying to do on a Windows container is described in the first answer on THIS StackOverflow question, but that's Linux.
For starters, I cannot seem to start the Windows Defender Firewall service INSIDE the container (not on the host). What exactly happens is described in THIS StackOverflow question. But in short, Start-Service -Name MpsSvc simply does not work, and modifying registry keys doesn't work either; it's all described in that post. So does this mean using Windows Firewall inside a container to restrict access is out of the question?
The network isolation and security document states that the default outbound/inbound traffic policy on Windows Server containers is ALLOW ALL. I want to lock it down so that traffic to the container comes from only one source.
The orchestrator we're using is Azure Service Fabric. In the cluster, there's a main SYSTEM node/host. That node hosts Traefik (a traffic manager/load balancer of sorts). Traefik instances are running on each of the other nodes in the cluster as well, and each node has a bunch of containers - simple. So the basic idea is: I want only traffic from the primary SYSTEM node to hit the containers; everything else needs to be blocked.
How can I achieve this?
The base image we're using for the container is mcr.microsoft.com/windows/servercore:ltsc2019. I'm trying to create our own version of the base image with some modifications (like this security hardening, logging, etc.) and publish it to our Azure Container Registry (ACR); the plan is for developers to PULL from our ACR instead of Microsoft's public hub.
As for the network, not sure if it matters or not but it's using the default NAT network.
docker network inspect nat
[
    {
        "Name": "nat",
        "Id": "78l2lk902jsxu82jskais92alxp51mcf2907djsla81m154985snjo1d69xh51da",
        "Created": "2020-04-15T20:18:39.9097816-07:00",
        "Scope": "local",
        "Driver": "nat",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "windows",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.27.128.0/20",
                    "Gateway": "172.27.128.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.windowsshim.hnsid": "5045b0b6-d9a6-4b50-b1da-f66b0b770feb",
            "com.docker.network.windowsshim.networkname": "nat"
        },
        "Labels": {}
    }
]
I've been trying the latest RCs for docker and compose for a few days, and finally, today, the new stable versions (1.10 and 1.6 respectively).
The new networking stuff added in 1.9 has been great so far. But since I upgraded to 1.10rc1 (and so far for every RC and stable), containers in the same user defined network can no longer find each other. In fact, they can't even reach the outside world right now.
A quick example, file test_docker/docker-compose.yml:
version: '2'
services:
  db1:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: yes
  db2:
    image: mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: yes
This creates two MySQL containers from the official image. According to the compose docs, a new testdocker_default network should be created, with both containers automatically connected, which is the case:
docker network inspect testdocker_default
[
    {
        "Name": "testdocker_default",
        "Id": "820f702e8e685567e4f1a8638cd9be305e96e37fcd741306eed6c1cf0d54ba02",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1/16"
                }
            ]
        },
        "Containers": {
            "16d5594bdfd11f55d33a207612b8447f6b50ff4be8b42d2313707b06ca618556": {
                "Name": "testdocker_db2_1",
                "EndpointID": "b6d5ff10fba860c01ac7a6508e56c5e116296cd06ea2158c695897e18fcd50ce",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "9b8b885dab3b5012c9663cb97a07af66fbe385f92c69a614a4d56bf85305ec3a": {
                "Name": "testdocker_db1_1",
                "EndpointID": "09e43aef8e14b0e876d47fabe67a3827dc4cea5d44b199113d9ab2678d8ce22a",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
Now, the documentation also says that the containers should be able to reach each other through the hostnames db1 and db2, but this is not the case:
root@9b8b885dab3b:/# mysql -h db2 -u root
ERROR 2005 (HY000): Unknown MySQL server host 'db2' (111)
root@9b8b885dab3b:/# mysql -h testdocker_db2_1 -u root
ERROR 2005 (HY000): Unknown MySQL server host 'testdocker_db2_1' (111)
Additionally, neither container is able to reach the internet, unless I explicitly add Google's DNS to the /etc/resolv.conf.
I'm pretty sure I'm doing something wrong here, because I can't find issues raised by other people, but I can't figure out what it is.
Thanks guys!
Edit:
To clarify, containers can ping each other through their IP address, but the hostnames are not resolved.
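For instance, a quick way to confirm the split (container and service names come from the compose project above; assumes ping and getent are present in the image):
docker exec -it testdocker_db1_1 ping -c 1 172.17.0.3    # by IP: works
docker exec -it testdocker_db1_1 getent hosts db2        # by name: no result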
This issue was reported on GitHub. The suggested workaround for the moment is to disable firewalld altogether.
I will update this answer with a better solution to this issue as soon as one is found.
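On a systemd-based host (e.g. Fedora), the workaround amounts to something like this sketch; restarting the Docker daemon afterwards lets it re-create its iptables rules:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl restart docker   # Docker re-applies its iptables rules on restart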
Edit #1:
Pull request solving this issue (tested on Fedora 23). This PR has already been merged into master, for anyone wanting to compile Docker from source.
I couldn't find an expected release date, but I'm guessing it will be released as a patch version in the next couple of weeks. I will update this answer again with further information when available.
Edit #2:
Docker's 1.10.1 RC solves this issue. I'll mark this answer as accepted just to close this topic.
I've been putting together a POC mesos/marathon system that I am using to launch and control docker images.
I have a Vagrant virtual machine running in VirtualBox on which I run docker, marathon, zookeeper, mesos-master and mesos-slave processes, with everything working as expected.
I decided to add Chronos into the mix and initially I started with it running as a service on the vagrant VM, but then opted to switch to running it in a docker container using the mesosphere/chronos image.
I have found that I can get the container image to start and run successfully when I specify HOST network mode for the container, but when I change to BRIDGE mode I run into problems.
In BRIDGE mode, the chronos framework registers successfully with mesos (I can see the entry on the frameworks page of the mesos UI), but it looks as though the framework itself doesn't know that the registration was successful. The mesos master log is full of messages like:
I1009 09:47:35.876454  3131 master.cpp:2094] Received SUBSCRIBE call for framework 'chronos-2.4.0' at scheduler-16d21dac-b6d6-49f9-90a3-bf1ba76b4b0d@172.17.0.59:37318
I1009 09:47:35.876832  3131 master.cpp:2164] Subscribing framework chronos-2.4.0 with checkpointing enabled and capabilities [ ]
I1009 09:47:35.876924  3131 master.cpp:2174] Framework 20151009-094632-16842879-5050-3113-0001 (chronos-2.4.0) at scheduler-16d21dac-b6d6-49f9-90a3-bf1ba76b4b0d@172.17.0.59:37318 already subscribed, resending acknowledgement
This implies some sort of configuration/communication issue but I have not been able to work out exactly what the root of the problem is. I'm not sure if there is any way to confirm if the acknowledgement from mesos is making it back to chronos or to check the status of the communication channels between the components.
I've done a lot of searching, and I can find posts by folks who have encountered the same issue, but I haven't found a detailed explanation of what needs to be done to correct it.
For example, I found the following post, which mentions a problem that was resolved and which implies the user successfully ran their chronos container in bridge mode, but their description of the resolution was vague. There was also this post, but the change suggested did not resolve the issue that I am seeing.
Finally, there was a post by someone at ILM who had what sounds like exactly my problem, and the resolution appeared to involve a fix to Mesos introducing two new environment variables, LIBPROCESS_ADVERTISE_IP and LIBPROCESS_ADVERTISE_PORT (on top of LIBPROCESS_IP and LIBPROCESS_PORT), but I can't find a decent explanation of what values should be assigned to any of these variables, so I have yet to work out whether the change will resolve the issue I am having.
It's probably worth mentioning that I've also posted a couple of questions on the chronos-scheduler group, but I haven't had any responses to these.
If it's of any help the versions of software I'm running are as follows (the volume mount allows me to provide values of other parameters [e.g. master, zk_hosts] as files, without having to keep changing the JSON):
Vagrant: 1.7.4
VirtualBox: 5.0.2
Docker: 1.8.1
Marathon: 0.10.1
Mesos: 0.24.1
Zookeeper: 3.4.5
The JSON that I am using to launch the chronos container is as follows:
{
    "id": "chronos",
    "cpus": 1,
    "mem": 1024,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "mesosphere/chronos",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 4400,
                    "hostPort": 0,
                    "servicePort": 4400,
                    "protocol": "tcp"
                }
            ]
        },
        "volumes": [
            {
                "containerPath": "/etc/chronos/conf",
                "hostPath": "/vagrant/vagrantShared/chronos",
                "mode": "RO"
            }
        ]
    },
    "cmd": "/usr/bin/chronos --http_port 4400",
    "ports": [
        4400
    ]
}
If anyone has any experience of using chronos in a configuration like this then I'd appreciate any help that you might be able to provide in resolving this issue.
Regards,
Paul Mateer
I managed to work out the answer to my problem (with a little help from the sample framework here), so I thought I should post a solution to help anyone else who runs into the same issue.
The chronos service (and also the sample framework) was configured to communicate with zookeeper on the IP associated with the docker0 interface on the host (vagrant) VM (in this case 172.17.42.1).
Zookeeper would report the master as being available on 127.0.1.1, which was the IP address of the host VM that the mesos-master process started on; although this IP address could be pinged from the container, any attempt to connect to specific ports would be refused.
The solution was to start the mesos-master with the --advertise_ip parameter and specify the IP of the docker0 interface. This meant that although the service started on the host machine, it would appear as though it had been started on the docker0 interface.
Once this was done, communications between mesos and the chronos framework started completing, and the tasks scheduled in chronos ran successfully.
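In concrete terms, the fix amounted to something like this sketch (172.17.42.1 is the docker0 address mentioned above; the --zk, --quorum and --work_dir values are placeholders for an existing master configuration):
mesos-master --advertise_ip=172.17.42.1 --zk=zk://172.17.42.1:2181/mesos --quorum=1 --work_dir=/var/lib/mesos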
Running Mesos 1.1.0 and Chronos 3.0.1, I was able to successfully configure Chronos in BRIDGE mode by explicitly setting LIBPROCESS_ADVERTISE_IP and LIBPROCESS_ADVERTISE_PORT, and by pinning its second port to a hostPort, which isn't ideal but was the only way I could find to make it advertise its port to Mesos properly:
{
    "id": "/core/chronos",
    "cmd": "LIBPROCESS_ADVERTISE_IP=$(getent hosts $HOST | awk '{ print $1 }') LIBPROCESS_ADVERTISE_PORT=$PORT1 /chronos/bin/start.sh --hostname $HOST --zk_hosts master-1:2181,master-2:2181,master-3:2181 --master zk://master-1:2181,master-2:2181,master-3:2181/mesos --http_credentials ${CHRONOS_USER}:${CHRONOS_PASS}",
    "cpus": 0.1,
    "mem": 1024,
    "disk": 100,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "volumes": [],
        "docker": {
            "image": "mesosphere/chronos:v3.0.1",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 9900,
                    "hostPort": 0,
                    "servicePort": 0,
                    "protocol": "tcp",
                    "labels": {}
                },
                {
                    "containerPort": 9901,
                    "hostPort": 9901,
                    "servicePort": 0,
                    "protocol": "tcp",
                    "labels": {}
                }
            ],
            "privileged": true,
            "parameters": [],
            "forcePullImage": true
        }
    },
    "env": {
        "CHRONOS_USER": "admin",
        "CHRONOS_PASS": "XXX",
        "PORT1": "9901",
        "PORT0": "9900"
    }
}
}