Azure Service Fabric On-Premise docker network default ip range - docker

We are running Windows containers on an on-premises Azure Service Fabric installation. We build the fabric nodes from a corporate template (Windows 2016 with container support) that also contains an internal firewall product (which also controls the traffic flows between internal networks on the node). The configuration of this firewall is centrally managed.
In order to configure the firewalls correctly, we need to control the IP range of the Docker network. To do this we created our own Docker network (of type 'nat') and named it 'xyz', as the current Docker EE for Windows version does not accept the "fixed-cidr" parameter in the configuration file.
When using containers in Service Fabric we ran into problems: when the container is started by Service Fabric, it tries to attach to a default network named 'nat'. Apparently it is not possible to name a custom network 'nat', or to pass Service Fabric the name of a network to which the container should attach (either through the classic application package or a Docker Compose file).
Either of the following would solve the problem:
1. Fix the IP segment address during Docker for Windows installation.
2. Have the option to specify the name of the network the container should connect to when started by Service Fabric (when starting the container manually this can be done with the --network option).
3. ????
Any suggestions?

The only thing you can do today is option 1.
We are working on 2 :-)
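For reference, the workaround described in the question — a user-defined 'nat' network with a fixed subnet — looks roughly like this (the subnet, network name and image are examples):

```shell
# Create a user-defined NAT network with a known address range
docker network create -d nat --subnet=172.28.0.0/16 --gateway=172.28.0.1 xyz

# Manually started containers can attach to it explicitly
docker run -d --network=xyz mycorp/myservice:latest
```

Service Fabric, however, still attaches its containers to the default 'nat' network, which is exactly the limitation described above.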

Related

How can I connect two Docker containers with Nomad

I built two Docker applications that communicate with each other over a Docker network, but I ran into a problem when I tried to run those applications using Nomad: the container name is not configurable, and Nomad gives each container a random name. So I can't add those containers to a Docker network and have them know each other by their specific names.
So how can I run two or more Docker containers in the same Docker network using Nomad?
I'm aware of a few approaches. The first works with Nomad alone; the others assume that Consul is deployed as well.
1. Place both containers in the same task group. Nomad will always place them on the same node, and you can access the address via the Nomad environment variables NOMAD_IP_<label>, NOMAD_PORT_<label> or NOMAD_ADDR_<label>.
2. Register the server application (Docker container) in the Consul service registry with the Nomad service stanza. You can then use the Nomad template stanza in the "client" application to render its config. Example/doc is here.
3. Set up Consul Connect (service mesh) in your deployment.
4. Use the Consul DNS interface. Consul can work as a DNS server, and every service is resolvable at <service_name>.service.<dc>.consul (doc), but you have to configure your servers to use Consul DNS (doc).
Approach 1 is the easiest but has a big limitation (both containers must share the same node). Approach 2 has worked well for me for several years; Nomad is smart enough to re-render the config and restart your client should the server's IP or port change.
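A minimal sketch of approach 2, assuming a job file where a "backend" task registers itself in Consul and a "client" task renders the backend address via a template (all names and images are hypothetical):

```hcl
task "backend" {
  driver = "docker"
  config { image = "example/backend:latest" }
  service {
    name = "backend"
    port = "http"
  }
}

task "client" {
  driver = "docker"
  config { image = "example/client:latest" }
  template {
    # consul-template query: re-rendered whenever the backend moves
    data        = "{{ range service \"backend\" }}BACKEND_ADDR={{ .Address }}:{{ .Port }}{{ end }}"
    destination = "local/app.env"
    env         = true
  }
}
```

With `env = true` the rendered BACKEND_ADDR ends up as an environment variable in the client container, and Nomad restarts the task when the registration changes.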

Docker containers on WSL2 don't get added to the bridge network

Issue: My containers (all of which are webservers) can't communicate with each other by container name (the DNS lookup fails). I can make them communicate by creating a new network and adding each created container to that network, but I'd prefer to not have to do this manually.
Details: According to the docs, all new containers should automatically get added to the bridge network and be able to communicate with each other simply by container_name:port. However, on WSL2, even though the bridge network exists, the containers don't seem to be added to it because they can't communicate with each other by name.
Workarounds that I've tried:
I am making it work right now by creating a network and adding containers to that network. However, this is cumbersome and not feasible once I eventually have a large number of containers.
docker-compose is an idea, but my integration test suite creates containers from inside itself, so all my integration tests would break (and I'd have to switch to a new integration test suite entirely).
Is there a way that I can make new containers automatically join the bridge network (or my own network) without using docker-compose?
Docker Desktop version: 3.2.2 (61853)
Windows 10; Build 19042.928
Turns out my Docker containers WERE getting added to the default bridge network. However, their not being able to communicate with each other by name is intended behavior: containers on the default bridge network can't talk to each other by host name; they must use IP addresses to communicate. (Only user-defined bridge networks provide automatic name resolution.)
docker run --network="bridge" <mycontainer>
You can check exactly what is going on inside with
docker inspect <containerID>
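If automatic name resolution is what you are after, a user-defined bridge network provides it out of the box; a quick sketch (network and container names are examples):

```shell
docker network create mynet
docker run -d --name web1 --network mynet nginx
docker run -d --name web2 --network mynet nginx

# On a user-defined network, Docker's embedded DNS resolves container names
docker exec web2 ping -c 1 web1
```

This still means creating a network once, but each container then only needs `--network mynet` at `docker run` time rather than any manual wiring afterwards.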
I would go through these tests to isolate the issue:
1. Check that the bridge network itself works fine in the WSL system; as WSL is new, it has some issues.
2. Check container-to-container connectivity by IP; if that works, Docker is creating the containers correctly.
3. Try to resolve a container name to an IP; if the IP works but the name doesn't, it is purely a DNS issue.
4. Per point 3, check whether the DNS service is functioning correctly.
If possible, could you share the exact error and the DNS status?

How to Make Docker Use Specific IP Address for Browser Access from the Host

I'm using docker for building both UI and some backend microservices, and using Spring Zuul as the Proxy to pass Restful API calls from UI to the downstream microservices. My UI project needs to specify an IP address in the JS file before the build, and the Zuul project also needs to specify the IP addresses for the downstream microservices. So that after starting the containers, I can access my application using my docker machine IP http://192.168.10.1/myapp and the restful API calls in the browser network tab will be http://192.168.10.1/mymicroservices/getProduct, etc.
I can set all the IPs to my Docker machine IP and build without issues. However, for my colleagues located in other countries, their Docker machine IP will be different. How can I make Docker use a specific IP, for example 192.168.10.50, which I can set in the UI project and the Zuul proxy project, so that the Docker IP is the same for everyone regardless of their actual Docker machine IP?
What I've tried:
I've tried port forwarding in VirtualBox. It works for the UI; however, the REST API calls failed.
I also tried the solution mentioned in this post:
Assign static IP to Docker container
However I can't access the services from the browser using the container IP address.
Do you have any better ideas? Thank you!
First of all, to clarify a couple of things:
If you are doing docker run ..., then you are just starting a container in the Docker engine installed on the host machine. There is no way Docker can change the IP of your host machine. Thus, if your other services are running somewhere else, they will have to know something about the Docker host machine: its IP or DNS name.
Published container ports are reachable at 127.0.0.1 if you are on the Docker host machine, or at the host machine's IP from outside of it. So Docker doesn't need the host's IP to start.
The other case is if you are doing docker-compose up/start. If all services are in that Compose file, Docker Compose creates a Docker network for all the containers in it. In this case you can definitely use fixed IPs for the containers, though most often you don't need to, because Docker takes care of name resolution on that network.
If you are going the Kubernetes way, that is a third (production) approach, and it is another story.
If it is none of the above, please provide more info on how you are doing things.
EDIT:
If you are using Docker Compose and need to expose any of your containers to the host machine, you can do it through port mapping:
web:
  image: some image here
  ports:
    - "8181:8080"
The left side is the host machine port; the right side is the container port.
In the browser on the host you can then make a request to localhost:8181.
The docs are here:
https://docs.docker.com/compose/compose-file/#ports
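For the fixed-container-IP case mentioned above, a Compose file can pin addresses with an ipam block; a sketch (subnet, addresses and service names are examples):

```yaml
version: "3.7"
services:
  zuul:
    image: example/zuul
    networks:
      appnet:
        ipv4_address: 192.168.10.50
networks:
  appnet:
    ipam:
      config:
        - subnet: 192.168.10.0/24
```

Note that these addresses are only meaningful inside the Compose network; the host's browser still reaches the containers through published ports.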

Running couchbase cluster with multiple nodes in docker on windows 10

I created a Couchbase 4.0 Docker container with a single node on Windows 10, added the node IP to the host machine's loopback, and forwarded the port in VirtualBox so that the Couchbase client in my app running on the host can connect to the node in the cluster. I was able to connect and do DB operations when I had a single node in the cluster.
However, when I created a multiple-node cluster in Docker on Windows 10, I was not able to do DB operations. In the Go app running on the host I got the message "unable to complete action after 6 attempts" on get and set operations.
How do I run a Couchbase cluster of multiple nodes in Docker on the same Windows host, so that I can connect to the cluster and do DB operations from an app running on the host machine?
If your app is not running inside of the Docker host, as far as I know, you can't do this (I would LOVE to be proven wrong by a Docker expert).
Couchbase clients need access to every node in the cluster, and with Docker you can only forward one container to a given port outside the host. (FYI, there is a tool called SDK Doctor which you can use to diagnose connectivity/networking issues.)
I would suggest running your golang app inside of the Docker host (using docker-compose is the way this is typically done).
Also, I would highly suggest upgrading to a more recent version of Couchbase.
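A sketch of that suggestion — running the Go app next to the Couchbase nodes in one Compose file, so the client can reach every node by service name (image tags and node count are examples):

```yaml
version: "3"
services:
  cb1:
    image: couchbase:community
  cb2:
    image: couchbase:community
  app:
    build: .
    depends_on:
      - cb1
      - cb2
# all three services share the default Compose network, so the app
# can bootstrap against cb1 and still reach cb2 directly
```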

Visual studio docker container capable of seeing kubernetes pods outside?

I am currently developing docker containers using visual studio, and these container images are supposed to run in a kubernetes cluster that I am also running locally.
Currently, the Docker container running via Visual Studio is not deployed to the Kubernetes cluster, yet for some reason I am able to ping the Kubernetes pod's IP address from the Docker container, which I don't quite understand; shouldn't they be separated and unable to reach each other?
Also, the container cannot be located on the Kubernetes dashboard.
And since they are connected, why can't I use the Kubernetes service to connect to my pod from my Docker container?
The docker container is capable of pinging the cluster IP, meaning that it is reachable.
Running nslookup on the service name does not resolve the hostname.
So, as I already stated in the comment:
When Docker is installed, a default bridge network named docker0 is
created. Each new Docker container is automatically attached to this
network, unless a custom network is specified.
That means you are able to ping containers by their respective IPs, but you are not able to resolve DNS names of cluster objects: your VM knows nothing about the internal cluster DNS server.
A few options for what you can do:
1) Explicitly add a record for the cluster DNS name to /etc/hosts inside the VM.
2) Add a record to /etc/resolv.conf with nameserver and search entries inside the VM. See one of my answers related to DNS resolution on Stack Overflow: nslookup does not resolve Kubernetes.default.
3) Use dnsmasq as described in the "Configuring your Linux host to resolve a local Kubernetes cluster's service URLs" article. Btw, I highly recommend you read it from beginning to end; it describes in detail how to work with DNS and what workarounds you can use.
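Option 2 can be sketched as commands inside the VM (the nameserver IP and search domains are examples — look up your cluster's DNS service IP first):

```shell
# Find the cluster DNS service IP (often 10.96.0.10 on kubeadm clusters)
kubectl -n kube-system get svc kube-dns

# Point the VM's resolver at it (example values)
cat <<'EOF' | sudo tee -a /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
EOF
```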
Hope it helps.
