Visual Studio docker container capable of seeing Kubernetes pods outside? - docker

I am currently developing docker containers using Visual Studio, and these container images are supposed to run in a Kubernetes cluster that I am also running locally.
Currently, the docker container that runs via Visual Studio is not deployed to the Kubernetes cluster, yet for some reason I am able to ping a Kubernetes pod's IP address from the docker container, which I don't quite understand. Should they not be separated, and unable to reach each other?
The container also cannot be located on the Kubernetes dashboard.
And since they are connected, why can't I use the Kubernetes service to connect to my pod from the docker container?
The docker container is capable of pinging the cluster IP, meaning that it is reachable.
An nslookup of the service, however, is not able to resolve the hostname.
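Roughly, the behaviour described above looks like this (the address and service name are illustrative, not from the original question):

    # From inside the docker container started by Visual Studio:
    ping 10.1.0.15                                   # a pod/cluster IP: replies arrive
    nslookup my-service.default.svc.cluster.local    # fails to resolve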

So, as I already stated in the comment:
When Docker is installed, a default bridge network named docker0 is created. Each new Docker container is automatically attached to this network, unless a custom network is specified.
That means you are able to ping containers by their respective IPs, but you are not able to resolve DNS names of cluster objects: your VM knows nothing about the internal cluster DNS server.
A few options for what you can do:
1) explicitly add records for cluster names to /etc/hosts inside the VM
2) add a record to /etc/resolv.conf with nameserver and search inside the VM (see the sketch below). See one of my answers related to DNS resolution on Stack Overflow: nslookup does not resolve Kubernetes.default
3) use dnsmasq as described in the article "Configuring your Linux host to resolve a local Kubernetes cluster's service URLs". By the way, I highly recommend reading it from beginning to end; it describes very well how to work with DNS and what workarounds you can use.
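A minimal sketch of option 2, assuming the cluster DNS service (kube-dns/CoreDNS) sits at 10.96.0.10, a common default; check yours with kubectl -n kube-system get svc kube-dns:

    # /etc/resolv.conf inside the VM (10.96.0.10 is an assumed cluster DNS IP)
    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local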
Hope it helps.

Related

Can't start minikube inside docker network

I'm trying to start minikube on Ubuntu 18.04 inside the nginx proxy manager docker network in order to set up some Kubernetes services and manage the domain names and proxy hosts in the nginx proxy manager platform.
So I have the nginxproxymanager_default docker network, and when I run minikube start --network=nginxproxymanager_default I get
Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
What might I be doing wrong?
A similar error was reported in kubernetes/minikube issue 12894: check whether other services are using that IP address, and try starting minikube again.
Considering the minikube start man page:
--network string
    network to run minikube with.
    Now it is used by docker/podman and KVM drivers.
    If left empty, minikube will create a new network.
Using an existing NGiNX network (as opposed to a docker/podman one) might not be supported.
I have seen NGiNX set up as an ingress, not directly as a "network".
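One way to confirm the address collision and work around it (the network name comes from the question; the rest is a generic sketch):

    # See which containers already hold addresses on that network:
    docker network inspect nginxproxymanager_default --format '{{json .Containers}}'
    # Let minikube create and manage its own network instead:
    minikube start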

How to Make Docker Use Specific IP Address for Browser Access from the Host

I'm using docker for building both UI and some backend microservices, and using Spring Zuul as the Proxy to pass Restful API calls from UI to the downstream microservices. My UI project needs to specify an IP address in the JS file before the build, and the Zuul project also needs to specify the IP addresses for the downstream microservices. So that after starting the containers, I can access my application using my docker machine IP http://192.168.10.1/myapp and the restful API calls in the browser network tab will be http://192.168.10.1/mymicroservices/getProduct, etc.
I can set all the IPs to my docker machine IP and build them without issues. However for my colleagues located in other countries, their docker machine IP will be different. How can I make docker use a specific IP, for example, 192.168.10.50, which I can set in the UI project and Zuul Proxy project, so that the docker IP will be the same for everyone, regardless of what their actual docker machine IP is?
What I've tried:
I've tried port forwarding in VirtualBox. It works for the UI; however, the RESTful API calls fail.
I also tried the solution mentioned in this post:
Assign static IP to Docker container
However, I can't access the services from the browser using the container IP address.
Do you have any better ideas? Thank you!
First off, to clarify a couple of things:
If you are doing docker run ....., then you are just starting a container in Docker, which is installed on the host machine. There is no way Docker can change the IP of your host machine, so if your other services are running somewhere else, they will have to know something about the docker host machine: its IP or DNS name.
Basically, docker answers on 127.0.0.1 if you are trying it from the docker host machine itself, or on the host machine's IP from outside of it. So docker doesn't need an IP of its own to start.
The other case is if you are doing docker-compose up/start, which means all services are defined in that docker compose file. In this case docker compose creates a docker network for all the containers in it, and there you definitely can use fixed IPs for containers (see the sketch below), though most often you don't need to, because docker takes care of name resolution inside that network.
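A minimal compose sketch of the fixed-IP approach; the service name, image, and the 172.28.0.0/16 subnet are illustrative assumptions, not from the question:

    # docker-compose.yml (sketch)
    version: "3.8"
    services:
      zuul:
        image: myorg/zuul-proxy          # hypothetical image
        networks:
          appnet:
            ipv4_address: 172.28.0.10    # fixed address on the compose network
    networks:
      appnet:
        ipam:
          config:
            - subnet: 172.28.0.0/16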
If you are doing it the k8s way, then that is a third way (the production way), and it is another story.
If it is none of the above, then please provide more info on how you are doing things.
EDIT:
If you are using docker compose and need to expose any of your containers to the host machine, you can do it through port mapping:
web:
  image: some image here
  ports:
    - 8181:8080
The left side is the host machine port; the right side is the container port.
Then, in a browser on the host, you can make a request to localhost:8181.
The docs are here:
https://docs.docker.com/compose/compose-file/#ports

Make the DNS server of docker container another docker container running DNSmasq

I have a set of docker containers created with docker compose, which creates a "user-defined" bridge network.
One of these docker containers is running DNSmasq so we can define custom (internal) domain names to point to local IPs.
Trouble is, none of the other docker containers can resolve these domain names. I think the issue is that I can't get the docker DNS to forward its requests to the docker container running DNSmasq (i.e. it doesn't even know it exists).
As a test, I did a docker network inspect <network created by docker compose> and noted the IP address of the DNSmasq container. Then, in one of the other containers' /etc/resolv.conf, I set nameserver to that IP address. After that, I could resolve all these internal domain names.
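Roughly, that test looked like this (the network name and address are illustrative):

    # Find the DNSmasq container's current address on the compose network:
    docker network inspect myproject_default
    # Then, inside another container's /etc/resolv.conf:
    #   nameserver 172.18.0.53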
Sadly, putting dnsmasq in there doesn't work (nameserver entries in /etc/resolv.conf must be IP addresses, not names), despite the fact that the user-defined network has automatic service discovery enabled.
It seems that one way to make this work, then, is to force my DNSmasq container to always have the same IP, and then make sure the other docker containers point to that as their nameserver, e.g. by defining the network explicitly in the compose file (sketched below).
Is there no other way? I'd rather not have to define the entire network to replicate this automatically created one when all I want is to know the IP address of one container.
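For reference, a hedged compose sketch of that static-IP approach; the images, names, and the 172.30.0.0/24 subnet are illustrative:

    # docker-compose.yml (sketch)
    version: "3.8"
    services:
      dnsmasq:
        image: andyshinn/dnsmasq       # illustrative dnsmasq image
        networks:
          internal:
            ipv4_address: 172.30.0.53  # fixed, so other containers can reference it
      app:
        image: myapp                   # illustrative
        dns: 172.30.0.53               # use the dnsmasq container as this container's resolver
        networks:
          - internal
    networks:
      internal:
        ipam:
          config:
            - subnet: 172.30.0.0/24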

Remote Docker container by hostname

How do you access a remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some constant IPs) for development and testing purposes. I have tried:
looking for any DNS approach (have not found any clues),
importing /etc/hosts (probably impossible),
creating tunnels (only this works, but it is very time consuming).
It's the same as running any other process on a host, Docker or not Docker: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses any more than non-Docker HTTP or ssh daemons do.
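A minimal sketch of that model; the host name is illustrative:

    # On the remote host, publish the container's port on the host:
    docker run -d --name web -p 8080:80 nginx
    # From anywhere else, address the host, not the container:
    curl http://remote-host.example.com:8080/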
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
Nothing in the Docker infrastructure specifically helps this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines and you'll start wishing you had a network service to publish it for you, which is exactly what DNS is for.

Cross container communication with Docker

An application server is running as one Docker container and the database is running in another container. The IP address of the database server is obtained with:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up a JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between docker containers on remote hosts, you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the address of the host machine as the destination IP in the required docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
Full disclosure: I am on the Weave engineering team.
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your DB is a container on docker server 2? If so, treat them like ordinary remote hosts: your DB port needs to be exposed on docker server 2, and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host docker subnetwork is a private network. It's perhaps possible to make this address routable, but it would be a lot of pain, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in the Dockerfile and -p in your docker run). Then you just go host to host. You can resolve hosts by IP, environment variables, or good old DNS.
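A minimal sketch of that host-to-host pattern; host names, ports, and variable names are illustrative:

    # On docker server 2: publish the database port on the host.
    docker run -d --name db -p 3306:3306 mysql
    # On docker server 1: hand the DB's host-level address to the app.
    docker run -d --name app \
      -e DB_HOST=docker-server-2.example.com \
      -e DB_PORT=3306 \
      myorg/appserver                  # illustrative image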
A few things were missing that were preventing cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was not accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. It was disabled using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
