Communication from Docker-Container to outside - docker

I am quite new to Docker and I have a question about connecting container services with traditional ones.
Currently I am thinking of replacing a traditional Grafana installation (directly on a Linux server) with a Grafana Docker container.
In Grafana I have to connect to different data sources like a MySQL instance, a Windows SQL Server database and so on, so Grafana pulls the data. All these data sources reside (and will continue to reside) on other hosts, and they are not containers.
So how can I make my container communicate with these data sources? Does this work by default, or do I have to set up a special kind of network? I saw that there is an option called macvlan... is that the correct way?
BR
Jan

This should work out of the box, as far as I understand. At least, I'm using Grafana inside a Docker container and it works perfectly.
You can test connectivity from inside your Docker container to an external resource by opening a container shell like this:
docker exec -it <container ID> /bin/bash
And then
root@a9cbebfc4564:/# curl google.com
Or
root@a9cbebfc4564:/# ping <bla-bla>
The commands above depend on the Docker image's environment (OS and installed software), but missing tools can be installed in the same way as on a regular Unix environment.
P.S. I encountered a container-to-host connection issue once, but it was due to an incorrect firewall configuration on the host side.

Since you are replacing a traditional installation, you can start with host networking. This mode gives you the same connectivity experience as installing directly on the host. A quick start is as simple as:
docker run --network host grafana/grafana
Notice there's no need to --publish or --publish-all ports, as the Grafana container now shares the host network.
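If you prefer to stay on the default bridge network instead, outbound connections from the container to external hosts already work through NAT; only Grafana's UI port has to be published. A minimal sketch (3000 is Grafana's default port, adjust as needed):
docker run -d --name=grafana -p 3000:3000 grafana/grafana
Grafana can then reach the external MySQL/MSSQL hosts by their normal IPs or hostnames in its data source configuration.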

Related

Can (Should) I Run a Docker Container with Same host name as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some Docker containers, some regular applications). We actually need a setup where the server runs as a Docker application on a single VM, and that server will be accessed by non-Docker clients (as well as Docker clients not running on the same Docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to: dockerhostname:1234, but when the server sends URLs to the client, it sends: serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding "serverhostname" to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server Docker container to the same name as the Docker host, and it has mostly worked, but I've seen cases where a Docker container running on the same Docker network as the server had issues connecting to the server.
I realize this is not an ideal Docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping with -p 8080:80.
How do you build and run your container?
With a shell command, a Dockerfile, or a yml file?
Check this:
docker port
Call this and it will work:
[SERVER IP]:[PORT FROM DOCKER HOST]
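As a concrete illustration (the container name, image, and ports are invented), mapping a port and then checking it could look like:
docker run -d --name myserver -p 1234:5678 myimage
docker port myserver
That prints something like 5678/tcp -> 0.0.0.0:1234, and clients outside Docker then connect to SERVERIP:1234.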
To work with hostnames you need DNS or a hosts file.
The hosts file solution is not a good idea; it's how the internet started out in the past ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to the /etc/hosts file" process. You can use configuration management like Ansible/Chef/Puppet so you only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Implementing the database is the hardest part of this; there are a bunch of different ways, and you'll need to spend some time with your team to figure out what works best.
Normally, running an internal DNS server like dnsmasq or BIND should get you most of the way, but if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation.
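As a rough sketch of the "lazy mode is DNS" option, a single entry in an internal dnsmasq server could replace the per-client /etc/hosts edits (the hostname and IP are placeholders):
# /etc/dnsmasq.conf on the internal DNS server
address=/serverhostname/192.168.1.50
Clients that use this DNS server then resolve serverhostname without any local hosts file entries.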

How can I connect to a VPN in docker not using VPN images?

Good morning!
I'm using Check Point Mobile to connect to my client's VPN, and I have 2 containers in Docker, mysql and karaf, both sharing the network I created using the command docker network create --subnet=vpnAddress mynet
I used the --network=mynet option when running the containers.
Up to here it's all OK: I can connect via PuTTY SSH to karaf, install the KAR, and all bundles are OK.
But when calling the services I realized that the container is not connected to the VPN, even though I created a network with the VPN address. I need to be connected to the VPN in order to call the services.
I'm connected to the VPN externally (outside Docker) using Check Point Mobile, but I need Docker to connect to the VPN as well.
I'm using Windows 10 (Docker with Linux containers). I tried to go to C:\ProgramData\DockerDesktop\tmp-d4w and edit the host.docker.internal file there, changing the IP to my VPN address, but none of that works.
I searched a lot, and I saw people talking about Docker VPN images such as NordVPN or OpenVPN, but I can't use those.
I have been told I need to add the VPN network to Docker, but I'm green at networking, I don't know how to do it, and what I did didn't work.
Hope you can help me. Thanks!
Edit: in the Docker engine settings I added "bip": "vpnAddress/24".
I see now that the bridge network uses the VPN address. I tried --network=bridge on both the karaf and mysql containers, but then karaf can't connect to mysql. If I instead create the mynet network with docker network create and run the 2 containers on it, they can talk to each other, but there's still no luck with the VPN this way.
I haven't used Docker on Windows, but a quick look at some VPN containers shows that, in *nix at least, they use --device /dev/net/tun --cap-add=NET_ADMIN to expose the VPN "device" to the container. Other containers then use docker networking or links to connect to this VPN container - so looking at how the VPN containers do it might be helpful.
One suggestion for Mac seems to be using extra_hosts like so:
extra_hosts:
- "vpn.company.com:172.21.1.1"
You might be able to hack it with something like that (or by physically adding 172.21.1.1 vpn.company.com to /etc/hosts in the container). Also, check for IP address conflicts between the Docker daemon and your host machine.
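If you're starting the container with docker run rather than Compose, the equivalent of that extra_hosts entry is the --add-host flag (the hostname and IP are the example values from above; the image name is a placeholder):
docker run --add-host vpn.company.com:172.21.1.1 --network=mynet my-karaf-image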
The Windows docs seem to suggest they don't support network interfaces as "devices", so you probably need to either create a very specific Docker network or modify host networking settings, starting with getting the Docker daemon to recognize the VPN network.
See the Configure Advanced Networking section for some examples. I'd try creating a network associated with the VPN device first, then look into flags like --subnet and --gateway.
docker network create -d transparent \
-o com.docker.network.windowsshim.interface="Ethernet 2" TransparentNet2
The example below creates a network with a particular subnet and gateway, then runs a container with a statically-assigned IP on that network:
C:\> docker network create -d transparent \
--subnet=10.123.174.0/23 \
--gateway=10.123.174.1 MyTransparentNet
C:\> docker run -it --network=MyTransparentNet \
--ip=10.123.174.105 windowsservercore cmd
Good luck!

Inspect Docker Containers on Host from Container on same Host

I'm in the process of containerizing a service which monitors other services, many of which are also running on containers. Right now, it runs docker inspect from a Python subprocess on the host to monitor other services' containers. How can I get similar information from within another container?
I've considered running the same commands on the host via SSH, but it seems like there should be a better way. I can't be the first person who wants one container to monitor others, but all I find on the web are third-party solutions, which seem like overkill for the problem at hand.
You can mount the Docker socket inside the container that runs your Python program:
docker run -v /var/run/docker.sock:/var/run/docker.sock my-python-program
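With the socket mounted, the program inside the container can talk to the Docker API directly instead of shelling out; for example, assuming curl is available in the image:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
curl --unix-socket /var/run/docker.sock http://localhost/containers/<container ID>/json
The first call lists running containers (like docker ps); the second returns the same JSON that docker inspect produces for a single container.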

Docker for Windows swarm overlay networking, connecting to the swarm from outside or localhost

I cannot connect to the published port on a swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. I was hoping this issue would be resolved after Windows' 1709 update. I looked for information on the Internet to see if I was doing something wrong, to no avail. I would like to know if anyone has been able to get this working.
On a side note, when I publish the port on my machine with docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation though. Both scenarios work when I switch to Linux containers.
Expected behavior
I am running a .NET Kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout; Opera straight away returns connection refused. I cannot telnet into it either. Container IPs assigned by the overlay network do not work either.
Information
docker service ls gives me this:
Ports cannot be seen there; is it because the publish mode is host? The port information is available in the output of docker service ps.
When I change the publish mode, I can scale it as well and the port information is shown in docker service ls, although I still cannot connect. The one below is without the publish mode=host parameter:
For more info, this is the output of docker network ls. I wonder if I need some sort of bridge network like in Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service, in my case a simple web service built using the aspnetcore:latest image. I tried different parameters and even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest (in this case I was unable to scale it on the same machine, which is normal, I guess)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP. I tried connecting over VPN, from another machine, as well as via the Internet IP.

Unable to connect outside database from Docker container App

We have two machines: one is a Windows machine and the other is a Linux machine. My application is running in a Docker container on the Linux machine. Our database is running on the Windows machine, and our application needs to get data from the Windows machine's DB.
We have given the proper data source details (IP, username, password) in our application. It works when we do not use a Docker container, but when we use a Docker container it does not work.
Can anyone help us work out how to connect to an outside DB from a Docker-enabled application? We are totally new to Docker.
Any help would be much appreciated.
A container's default network is "bridge"; you should choose a macvlan or host network.
Method 1
docker run -d --net host image
This container will share your host's IP address and will be able to access your database.
Method 2
Use the docker network create command to create a macvlan network (reference here).
Then create your container with:
docker run -d --net YOURNETWORK image
The container will then have its own IP address on the same subnet (and gateway) as its host.
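A macvlan setup could look roughly like this; the subnet, gateway, parent interface, and network name below are placeholders that must match your physical LAN:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan-net
docker run -d --net my-macvlan-net --ip 192.168.1.50 image
The container then gets its own address on the LAN and can reach the Windows database host directly.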
There are a lot of issues that could be affecting your container's ability to communicate with your database. In the future you should compose your question with as much detail as possible. To correctly answer this you will, at a minimum, need to include the following details:
Linux distribution name & version
Docker version
Output of docker inspect from the container
Linux firewall configuration
Network configuration
Is your Windows machine running on the same local network / subnet as your Linux machine? If so, please provide information about the subnet, as the default bridge set up by Docker may restrict access to local resources, whereas those over a wide area network would still be accessible.
You can try passing the --network=host option to your docker run command like so: docker run --network=host <image name>. Doing so eliminates the need to specify port mappings in your run command, as they are ignored when using the host's network.
Please edit your question and include the above requested details to get a complete answer.
