I have a Windows container with an ASP.NET Web API app (not Core) and a second (Linux) container with a SQL Server instance.
For the Linux container I created a new network:
docker network create budget-app-network
and created container:
docker run -d --name budget-db -p 11433:1433 --network budget-app-network --network-alias mssql budget-db
When I try to start the Windows container with:
docker run -d --name budget-app -p 888:80 --network budget-app-network budget-app
I get an error saying:
docker: Error response from daemon: network budget-app-network not found.
I can't figure out how to connect the Web API to the database. How can I make them communicate? I believe it would work if I had two Linux or two Windows containers instead of a mix of the two.
Background
When you are running Windows and Linux containers on a Windows host, you have two Docker engines running: one engine running natively on Windows, which runs the Windows containers, and one inside a virtual machine (Hyper-V), which runs the Linux containers. This is discussed in the following thread on GitHub.
Solution options
Because the containers are running on separate hosts, you need to manage the network as you would for two separate machines.
The easiest approach is to let the containers communicate through the published ports, via the Windows host (routing the traffic through the host's public IP).
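For example, if the Windows host's LAN IP were 192.168.1.100 (a hypothetical address; the database name below is also just an example), the Web API's connection string would point at the host's published port rather than at the container name:
Server=192.168.1.100,11433;Database=BudgetDb;User Id=sa;Password=<password>;
Note that SQL Server connection strings use a comma, not a colon, between the host and the port.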
You can also use docker-compose as described in this post and let docker-compose create the network bridge between the VM containers and the Windows containers.
Finally, you have the option to create a swarm by installing Linux and Windows VMs (Hyper-V) and building a mixed-OS swarm. This is the most complicated option, and the drawback is the additional overhead of the extra Windows machine running in Hyper-V. The details are described in Microsoft's documentation.
Related
I am quite new to Docker and I have a question about connecting container services to traditional ones.
Currently I am thinking of replacing a traditional Grafana installation (directly on a Linux server) with a Grafana Docker container.
In Grafana I have to connect to different data sources like a MySQL instance, a Windows SQL Server database, and so on. So Grafana is pulling the data. All these data sources reside (and will continue to reside) on other hosts, and they are not containers.
So how can my container communicate with these data sources? Is it possible by default, or do I have to set up a special kind of network? I saw that there is an option called macvlan... is that the correct way?
BR
Jan
This should work out of the box, as far as I understand. At least, I'm running Grafana inside a Docker container and it works perfectly.
You can test connectivity from inside your Docker container to some external resource by opening a container shell like this:
docker exec -it <container ID> /bin/bash
And then
root@a9cbebfc4564:/# curl google.com
Or
root@a9cbebfc4564:/# ping <bla-bla>
The commands above depend on the Docker image's environment (like the OS or the installed software), but this can be solved in the same way as on a regular Unix environment.
P.S. I encountered a docker-to-host connection issue once, but it was due to an incorrect firewall configuration on the host side.
Since you are replacing a traditional installation, you can start with host networking. This mode gives you the same connectivity experience as installing on the host. A quick start is as simple as:
docker run --network host grafana/grafana
Notice there's no need to --publish or --publish-all ports, as the Grafana container now shares the host's network.
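If you use docker-compose, a minimal sketch of the same setup (the service name and compose version are just examples) would be:
version: "3"
services:
  grafana:
    image: grafana/grafana
    network_mode: host
The ports: section is omitted on purpose, since published ports are ignored when a service uses the host network.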
I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host docker daemon through the REST API by using the "host.docker.internal" DNS name, and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not work (yet; it seems it may be coming in the next version of Docker). To make matters worse, on Linux I can use network mode "host" to work around the issue, but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application so that it differs slightly based on the host platform (Windows|Mac|Linux), without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that is accessing the host's docker daemon uses the docker Python SDK and makes API calls to docker over TCP without TLS (this is used for development only).
Update w/ Solution Detail
For a little more background, there's a web application (ASP.NET Core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported docker image file. There's also an nginx container in front of all of this to allow for SSL termination and load balancing. The web application pulls out the docker image, then, using the docker daemon's HTTP API, loads the image, re-tags it, and pushes it to a private docker repository (which is running somewhere on the developer's network, external to docker). After that, it posts a message to a message queue, where a separate Python application uses the Python docker library to deploy the image to a docker swarm.
For development purposes, the applications all run as containers and thus need to interact with docker running on the host machine as a stand-alone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a Unix socket.
I worked around that issue by eventually realizing that nginx can reverse proxy HTTP traffic to a Unix socket. I set up all containers (including the dynamically loaded swarm service from the zips) to be part of an overlay network, giving them all access to each other and allowing me to hit an HTTP endpoint that controls the host machine's docker/swarm daemon over HTTP.
The last hurdle I ran into was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to run nginx as root within the container.
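For reference, a minimal sketch of that kind of nginx server block (the listen port is illustrative, not my exact config):
# proxy plain HTTP to the mounted Docker socket
server {
    listen 2375;
    location / {
        proxy_pass http://unix:/var/run/docker.sock:/;
    }
}
Combined with user root; at the top of nginx.conf so the worker processes can read and write the mounted socket.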
As far as I can tell, the docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this with a recent Linux distro (Ubuntu), and with Windows 10 Pro running Docker for Windows (2.2.0) using both WSL2 (Ubuntu and Alpine) and the Windows cmd (CLI) and PowerShell. From memory, it works with OS X too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock <image>
This mounts the socket at an identical path within the container. It means that you can access the socket using the standard docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
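As a sketch, the docker Python SDK mentioned in the question picks up the mounted socket the same way, with no special configuration:
import docker

# from_env() honors DOCKER_HOST if it is set; otherwise it falls back to
# the default Unix socket at /var/run/docker.sock
client = docker.from_env()
print(client.containers.list())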
We have two machines: one is a Windows machine and the other is a Linux machine. My application runs in a Docker container on the Linux machine. Our database runs on the Windows machine, and our application needs to get data from that Windows machine's DB.
We have given the proper data source details (IP, username, password) in our application. It works when we do not use a Docker container, but when we use a Docker container it does not work.
Can anyone help us figure out how to connect to an outside DB from a Docker-enabled application? We are totally new to Docker.
Any help would be much appreciated.
A container's default network is "bridge"; you should choose the macvlan or host network instead.
Method 1
docker run -d --net host image
This container will share your host's IP address and will be able to access your database.
Method 2
Use the docker network create command to create a macvlan network (reference here).
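A minimal sketch of that command (the subnet, gateway, parent interface, and network name here are hypothetical; match them to your own LAN):
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my-macvlan-net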
Then create your container with:
docker run -d --net YOURNETWORK image
The container will get an IP address on the same subnet as its host, using the same gateway.
There are a lot of issues that could be affecting your container's ability to communicate with your database. In the future you should compose your question with as much detail as possible. To answer this correctly you will, at a minimum, need to include the following details (the sketch after this list shows how to gather most of them):
Linux distribution name & version
Docker version
Output of docker inspect from the container
Linux firewall configuration
Network configuration
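For example (a sketch; substitute your own container name):
docker version
docker inspect <container name>
sudo iptables -L -n
ip addr show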
Is your Windows machine running on the same local network / subnet as your Linux machine? If so, please provide information about the subnet, as the default bridge set up by Docker may restrict access to local resources, whereas those over a wide area network would still be accessible.
You can try passing the --network=host option to your docker run command like so: docker run --network=host <image name>. Doing so eliminates the need to specify port mappings in your run command, as they are ignored when using the host's network.
Please edit your question and include the above requested details to get a complete answer.
I am new to Docker and have a few easy questions that I hope you can help with.
I have a Windows 10 machine with "Docker for Windows" installed. In its Hyper-V manager I can see a virtual machine called "MobyLinuxVM".
So my questions are:
1. When people talk about the "Docker Host" and the "Docker Engine", what are they in my situation?
-- I assume the "Docker Host" is my Windows PC, and the "Docker Engine" is that virtual machine inside Hyper-V.
2. If I use ipconfig on my PC, I find I have at least 2 networks and IP addresses:
(a) LAN adapter -- shows my IP is 192.168.xxx.yyy
(b) DockerNAT -- shows my IP is 10.0.75.1
Then when I try to use docker-compose.yml to create containers, I found I could ONLY use:
environment:
  - MAGENTO_HOST=10.0.75.2
  - MARIADB_HOST=10.0.75.2
to create containers that can be accessed directly (e.g. via a browser to the Magento website). So my question is:
If my machine is 10.0.75.1 within the Docker network, then what is 10.0.75.2? Why can't I use e.g. 10.0.75.3?
3. My YAML script actually creates multiple containers -- e.g. 2 Magento containers + 2 MariaDB containers + etc. When I specify their Docker 'HOST', why is it not my machine? (If we call my machine the 'docker host' and the Hyper-V virtual image the 'docker engine', as in my 1st question.)
4. Also, following from my 3rd question, I currently deploy all containers within 1 host. Is it worthwhile to use Docker Swarm, which people use to cluster multiple Docker hosts? If so, does that mean I need to use Hyper-V to create another "MobyLinuxVM"?
Thanks a lot!
1 Docker Engine + Docker Host
The Docker Engine is the group of processes that manage Docker containers. dockerd is usually the head of that process tree.
The Docker Host is the OS running the Docker Engine; in your case that is MobyLinuxVM.
Your VM host is your Windows box.
2 Docker Host IP
10.0.75.2 is most likely the address assigned to MobyLinuxVM. I don't run Docker for Windows, so I can't entirely confirm this, but searching the web seems to back it up.
3 - see 1
4 Swarm
You would need to run multiple VMs to set up a swarm. Docker Machine is the tool to use when setting up swarm instances; it allows you to manage multiple Docker instances and comes with a Hyper-V driver.
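As a sketch (the virtual switch name and machine name here are hypothetical), creating an additional node with the Hyper-V driver looks like:
docker-machine create -d hyperv --hyperv-virtual-switch "ExternalSwitch" swarm-node-1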
I'm getting familiar with Rancher and Docker, and I'm now trying to figure out whether it is possible to create multiple local custom hosts on the same physical machine. I'm running RancherOS on a local computer. Through the Rancher web UI I'm able to create a local custom host and add containers to it.
When I try to add another local custom host by copying the given command into the terminal (SSHed into the Rancher machine), it starts the process but nothing happens. The new host doesn't appear in the hosts list of the web interface, and I don't receive any error in the terminal.
I couldn't get any useful information from the Rancher documentation about this possible issue.
I was wondering whether it's simply not possible to have more than one custom virtual host on the same physical machine, or whether the command fails for some reason; in the latter case I would like to know how to debug it.
sudo docker run -d --privileged \
-v /var/run/docker.sock:/var/run/docker.sock rancher/agent:v0.8.2 \
http://192.168.1.150:8080/v1/projects/1a5/scripts/<registrationToken>
where registrationToken is replaced by the one provided by rancher.
There is nothing "virtual" about them. The agent talks to docker and manages one docker daemon, which covers the entire machine. Running multiple agents does not make sense for a variety of reasons: for example, when you type "docker run ..." on the machine, which agent is supposed to pick up that container? And they are not really isolated from each other regardless, because any of them can run privileged containers, which can then do whatever they want, affecting the others.
The only way to do what you're asking is to have actual virtual machines running on the physical machine, each with their own OS and docker daemon.
Another option might be to use Linux containers to create separated environments, each having its own IP address and running its own docker daemon.