Running a Couchbase cluster with multiple nodes in Docker on Windows 10

I created a Couchbase 4.0 Docker container with a single node on Windows 10. I added the node's IP to the host machine's loopback interface and forwarded the port in VirtualBox so that the Couchbase client in my app, running on the host, could connect to the node in the cluster. I was able to connect and do DB operations while the cluster had a single node.
However, when I created a multiple-node cluster in Docker on Windows 10, I was no longer able to do DB operations. The Golang app running on the host reported "unable to complete action after 6 attempts" on get and set operations.
How do I run a Couchbase cluster of multiple nodes in Docker on the same Windows host so that I can connect to the cluster and do DB operations from an app running on the host machine?

If your app is not running inside the Docker host, as far as I know, you can't do this (I would LOVE to be proven wrong by a Docker expert).
Couchbase clients need access to every node in the cluster, and with Docker you can only forward one container to a given port outside the host. (FYI, there is a tool called SDK Doctor, sdk-doctor, which you can use to diagnose connectivity/networking issues.)
I would suggest running your Golang app inside the Docker host; using docker-compose is the way this is typically done.
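As a rough illustration only (the service name, image tag, and environment variable below are hypothetical, not taken from the question), a docker-compose file that puts the app and a Couchbase node on the same compose network could look like this, with the app reaching the node by its service name instead of a forwarded host port:

version: "3"
services:
  couchbase:
    image: couchbase:community
    ports:
      - "8091:8091"              # expose the admin UI to the host; the app talks to the node over the compose network
  app:
    build: .
    environment:
      - COUCHBASE_HOST=couchbase # hypothetical variable the Go app would read to build its connection string
    depends_on:
      - couchbase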
Also, I would highly suggest upgrading to a more recent version of Couchbase.

Related

Docker desktop networking Windows and Linux nodes

I have a Windows service within a Docker container that needs to access a MySQL database in a Linux container on the same machine (a dev machine currently).
I thought of creating an overlay network across the two "nodes" on the same machine, but this isn't possible because creating the swarm worker fails on Windows after creating the swarm master on Linux.
Is this possible, and if not, what is the easiest way of doing this? The purpose of the Windows container is simply to deploy to a test environment to gather data. Do I need to deploy the Linux container to the cloud or to another machine so that the Windows container can communicate with it?
You can simply use docker-compose; it will create the network automatically. Replace the MySQL host with the MySQL service name you defined in the compose yaml file. For detailed information, please refer to the docker-compose documentation.
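A rough sketch of that idea (service names, image tags, and variables below are made up for illustration): compose places both services on one default network, so the application reaches MySQL at the hostname "db" rather than an IP address:

version: "3"
services:
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=example   # placeholder credentials for a dev setup
  app:
    build: .
    environment:
      - DB_HOST=db                    # hypothetical setting your service reads for the MySQL host
    depends_on:
      - db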

Cannot connect to RabbitMQ Server from another containerised .Net core RabbitMQ client

I have been able to set up a containerised RabbitMQ server and reach into it with basic .NET Core clients, and I have checked that message send and receive work using the management portal at http://localhost:15672/.
But I am having real frustrations establishing a connection when I also containerise my Sender/Receiver .NET Core clients. I have set up an explicit "shipnetwork", so all containers in the following docker-compose deployment should see each other.
This is the error I get in the sender attempting the connection:
My SendRabbit .NET Core app is as follows. This code was working on my local Windows 10 development machine, with a host of 'localhost', against the RabbitMQ server running as a container. But when I change this to a [linux] Docker project and set the host to "rabbitmq", to correspond to the service name in the docker-compose, I just get endpoint connection exceptions within my Sender container.
I have also attempted the same RabbitMQ server and sender image with the same docker-compose on a Google Cloud Linux virtual machine, and I get the same errors. So I do not think the issue is with the Windows 10 Docker hosting VM environment.
I thought Docker was going to make development and deployment of microservices easier, but setting up a basic RabbitMQ connection is proving to be a real pain.
I have thought that maybe the RabbitMQ server is not up and running, so perhaps it was ambitious to put it in the same docker-compose. But I have checked by running my SendRabbit container
$ docker run --network shipnetwork sendrabbit
some minutes later, but I still get the same connection error.
It turned out to be a Docker networks issue!
When I checked the actual docker networks, I had:
bridge
host
shipnetwork
rabbitship_shipnetwork
docker-compose was actually creating a 'new' network, rabbitship_shipnetwork, every time it was spun up, and placing the RabbitMQ server on that network. The network is named by prepending the directory (project) name to the name in the compose yaml. So I was using the wrong network in my senders, and I should have been using
$ docker run --network rabbitship_shipnetwork sendrabbit
This works fine and delivers messages to the RabbitMQ server.
So I don't feel that docker-compose is actually very helpful in creating networks, since it is sensitive to the directory name it is run in! It's unlikely that I can build an app's Docker files and deploy all apps from a single directory, especially when RabbitMQ has to be started separately before senders and receivers can use it.
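If it helps, compose can be told to use a fixed network name instead of the project-prefixed one. A rough sketch, assuming a compose file format (3.5 or later) that supports the top-level name attribute on networks:

version: "3.5"
services:
  rabbitmq:
    image: rabbitmq:3-management
    networks:
      - shipnet
networks:
  shipnet:
    name: shipnetwork   # fixed name, so "docker run --network shipnetwork sendrabbit" finds the same network

Alternatively, docker-compose -p <project-name> pins the project prefix explicitly instead of taking it from the directory.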

Add a VM running Ubuntu as a worker node in Docker Swarm

I'm trying to create a swarm consisting of 2 nodes. Using docker-machine it is easy to provision a VM and add it as a node, but I want to create a swarm using an Ubuntu VM as a worker and Docker on Windows as the manager, without using docker-machine.
Running
docker swarm init
on Windows (the host machine) gives me a token to add a worker. I have Ubuntu running in VirtualBox, and Docker is installed in the VM as well. I'm able to ssh into it and run commands, but whenever I try to add this Ubuntu machine as a worker node using the token generated on the Windows machine, it says
Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
I think it is related to port forwarding. I'm forwarding the VM's port 22 to 127.0.0.1:22 in VirtualBox for connecting via SSH, and I have tried several forwarding combinations, but the VM is still not able to join as a node in the swarm I created on Windows.
Any guidance will be of great value.
Check whether you have connectivity from your Ubuntu VM to your Windows machine. First, ssh into Ubuntu and check that:
Windows is addressable, for example using ping windows-ip.
If it is not, make sure both are on the same network, for example by setting a bridged network in your VM configuration.
Windows is listening on the ports needed by Docker Swarm:
TCP port 2376 for secure Docker client communication. This port is required for Docker Machine to work. Docker Machine is used to orchestrate Docker hosts.
TCP port 2377. This port is used for communication between the nodes of a Docker Swarm or cluster. It only needs to be opened on manager nodes.
TCP and UDP port 7946 for communication among nodes (container network discovery).
UDP port 4789 for overlay network traffic (container ingress networking).
You can check this using telnet windows-ip port.
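As a quick sketch of those checks from inside the Ubuntu VM (replace windows-ip with the manager's actual address):

# confirm the Windows manager is reachable at all
ping -c 3 windows-ip
# confirm the swarm ports are open (telnet or netcat both work)
telnet windows-ip 2377        # cluster management traffic
nc -zv windows-ip 7946        # node-to-node gossip (also runs on UDP 7946)
nc -zvu windows-ip 4789       # overlay/VXLAN traffic (UDP)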
If they are not reachable, check your Windows firewall.
I hope it helps!
I tried to create a similar Swarm with a Windows manager node but never really got it to work. You can initialize a single-node Swarm from Windows with docker swarm init. However, adding multiple worker nodes does not appear to be supported at the moment:
https://docs.docker.com/engine/swarm/swarm-tutorial/.
"Currently, you cannot use Docker Desktop for Mac or Docker Desktop for Windows alone to test a multi-node swarm".
The following options are possible:
Pure Linux swarm (Linux manager + Linux workers) which runs only Linux containers
Hybrid Swarm (Linux manager + Windows workers + Linux workers) which runs Windows and Linux containers
(Sometimes) Pure Windows Swarm using Win Server 2019 as the manager. The regular Windows updates have been known to break various features of Swarm. For example, https://github.com/moby/moby/issues/40998
Then everyone either tries workarounds or waits for the next Windows update to fix the problem.
Personally I've had good luck with a hybrid Swarm. It works fine with a simple Ubuntu manager plus standard Windows 10 workers; no need for Windows Server.
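For what it's worth, a hybrid setup looks roughly like this (the addresses, service names, and images below are illustrative, not from the question); placement constraints keep each service on the matching OS:

# on the Linux (Ubuntu) manager
docker swarm init --advertise-addr 192.168.1.10
# on each worker (Linux or Windows), using the token printed by the command above
docker swarm join --token <worker-token> 192.168.1.10:2377
# pin services to the right OS with a placement constraint
docker service create --name web --constraint node.platform.os==linux nginx:alpine
docker service create --name legacy-svc --constraint node.platform.os==windows mcr.microsoft.com/windows/servercore/iis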

Access Docker daemon on Host without knowing Host OS

I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host docker daemon using the REST Api by using the "host.docker.internal" DNS name and this works great. However, if I run the same compose file on linux, "host.docker.internal" does not work (yet, seems it may be coming in the next version of docker). To make matters worse, on Linux I can use network mode of "host" to work around the issue but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application to be slightly different based on the host platform (windows|mac|linux) without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that is accessing the docker daemon of the host is using the docker python sdk and making api calls to docker over tcp without TLS (this is used for development only).
Update w/ Solution Detail
For a little more background, there's a web application (aspnet core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported docker image file. There's also an nginx container in front of all of this to allow for ssl termination and load balancing. The web application pulls out the docker image, then using the docker daemon's http api, loads the image, re-tags the image, then pushes it to a private docker repository (which is running somewhere on the developer's network, external to docker). After that, it posts a message to a message queue where a separate python application uses the python docker library to deploy the docker image to a docker swarm.
For development purposes, the applications all run as containers and thus need to interact with Docker running on the host machine as a standalone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the Docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a Unix socket.
I worked around that issue by eventually realizing that nginx can reverse proxy HTTP traffic to a Unix socket. I set up all containers (including the dynamically loaded swarm service from the zips) to be part of an overlay network, giving them all access to each other and allowing me to hit an HTTP endpoint to control the host machine's docker/swarm daemon over HTTP.
The last hurdle I ran into was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to allow it to run as root within the container.
As far as I can tell, the Docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this with a recent Linux distro (Ubuntu), and with Windows 10 Pro running Docker for Windows (2.2.0) under both WSL2 (Ubuntu and Alpine) and the Windows cmd (CLI) and PowerShell. From memory, it works with OSX too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock
mounts the socket at an identical path within the container. This means that you can access the socket using the standard Docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
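In compose form, the same idea is just a volume entry on whichever service needs to talk to the daemon (the service and image names below are placeholders):

version: "3"
services:
  deployer:
    image: my-deployer-image   # placeholder for the container that calls the Docker API
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock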

What is the Docker Engine?

When people talk about the 'Docker Engine', do they mean both the Client and the Daemon? Or is it something else entirely?
As I see it there is a Docker Client and a Docker Daemon. The Client runs locally and connects to the Daemon, which does the actual running of the containers. The Client can connect to a remote Daemon. Are these both together the Engine? Thanks.
The Docker Engine is the Docker Daemon running on a single host, installed with the Docker Client CLI. Here are the docs that answer this specific question.
On top of that, you can have a Swarm running that joins multiple hosts to horizontally scale and provide fault tolerance. And there are numerous other projects from Docker, like their Registry, Docker Cloud, and Universal Control Plane, that are each separate from the engine.
Docker Engine is a client-server application which comprises three components.
1. Client: the Docker CLI, or the command line window, that lets us interact with Docker.
2. REST API: the client communicates with the server through a REST API; the commands issued by the client are sent to the server as REST API calls, which is why the server can be on either the local machine or a remote machine.
3. Server: the server here is the local, remote, or host machine that has a daemon process running on it, which receives the commands and creates, manages, and destroys Docker objects like images, containers, volumes, etc.
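One concrete way to see all three pieces: docker version reports the client and the server (daemon) separately, and the same daemon can be queried directly over its REST API on the local socket:

# the CLI (client) and the daemon (server) report their versions separately
docker version
# the same information straight from the daemon's REST API over the local Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version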
