How to do IBM MQ replication in Docker Swarm or Kubernetes?

I am running the MQ container on Docker, following the linked instructions, and the container status is up, but I am unable to reach the web UI. The logs show:
2018-09-17T20:19:59.364Z AMQ9207E: The data received from host '10.10.10.10' on channel '????' is not valid.
2018-09-17T20:19:59.364Z AMQ9492E: The TCP/IP responder program encountered an error.
Could anybody suggest how to run an IIB/MQ cluster using Docker and Kubernetes in order to achieve auto-scaling and high availability?
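For what it's worth, the basic single-queue-manager run looks like the following; the web console listens on 9443, so if that port isn't published you won't reach the web UI. (The image name and environment variables are the ones documented for the ibmcom/mq developer image, so treat this as a sketch against that image, not necessarily your setup.) The AMQ9207E/AMQ9492E pair often indicates that something other than an MQ client, such as an HTTP request or a health-check probe, connected to the MQ listener port.

docker run -d --name qm1 \
  --env LICENSE=accept \
  --env MQ_QMGR_NAME=QM1 \
  --publish 1414:1414 \
  --publish 9443:9443 \
  ibmcom/mq

Note that queue managers are stateful, so HA in Kubernetes is typically approached with one queue manager per StatefulSet backed by persistent volumes, rather than by scaling identical replicas of a single queue manager.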

Related

How can I establish a VPN connection for a Docker container running in AWS Batch/Fargate?

I have a Dockerised Python script managed by AWS Batch/Fargate (triggered by EventBridge) which reads from a DB that requires an OpenVPN connection (it's not within the same VPC as the Docker container). How can I do this?
I found a Docker image for OpenVPN, but the documentation instructs me to use the --net argument with docker run to indicate the VPN container through which traffic should flow. I don't think I can do this within the AWS stack, since it seems to spin up the container behind the scenes.
I'd be super grateful for any help on this, thanks all!
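For reference, the --net pattern those OpenVPN images document is network-namespace sharing; on plain Docker it looks roughly like this (the image and container names are placeholders, and Fargate doesn't expose docker run, so this is only a sketch of the underlying idea):

# Start the VPN container; it needs NET_ADMIN and a tun device.
docker run -d --name vpn --cap-add=NET_ADMIN --device /dev/net/tun my-openvpn-image

# Run the app inside the VPN container's network namespace,
# so all of its traffic goes through the tunnel.
docker run --rm --net=container:vpn my-python-script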

Dask Client fails to connect to cluster when running inside a Docker container

I am running Dask Gateway in a Kubernetes namespace. I am able to connect to the Gateway using the following code when not running in a Docker container.
from dask.distributed import Client
from dask_gateway import Gateway
gateway = Gateway('http://[redacted traefik ip]')
cluster = gateway.new_cluster()
However, when I run the same code from a Docker container, I get this warning after gateway.new_cluster():
distributed.comm.tcp - WARNING - Closing dangling stream in <TLS local=tls://[local ip]:51060 remote=gateway://[redacted ip]:80/dask-gateway.e71c345decde470e8f9a23c3d5a64956>
What is the cause of this? I have also tried running with --net=host on the Docker container; that resulted in the same error.
Additional info: this doesn't appear to be a Docker networking issue, since I am able to use Coiled clusters from within a Docker container, but not the Dask-Gateway clusters.
It appears that the initial outgoing connection from the Docker container to the traefik pod succeeds, and a dask-scheduler is successfully spun up in the cluster. However, the connection then drops (a timeout?), preventing any further interaction.

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform HTTP requests to a server behind a VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM to connect to the VPN through openconnect, and I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Strangely, there is no error in the VPN connection service logs, which still show only the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but the container is not. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection breaks again, even for the host machine.
NOTE: I already checked the IP routes and there seems to be no conflict between the Docker and VPN subnets.
NOTE: running the container with --net="host" results in both the host and the container being able to access the VPN, but I would like to avoid this option, as I will eventually move to a Docker Compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue: I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to have failed. I'm now going through the documentation on DNS management with Docker and Docker Compose and their use of the host's /etc/resolv.conf file.
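One quick way to test that DNS hypothesis is to hand the container the VPN's resolver directly (the resolver IP below is a placeholder for whatever openconnect pushed; check /etc/resolv.conf on the host):

# On the host, see which resolver the VPN pushed.
cat /etc/resolv.conf

# Point the container at that resolver explicitly (IP is a placeholder).
docker run --dns 10.0.0.53 mycontainer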
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.

Running couchbase cluster with multiple nodes in docker on windows 10

I created a Couchbase 4.0 Docker container with a single node on Windows 10, added the node IP to the host machine's loopback, and forwarded the port in VirtualBox so that the Couchbase client in my app running on the host could connect to the node in the cluster. I was able to connect and perform DB operations while the cluster had a single node.
However, when I created a multiple-node cluster in Docker on Windows 10, I was not able to perform DB operations. In the golang app running on the host I got the message "unable to complete action after 6 attempts" on get and set operations.
How can I run a Couchbase cluster of multiple nodes in Docker on the same Windows host, so that I can connect to the cluster and perform DB operations from an app running on the host machine?
If your app is not running inside of Docker host, as far as I know, you can't do this (I would LOVE to be proven wrong by a Docker expert).
Couchbase clients need access to every node in the cluster, and with Docker you can only forward one container to a given port outside the host. (FYI, there is a tool called SDK Doctor which you can use to diagnose connectivity/networking issues.)
I would suggest running your golang app inside of the Docker host (using docker-compose is the way this is typically done).
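A minimal sketch of that suggestion, using a user-defined bridge network so the app can resolve every node by container name (the container names, image tag, and app image below are placeholders):

docker network create cbnet
docker run -d --name cb1 --network cbnet couchbase:community
docker run -d --name cb2 --network cbnet couchbase:community
# After joining cb1 and cb2 into a cluster via the Couchbase UI/CLI,
# run the app on the same network so it can reach both nodes directly.
docker run -d --name app --network cbnet my-golang-app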
Also, I would highly suggest upgrading to a more recent version of Couchbase.

Overlay network on Swarm Mode without Docker Machine

I currently have three hosts (docker1, docker2 and docker3) which I have not set up using Docker Machine, each one running the v1.12-rc4 Docker daemon.
I run docker swarm init on docker1, which in turn prints a docker swarm join command which I run on both docker2 and docker3. At that point, running docker info on each host contains the Swarm: active line.
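(For reference, a sketch of those commands, with a placeholder manager IP and the join token elided:)

# On docker1:
docker swarm init --advertise-addr 10.0.0.1

# On docker2 and docker3, using the token printed by init:
docker swarm join --token <token> 10.0.0.1:2377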
It is at this point that the behavior seems to differ from what I used to get with the standalone Swarm container. In particular, running docker network ls only shows me the networks on the local host, and when I try to create an overlay network, the worker nodes do not seem to be aware of it (i.e. it does not show up in their docker network ls).
I feel like I have missed out on some important information relating to the workings of the Swarm Mode as opposed to the Swarm container.
What is the correct way of setting up such a cluster without Docker Machine on Docker 1.12 while getting the overlay network feature?
I too thought this was an issue when I first started using it.
This works a little differently in 1.12-rc4: when you deploy a service to your swarm with that network attached to it, the network should then be created on the other nodes as well.
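A sketch of what that looks like (names are placeholders):

# On a manager node:
docker network create --driver overlay mynet
docker service create --name web --network mynet nginx

# On a worker node, once a task of "web" lands there:
docker network ls   # mynet now shows up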
Hope this helps!
Issue
You are using the docker command (used to communicate with your local Docker daemon) and not the "swarm" command (used to communicate with the Swarm master).
Solution
It depends on the command you used to start Swarm.
A full step-by-step tutorial (including details on how to deploy an overlay network) is detailed in this answer. I'm sure that reading it will help you ;)
With a network scope of swarm, the network is only propagated to worker nodes on an as-needed basis. If you create a service using that network and one of its tasks gets scheduled on a worker node, the network will show up in that node's docker network ls output.
With the upcoming 1.13 release, you can get a network that behaves much like the non-swarm networks by running docker network create --attachable .... That network will be valid for both services and normal containers, and will be available to all members of the cluster. As of 1.13.0-rc2, those don't seem to show up in the output of docker network ls.
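A sketch of the behavior described above (the network name is a placeholder):

# Create an attachable overlay network on a manager node.
docker network create --driver overlay --attachable mynet

# Swarm services can use it...
docker service create --name web --network mynet nginx

# ...and so can plain containers started with docker run.
docker run -it --network mynet alpine sh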
