How do Docker Compose and RabbitMQ config work together? - docker

I built two microservices and RabbitMQ and containerized them with Docker.
When I run docker-compose up with my docker-compose.yaml, I get this error in the microservice container: "MassTransit[0] Connection Failed: rabbitmq://localhost/ RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable"
Without containerizing them and running them with IIS Express instead, I have no problem.
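The error shows the service trying to reach rabbitmq://localhost/, and inside a container localhost points at that container itself, not at the RabbitMQ container. A minimal sketch of the kind of compose layout involved, assuming the MassTransit host is switched from localhost to the compose service name (all names and variables here are placeholders, not the asker's actual files):

    # docker-compose.yaml (sketch)
    version: "3.8"
    services:
      rabbitmq:
        image: rabbitmq:3-management
        ports:
          - "5672:5672"
          - "15672:15672"
      microservice-a:
        build: ./MicroserviceA        # placeholder path
        depends_on:
          - rabbitmq
        environment:
          # placeholder app setting: the app should connect to the service name, not localhost
          RabbitMq__Host: rabbitmq

With this layout, MassTransit would be configured with the host "rabbitmq" (for example cfg.Host("rabbitmq")) instead of "localhost", since compose puts both services on the same network and resolves service names via DNS.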

Related

Unable to run docker container in the Wind River operating system

Our project is a microservice application. We run 5 to 6 Docker containers using docker-compose, and this works fine on Ubuntu, CentOS, and Red Hat. I am not able to run the same on the Wind River operating system. All the containers share information over the Docker network. When I start the services using docker-compose, I get the following error.
ERROR: for my-service Cannot start service my-service: failed to create endpoint my-service on network my-net: failed to add the host (veth78f811b) <=> sandbox (vethdd9d629) pair interfaces: operation not supported

Connecting to an insecure local docker registry in an uncontrolled CI environment

I'm building a microservice that performs operations on a Docker registry.
The microservice I'm building has a test that starts a registry via the docker-registry image on Docker Hub, so the microservice can connect to it, set it up, work on it, etc.
The test fails in CI: the Docker client can't connect to the test registry because it's insecure. This is in CI and dynamic, with a different random IP/port each time, and the Docker daemon is shared by other parallel tests, so having the test edit the global JSON config and restart the Docker daemon seems like a bad solution.
Has anyone solved this? How do you test integration with docker-registry in CI? Am I doomed to modify the global Docker JSON config and restart/trigger a reload of the config?
Some specifics:
The build tool is Bazel and runs in GCB, so the test itself runs on RBE workers in Google Cloud, which are isolated and don't have network access when running the tests. I can't really configure much; it's not my machine, and it's a random machine each time for each test.
We ended up starting another container that runs its own Docker daemon inside it (without mounting the external Docker daemon socket, so it's really a second Docker daemon instance).
We start it only once we know the private registry address, and we configure that daemon to start up with the insecure-registry flag.
To let the containers communicate, we gave each container a name and put them on a shared network.
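As a rough sketch of that shape, here is one way it could be wired up with compose, assuming the registry is reachable by its container name on a shared network (image tags, names, and ports are illustrative, not the poster's actual setup):

    # sketch: a dedicated dockerd that trusts the test registry
    version: "3.8"
    services:
      test-registry:
        image: registry:2
        networks:
          - test-net
      dind:
        image: docker:dind
        privileged: true          # docker:dind requires privileged mode
        command: ["--insecure-registry=test-registry:5000"]
        networks:
          - test-net
    networks:
      test-net: {}

The test then talks to the daemon inside the dind container (over its TCP port) rather than to the shared host daemon, so no global configuration on the CI machine has to change.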

How to set up docker swarm across distributed servers?

I have set up a single-host Docker deployment using docker-compose. But now I have 4 server instances running on Vultr, each running different services.
For example,
Server 1: mongodb
Server 2: node/express
Server 3: redux
Server 4: load balancer
How can I connect all these services using docker swarm?
You should create a swarm of nodes using docker swarm init and docker swarm join. Each node is a Docker engine installed on a different host. If you have just 4 hosts, you can decide that all nodes will be managers.
Then you should deploy a Docker stack, which will deploy your Docker services (mongodb, etc.) from the docker-compose.yml file:
    docker stack deploy --compose-file docker-compose.yml <stack_name>
Docker services will run on all nodes according to the number of replicas you specify when you create each service.
If you want each service to run on a specific node, assign labels to each node and add service constraints, for example as in the sketch below.
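A minimal sketch of what those constraints could look like in a swarm-mode docker-compose.yml, assuming a node has been labelled with docker node update --label-add role=db <node> (the label names and images are illustrative):

    version: "3.8"
    services:
      mongodb:
        image: mongo:6
        deploy:
          replicas: 1
          placement:
            constraints:
              - node.labels.role == db
      api:
        image: myorg/node-api:latest       # placeholder image
        deploy:
          replicas: 2
          placement:
            constraints:
              - node.labels.role == app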

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application that has a deployment based upon docker images.
I use gitlab ci to:
Test each service
Build each service
Dockerize each service (build a Docker image / create the container)
Run integration tests (start docker compose that starts all services on special ports, run integration tests)
Stop prod images and run new images
I did this for each service, but I ran into an issue.
When I start my Docker containers for the integration tests, this happens inside a GitLab CI task. Each task uses a Docker-based runner, and I also mount the host Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner, Docker is installed inside it, and all images are started using docker compose.
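For context, the socket mount in this kind of setup usually lives in the runner's configuration rather than in the job itself. An abbreviated sketch, assuming a Docker executor (everything except the standard paths is a placeholder):

    # /etc/gitlab-runner/config.toml (sketch, abbreviated)
    [[runners]]
      name = "docker-socket-runner"             # placeholder
      executor = "docker"
      [runners.docker]
        image = "gradle:jdk17"                  # placeholder default job image
        volumes = ["/var/run/docker.sock:/var/run/docker.sock"]

With the socket mounted this way, anything the job starts with docker compose actually runs on the host daemon, as siblings of the job container.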
One microservice listens to port 10004. Within the docker compose file there is a 11004:10004 port mapping.
My integration tests try to connect to port 11004. But this does not work right now.
When I attach to the container that ran docker compose, while it is trying to execute the integration tests, I am also not able to connect manually by calling
wget ip:port
I just get the message that it is connected, and then it waits for a response. Neither do my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my docker-in-docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting the network to host also works...
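For reference, a minimal sketch of the kind of port mapping described above (service and image names are placeholders, not the actual project files):

    # docker-compose.integration.yml (sketch)
    version: "3.8"
    services:
      some-service:
        image: myorg/some-service:latest   # placeholder
        ports:
          - "11004:10004"   # host port 11004 -> container port 10004

Because the compose containers are started by the host daemon (via the mounted socket), the container running the tests is a sibling of them; reaching a published port through the host's IP from a sibling then depends on the host's NAT/hairpin rules, which is one place where exactly this kind of hang can come from.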
So did anyone also make such an experience when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My solution for now is to use a shell runner to avoid docker-in-docker issues. It works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting in the subway, but I hope describing my issue is sufficient to compare experiences. I don't think we need source code to find bad configurations, because it works without docker in docker and on Mac.
I figured out that docker in docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner. Therefore docker-compose runs directly on my host and everything works flawlessly.
I can reuse the same runner for starting Docker images in production as I do for integration testing, so the easy fix has another benefit for me.
The result is a best practice to avoid pitfalls:
Only use docker in docker when there is a real need, for example to guarantee fast I/O communication between your host Docker image and the Docker image of interest.
Have fun using docker (in docker (in docker)) :]
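As an illustration of that fix, a job can be pinned to the shell runner via runner tags, so docker-compose runs on the host instead of inside a job container (the tag and file names are placeholders):

    # .gitlab-ci.yml (sketch)
    integration-tests:
      tags:
        - shell-runner        # placeholder tag assigned to the shell-executor runner
      script:
        - docker-compose -f docker-compose.integration.yml up -d
        - ./gradlew integrationTest
        - docker-compose -f docker-compose.integration.yml down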

Connecting to Docker container connection refused - but container is running

I am running 2 Spring Boot applications: a client and a REST API. The client communicates with the REST API, which communicates with a MongoDB database. All 3 tiers run inside Docker containers.
I launch the containers normally, specifying the exposed ports in the Dockerfile and mapping them to a port on the host machine, such as -p 7070:7070, where 7070 is a port exposed in the Dockerfile.
When I run the applications through the java -jar [application_name.war] command, they work fine and can all communicate.
However, when I run the applications in Docker containers I get a connection refused error, for example when the client tries to reach the rest-api at http://localhost:7070.
But the command docker ps shows that the containers are all running and listening on the exposed and mapped ports.
I have no clue why the containers aren't recognizing that the other containers are running and listening on their ports.
Does this have anything to do with iptables?
Any help is appreciated.
Thanks
EDIT 1: The applications, when run inside containers, work fine on my machine and don't throw any connection refused errors. The error only happens on that particular other machine.
I used container linking to solve this problem. Make sure you add --link <name>:<alias> at run time to the container you want linked. <name> is the name of the container you want to link to, and <alias> becomes the host/domain used in an entry in Spring's application.properties file.
Example:
spring.data.mongodb.host=mongodb if the alias supplied at run time is 'mongodb':
--link myContainerName:mongodb
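A compose file with a user-defined network achieves the same name resolution as --link, since services resolve each other by service name. A minimal sketch of the three tiers under that approach (image names, the REST_API_URL variable, and ports are placeholders):

    # docker-compose.yml (sketch)
    version: "3.8"
    services:
      mongodb:
        image: mongo:6
      rest-api:
        image: myorg/rest-api:latest     # placeholder
        environment:
          # equivalent to spring.data.mongodb.host=mongodb in application.properties
          SPRING_DATA_MONGODB_HOST: mongodb
        ports:
          - "7070:7070"
      client:
        image: myorg/client:latest       # placeholder
        environment:
          # placeholder setting: the client must target the service name, not localhost
          REST_API_URL: http://rest-api:7070

The connection refused at http://localhost:7070 in the question has the same root cause: inside the client container, localhost is the client container itself, so the link alias (or the service name above) has to be used as the host.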
