How to trace RPC between two docker containers?

Suppose I run a server in container1 and a client in container2, and the client invokes an RPC function on the server. How can I trace this RPC invocation?
I know there are distributed tracing systems, but most need manual instrumentation. I want to trace the RPCs no matter what application runs in the container (i.e., the application may not be instrumented). My naive idea is to trace the RPC communication between the Docker containers. Is there any solution?
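One instrumentation-free angle is to capture the traffic itself by attaching a packet-capture container to the server's network namespace. A minimal sketch, assuming the server runs in container1 and the RPC traffic is gRPC on port 50051; the port, and the nicolaka/netshoot image, are assumptions for illustration:

# Join container1's network namespace and record its RPC traffic
# to a pcap file for later inspection (e.g. in Wireshark).
docker run --rm --net container:container1 -v "$PWD":/cap \
  nicolaka/netshoot tcpdump -i eth0 -w /cap/rpc.pcap port 50051

This sees exactly the packets the server sees, without touching either application; decoding the payloads beyond the transport level still depends on the wire format the RPC framework uses.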

Related

Cannot connect to RabbitMQ Server from another containerised .Net core RabbitMQ client

I have been able to set up a containerised RabbitMQ server and reach into it with basic .NET Core clients, and I have checked that message send and receive work using the management portal on http://localhost:15672/.
But I am having real frustrations when I also containerise my Sender/Receiver .NET Core clients, in being able to establish a connection. I have set up an explicit "shipnetwork", so all containers in the following docker-compose deployment should see each other.
This is the error I get in the sender when attempting the connection:
My SendRabbit .NET Core app is as follows. This code was working on my local Windows 10 development machine, with a host of 'localhost', against the RabbitMQ server running as a container. But when I change this to a [linux] docker project and set the host to "rabbitmq", to correspond to the service name in the docker-compose, I just get endpoint connection exceptions within my Sender container.
I have also attempted the same RabbitMQ server and Sender image with the same docker-compose on a Google Cloud Linux virtual machine, and get the same errors. So I do not think it is the Windows 10 Docker hosting VM environment that is the hassle.
I thought Docker was going to make development and deployment of microservices easier, but setting up a basic RabbitMQ connection is proving to be a real pain.
I thought that maybe the rabbitmq server was not up and running yet, so it was perhaps ambitious to put everything in the same docker-compose. But I have checked, by running my SendRabbit container
$ docker run --network shipnetwork sendrabbit
some minutes later, but I still get the same connection error.
Docker networks **** networks!
When I checked the actual docker networks, I had:
bridge
host
shipnetwork
rabbitship_shipnetwork
The docker compose was actually creating a 'new' network, rabbitship_shipnetwork, every time it was spun up, and placing the rabbitmq server on that network. The network is named by prefixing the project (directory) name to the name in the compose yaml. So I was using the wrong network in my senders, and should have been using
$ docker run --network rabbitship_shipnetwork sendrabbit
This works fine and delivers messages to the rabbitmq server.
So I don't feel that docker-compose is actually very helpful in creating networks, since it is sensitive to the directory name it is run in! It's unlikely that I can build app Docker files and deploy all apps from a single directory, especially when rabbitmq has to be started separately, before senders and receivers can use it.
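One way to avoid the prefixing, assuming Compose file format 3.5 or later, is to pin the network name with the name: key; shipnetwork and sendrabbit below are the names from this post, and the override file is just one place the key can live:

# Pin the network name so Compose does not prefix it with the
# project (directory) name; requires compose file format 3.5+.
cat > docker-compose.override.yml <<'EOF'
networks:
  shipnetwork:
    name: shipnetwork
EOF
docker compose up -d
# Ad-hoc containers can now join by the plain name:
docker run --network shipnetwork sendrabbit

Alternatively, create the network once with docker network create and mark it external in the yaml, so every compose project reuses the same network regardless of directory.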

Persist log across Docker container restarts

So I have a microservice running as a Docker container under .NET Core and logging to Application Insights on Azure as it writes large amounts of data fetched from EventHub to a SQL Server.
Every once in a while I get an unhandled SqlException that appears to be thrown on a background thread, meaning that I can't catch it and handle it, nor can I fix this bug.
The workaround has been to set the restart policy to always, so the service restarts. This works well, but now I can't track this exception in Application Insights.
I suppose the unhandled exception is written by the CLR to stderr, so it appears in the Docker logs with some grepping, but is there a way to check for this on startup and subsequently log it to Application Insights, so I can discover it without logging onto the Swarm cluster and grepping for restart info?
To persist logs,
Approach 1
Mount the container's log directory into the host machine.
Example:
docker run --name Container_1 -v /host_dir/logs:/var/log/app docker_image:version
The Docker container will write its logs to the /var/log/app directory. The logs are then persisted in the /host_dir/logs directory of the host across Docker restarts, too.
Approach 2
Configure a logging driver such as syslog or fluentd in Docker. See https://docs.docker.com/engine/admin/logging/overview/ for how to configure it.
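As a sketch of Approach 2, routing a container's stdout/stderr to a remote syslog collector; the collector address is a placeholder:

# Send the container's logs to a remote syslog endpoint so they
# survive container restarts and removal; fluentd works analogously.
docker run --name Container_1 \
  --log-driver syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  --restart always \
  docker_image:version

Note that with a non-default driver like syslog, docker logs may no longer show output locally (older Docker versions have no dual logging), so the collector becomes the single place to grep.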

publishing message to external Kafka Broker from docker container

In my IDE, I am able to run a spring-boot application that produces messages (with KafkaProducer) to an external Kafka broker. But once I have hosted my spring-boot application in a Docker container, my application can no longer publish to the broker.
Here is the error message:
o.s.k.support.LoggingProducerListener: Exception thrown when sending a message with key='null' and payload='....' to topic Category:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
org.springframework.kafka.core.KafkaProducerException:
Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
What I used to run Docker was: docker run -p 9001:9001 -d image_name, in which 9001 is my spring-boot port. I am able to POST to that port; it's just that once my message is posted, it won't get to the external broker.
I think I have the general concept that Docker containers live in isolated land, where you have to open/map a port in order to access them (like my -p 9001:9001), but does it work the same way for access out of the container? If so, can someone please show me how to run the Docker container so it is able to reach the external broker (let's say the broker URL is "192.168.1.1:9000")? I don't think I am able to modify anything on the broker right now, but my assumption is that if I can reach it via my IDE, why not in Docker? Thanks for all the help!
It was due to my ip-forwarding being 0; once that was turned on I was able to make outgoing requests.
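For anyone hitting the same wall, checking and enabling IPv4 forwarding is a standard sysctl operation, run on the Docker host rather than inside the container:

# Check whether IPv4 forwarding is enabled (0 = off, 1 = on)
sysctl net.ipv4.ip_forward
# Enable it for the running kernel ...
sudo sysctl -w net.ipv4.ip_forward=1
# ... and persist the setting across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf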

IBM Containers (Dockers) service in Bluemix start before network service is ready

I am having a problem in all my Dockers running in the IBM Containers service. The application (the docker itself, I mean) is started when the service still has not configured the network for that docker. After some seconds (maybe 20 or 30) the docker has full network connectivity. This is generating a lot of problems, as it takes about that time for both internal and external IP interfaces to be correctly configured by the system.
Currently I am inserting a sleep in all my docker applications so they wait for that time before starting to work, but I wonder if there is a way to instruct the host not to start the container until the network is ready.
Thanks
Note: This question is related to IBM Containers service, not generic Docker. That is why I don't specify any Docker version, as it is a CaaS service. Anyway, and to be precise, we run the container service using the cloud foundry extensions, not the docker command:
cf ic run --name CONTAINER_NAME -m 512 registry.ng.bluemix.net/MY_ZONE/MY_DOCKER_IMAGE
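Short of host support for this, a slightly more robust workaround than a fixed sleep is an entrypoint script that polls for connectivity and only then starts the real process. A minimal sketch; /app/start and the probe target 8.8.8.8 are placeholders for the real command and a host the container must be able to reach:

#!/bin/sh
# wait-for-net.sh: block until outbound connectivity exists, then exec the app.
until ping -c 1 -W 1 8.8.8.8 > /dev/null 2>&1; do
  echo "waiting for network ..."
  sleep 1
done
exec /app/start

This waits only as long as provisioning actually takes, instead of a worst-case constant.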

What is the Docker Engine?

When people talk about the 'Docker Engine', do they mean both the Client and the Daemon? Or is it something else entirely?
As I see it, there is a Docker Client and a Docker Daemon. The Client runs locally and connects to the Daemon, which does the actual running of the containers. The Client can also connect to a remote Daemon. Are these, together, the Engine? Thanks
The Docker Engine is the Docker Daemon running on a single host, installed with the Docker Client CLI. Here are the docs that answer this specific question.
On top of that, you can have a Swarm running that joins multiple hosts to horizontally scale and provide fault tolerance. And there are numerous other projects from Docker, like their Registry, Docker Cloud, and Universal Control Plane, that are each separate from the engine.
Docker Engine is a client-server application comprising three components:
1. Client: the Docker CLI, the command-line interface through which we interact.
2. REST API: the client communicates with the server via a REST API; the commands issued by the client are sent to the server as REST calls, which is why the server can be on either the local or a remote machine.
3. Server: the local or remote host machine running a daemon process, which receives the commands and creates, manages, and destroys Docker objects such as images, containers, and volumes.
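The client/server split described above is easy to see from the CLI, since the client can point at any daemon, local or remote; the host name below is a placeholder:

# Talk to the default local daemon over the unix socket
docker -H unix:///var/run/docker.sock version
# Point the same client at a remote daemon over SSH (Docker 18.09+)
docker -H ssh://user@remote-host info

docker version itself prints separate Client and Server sections, which is exactly the split the answers above describe.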
