What is the best way to send Docker notifications to Elasticsearch?

We have Docker running containers on different servers, and we want to know when a container crashes.
We have Elasticsearch stack with Kibana.
So we are thinking about the following pipeline:
a Docker container stops
Docker sends an alert to Elasticsearch
Elasticsearch sends an alert to our Slack channel
What is the best way to do the first part, where Docker sends an alert to Elasticsearch?
Thank you

The industry standard for alerting like that is to have an external watchdog service (Nagios, Kuma, etc.) which periodically runs a health check (a GET /_cluster/health request) and verifies that the cluster status is not "red". If the request fails or the status is red, ping your Slack, PagerDuty, etc.
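As a minimal sketch of such a watchdog, assuming Elasticsearch is reachable at localhost:9200 and that you have a Slack incoming-webhook URL (both are placeholders here), a cron-driven shell script could look like this:

    #!/bin/sh
    # Poll Elasticsearch cluster health; ping Slack if it is unreachable or red.
    # ES_URL and SLACK_WEBHOOK are placeholders -- substitute your own values.
    ES_URL="http://localhost:9200/_cluster/health"
    SLACK_WEBHOOK="https://hooks.slack.com/services/XXX/YYY/ZZZ"

    # Extract the "status" field without needing jq.
    status=$(curl -s --max-time 5 "$ES_URL" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)

    if [ -z "$status" ] || [ "$status" = "red" ]; then
        curl -s -X POST -H 'Content-type: application/json' \
            --data "{\"text\":\"Elasticsearch health check failed (status: ${status:-unreachable})\"}" \
            "$SLACK_WEBHOOK"
    fi

Run it from cron every minute or two; the same pattern works for PagerDuty's events API.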

Related

How to do IBM MQ replication in Docker Swarm or Kubernetes?

I am running the MQ container on top of Docker, following the linked instructions, and my container status is up, but I am unable to reach the web UI. The logs show:
2018-09-17T20:19:59.364Z AMQ9207E: The data received from host '10.10.10.10' on channel '????' is not valid.
2018-09-17T20:19:59.364Z AMQ9492E: The TCP/IP responder program encountered an error.
Could anybody suggest how to run an IIB/MQ cluster using Docker and Kubernetes in order to achieve auto-scaling and high availability?

Docker proxy: logging URLs during build

I am struggling with a problem which is best described with a picture.
I need to somehow log all URLs requested when I run the docker build command.
Can somebody help me achieve this?
You cannot directly monitor the connections made by the build command; they are just normal connections passing through your network interface.
You may want to install tcpdump and use it to monitor a specific network interface or to filter specific HTTP requests. That is the best you can do, as far as I know.
UPDATE
If you want to monitor a build that happens inside a Docker container, you can use tcpdump as mentioned above on the network interface of that container, using its bound IP address. That way you see only the connections flowing in and out of that one container.
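As a rough sketch, assuming the default docker0 bridge interface (the name may differ on your system; check ip addr), you could capture HTTP request lines like this. Note that HTTPS payloads are encrypted, so for TLS traffic you would at best see hostnames, not full URLs:

    # Capture plain-HTTP traffic crossing the Docker bridge and show
    # request lines and Host headers (docker0 is an assumption).
    sudo tcpdump -i docker0 -A -s 0 'tcp dst port 80' \
        | grep --line-buffered -E '^(GET|POST|PUT|HEAD) |^Host: '

    # Or narrow the capture to one container's bound IP (placeholder address):
    sudo tcpdump -i docker0 -A 'host 172.17.0.2 and tcp port 80'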

Local Logstash SNS Output to goaws SNS topic in Docker

First of all, if I left anything out, please forgive me, as this is my first post.
I have Docker running goaws, and I added a separate container running a Python daemon that I wrote. The Python daemon reads from the SQS endpoint I have subscribed to my SNS topic and does a POST to a web app in another Docker container running Tomcat. All of this works perfectly in one docker-compose.yml. I can publish a message directly to my goaws SNS topic using the Python publish API, and I receive the output in Elasticsearch, which sits behind my web app. I view the Elasticsearch cluster in Kibana (yet another container I have running).
I wanted to take things a step further and add Logstash to the stack in Docker. I can't get the Logstash SNS output plugin to send a message to the goaws SNS topic. It wants to send it to sns.us-east-1.amazonaws.com, which I don't have credentials for. Does anyone have any idea what is causing this issue?

how will docker service logs fetch logs?

I know docker service logs gets logs from containers that are part of that service. But how does it fetch them? Does it fetch once and cache them somewhere, or does it fetch the logs over the network every time I issue the docker service logs command?
As mentioned in my comment and the other answer, the Docker engine always keeps the logs of the containers running on that engine, storing them in the file /var/lib/docker/containers/<container id>/<container id>-json.log. When you run docker service logs on a machine where the containers of the said service are not running, Docker pulls the logs from the remote machine over the network; it never caches them locally.
That being said, the error you're facing, received message length 1869051448 exceeding the max size 4194304, occurs because there might be a log line that is simply too long to fit in the gRPC message being sent across the network.
Solution
Specify the --tail <n> option to docker service logs, where n is the number of lines from the end of the logs you want to see, or
Specify a task ID from docker service ps instead of a service name, giving you the logs from that task alone rather than the aggregated logs from across the service replicas.
Either way, you might still hit the error if that overly long log line is in the logs you pull. Both options are shown below.
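As a quick sketch (the service name my_service is a placeholder):

    # Show only the last 100 lines of the service's aggregated logs:
    docker service logs --tail 100 my_service

    # List the service's tasks, then fetch logs for a single task by its ID:
    docker service ps my_service
    docker service logs <task id>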
By default, Docker writes container logs to:
/var/lib/docker/containers/<container id>/<container id>-json.log
This question is already answered.
For more advanced logging options, see logging drivers.
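For instance, a sketch of passing options to the default json-file driver so logs rotate (the size and file counts here are example values):

    # Run a container with rotated json-file logs: at most three 10 MB files.
    docker run -d --log-driver json-file \
        --log-opt max-size=10m --log-opt max-file=3 nginx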

What is the Docker Engine?

When people talk about the 'Docker Engine', do they mean both the Client and the Daemon? Or is it something else entirely?
As I see it, there is a Docker Client and a Docker Daemon. The Client runs locally and connects to the Daemon, which does the actual running of the containers. The Client can also connect to a remote Daemon. Are these both together the Engine? Thanks
The Docker Engine is the Docker Daemon running on a single host, installed with the Docker Client CLI. Here are the docs that answer this specific question.
On top of that, you can have a Swarm running that joins multiple hosts to horizontally scale and provide fault tolerance. And there are numerous other projects from Docker, like their Registry, Docker Cloud, and Universal Control Plane, that are each separate from the engine.
The Docker engine is a client-server application comprising three components:
1. Client: the Docker CLI, the command-line interface we use to interact with Docker.
2. REST API: the client communicates with the server through a REST API; the commands issued by the client are sent to the server as REST requests, which is why the server can be on either the local machine or a remote one.
3. Server: the local or remote host machine running the daemon process, which receives the commands and creates, manages, and destroys Docker objects such as images, containers, and volumes.
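You can see the client/daemon split directly from the CLI; for example (the remote host and port below are placeholders, and note that exposing the daemon over plain TCP without TLS is insecure):

    # Report both the Client and the Server (daemon) versions:
    docker version

    # Point the local client at a remote daemon instead of the local one:
    docker -H tcp://remote-host:2375 ps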
