First of all, if I left anything out please forgive me as this is my first post.
I have Docker running goaws, and I added a separate container running a Python daemon that I wrote. The Python daemon reads from the SQS endpoint I have subscribed to my SNS topic and does a POST to a webapp in another Docker container running Tomcat. All of this works perfectly in one docker-compose.yml. I can publish a message directly to my goaws SNS topic using the Python publish API, and I receive the output in Elasticsearch, which sits behind my webapp. I view the Elasticsearch cluster in Kibana (yet another container I have running).
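For reference, the working publish step looks roughly like the sketch below (presumably boto3 or something similar; the endpoint URL, port, region, credentials, and topic ARN here are placeholders for whatever goaws actually exposes in the compose file):

```python
import boto3

# Point the SNS client at the goaws container instead of real AWS.
# Endpoint, region, credentials and ARN are placeholders; goaws does
# not validate real AWS credentials.
sns = boto3.client(
    "sns",
    endpoint_url="http://goaws:4100",   # service name/port from docker-compose.yml
    region_name="us-east-1",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:000000000000:my-topic",  # hypothetical topic
    Message='{"event": "test"}',
)
```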
I wanted to take things a step further and add Logstash to the stack in Docker. I can't get the Logstash SNS output plugin to send a message to the goaws SNS topic. It wants to send it to sns.us-east-1.amazonaws.com, which I don't have credentials for. Does anyone have any idea what is causing this issue?
I just want to know if you have a tutorial for sending logs from a Golang app to Elasticsearch with Docker.
I want to send my logs over a TCP connection (with Logstash or Filebeat).
I would be very happy with a recommendation. Thanks!
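For the Logstash route, the pipeline itself is small. A minimal sketch, assuming the Go app writes JSON lines to TCP port 5000 and Elasticsearch is reachable as elasticsearch:9200 on the same Docker network (the port, hostname, and index name are all assumptions):

```
# logstash.conf (sketch only; port, host and index name are assumptions)
input {
  tcp {
    port  => 5000
    codec => json_lines   # one JSON log event per line from the Go app
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

Filebeat would do roughly the same job by tailing files or container output instead of listening on a TCP port.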
We have Docker running containers on different servers and we want to know when a container crashes.
We have an Elasticsearch stack with Kibana.
So we are thinking about the following pipeline:
- a Docker container stops
- Docker sends an alert to Elasticsearch
- Elasticsearch sends an alert to our Slack channel
What is the best way to do the first part, where Docker sends an alert to Elasticsearch?
Thank you
The industry standard for alerting like that is to have an external watchdog service (Nagios, Kuma, etc.) that periodically runs a health check (a GET /_cluster/health request) and verifies that the cluster status is not "red". If the request fails or the status is red, ping your Slack, PagerDuty, etc.
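A minimal sketch of that watchdog idea, assuming the requests package, an Elasticsearch reachable at http://localhost:9200, and a Slack incoming webhook (the webhook URL and the interval are placeholders):

```python
import time
import requests

ES_HEALTH_URL = "http://localhost:9200/_cluster/health"        # adjust to your cluster
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical webhook URL

def check_once():
    """Return an alert message if the cluster is unhealthy, otherwise None."""
    try:
        status = requests.get(ES_HEALTH_URL, timeout=5).json().get("status")
    except requests.RequestException as exc:
        return f"Elasticsearch health check failed: {exc}"
    if status == "red":
        return "Elasticsearch cluster status is RED"
    return None

while True:
    alert = check_once()
    if alert:
        # Slack incoming webhooks accept a simple JSON payload
        requests.post(SLACK_WEBHOOK, json={"text": alert})
    time.sleep(60)
```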
I was trying to do a quick bootstrap to see some sample data in Elasticsearch.
Here is where you do a Docker Compose setup to get an ES cluster:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Next I needed to get logstash in place. I did that with: https://www.elastic.co/guide/en/logstash/current/docker-config.html
When I curl my host (curl localhost:9200), it gives me the sample connection string, so I can tell it is exposed. Now, when I run the Logstash Docker setup from above, I noticed that during the bootstrap it can't connect to localhost:9200.
I was thinking that the private network created for Elastic is fine for the cluster and that I didn't need to add Logstash to it. Do I have to do something different to get the default Logstash container to talk to the default Elasticsearch containers?
I have been stuck on this for a while. My host system is Debian 9. I am trying to think of what the issues might be. I know that -p 9200:9200 would map the ports together, but 9200 has already been claimed by ES, so I'm not sure how I should be handling things. I didn't see anything on the website, though, which says "To link the out-of-the-box Logstash to the out-of-the-box Elasticsearch you need to do X, Y, Z".
When attempting to get a terminal into the Logstash container with -it, though, it just keeps bootstrapping Logstash and never gives me a shell to see what is going on from the inside.
What recommendations do you have?
Add --link your_elasticsearch_container_id:elasticsearch to the docker run command for Logstash. The Elasticsearch container will then be visible to Logstash at http://elasticsearch:9200, assuming you don't have TLS and the default port is used (which will be the case if you follow the docs you refer to).
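For example (the container name, image tag, and mounted pipeline are assumptions; adjust them to whatever docker ps shows for your Elasticsearch container):

```
# Sketch only: assumes your ES container is named es01 (as in the Elastic docs)
# and that ~/pipeline/logstash.conf points its elasticsearch output at
# http://elasticsearch:9200, the alias created by --link.
docker run --rm -it \
  --link es01:elasticsearch \
  -v ~/pipeline/:/usr/share/logstash/pipeline/ \
  docker.elastic.co/logstash/logstash:7.17.0
```

Note that --link is a legacy option; putting both containers on the same user-defined network (docker network connect) gives you the same name resolution.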
If you need filebeat or kibana in the next step, see this question I answered recently: https://stackoverflow.com/a/60122043/7330758
I need to connect to a remote Docker container and send a couple of commands to start PredictionIO services, but from the outside, either via the Docker API or something of my own. I am new to this, so I have looked in many places, but I have not been able to find anything to help me with this. Thank you very much.
I was investigating the Docker 1.40 API, but without great results.
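One option is to drive the Docker Engine API (that is what the 1.40 version number refers to) through the official Python SDK. The sketch below assumes the remote daemon is reachable over TCP (which you would normally protect with TLS) and that the container name and PredictionIO start command look as shown; both are guesses:

```python
import docker

# Connect to the remote Docker daemon. Host and port are assumptions, and
# exposing the daemon over plain TCP is unsafe outside a trusted network.
client = docker.DockerClient(base_url="tcp://remote-host:2375")

# Container name and command are placeholders for your PredictionIO setup.
container = client.containers.get("predictionio")
exit_code, output = container.exec_run("pio-start-all")
print(exit_code, output.decode())
```

The same exec call can also be made against the raw HTTP API, or over SSH with docker -H ssh://user@remote-host exec ... if SSH access is available.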
I've got a docker-compose.yml which, when deployed locally using either stack or compose, yields 3 services (parse-server, mongodb, web-app in nginx). I can get logs from those services using docker service logs <id>.
Using the same docker-compose.yml to deploy the stack to Amazon EC2, docker service logs <id> calls against the running services return nothing, as if I were cat'ing an empty file.
Does anybody know what could cause this and/or how I can fix it?
When you deploy a swarm to AWS using the Docker Docs buttons or via cloud, I believe it usually pipes all output to CloudWatch, organized by individual container. This is only helpful if that is how you created your swarm.
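A quick way to confirm this is to check which logging driver the containers are actually using; if it is awslogs rather than json-file or journald, docker service logs has nothing local to read and the output is in CloudWatch instead. A sketch, with the container ID as a placeholder:

```
# Default logging driver of the daemon on the EC2 node
docker info --format '{{.LoggingDriver}}'

# Logging driver actually used by one of the running containers
docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container-id>

# If the driver is awslogs, look for the corresponding CloudWatch log groups
aws logs describe-log-groups
```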