Hello, I am unable to send HTTP requests to my Logstash http input; I get a "server returned no response" message. I am able to send requests to my TCP endpoint using cat, and I am also able to send HTTP requests to the endpoint from inside the container via docker exec -it. I don't know what else to try at this point.
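For reference, my tests look roughly like this (placeholder payload; assuming curl is available inside the Logstash image):

# from the host: no response
curl -XPOST http://localhost:8080 -d '{"message": "test"}'

# from inside the container: works
docker exec -it <logstash-container> curl -XPOST http://localhost:8080 -d '{"message": "test"}'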
I am using:
opensearchproject/opensearch-dashboards:latest as Kibana,
opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2 as Logstash,
opensearchproject/opensearch:1.2.3 as Elasticsearch.
My docker-compose file looks as follows:
version: '3.7'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/opensearch.yml:/usr/share/elasticsearch/config/opensearch.yml:ro,z
      - elasticsearch:/usr/share/elasticsearch/data:z
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - node.name=elasticsearch
      - discovery.type=single-node
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=elasticsearch
      # - "DISABLE_SECURITY_PLUGIN=true"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,z
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,z
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
      - "8080:8080"
    networks:
      - elk
    depends_on:
      - elasticsearch
    expose:
      - "5000"
      - "8080"
  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/opensearch_dashboards.yml:/usr/share/kibana/config/opensearch_dashboards.yml:ro,z
    ports:
      - "5601:5601"
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://elasticsearch:9200"]'
      # - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
volumes:
  elasticsearch: null
logstash/pipeline/logstash.conf:
input {
  beats {
    port => 5044
  }
  http {
    port => 8080
  }
  tcp {
    port => 5000
  }
}
output {
  opensearch {
    hosts => "https://elasticsearch:9200"
    user => "admin"
    password => "admin"
    ssl_certificate_verification => false
  }
}
After almost a week, I finally figured it out: when specifying the ports, I had to include /tcp for port 8080.
ports:
  - "5044:5044"
  - "5000:5000/tcp"
  - "5000:5000/udp"
  - "9600:9600"
  - "8080:8080/tcp"
I'm trying to build my first ELK stack in Docker for a school project.
The goal is to have an Elasticsearch container along with its volume,
a Logstash container linked to Elasticsearch along with its volume as well,
a Filebeat container to send data to Logstash,
and finally a Kibana container to visualise the logs.
The problem I'm having is that even though everything seems to be configured and working, Kibana isn't receiving anything from Elasticsearch. When I try to add a new index pattern I'm faced with this message: "No data streams, indices, or index aliases match your index pattern."
And nothing is visible in Index Management.
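A quick way to check whether any indices were created at all is to query Elasticsearch directly (assuming the default 9200 port mapping from the compose file below):

curl http://localhost:9200/_cat/indices?v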
Here's the docker-compose.yml:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.5.3
    container_name: elasticsearch
    restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      ES_JAVA_OPTS: "-Xmx750m -Xms750m"
      discovery.type: single-node
      xpack.security.enabled: false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.5.3
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    environment:
      LS_JAVA_OPTS: "-Xmx750m -Xms750m"
    networks:
      - elk
  Kibana:
    image: kibana:8.5.3
    container_name: kibana
    restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
  Filebeat:
    container_name: filebeat
    image: "docker.elastic.co/beats/filebeat:8.5.3"
    restart: always
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - elk
volumes:
  elastic_data:
networks:
  elk:
    driver: bridge
logstash/logstash.conf:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch"
    codec => "json"
    index => "logstash-%{indexDay}"
  }
  stdout { codec => rubydebug }
}
And finally the filebeat/filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
output.logstash:
  hosts: ["logstash:5044"]
Any idea what I'm doing wrong?
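Edit: one thing I'm not sure about is the index => "logstash-%{indexDay}" line — %{indexDay} is a field reference, and nothing in the pipeline sets a field with that name. A date-based index name would normally use the sprintf date syntax instead (untested sketch):

output {
  elasticsearch {
    hosts => "elasticsearch"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}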
I am trying to run a Spring application with Logstash, Elasticsearch, Kafka and Kibana.
[main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://kafka:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://kafka:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
The above error is repeated over and over in the logstash container's logs.
docker-compose.yml:
version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    #restart: always
    networks:
      - tweetapp-network
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    #restart: always
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "tweetapp-logs:1:1, Tweets:1:1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - tweetapp-network
  mongodb:
    image: mongo
    container_name: mongodb
    # restart: always
    ports:
      - "27017:27017"
    # volumes:
    #   - mongodb-volume:/data/db
    networks:
      - tweetapp-network
  springboot:
    image: tweetapp
    #restart: always
    ports:
      - "8080:8080"
    depends_on:
      - mongodb
      - kafka
      - elasticsearch
      - logstash
      - kibana
    networks:
      - tweetapp-network
  logstash:
    image: logstash:7.7.0
    container_name: logstash
    hostname: logstash
    ports:
      - "9600:9600"
    volumes:
      - .\logstash:/usr/share/logstash/pipeline/
    links:
      - elasticsearch:elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - tweetapp-network
  elasticsearch:
    image: elasticsearch:7.7.0
    container_name: elasticsearch
    hostname: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
    networks:
      - tweetapp-network
  kibana:
    image: kibana:7.7.0
    container_name: kibana
    hostname: kibana
    ports:
      - "5601:5601"
    links:
      - elasticsearch:elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - tweetapp-network
# Networks to be created to facilitate communication between containers
networks:
  tweetapp-network:
I made sure that Elasticsearch is working: when I open http://localhost:9200/ I get JSON output.
logstash.config:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["tweetapp-logs"]
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["kafka:9200"]
    index => "tweetapp"
    workers => 1
  }
}
I am new to docker-compose, Elastic search and Kafka. Any help will be appreciated.
Seems like a simple mix-up: replace kafka with elasticsearch in the hosts option of your elasticsearch output.
Try this:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["tweetapp-logs"]
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "tweetapp"
    workers => 1
  }
}
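Once Logstash can reach Elasticsearch, you can confirm the index is being written (assuming the default port mapping from the compose file):

curl http://localhost:9200/_cat/indices/tweetapp?v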
I have a setup with docker-compose and the elastic stack. My 'main' container is running a Django application (there are some more containers for metrics, certificates, and so on).
The logging itself works with this setup but I have no container labels or tags in Kibana. So I can't differentiate between logs from different containers (except when I know what I'm looking for).
How do I configure Logstash or logspout to label or tag all logs with the container they come from? Ideally with both the container image and the container id.
I tried adding a label to the container, but that didn't change anything. I also tried configuring the syslog logging driver with a tag, but that didn't work either.
I guess I have to write a specific Logstash config and do the tagging there?
Below is my current docker-compose.yml:
version: '2'
services:
  # django container
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    image: mycontainer
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'udp://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      - STDOUT=true
    links:
      - elasticsearch
    expose:
      - 5000
    depends_on:
      - elasticsearch
      - kibana
    command: 'logstash -e "input { udp { port => 5000 } } output { elasticsearch { hosts => elasticsearch } }"'
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
Any help would be appreciated, thanks!
Sorry, I'm really inexperienced with the Elastic stack, but I figured it out.
You do indeed have to provide a Logstash config with a filter; at least, that's how I got it working. Additionally, I had to switch logspout from UDP to plain syslog; I guess the UDP input doesn't forward all the metadata (for example the Docker image name).
Here are my configurations that work (there is definitely room for improvement).
logstash.conf:
input {
  syslog {
    port => 5000
    type => "docker"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:ver} +(?:%{TIMESTAMP_ISO8601:ts}|-) +(?:%{HOSTNAME:service}|-) +(?:%{NOTSPACE:containerName}|-) +(?:%{NOTSPACE:proc}|-) +(?:%{WORD:msgid}|-) +(?:%{SYSLOG5424SD:sd}|-|) +%{GREEDYDATA:msg}" }
  }
  syslog_pri { }
}
output {
  elasticsearch { hosts => "elasticsearch" }
  stdout { codec => rubydebug }
}
docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8001 --log-level debug
    restart: unless-stopped
    container_name: web
    depends_on:
      - logspout
    image: myimage
    expose:
      - 8001
    env_file:
      - ./environments/web.test.env
    labels:
      container: "web"
      com.example.service: "web"
  logspout:
    image: gliderlabs/logspout:v3.2.11
    command: 'syslog://logstash:5000'
    restart: unless-stopped
    links:
      - logstash
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    depends_on:
      - elasticsearch
      - logstash
      - kibana
  logstash:
    image: logstash:7.9.1
    restart: unless-stopped
    environment:
      - LOGSPOUT=ignore
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
    volumes:
      - ./containers/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
  kibana:
    image: kibana:7.9.1
    restart: unless-stopped
    links:
      - elasticsearch
    environment:
      - LOGSPOUT=ignore
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.9.1
    restart: unless-stopped
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - LOGSPOUT=ignore
      - node.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
For some reason I can't get this to work. I'm trying to forward /api to the API container.
Error I'm getting:
nuxt | [6:11:03 PM] Error: connect ECONNREFUSED 127.0.0.1:80
nuxt | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1083:14)
I think /api is being redirected to 127.0.0.1:80, but I don't know why.
Traefik dashboard:
https://imgur.com/mqTXE9F
nuxt.config.js:
...
axios: {
  baseURL: '/api'
},
server: {
  proxyTable: {
    '/api': {
      target: 'http://localhost:1337',
      changeOrigin: true,
      pathRewrite: {
        "^/api": ""
      }
    }
  }
},
...
docker-compose.yml:
version: '3'
services:
  reverse-proxy:
    image: traefik
    command: --api --docker
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - mynet
  nuxt:
    # build: ./app/
    image: "registry.gitlab.com/username/package:latest"
    container_name: nuxt
    restart: always
    ports:
      - "3000:3000"
    command:
      "npm run start"
    networks:
      - mynet
    labels:
      - "traefik.backend=nuxt"
      - "traefik.frontend.rule=PathPrefixStrip:/"
      - "traefik.docker.network=mynet"
      - "traefik.port=3000"
  api:
    build: .
    image: strapi/strapi
    container_name: api
    environment:
      - APP_NAME=strapi-app
      - DATABASE_CLIENT=mongo
      - DATABASE_HOST=db
      - DATABASE_PORT=27017
      - DATABASE_NAME=strapi
      - DATABASE_USERNAME=
      - DATABASE_PASSWORD=
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=strapi
      - HOST=api
      - NODE_ENV=development
    ports:
      - 1337:1337
    volumes:
      - ./strapi-app:/usr/src/api/strapi-app
      #- /usr/src/api/strapi-app/node_modules
    depends_on:
      - db
    restart: always
    networks:
      - mynet
    labels:
      - "traefik.backend=api"
      - "traefik.docker.network=mynet"
      - "traefik.frontend.rule=PathPrefixStrip:/api"
      - "traefik.port=1337"
  db:
    image: mongo
    environment:
      - MONGO_INITDB_DATABASE=strapi
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    restart: always
    networks:
      - mynet
networks:
  mynet:
    external: true
I know this is a little late, but you should remove the proxy from the webpack dev server and instead set the right rules using labels on your api service.
So if you're using Traefik v2, the label on your nuxt service should be:
labels:
  - "traefik.http.routers.nuxt.rule=Host(`myhost`)"
Then the label on your api service should be:
labels:
  - "traefik.http.routers.api.rule=Host(`myhost`) && PathPrefix(`/api`)"
I have an HTTP server running in Docker. Accessing the server at / redirects to /web, and this works fine locally. I have set up Traefik to route to the container through the xxxxxxx.com domain. However, this results in a 404 page not found, while xxxxxxxx.com/web works. How does Traefik handle this kind of redirect? Thanks in advance.
Here's my docker-compose.yml file:
version: "3"
networks:
proxy:
external: true
internal:
external: false
services:
web:
restart: always
image: odoo:10.0
labels:
- traefik.backend=web
- traefik.frontend.rule=Host:portal.sironirestaurant.com
- traefik.docker.network=proxy
- traefik.port=8069
networks:
- internal
- proxy
depends_on:
- db
ports:
- 8069:8069
volumes:
- odoo-web-data:/var/lib/odoo
- ./config:/etc/odoo
- ./addons:/mnt/extra-addons
db:
restart: always
image: postgres:9.4
networks:
- internal
labels:
- traefik.enable=false
environment:
- POSTGRES_PASSWORD=xxxxxxx
- POSTGRES_USER=xxxxxx
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
odoo-web-data:
odoo-db-data: