Graylog docker installation, /api/api-browser/ page not showing

I installed the default Graylog Docker installation as the Graylog documentation suggests. I just changed the Graylog version to 4.3.5. Here is my docker-compose.yml file:
version: '3.1'
services:
  # MongoDB: https://hub.docker.com/_/mongo
  mongo:
    image: mongo:4.2
    networks:
      - graylog
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Dlog4j2.formatMsgNoLookups=true -Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
    networks:
      - graylog
  # Graylog: https://hub.docker.com/u/graylog
  graylog:
    image: graylog/graylog:4.3.5
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
    networks:
      - graylog
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
networks:
  graylog:
    driver: bridge
It is running well, but when I navigate to the "Nodes" page and then click the "API Browser" link, it goes to http://172.18.0.4:9000/api/api-browser/ and nothing is there. If I replace the IP in the URL with localhost, it just shows a broken page.
Some people have mentioned nginx, but I can't find a good source on how to implement that. What do I need to do to view this API page?

I think you are accessing the wrong URL.
Please try "/api/api-browser/global/index.html" instead of "/api/api-browser/".
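With the compose file above (GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/), the full address would be, for example:
http://127.0.0.1:9000/api/api-browser/global/index.html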

Related

Dockerized elasticsearch and fscrawler: failed to create elasticsearch client, disabling crawler… Connection refused

I received the following error when attempting to connect Dockerized fscrawler to Dockerized elasticsearch:
[f.p.e.c.f.c.ElasticsearchClientManager] failed to create elasticsearch client, disabling crawler…
[f.p.e.c.f.FsCrawler] Fatal error received while running the crawler: [Connection refused]
When fscrawler is run for the first time (i.e., docker-compose run fscrawler), it creates /config/{fscrawler_job}/_settings.yml with the following default setting:
elasticsearch:
  nodes:
    - url: "http://127.0.0.1:9200"
This causes fscrawler to attempt to connect to localhost (i.e., 127.0.0.1). However, this fails when fscrawler is located within a Docker container, because it is attempting to connect to the localhost of the CONTAINER. This was particularly confusing in my case because elasticsearch WAS accessible as localhost, but on the localhost of my physical computer (and NOT the localhost of the container). Changing the url allowed fscrawler to connect to the network address where elasticsearch actually resides:
elasticsearch:
  nodes:
    - url: "http://elasticsearch:9200"
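To check that the service name resolves from inside the Compose network, a quick test along these lines can help (an illustrative command, assuming the service names from the compose file below and that curl is available in the fscrawler image):
docker-compose run fscrawler curl http://elasticsearch:9200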
I used the following docker image: https://hub.docker.com/r/toto1310/fscrawler
# FILE: docker-compose.yml
version: '2.2'
services:
  # FSCrawler
  fscrawler:
    image: toto1310/fscrawler
    container_name: fscrawler
    volumes:
      - ${PWD}/config:/root/.fscrawler
      - ${PWD}/data:/tmp/es
    networks:
      - esnet
    command: fscrawler job_name
  # Elasticsearch Cluster
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch2
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
1. Ran docker-compose up elasticsearch elasticsearch2 to bring up the elasticsearch nodes.
2. Ran docker-compose run fscrawler to create _settings.yml.
3. Edited _settings.yml to:
   elasticsearch:
     nodes:
       - url: "http://elasticsearch:9200"
4. Started fscrawler with docker-compose up fscrawler.

Kibana couldn't establish a connection to Elasticsearch inside docker-compose

I'm trying to run Elasticsearch and Kibana inside docker-compose.
When I bring up the containers using docker-compose up, Elasticsearch loads up fine. After it loads, the Kibana container starts up, but once it loads it is not able to see or connect to the Elasticsearch container, producing these messages:
Kibana docker Log:
{"type":"log","#timestamp":"2020-01-22T19:57:27Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch01:9200/"}
{"type":"log","#timestamp":"2020-01-22T19:57:30Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
I am not able to see the elasticsearch host from the Kibana container:
curl -X GET http://elasticsearch01:9200
throws the error quoted below:
curl: (7) Failed connect to elasticsearch01:9200; No route to host
After digging deeper, I found out this happens only on CentOS 8.
Also, on the same CentOS 8 host I am able to bring up and use standalone elasticsearch and kibana instances via systemctl services.
Am I missing something here?
Can anyone help?
docker-compose.yml:
networks:
  docker-elk:
    driver: bridge
services:
  elasticsearch01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: elasticsearch01
    secrets:
      - source: elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    restart: always
    environment:
      - node.name=elasticsearch01
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticdata:/usr/share/elasticsearch/data
    ports:
      - "9200"
    expose:
      - "9200"
      - "9300"
    networks:
      - docker-elk
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    container_name: kibana
    depends_on: ['elasticsearch01']
    environment:
      - SERVER_NAME=kibanaServer
    restart: always
    secrets:
      - source: kibana.yml
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - docker-elk
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports: ['5601:5601']
    links:
      - elasticsearch01
volumes:
  elasticdata:
    driver: local
  kibanadata:
    driver: local
secrets:
  elasticsearch.yml:
    file: ./ELK_Config/elastic/elasticsearch.yml
  kibana.yml:
    file: ./ELK_Config/kibana/kibana.yml
System/Docker Info:
OS: CentOS 8
ELK version: 7.4.0
Docker: version 19.03.4, build 9013bf583a
docker-compose: version 1.25.0, build 0a186604

Cannot set up docker-compose file to launch kibana at version 7.3.2

I am looking for a working version of a docker-compose file that starts up kibana and elasticsearch together on Docker for Mac at version 7.3.2. I've followed the most recent instructions in both the kibana and elasticsearch 7.3.2 documentation, and my docker-compose.yml file below is the union of what I gathered from both docs. (The kibana doc was the vaguest with respect to the docker-compose config.) I've also tried following other Stack Overflow articles (written for older versions), but they don't seem to work with the latest versions. I now suspect I'm missing something version specific. 7.3.1 didn't work with the same config either.
I should note that the elasticsearch portion of the file works fine; I can hit http://localhost:9200 and I get a JSON response. However, Kibana's URL (http://localhost:5601) returns "Kibana server is not ready yet" with this error:
kibana | {"type":"log","#timestamp":"2019-09-12T21:45:04Z","tags":["warning","elasticsearch","admin"],"pid":7,"message":"Unable to revive connection: http://elasticsearch:9200/"}
This is my best attempt so far:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.2
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.3.2
    ports:
      - 5601:5601
    networks:
      - esnet
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_URL: http://elasticsearch:9200
volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
networks:
  esnet:
Docker Compose automatically creates a private Docker network for you, and within that network, the names of the service blocks are valid hostnames.
When you set
ELASTICSEARCH_URL: http://elasticsearch:9200
none of your containers are named elasticsearch, so the hostname lookup fails. If you pick either node, es01 or es02, it will work:
ELASTICSEARCH_URL: http://es01:9200
(Note that you don't explicitly need a networks: definition for this to work; Compose will create a network named default for you. You also don't need to explicitly set container_name: unless you're planning on managing the same containers with non-Compose tooling.)
Use ELASTICSEARCH_HOSTS: http://es01:9200 instead of ELASTICSEARCH_URL to set this environment variable from the docker-compose.yml file. Here is the Kibana documentation about environment variable configuration: https://www.elastic.co/guide/en/kibana/current/docker.html#environment-variable-config
You need to add the network configuration to the kibana service as well:
networks:
  - esnet
and set ELASTICSEARCH_HOSTS: http://es01:9200
(note that es01 is your container name).
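Putting these answers together, the kibana service from the question might look like this (a sketch, not a tested config; es01 and esnet are taken from the question's compose file):
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.3.2
    ports:
      - 5601:5601
    networks:
      - esnet
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_HOSTS: http://es01:9200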

Graylog docker image bound to internal IP instead of 0.0.0.0

Based on the instructions on Graylog docker, I have the following docker-compose.yml to run the Graylog stack:
version: '2'
volumes:
  es_data:
  mongo_data:
  graylog_journal:
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongo:
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.12
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - network.host=127.0.0.1
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:2.4.6-1
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - ./graylog/config:/usr/share/graylog/data/config
    environment:
      # CHANGE ME!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://0.0.0.0:9000/api
      - GRAYLOG_TRANSPORT_EMAIL_ENABLED=true
      - GRAYLOG_TRANSPORT_EMAIL_HOSTNAME=localhost
      - GRAYLOG_TRANSPORT_EMAIL_PORT=25
    depends_on:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
From the container's logs, I can see that it claims the service is bound to 0.0.0.0:
2018-11-30 02:51:09,503 INFO : org.glassfish.grizzly.http.server.NetworkListener - Started listener bound to [0.0.0.0:9000]
2018-11-30 02:51:09,506 INFO : org.glassfish.grizzly.http.server.HttpServer - [HttpServer] Started.
2018-11-30 02:51:09,506 INFO : org.graylog2.shared.initializers.JerseyService - Started REST API at <http://0.0.0.0:9000/api/>
2018-11-30 02:51:09,507 INFO : org.graylog2.shared.initializers.JerseyService - Started Web Interface at <http://0.0.0.0:9000/>
But when I access the front end, it gives the following error:
Error message
Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
Original Request
GET http://172.24.0.4:9000/api/system/sessions
Status code
undefined
Full error message
Error: Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
What am I doing wrong?
Looks like it was missing a configuration in the docker-compose.yml file:
GRAYLOG_WEB_ENDPOINT_URI: http://<publicip>:9000/api
That seems to have fixed the problem.
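For the compose file above, that setting would be added to the graylog service's environment list, something like this (a sketch; <publicip> is a placeholder for whatever address clients actually use to reach the host):
  graylog:
    environment:
      # ...existing settings from the file above...
      - GRAYLOG_WEB_ENDPOINT_URI=http://<publicip>:9000/api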

Rancher running an APP that needs internet access

Rancher v1.6.10, Docker v17.06.2-ce
I'm deploying a stack via the Rancher UI. One of the Docker containers in it runs an app that connects to Dropbox over the internet, but the app isn't able to access the internet.
However, if I don't use Rancher and simply use docker-compose up natively, it all works fine.
I guess the networking that Rancher creates is the problem.
Can anyone advise?
My docker-compose file:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: es1
    environment:
      - cluster.name=idc-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - docker-elk
  idcdb:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=DriveMe
      - POSTGRES_USER=idc
      - POSTGRES_DB=idc
    volumes:
      - pgdata:/var/lib/db
  idcredis:
    image: redis:4.0
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    volumes:
      - redisdata:/var/lib/redis
  booking-service:
    environment:
      - PORT=8085
      - PROFILE=integration
    ports:
      - 8085:8085
    image: idc/idc-booking-service
    depends_on:
      - idcdb
      - idcredis
  notification-service:
    environment:
      - PORT=8087
      - PROFILE=integration
    ports:
      - 8087:8087
    image: idc/idc-notification-service
    depends_on:
      - idcredis
  analytics-service:
    environment:
      - PORT=8088
      - PROFILE=integration
    ports:
      - 8088:8088
    image: idc/idc-analytics-service
    depends_on:
      - idcredis
      - elasticsearch1
  kibana:
    image: docker.elastic.co/kibana/kibana:5.6.3
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch1:9200"
    networks:
      - docker-elk
volumes:
  pgdata: {}
  redisdata: {}
  esdata1:
    driver: local
networks:
  docker-elk:
    driver: bridge
You should specify the network when starting Docker:
--net=host
If this does not solve your problem, edit the NetworkManager configuration:
sudo gedit /etc/NetworkManager/NetworkManager.conf
comment out the following line:
#dns=dnsmasq
then restart the network manager:
sudo restart network-manager
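For reference, the docker-compose equivalent of --net=host is host networking on a service, e.g. (an illustrative sketch using the booking-service from the question; host networking bypasses Rancher's managed network, so treat it as a diagnostic step rather than a fix):
  booking-service:
    image: idc/idc-booking-service
    network_mode: host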
You could use a Rancher LB and add it to your application as follows:
1. In the stack where your application is, click the Add Service button and select Add a Load Balancer.
2. Make sure that where it says Access it is set to Public.
3. In the Request Host field, add the desired URL, such as: mylocal.dev
4. Add port 80 so the service will be accessible from the outside world on port 80.
5. Select the service you want the LB to apply to and the internal application port.
That's all :) Now you should be able to connect to mylocal.dev from the outside world.
