Errno::EADDRNOTAVAIL - LogStashLogger not connecting to Logstash - Ruby on Rails

I've been fighting to get logging working on a Dockerized Rails 5.1.5 application with the ELK (Elasticsearch Logstash Kibana) stack, which is Dockerized as well. I've had trouble setting it up, and I haven't been able to send a single log to Logstash from Rails.
The current problem is:
ERROR -- : [LogStashLogger::Device::TCP] Errno::EADDRNOTAVAIL - Cannot
assign requested address - connect(2) for "localhost" port 5228
At first, I thought there was a problem with my current ELK configuration. After days of debugging, I finally figured out that ELK was working correctly by sending a dummy .log file through the nc command from Ubuntu on Win10, and it worked 🎉🎉🎉🎉 !!!
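(For reference, the nc test looked roughly like this; a sketch, assuming Logstash is listening for TCP on port 5228 with the json_lines codec:)

# each line sent is parsed as one JSON event and should show up in Kibana
echo '{"message":"hello from nc"}' | nc localhost 5228
# or stream a whole file:
nc localhost 5228 < dummy.log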
Now that I know the problem is with Rails, I've been trying a different set of combinations, but I still haven't gotten it to work:
I checked that the configuration from Logstash is correctly accepting TCP, and is the right port.
Changed to a different port, both in Logstash and Rails.
I'm currently using Docker-Compose V3. I initialized ELK first, and then Rails (but the problem still creeped in)
Switched between UDP and TCP.
Did not specify a codec in the logstash.conf.
Specified the json_lines codec in the logstash.conf
I've tried specifying a link between logstash and rails in docker-compose.yml (even though links are deprecated in docker-compose v3).
I've tried bringing them together through a network in docker-compose.
I've tried specifying a depends_on logstash in the rails app in docker-compose.
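One check that can save time with all of the above is to test raw reachability from inside the app container before touching any Rails config (a sketch; it assumes the compose service names used below and that nc is installed in the app image):

docker-compose exec app nc -vz logstash 5228
# failure => networking/DNS problem; success => the Rails logging config is at fault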
I'm running out of ideas:
Here's the logging config (right now it's in development.rb):
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5228)
The logstash conf:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
And last but not least, here's the docker-compose file:
version: '3'
services:
  db:
    # This specifies a database image that will be created with root privileges.
    # Note that MYSQL_ROOT_PASSWORD is mandatory!
    # We use mariadb 10.3.5 from the docker repository instead of MySQL 5.7.21
    # because there's an apparent problem with permissions.
    # See: https://github.com/docker-library/mysql/issues/69
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_.sql:/docker-entrypoint-initdb.d/rails_.sql
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk
  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:6.2.3
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5050:5050"
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  app:
    depends_on:
      - db
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    links:
      - db
      - logstash
    volumes:
      - "./:/var/www/rails"
    ports:
      - "3001:3001"
    expose:
      - "3001"
    networks:
      - elk
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local
networks:
  elk:
    driver: bridge
Any help is greatly appreciated 😊

By default Compose sets up a single network for your app. Each
container for a service joins the default network and is both
reachable by other containers on that network, and discoverable by
them at a hostname identical to the container name.
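This name resolution is easy to verify from inside a running container (a hypothetical session; the service names come from the compose file above):

docker-compose exec app getent hosts logstash
# 172.18.0.4      logstash    (the exact address will differ)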
According to the docker-compose.yml file, the logstash container is reachable at logstash:5228 from other containers, so the logging config on the app container should be changed to:
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'logstash', port: 5228, formatter: :json_lines, sync: true)
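To confirm the change without waiting for real traffic, a quick probe from a Rails console inside the app container can help (a sketch; it assumes the service is reachable as logstash on port 5228, as configured above):

# docker-compose exec app rails console
logger = LogStashLogger.new(type: :tcp, host: 'logstash', port: 5228)
logger.info('connectivity test')  # should arrive in Elasticsearch via Logstash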
Check that logstash.conf looks like this:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
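Once events are flowing, they should land in the default logstash-* indices, which can be checked from the host since port 9200 is published (a sketch):

curl -s 'http://localhost:9200/logstash-*/_search?size=1&pretty'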

Related

docker(-compose): access wikijs container only through nginx-proxy-manager; 502 Bad Gateway

I'm running a server that I want to set up to provide several web services. One service is WikiJS.
I want the service to only be accessible through nginx-proxy-manager via a subdomain, and not by directly accessing the IP (and port) of the server.
My attempt was:
version: "3"
services:
nginxproxymanager:
image: 'jc21/nginx-proxy-manager:latest'
restart: unless-stopped
ports:
# These ports are in format <host-port>:<container-port>
- '80:80' # Public HTTP Port
- '443:443' # Public HTTPS Port
- '8181:81' # Admin Web Port
# Add any other Stream port you want to expose
# - '21:21' # FTP
# Uncomment the next line if you uncomment anything in the section
# environment:
# Uncomment this if you want to change the location of
# the SQLite DB file within the container
# DB_SQLITE_FILE: "/data/database.sqlite"
# Uncomment this if IPv6 is not enabled on your host
# DISABLE_IPV6: 'true'
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt
networks:
- reverseproxy-nw
db:
image: postgres:11-alpine
environment:
POSTGRES_DB: wiki
POSTGRES_PASSWORD: ###DBPW
POSTGRES_USER: wikijs
logging:
driver: "none"
restart: unless-stopped
volumes:
- db-data:/var/lib/postgresql/data
networks:
- reverseproxy-nw
wiki:
image: requarks/wiki:2
depends_on:
- db
environment:
DB_TYPE: postgres
DB_HOST: db
DB_PORT: 5432
DB_USER: wikijs
DB_PASS: ###DBPW
DB_NAME: wiki
restart: unless-stopped
ports:
- "3001:3000"
networks:
- reverseproxy-nw
volumes:
db-data:
networks:
reverseproxy-nw:
external: true
In nginx-proxy-manager I then tried to use "wikijs" as the forwarding host.
The service is accessible if I try http://publicip:3001, but not via the assigned subdomain in nginx-proxy-manager. I only get a 502, which usually means that nginx-proxy-manager cannot access the given service.
What do I have to change to make the service available under the domain but not from http://publicip:3001?
Thanks in advance.
Ok, I finally found out what my conceptual problem was:
I needed to create a network bridge for the two containers. It was as simple as specifying the driver of the network:
networks:
  reverseproxy-nw:
    driver: bridge
This way the wikijs container is only available through nginx, as I want it to be.
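One more note on the original goal (reachable via the subdomain but not via http://publicip:3001): once both containers share the network, the ports: mapping on the wiki service can simply be dropped; nginx-proxy-manager still reaches it container-to-container. A sketch, assuming the service definition above:

wiki:
  image: requarks/wiki:2
  # no "ports:" entry: port 3000 is still reachable inside the
  # network as wiki:3000, but is no longer published on the host
  networks:
    - reverseproxy-nw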

Docker ERROR 2002 (HY000): Can't connect to MySQL server on 'db' (115)

I'm using Docker with https://github.com/markshust/docker-magento. When I follow the steps and try to import the db, I get the error ERROR 2002 (HY000): Can't connect to MySQL server on 'db' (115).
I tried this solution: ERROR 2002 (HY000): Can't connect to MySQL server on 'db' (115).
My yml file:
## Mark Shust's Docker Configuration for Magento
## (https://github.com/markshust/docker-magento)
##
## Version 41.0.2

## To use SSH, see https://github.com/markshust/docker-magento#ssh
## Linux users, see https://github.com/markshust/docker-magento#linux
## If you changed the default Docker network, you may need to replace
## 172.17.0.1 in this file with the result of:
## docker network inspect bridge --format='{{(index .IPAM.Config 0).Gateway}}'

version: "3"

services:
  app:
    image: markoshust/magento-nginx:1.18-5
    ports:
      - "80:8000"
      - "443:8443"
    volumes: &appvolumes
      - ~/.composer:/var/www/.composer:cached
      - ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
      - ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
      - appdata:/var/www/html
      - sockdata:/sock
      - ssldata:/etc/nginx/certs
    extra_hosts: &appextrahosts
      ## M1 Mac support to fix Docker delay, see #566
      - "app:172.17.0.1"
      - "phpfpm:172.17.0.1"
      - "db:172.17.0.1"
      - "redis:172.17.0.1"
      - "elasticsearch:172.17.0.1"
      - "rabbitmq:172.17.0.1"
      ## Selenium support, replace "magento.test" with URL of your site
      - "magento.test:172.17.0.1"
  phpfpm:
    image: markoshust/magento-php:7.2-fpm
    volumes: *appvolumes
    env_file: env/phpfpm.env
  db:
    image: mariadb:10.1
    command: mysqld --innodb_force_recovery=6 --lower_case_table_names=1 --skip-ssl --character_set_server=utf8mb4 --explicit_defaults_for_timestamp
    ports:
      - "3306:3306"
    env_file: env/db.env
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    image: redis:6.0-alpine
    ports:
      - "6379:6379"
  elasticsearch:
    image: markoshust/magento-elasticsearch:7.9.3-1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "discovery.type=single-node"
      ## Set custom heap size to avoid memory errors
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      ## Avoid test failures due to small disks
      ## More info at https://github.com/markshust/docker-magento/issues/488
      - "cluster.routing.allocation.disk.threshold_enabled=false"
      - "index.blocks.read_only_allow_delete"
  rabbitmq:
    image: rabbitmq:3.8.22-management-alpine
    ports:
      - "15672:15672"
      - "5672:5672"
    volumes:
      - rabbitmqdata:/var/lib/rabbitmq
    env_file: env/rabbitmq.env
  mailcatcher:
    image: sj26/mailcatcher
    ports:
      - "1080:1080"
  ## Selenium support, uncomment to enable
  #selenium:
  #  image: selenium/standalone-chrome-debug:3.8.1
  #  ports:
  #    - "5900:5900"
  #  extra_hosts: *appextrahosts

volumes:
  appdata:
  dbdata:
  rabbitmqdata:
  sockdata:
  ssldata:
Remember that you will need to connect to the running Docker container, so you probably want to use TCP instead of a Unix socket. Check the output of the docker ps command and look for running MySQL containers. If you find one, then use the mysql command like this: mysql -h 127.0.0.1 -P <mysql_port> (you will find the port in the docker ps output). If you can't find any running MySQL container in the docker ps output, then try docker images to find the MySQL image name and try something like this to run it: docker run -d -p 3306:3306 tutum/mysql, where "tutum/mysql" is the image name found in docker images.
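For example, a session along these lines (hypothetical; the actual port comes from the docker ps output and the credentials from env/db.env):

docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Ports}}'
mysql -h 127.0.0.1 -P 3306 -u magento -p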

Trying to connect all my docker containers to a separate MariaDB container

So, I've set up several container apps that use MariaDB as their db backend, using docker-compose.
Containers are set up as needed, and therefore MariaDB gets installed each time on every container that uses the db.
For example, I have some containers (PHPMyAdmin, NGiNX-PM, etc.) that use MariaDB, and they, in turn, have a version of it installed within their container. I also have a separate container (MariaDB) that I would rather have shared amongst the other containered apps and, thereby, I'd only have to maintain one version of the db.
I've searched for a solution, but no luck. Needless to say, I'm a noob at docker.
The only thing I can come up with is that all the apps would need to be installed through the same docker-compose.yaml file to use the same db? That would make for a very long file if I had many containers running, and I'd prefer to have a directory per app with all of that app's contents in one location.
I'm sure there is a way, I just haven't been able to figure it out.
This is the setup I've tried, but I've been unable to get it to work:
(/docker/apps/mariadb/mariadb.yml)
version: '3.9'

networks:
  NET:
    external: true

services:
  #############################################################################################
  # MariaDB (docker-compose -f mariadb.yml up -d)                                             #
  #############################################################################################
  mariadb:
    image: jsurf/rpi-mariadb:latest
    restart: unless-stopped
    environment:
      - TZ=${TIMEZONE}
      - MYSQL_DATABASE=dockerApps
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    volumes:
      - $HOME/docker/apps/mariadb/db:/var/lib/mysql
    expose:
      - '3306'
    networks:
      - NET
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'

networks:
  NET:
    external: true

services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                 #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    depends_on:
      - mariadb
(/docker/apps/phpmyadmin/phpmyadmin.yml)
version: "3.9"
networks:
NET:
external: true
services:
#############################################################################################
# phpMyAdmin (docker-compose up -d -OR- docker-compose -f phpmyadmin.yml up -d) #
#############################################################################################
phpmyadmin:
image: phpmyadmin:latest
container_name: phpMyAdmin
restart: unless-stopped
environment:
PMA_HOST: mariadb
PMA_USER: root
PMA_PASSWORD: ${MYSQL_PASSWORD}
volumes:
# Must add ServerName directive to end of file "ServerName 127.0.0.1"
- $HOME/docker/apps/phpmyadmin/apache2.conf:/etc/apache2/apache2.conf
ports:
- '8004:80'
networks:
- NET
Any help in this matter is greatly appreciated.
Ok, so after some more reading and testing, I've found the answer to my issue. I was assuming that "depends_on" was supposed to somehow connect the containers. Not true!
I found that "external_links" is the correct way of connecting them.
So, my final docker-compose file looks like this:
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'

networks:
  NET:
    external: true

services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                 #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    external_links:
      - mariadb
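Two things worth noting with this setup. First, a network marked external: true is never created by docker-compose, so it has to exist before any of the files are brought up. Second, once the containers share that network, connectivity can be checked directly (a sketch; ping may not be installed in every image):

# create the shared network once, before docker-compose -f ... up -d
docker network create NET
# verify that the proxy container can resolve and reach the db container
docker exec NGiNX_Proxy_Manager ping -c 1 mariadb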

Sending Logs to Logstash in ELK using Ruby on Rails Docker Image

I've been following these tutorials, repos, docs, and everything:
https://medium.com/@AnjLab/how-to-set-up-elk-for-rails-log-management-using-docker-and-docker-compose-a6edc290669f
https://github.com/OrthoDex/Docker-ELK-Rails
http://ericlondon.com/2017/01/26/integrate-rails-logs-with-elasticsearch-logstash-kibana-in-docker-compose.html
https://manas.tech/blog/2015/12/15/logging-for-rails-apps-in-docker.html
https://logz.io/blog/docker-logging/
https://www.youtube.com/watch?v=KR2FZiqu57I
https://dzone.com/articles/docker-logging-with-the-elk-stack-part-i
Docker container cannot send log to docker ELK stack
Well, you get the point. I have several more, which I've omitted for brevity.
Unfortunately, either they're outdated, don't show the whole picture, or my config is bad (I think it's the latter one).
I honestly don't know if I'm missing anything.
I'm currently using docker-compose version 3 with the sebp/elk image. I'm able to boot everything and access Kibana, but I am not able to send the logs to Logstash so they get processed and sent to Elasticsearch.
I've tried these approaches to no avail:
Inside application.rb
1) Use Lograge and send it to port 5044 (which apparently is the one Logstash is listening on):
config.lograge.enabled = true
config.lograge.logger = LogStashLogger.new(type: :udp, host: 'localhost', port: 5044)
2) Setting it to STDOUT, and let Docker process it as gelf and send it to Logstash:
logger = ActiveSupport::Logger.new(STDOUT)
logger.formatter = config.log_formatter
config.logger = ActiveSupport::TaggedLogging.new(logger)
And mapping it back to the compose file:
rails_app:
  logging:
    driver: gelf
    options:
      gelf-address: "tcp://localhost:5044"
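One caveat with this approach: gelf-address is dialed by the Docker daemon on the host, not from inside the Rails container, so "localhost" here refers to the host machine and the port must actually be published there. Depending on the Docker version, the gelf driver may also only accept udp:// addresses, which is worth checking. A quick host-side check (a sketch):

# run on the Docker host, not inside a container
nc -vz localhost 5044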
3) I've tried
Other Errors I've encountered:
Whenever I try to use config.lograge.logger = LogStashLogger.new(type: :udp, host: 'localhost', port: 5044), I get:
Errno::EADDRNOTAVAIL - Cannot assign requested address - connect(2) for "localhost" port 5044.
Funny thing, this error may disappear from time to time.
Another problem is that whenever I try to create a dummy log entry, I receive an Elasticsearch Unreachable: [http://localhost:9200...] error. This happens inside the container... I don't know if it can't connect because the URL is only exposed outside of it, or if there's another error. I can curl localhost:9200 and receive a positive response.
I was checking the sebp/ELK image, and I see that it's using Filebeat. Could that be the reason why I am not able to send the logs?
I'd appreciate any kind of help!
Just in case, here's my docker-compose.yml file:
version: '3'
services:
  db:
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_cprint.sql:/docker-entrypoint-initdb.d/rails_cprint.sql
  elk:
    image: sebp/elk:623
    ports:
      - 5601:5601
      - 9200:9200
      - 5044:5044
      - 2020:2020
    volumes:
      - elasticsearch:/var/lib/elasticsearch
  app:
    build: .
    depends_on:
      - db
      - elk
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    logging:
      driver: gelf
      options:
        gelf-address: "tcp://localhost:5044"
    links:
      - db
    volumes:
      - "./:/var/www/cprint"
    ports:
      - "3001:3001"
    expose:
      - "3001"
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local
Don't use "localhost" in your configs. In the docker network, the service name is the host name. Use "elk" instead, for example:
config.lograge.logger = LogStashLogger.new(type: :udp, host: 'elk', port: 5044)
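To confirm the hostname fix independently of Rails, a one-off probe from inside the app container can help (a sketch; it assumes nc is available in the app image):

docker-compose exec app sh -c 'echo "{\"message\":\"udp test\"}" | nc -u -w1 elk 5044'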

Docker-compose re-write file for version 2 to version 1

I have an elk docker-compose file that I think cannot run on my server because the docker-compose version is too old...
$ docker-compose -version
docker-compose version 1.6.2, build 4d72027
And here is my docker-compose file...
version: '2'
services:
  elasticsearch:
    image: elasticsearch:5
    command: elasticsearch
    environment:
      # This helps ES out with memory usage
      - ES_JAVA_OPTS=-Xmx1g -Xms1g
    volumes:
      # Persist elasticsearch data to a volume
      - elasticsearch:/usr/share/elasticsearch/data
      # Extra ES configuration options
      - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    image: logstash:5
    command: logstash -w 4 -f /etc/logstash/conf.d/logstash.conf
    environment:
      # This helps Logstash out if it gets too busy
      - LS_HEAP_SIZE=2048m
    volumes:
      # volume mount the logstash config
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf
      - /Users/rriviere/workspace/parks-dpe/activities-api-v1/app-logs/:/tmp/app-logs
    ports:
      # GELF port for Docker logs
      - "12201:12201/udp"
      # UDP port for syslogs
      - "5000:5000/udp"
      # Default TCP port
      - "5001:5001"
    links:
      - elasticsearch
  kibana:
    image: kibana:5
    environment:
      # Point Kibana to the elasticsearch container
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  kopf:
    image: rancher/kopf:v0.4.0
    ports:
      - "8080:80"
    environment:
      KOPF_ES_SERVERS: "elasticsearch:9200"
    links:
      - elasticsearch
volumes:
  elasticsearch:
Whilst I'm not looking for an exact answer here, can someone help me with what is required to create a docker-compose v1 file that would do the same things?
Thanks
You'd better check the docs for more details, but just looking at your file, I think if you remove the following lines then you are good to go:
version: '2'
services:
Version 2 has some additional features which I don't see in your case, so you are pretty much compatible with v1.
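For completeness, removing those two lines also means removing one level of indentation, since in v1 the services sit at the top level. A sketch of the start of the converted file (note that v1 has no top-level volumes: key either, so the named elasticsearch volume would have to become a host path; ./esdata below is an assumed location):

elasticsearch:
  image: elasticsearch:5
  command: elasticsearch
  environment:
    # This helps ES out with memory usage
    - ES_JAVA_OPTS=-Xmx1g -Xms1g
  volumes:
    # named volumes are v2+; a host path works in v1
    - ./esdata:/usr/share/elasticsearch/data
    - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  ports:
    - "9200:9200"
    - "9300:9300"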
