I've been following these tutorials, repos, docs, and everything:
https://medium.com/@AnjLab/how-to-set-up-elk-for-rails-log-management-using-docker-and-docker-compose-a6edc290669f
https://github.com/OrthoDex/Docker-ELK-Rails
http://ericlondon.com/2017/01/26/integrate-rails-logs-with-elasticsearch-logstash-kibana-in-docker-compose.html
https://manas.tech/blog/2015/12/15/logging-for-rails-apps-in-docker.html
https://logz.io/blog/docker-logging/
https://www.youtube.com/watch?v=KR2FZiqu57I
https://dzone.com/articles/docker-logging-with-the-elk-stack-part-i
Docker container cannot send log to docker ELK stack
Well, you get the point. I have several more, which I've omitted for brevity.
Unfortunately, they're either outdated, don't show the whole picture, or my config is bad (I suspect it's the latter).
I honestly don't know if I'm missing anything.
I'm currently using a version 3 docker-compose file and the sebp/elk image. I'm able to boot everything and access Kibana, but I am not able to send the logs to Logstash so that they get processed and sent to Elasticsearch.
I've tried these approaches to no avail:
Inside application.rb
1) Use Lograge and send it to port 5044 (which is apparently the one Logstash listens on)
config.lograge.enabled = true
config.lograge.logger = LogStashLogger.new(type: :udp, host: 'localhost', port: 5044)
2) Setting it to STDOUT and letting Docker process it as gelf and send it to Logstash:
logger = ActiveSupport::Logger.new(STDOUT)
logger.formatter = config.log_formatter
config.logger = ActiveSupport::TaggedLogging.new(logger)
And mapping it in the compose file:
rails_app:
  logging:
    driver: gelf
    options:
      gelf-address: "tcp://localhost:5044"
3) I've tried
Other Errors I've encountered:
Whenever I try to use the config.lograge.logger = LogStashLogger.new(type: :udp, host: 'localhost', port: 5044) I get:
- Errno::EADDRNOTAVAIL - Cannot assign requested address - connect(2) for "localhost" port 5044.
Funnily enough, this error disappears from time to time.
Another problem is that whenever I try to create a dummy log entry I receive an Elasticsearch Unreachable: [http://localhost:9200...] error. This is inside the container... I don't know if it can't connect because the URL is only exposed outside of it, or if there's another error. I can curl localhost:9200 and receive a positive response.
I was checking the sebp/elk image, and I see that it uses Filebeat. Could that be the reason I am not able to send the logs?
I'd appreciate any kind of help!
Just in case, here's my docker-compose.yml file:
version: '3'
services:
  db:
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_cprint.sql:/docker-entrypoint-initdb.d/rails_cprint.sql
  elk:
    image: sebp/elk:623
    ports:
      - 5601:5601
      - 9200:9200
      - 5044:5044
      - 2020:2020
    volumes:
      - elasticsearch:/var/lib/elasticsearch
  app:
    build: .
    depends_on:
      - db
      - elk
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    logging:
      driver: gelf
      options:
        gelf-address: "tcp://localhost:5044"
    links:
      - db
    volumes:
      - "./:/var/www/cprint"
    ports:
      - "3001:3001"
    expose:
      - "3001"
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local
Don't use "localhost" in your configs. On the Docker network, the service name is the hostname, so use "elk" instead. For example:
config.lograge.logger = LogStashLogger.new(type: :udp, host: 'elk', port: 5044)
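Putting it together (a minimal sketch that reuses the LOGSTASH_HOST variable already present in the compose file; everything else here is illustrative), point the variable at the service and read it in the Rails config:

    # docker-compose.yml (app service)
    environment:
      LOGSTASH_HOST: elk

    # application.rb
    config.lograge.logger = LogStashLogger.new(
      type: :udp,
      host: ENV.fetch('LOGSTASH_HOST', 'elk'),
      port: 5044
    )

Note also that, as far as I recall, port 5044 on the stock sebp/elk image is configured as a Beats input, so you may additionally need a plain tcp/udp input in the Logstash config for LogStashLogger's output to be understood.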
Related
I have a problem with networking in Docker. My docker-compose.yml includes the 2 services below:
webserver (frontend + backend)
database
I tried both a bridge network and the default network, but it's not working at all. The backend cannot connect to the database; it shows a "connection refused" error. Then I tried docker exec -t .. into the webserver and pinged the database; it shows "timeout".
I cannot connect to the database with its IP address (I got the database IP address from docker exec and then hostname -i), but I connect successfully using "localhost".
This is my docker-compose.yml:
version: '3.8'
services:
  postgres_server:
    container_name: postgres14-4_container
    image: postgres:14.4
    command: postgres -c 'max_connections=200'
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - '5222:5432'
    volumes:
      - db:/var/lib/postgresql14/data
    networks:
      - web_network
  webserver:
    container_name: frontend_backend_container
    image: webserver
    ports:
      - '9090:80'
      - '8081:8081'
    env_file:
      - backend_env
    depends_on:
      - postgres_server
    restart: always
    networks:
      - web_network
volumes:
  db:
    driver: local
networks:
  web_network:
    driver: bridge
To configure remote connections to Postgres, you have to adjust pg_hba.conf. For example, add:
# Remote access
host all all 0.0.0.0/0 trust
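After editing pg_hba.conf inside the container, a sketch of reloading the configuration without restarting (assuming the default postgres superuser from your compose file) could be:

    docker-compose exec postgres_server psql -U postgres -c "SELECT pg_reload_conf();"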
Where is your backend_env file? I guess it contains the host + port used to connect to the DB.
You don't need to define anything special (like the bridge network).
The webserver container should be able to reach the postgres_server via postgres_server:5432 (not localhost and not 5222).
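For example, a hypothetical backend_env (the variable names depend entirely on what your backend expects; these are only illustrative) would point at the service name and the container-side port:

    # backend_env -- illustrative names, adjust to your backend
    DB_HOST=postgres_server
    DB_PORT=5432
    DB_USER=postgres
    DB_PASSWORD=postgres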
I built a service orchestration with docker-compose that connects an Elixir app using BroadwayRabbitMQ to another container running the rabbitmq:3-management Docker image. The problem is that even though these services are on the same network (as in, I built a Docker network to support them) and the env variables are set, I receive "[error] Cannot connect to RabbitMQ broker: :unknown_host". How do I get RabbitMQ to connect to my Elixir release container?
docker-compose.yml
version: "3.8"
services:
  poll_workers_app:
    container_name: coder_workers_ex
    build:
      context: .
      dockerfile: CoderWorkersProd.Dockerfile
    volumes:
      - .:/app
    depends_on:
      - db
      - rabbitmq
    env_file:
      - config/docker.env
    ports:
      - '4000:4000'
    tty: true
    networks:
      - rabbitmq_network
  db:
    image: 'postgres:12'
    container_name: coder_workers_db
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
    restart: always
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
    ports:
      - "5432:5432"
    networks:
      - rabbitmq_network
  rabbitmq:
    hostname: rabbitmq
    image: rabbitmq:3-management
    container_name: coder_workers_rabbitmq
    env_file:
      - config/docker.env
    ports:
      - '5672:5672'
      - '15672:15672'
      - '15692:15692'
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
    networks:
      - rabbitmq_network
volumes:
  pgdata:
  rabbitmq-data: {}
networks:
  rabbitmq_network:
    external:
      name: rabbitmq_network
rabbit_report_pipeline.ex
defmodule CoderWorkers.Pipelines.RabbitReportPipeline do
  use Broadway

  require Logger

  alias CoderWorkers.Responses.RabbitResponse
  alias CoderWorkers.Cache.Responses

  @producer BroadwayRabbitMQ.Producer
  @queue "coder.worker.rabbit_report.status"
  @connection [
    username: System.get_env("RABBITMQ_USERNAME"),
    password: System.get_env("RABBITMQ_PASSWORD"),
    host: System.get_env("RABBITMQ_HOST")
  ]
.env
RABBITMQ_DEFAULT_USER=guest
RABBITMQ_DEFAULT_PASS=guest
RABBITMQ_DEFAULT_VHOST=rabbitmq
RABBITMQ_USERNAME=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_HOST=rabbitmq
error
coder_workers_ex | 20:03:25.676 [error] Cannot connect to RabbitMQ broker: :unknown_host
coder_workers_ex | 20:03:41.300 [error] Cannot connect to RabbitMQ broker: :unknown_host
coder_workers_ex | 20:04:10.856 [error] Cannot connect to RabbitMQ broker: :unknown_host
coder_workers_ex | 20:04:39.142 [error] Cannot connect to RabbitMQ broker: :unknown_host
I cannot see how you connect from coder_workers_ex to RabbitMQ, so just in case:
You can connect inside the network by using the container name as the host, not the hostname; the hostname is only used inside the container itself, as you can also read here.
For instance, attaching a terminal session to the coder_workers_ex container and executing ping rabbitmq will not work, but ping coder_workers_rabbitmq will.
If this does not help, then maybe show the code that connects the containers so we can try to help you better.
EDIT: as pointed out in the comments by David Maze, connecting can be done using either the container_name, the hostname, or the service block name. So while this answer gives you a working solution, it is not the correct answer, because this was not your problem in the first place.
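If you want to check name resolution yourself, a quick sketch (assuming getent is available in the images) is:

    docker-compose exec poll_workers_app getent hosts rabbitmq
    docker-compose exec poll_workers_app getent hosts coder_workers_rabbitmq
    # both should resolve to the same container IP on rabbitmq_network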
Your Docker setup looks correct; it won't work because you have to use runtime configuration (read about config/runtime.exs).
When you use a module attribute, in your case @connection, it will always be evaluated at compile time.
To avoid this, there is a rule of thumb:
If your configuration is not a compile-time configuration, always use a function to
fetch the configuration.
Example:
def connection() do
  [
    username: System.get_env("RABBITMQ_USERNAME"),
    password: System.get_env("RABBITMQ_PASSWORD"),
    host: System.get_env("RABBITMQ_HOST")
  ]
end
This should work as-is; however, it is recommended to store all your configuration inside config files. So all you have to do is create your runtime.exs file:
import Config

if config_env() == :prod do
  config :my_app, :rabbitmq_config,
    username: System.get_env("RABBITMQ_USERNAME"),
    password: System.get_env("RABBITMQ_PASSWORD"),
    host: System.get_env("RABBITMQ_HOST")
end
Then you can get the configuration using Application.get_env/3 or Application.fetch_env/2; remember to fetch configuration through functions.
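As a rough sketch (assuming the :my_app / :rabbitmq_config keys from the runtime.exs above), the pipeline could read the options through a private function instead of a module attribute:

    defp rabbitmq_config do
      # evaluated at runtime, each time it is called
      Application.fetch_env!(:my_app, :rabbitmq_config)
    end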
To make this easier, I would recommend using Credo; it forces you to use Application.compile_env/3 in situations where you are trying to read runtime configuration at compile time.
I've been fighting to get logging to work for a Dockerized Rails 5.1.5 application with the ELK (Elasticsearch, Logstash, Kibana) stack (which is Dockerized as well), and I've been having problems setting it up. I haven't been able to send a single log from Rails to Logstash.
The current problem is:
ERROR -- : [LogStashLogger::Device::TCP] Errno::EADDRNOTAVAIL - Cannot
assign requested address - connect(2) for "localhost" port 5228
At first, I thought there was a problem with my ELK configuration. After days spent on it, I finally confirmed that ELK was working correctly by sending a dummy .log file through the nc command using Ubuntu for Win10, and it worked 🎉🎉🎉🎉 !!!
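(For reference, the check was along these lines; the file name is just illustrative, and port 5228 is the TCP port published to the host in the compose file below:

    nc localhost 5228 < dummy.log

)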
Now that I know the problem is with Rails, I've been trying a different set of combinations, but I still haven't gotten it to work:
I checked that the Logstash configuration correctly accepts TCP, on the right port.
Changed to a different port, both in Logstash and Rails.
I'm currently using Docker-Compose V3. I initialized ELK first, and then Rails (but the problem still crept in).
Switched between UDP and TCP.
Did not specify a codec in logstash.conf.
Specified the json_lines codec in logstash.conf.
I've tried specifying a link between logstash and rails in docker-compose.yml (even though links are deprecated in docker-compose v3).
I've tried bringing them together through a network in docker-compose.
I've tried specifying a depends_on logstash in the rails app in docker-compose.
I'm running out of ideas:
Here's the logging config (right now it's in development.rb):
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5228)
The logstash conf:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
And last but not least, here's the docker-compose.yml:
version: '3'
services:
  db:
    # This specifies a MySQL image that will be created with root privileges
    # Note that MYSQL_ROOT_PASSWORD is Mandatory!
    # We specify the 5.7.21'th version of MySQL from the docker repository.
    # We are using mariadb because there's an apparent problem with permissions.
    # See: https://github.com/docker-library/mysql/issues/69
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_.sql:/docker-entrypoint-initdb.d/rails_.sql
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk
  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:6.2.3
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5050:5050"
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  app:
    depends_on:
      - db
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    links:
      - db
      - logstash
    volumes:
      - "./:/var/www/rails"
    ports:
      - "3001:3001"
    expose:
      - "3001"
    networks:
      - elk
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local
networks:
  elk:
    driver: bridge
Any help is greatly appreciated 😊
By default Compose sets up a single network for your app. Each
container for a service joins the default network and is both
reachable by other containers on that network, and discoverable by
them at a hostname identical to the container name.
According to the docker-compose.yml file, the logstash container is reachable as logstash:5228 from other containers, so the logging config in the app container should be changed to:
config.log_level = :debug
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new
config.logger = LogStashLogger.new(type: :tcp, host: 'logstash', port: 5228, formatter: :json_lines, sync: true)
Check that logstash.conf is like this:
input {
  tcp {
    port => 5228
    codec => json_lines
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
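To confirm connectivity independently of Lograge, one quick sanity check (a sketch; it assumes Ruby's standard socket library is usable inside the app container) is to open a TCP connection to the logstash service from the app container:

    docker-compose exec app ruby -rsocket -e 'TCPSocket.new("logstash", 5228); puts "connected to logstash:5228"'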
I am trying to move a working Rails app to a Docker environment.
Following the UNIX (/Docker) philosophy, I would like to have each service in its own container.
I managed to get Redis and Postgres working fine, but I am struggling to get Solr and Rails talking to each other.
In the file app/models/spree/sunspot/search_decorator.rb, when this line executes:
@solr_search.execute
the following error appears on the console:
Errno::EADDRNOTAVAIL (Cannot assign requested address - connect(2) for "localhost" port 8983):
While researching a solution, I found people just installing Solr in the same container as their Rails app, but I would rather have it in a separate container.
Here are my config/sunspot.yml
development:
  solr:
    hostname: localhost
    port: 8983
    log_level: INFO
    path: /solr/development
and docker-compose.yml files
version: '2'
services:
  db:
    (...)
  redis:
    (...)
  solr:
    image: solr:7.0.1
    ports:
      - "8983:8983"
    volumes:
      - solr-data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
    networks:
      - backend
  app:
    build: .
    env_file: .env
    environment:
      RAILS_ENV: $RAILS_ENV
    depends_on:
      - db
      - redis
      - solr
    ports:
      - "3000:3000"
    tty: true
    networks:
      - backend
volumes:
  solr-data:
  redis-data:
  postgres-data:
networks:
  backend:
    driver: bridge
Any suggestions?
Your config/sunspot.yml should have the following:
development:
  solr:
    hostname: solr # since our solr instance is linked as solr
    port: 8983
    log_level: WARNING
    solr_home: solr
    path: /solr/mycore
    # this path comes from the last command of our entrypoint as
    # specified in the last parameter for our solr container
If you see
Solr::Error::Http (RSolr::Error::Http - 404 Not Found
Error: Not Found
URI: http://localhost:8982/solr/development/select?wt=json
Create a new core using the admin interface at:
http://localhost:8982/solr/#/~cores
or using the following command:
docker-compose exec solr solr create_core -c development
I wrote a blog post on this: https://gaurav.koley.in/2018/searching-in-rails-with-solr-sunspot-and-docker
Hopefully that helps those who come here at a later stage.
When you declare services in a docker-compose file, containers are reachable on the compose network using their service name as the hostname. So your solr service will be available, inside the backend network, as solr.
What I'm seeing from your error is that the Ruby code is trying to connect to localhost:8983, while it should connect to solr:8983.
You'll probably also need to change the hostname inside config/sunspot.yml, but I don't work with Solr, so I'm not sure about this.
I am trying to set up an extensible docker production environment for a few projects on a virtual machine.
My setup is as follows:
Front end (this works as expected; thanks to Tevin Jeffery for this):
# ~/proxy/docker-compose.yml
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/etc/nginx/vhost.d'
      - '/usr/share/nginx/html'
      - '/etc/nginx/certs:/etc/nginx/certs:ro'
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
    networks:
      - nginx
  letsencrypt-nginx-proxy:
    container_name: letsencrypt-nginx-proxy
    image: 'jrcs/letsencrypt-nginx-proxy-companion'
    volumes:
      - '/etc/nginx/certs:/etc/nginx/certs'
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    volumes_from:
      - nginx-proxy
    networks:
      - nginx
networks:
  nginx:
    driver: bridge
Database: (planning to add postgres to support rails apps as well)
# ~/mysql/docker-compose.yml
version: '2'
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
    # ports:
    #   - 3036:3036
    networks:
      - db
networks:
  db:
    driver: bridge
And finally, a WordPress blog to test if everything works:
# ~/wp/docker-compose.yml
version: '2'
services:
  wordpress:
    image: wordpress
    # external_links:
    #   - mysql_db_1:mysql
    ports:
      - 8080:80
    networks:
      - proxy_nginx
      - mysql_db
    environment:
      # for nginx and dockergen
      VIRTUAL_HOST: gizmotronic.ca
      # wordpress setup
      WORDPRESS_DB_HOST: mysql_db_1
      # WORDPRESS_DB_HOST: mysql_db_1:3036
      # WORDPRESS_DB_HOST: mysql
      # WORDPRESS_DB_HOST: mysql:3036
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: wordpress
networks:
  proxy_nginx:
    external: true
  mysql_db:
    external: true
My problem is that the WordPress container cannot connect to the database. I get the following error when I try to start the WordPress container (docker-compose up):
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 22
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
wp_wordpress_1 exited with code 1
UPDATE:
I was finally able to get this working. My main problem was relying on the container defaults for the environment variables. This created an automatic data volume without a database or user for WordPress. After I added explicit environment variables to the MySQL and WordPress containers, I removed the data volume and restarted both containers. This forced the MySQL container to recreate the database and user.
To ~/mysql/docker-compose.yml:
environment:
  MYSQL_ROOT_PASSWORD: wordpress
  MYSQL_USER: wordpress
  MYSQL_PASSWORD: wordpress
  MYSQL_DATABASE: wordpress
and to ~/wp/docker-compose.yml:
environment:
  # for nginx and dockergen
  VIRTUAL_HOST: gizmotronic.ca
  # wordpress setup
  WORDPRESS_DB_HOST: mysql_db_1
  WORDPRESS_DB_USER: wordpress
  WORDPRESS_DB_PASSWORD: wordpress
  WORDPRESS_DB_NAME: wordpress
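For reference, removing the stale data volume so that MySQL re-initializes looked roughly like this (the volume name is hypothetical — check docker volume ls for the real one, since the compose files above don't name it explicitly):

    docker-compose down              # run in ~/mysql and ~/wp
    docker volume ls                 # find the old data volume
    docker volume rm <volume_name>   # remove it so mysql re-creates the database and user
    docker-compose up -d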
One problem with docker-compose is that although your application may be linked to your database, the application will NOT wait for the database to be up and ready. Here is the official Docker documentation on this:
https://docs.docker.com/compose/startup-order/
I've faced a similar problem where my test application would fail because it couldn't connect to my server, which wasn't up and running yet.
I made a workaround similar to the article linked above by running a shell script that polls the DB's address until it is available. This script should be the last CMD command in your application.
RESPONSE=$(curl --write-out "%{http_code}\n" --silent --output /dev/null "YOUR_MYSQL_DATABASE:3306")
# Until MySQL sends a 200 HTTP response, we're going to keep checking
until [ "$RESPONSE" -eq 200 ]; do
  sleep 2
  echo "MySQL is not ready yet.. retrying... RESPONSE: ${RESPONSE}"
  RESPONSE=$(curl --write-out "%{http_code}\n" --silent --output /dev/null "YOUR_MYSQL_DATABASE:3306")
done
# Once we know the server's up, we can start our application
# (enter the command that starts your application here)
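Since MySQL does not actually speak HTTP, a variation on the same idea using a plain TCP check (a sketch, assuming nc/netcat is available in the image) would be:

    # keep retrying until the MySQL port accepts TCP connections
    until nc -z "YOUR_MYSQL_DATABASE" 3306; do
      sleep 2
      echo "MySQL is not ready yet.. retrying..."
    done
    # start your application here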
I'm not 100% sure if this is the problem you're having. Another way to debug is to run docker-compose in detached mode with the -d flag and then run docker ps to see if your database is even running. If it is running, run docker logs $YOUR_DB_CONTAINER_ID to see if MySQL is giving you any errors when starting.