Solr on a separate docker container with sunspot-rails

I am trying to move a working Rails app to docker environment.
Following the UNIX(/docker) philosophy I would like to have each service in its own container.
I managed to get redis and postgres working fine, but I am struggling to get solr and rails talking to each other.
In app/models/spree/sunspot/search_decorator.rb, when the line
@solr_search.execute
executes, the following error appears on the console:
Errno::EADDRNOTAVAIL (Cannot assign requested address - connect(2) for "localhost" port 8983):
While researching a solution I found people simply installing solr in the same container as their rails app, but I would rather have it in a separate container.
Here are my config/sunspot.yml
development:
  solr:
    hostname: localhost
    port: 8983
    log_level: INFO
    path: /solr/development
and docker-compose.yml files
version: '2'
services:
  db:
    (...)
  redis:
    (...)
  solr:
    image: solr:7.0.1
    ports:
      - "8983:8983"
    volumes:
      - solr-data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
    networks:
      - backend
  app:
    build: .
    env_file: .env
    environment:
      RAILS_ENV: $RAILS_ENV
    depends_on:
      - db
      - redis
      - solr
    ports:
      - "3000:3000"
    tty: true
    networks:
      - backend
volumes:
  solr-data:
  redis-data:
  postgres-data:
networks:
  backend:
    driver: bridge
Any suggestions?

Your config/sunspot.yml should have the following:
development:
  solr:
    hostname: solr # since our solr instance is linked as solr
    port: 8983
    log_level: WARNING
    solr_home: solr
    path: /solr/mycore
    # this path comes from the last command of our entrypoint,
    # as specified in the last parameter for our solr container
If you see
Solr::Error::Http (RSolr::Error::Http - 404 Not Found
Error: Not Found
URI: http://localhost:8982/solr/development/select?wt=json
create a new core using the admin interface at:
http://localhost:8982/solr/#/~cores
or using the following command:
docker-compose exec solr solr create_core -c development
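To confirm from the app container that the core is reachable at the service hostname, you can query it directly with the rsolr gem (already pulled in by sunspot). A minimal sketch, assuming the solr service name and mycore core from the compose file above:

require 'rsolr'

# Connect via the compose service name, not localhost
solr = RSolr.connect(url: 'http://solr:8983/solr/mycore')

# Match-all query; raises RSolr::Error::Http if the core does not exist
response = solr.get('select', params: { q: '*:*' })
puts response['response']['numFound']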
I wrote a blog post on this: https://gaurav.koley.in/2018/searching-in-rails-with-solr-sunspot-and-docker
Hopefully that helps those who come here at a later stage.

When you declare services in a docker-compose file, each container is reachable by its service name as a hostname. So your solr service will be available, inside the backend network, as solr.
What I'm seeing from your error is that the Ruby code is trying to connect to localhost:8983, while it should connect to solr:8983.
You'll probably also need to change the hostname inside config/sunspot.yml, but I don't work with solr so I'm not sure about this.
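If you want one sunspot.yml that works both inside and outside Compose, note that sunspot_rails runs the file through ERB before parsing it, so the hostname can come from an environment variable. A minimal sketch, where SOLR_HOST is a made-up variable you would set to solr in the app service's environment:

development:
  solr:
    hostname: <%= ENV.fetch('SOLR_HOST', 'localhost') %>
    port: 8983
    log_level: INFO
    path: /solr/mycore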


Localhost not found even if my docker containers are up?

I am relatively new to dev in general, to the Docker universe, and to Rails in particular, so I apologize in advance if this sounds like a silly question.
I am trying to run an application in a monorepo composed of 4 services (2 websites and 2 APIs) plus Postgresql, with the help of Docker Compose. The final goal is to run it on a VPS with Traefik (once I get the current app to work locally).
Here are the different services :
Postgres (through the Postgres image available in Dockerhub)
a B2C website (NextJS)
an admin website (React, created with Vite)
an API (Rails). It should be linked to the Postgres database
a Strapi API (for the content of the B2C website). Strapi has its own SQLite database. Only the B2C website requires the data coming from Strapi.
When I run the docker compose up -d command, everything seems to start correctly, but when I go to one of the websites (https://localhost:3009, 3008, or 3001) I get nothing, except for Strapi, which seems to be working.
However, I don't see any error in the logs of any of the apps, the Rails API included.
I assume that I have mistakes in my config, especially in the database.yml config of the Rails api and the docker-compose.yml file.
database.yml :
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: pg

development:
  <<: *default
  database: chana_api_v2_development

test:
  <<: *default
  database: chana_api_v2_test

production:
  <<: *default
  database: chana_api_v2_production
  username: chana
  password: <%= ENV["CHANA_DATABASE_PASSWORD"] %>
docker-compose.yml
version: '3'
services:
  # ---------------- POSTGRES -----------------
  pg:
    image: postgres:14.6
    container_name: pg
    networks:
      - chana_postgres_network
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: chana_development
      POSTGRES_USER: chana
      POSTGRES_PASSWORD: chana
    volumes:
      - ./data:/var/lib/postgresql/data
  # ----------------- RAILS API -----------------
  api:
    build: ./api
    container_name: api
    networks:
      - chana_postgres_network
      - api_network
    volumes:
      - ./api:/chana_api
    ports:
      - "3001:3000"
    depends_on:
      - pg
  # ----------------- STRAPI -----------------
  strapi:
    build:
      context: ./strapi
      args:
        BASE_VERSION: latest
        STRAPI_VERSION: 4.5.0
    container_name: chana-strapi
    restart: unless-stopped
    env_file: .env
    environment:
      NODE_ENV: ${NODE_ENV}
      HOST: ${HOST}
      PORT: ${PORT}
    volumes:
      - ./strapi:/srv/app
      - strapi_node_modules:/srv/app/node_modules
    ports:
      - "1337:1337"
  # ----------------- B2C website -----------------
  public-front:
    build: ./public-front
    container_name: public-front
    restart: always
    command: yarn dev
    ports:
      - "3009:3000"
    networks:
      - api_network
      - chana_postgres_network
    depends_on:
      - api
      - strapi
    volumes:
      - ./public-front:/app
      - /app/node_modules
      - /app/.next
  # ----------------- ADMIN website -----------------
  admin-front:
    build: ./admin-front
    container_name: admin-front
    restart: always
    command: yarn dev
    ports:
      - "3008:3000"
    networks:
      - api_network
      - chana_postgres_network
    depends_on:
      - api
    volumes:
      - ./admin-front:/app
      - /app/node_modules
      - /app/.next
volumes:
  strapi_node_modules:
networks:
  api_network:
  chana_postgres_network:
Do you have any idea why I cannot see anything on the website pages?
I tried to change the code of the relevant files, especially database.yml, docker-compose.yml, and the Dockerfiles of each app.
Also, I tried to look into the api container (Rails) with the command docker exec -it api /bin/sh to check the database through the Rails console, and I get this error message:
ActiveRecord::ConnectionNotEstablished: could not connect to server: No such file or directory. Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
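That socket error means Rails is not applying any host: setting and is falling back to the local Unix socket. A quick hedged check from inside the api container (connection_db_config requires Rails 6.1 or newer):

# bin/rails console, inside the api container
puts ActiveRecord::Base.connection_db_config.configuration_hash[:host]
# => "pg" if database.yml is being picked up; nil would explain the socket fallback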
Instead of typing localhost, press Ctrl and click on the URL the app prints; sometimes the site is not served on the localhost port you expect.
It looks like your app inside Docker is binding to 127.0.0.1:3000. Normally this is fine, but in Docker, when you want to expose the app to your host machine, you need the app to run on 0.0.0.0:3000 so Docker can pass it through to your host. Without the specific Dockerfiles this is the best I can do; I have run into this issue with Strapi and some other apps before, so hopefully it helps.
To be clear, it will still be localhost:3000 on the host machine.
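As a concrete illustration, the bind address can be set from the compose file without touching the Dockerfile; a hedged sketch against the api service above, assuming its image starts the Rails server as the main process:

api:
  build: ./api
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3001:3000"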

Unable to reach RabbitMQ broker from Phoenix Elixir App to RabbitMQ Container Docker

I built a service orchestration with docker-compose that connects an Elixir app using BroadwayRabbitMQ to another container running the rabbitmq:3-management Docker image. The problem is that even though these services are on the same network (as in, I built a docker network to support them) and the env variables are set, I receive "[error] Cannot connect to RabbitMQ broker: :unknown_host". How do I get RabbitMQ to connect to my Elixir release container?
docker-compose.yml
version: "3.8"
services:
poll_workers_app:
container_name: coder_workers_ex
build:
context: .
dockerfile: CoderWorkersProd.Dockerfile
volumes:
- .:/app
depends_on:
- db
- rabbitmq
env_file:
- config/docker.env
ports:
- '4000:4000'
tty: true
networks:
- rabbitmq_network
db:
image: 'postgres:12'
container_name: coder_workers_db
environment:
PGDATA: /var/lib/postgresql/data/pgdata
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_HOST_AUTH_METHOD: trust
restart: always
volumes:
- 'pgdata:/var/lib/postgresql/data'
ports:
- "5432:5432"
networks:
- rabbitmq_network
rabbitmq:
hostname: rabbitmq
image: rabbitmq:3-management
container_name: coder_workers_rabbitmq
env_file:
- config/docker.env
ports:
- '5672:5672'
- '15672:15672'
- '15692:15692'
volumes:
- rabbitmq-data:/var/lib/rabbitmq
networks:
- rabbitmq_network
volumes:
pgdata:
rabbitmq-data: {}
networks:
rabbitmq_network:
external:
name: rabbitmq_network
rabbit_report_pipeline.ex
defmodule CoderWorkers.Pipelines.RabbitReportPipeline do
  use Broadway
  require Logger

  alias CoderWorkers.Responses.RabbitResponse
  alias CoderWorkers.Cache.Responses

  @producer BroadwayRabbitMQ.Producer
  @queue "coder.worker.rabbit_report.status"
  @connection [
    username: System.get_env("RABBITMQ_USERNAME"),
    password: System.get_env("RABBITMQ_PASSWORD"),
    host: System.get_env("RABBITMQ_HOST")
  ]
.env
RABBITMQ_DEFAULT_USER=guest
RABBITMQ_DEFAULT_PASS=guest
RABBITMQ_DEFAULT_VHOST=rabbitmq
RABBITMQ_USERNAME=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_HOST=rabbitmq
error
coder_workers_ex | 20:03:25.676 [error] Cannot connect to RabbitMQ broker: :unknown_host
coder_workers_ex | 20:03:41.300 [error] Cannot connect to RabbitMQ broker: :unknown_host
coder_workers_ex | 20:04:10.856 [error] Cannot connect to RabbitMQ broker: :unknown_host
coder_workers_ex | 20:04:39.142 [error] Cannot connect to RabbitMQ broker: :unknown_host
I cannot see how you connect from coder_workers_ex to rabbitmq, so just in case:
You can connect inside the network by using the container name as the host, not the hostname. That one is only used inside the container, as you can read also here.
For instance: attaching a terminal session to the coder_workers_ex container and executing ping rabbitmq will not work, but ping coder_workers_rabbitmq will.
If this does not help, then maybe show the code that connects the containers so we can try to help you better.
EDIT: as pointed out in the comments by David Maze, connecting can be done using any of: container_name, hostname, or the service block name. So while this answer gives you a correct working solution, it is not the correct answer, because this was not your problem in the first place.
Your docker setup looks correct; it won't work because you have to use runtime configuration (read about config/runtime.exs).
When you use a module attribute, in your case @connection, it will always get evaluated at compile time.
To avoid this, there is a rule of thumb:
If your configuration is not a compile-time configuration, always use a function to fetch the configuration.
Example:
def connection() do
  [
    username: System.get_env("RABBITMQ_USERNAME"),
    password: System.get_env("RABBITMQ_PASSWORD"),
    host: System.get_env("RABBITMQ_HOST")
  ]
end
This should work as-is; however, it is recommended to store all your configuration in config files. So all you have to do is create your runtime.exs file:
import Config

if config_env() == :prod do
  config :my_app, :rabbitmq_config,
    username: System.get_env("RABBITMQ_USERNAME"),
    password: System.get_env("RABBITMQ_PASSWORD"),
    host: System.get_env("RABBITMQ_HOST")
end
Then you can get the configuration using Application.get_env/3 or Application.fetch_env/2; remember to fetch configuration using functions.
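A sketch of the function-based lookup inside the pipeline module (assuming the :my_app / :rabbitmq_config keys from the runtime.exs example above):

defp connection() do
  # Read at call time, so the values come from runtime.exs rather than
  # being baked in at compile time
  Application.fetch_env!(:my_app, :rabbitmq_config)
end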
To make this easier, I would recommend using credo; it enforces the use of Application.compile_env/3 in situations where you are calling runtime configuration at compile time.

Sending Logs to Logstash in ELK using Ruby on Rails Docker Image

I've been following these tutorials, repos, docs, and everything:
https://medium.com/@AnjLab/how-to-set-up-elk-for-rails-log-management-using-docker-and-docker-compose-a6edc290669f
https://github.com/OrthoDex/Docker-ELK-Rails
http://ericlondon.com/2017/01/26/integrate-rails-logs-with-elasticsearch-logstash-kibana-in-docker-compose.html
https://manas.tech/blog/2015/12/15/logging-for-rails-apps-in-docker.html
https://logz.io/blog/docker-logging/
https://www.youtube.com/watch?v=KR2FZiqu57I
https://dzone.com/articles/docker-logging-with-the-elk-stack-part-i
Docker container cannot send log to docker ELK stack
Well, you get the point. I have several more, which I've omitted for brevity.
Unfortunately, either they're outdated, don't show the whole picture, or my config is bad (I think it's the latter one).
I honestly don't know if I'm missing anything.
I'm currently using a docker-compose version 3 file with the sebp/elk image. I'm able to boot everything and access Kibana, but I am not able to send the logs to Logstash so they get processed and sent to Elasticsearch.
I've tried these approaches to no avail:
Inside application.rb
1) Use Lograge and send logs to port 5044 (which apparently is the one Logstash listens on):
config.lograge.enabled = true
config.lograge.logger = LogStashLogger.new(type: :udp, host: 'localhost', port: 5044)
2) Setting the logger to STDOUT, and letting Docker process it as gelf and send it to Logstash:
logger = ActiveSupport::Logger.new(STDOUT)
logger.formatter = config.log_formatter
config.logger = ActiveSupport::TaggedLogging.new(logger)
And mapping it back to the compose file:
rails_app:
  logging:
    driver: gelf
    options:
      gelf-address: "tcp://localhost:5044"
3) I've tried
Other Errors I've encountered:
Whenever I try to use config.lograge.logger = LogStashLogger.new(type: :udp, host: 'localhost', port: 5044) I get:
Errno::EADDRNOTAVAIL - Cannot assign requested address - connect(2) for "localhost" port 5044
Funny thing, this error may disappear from time to time.
The other problem is that whenever I try to create a dummy log entry I receive Elasticsearch Unreachable: [http://localhost:9200...]. This is from inside the container... I don't know if it can't connect because the URL is exposed outside of it, or whether there's another error. I can curl localhost:9200 and receive a positive response.
I was checking the sebp/elk image, and I see that it's using Filebeat. Could that be the reason why I am not able to send the logs?
I'd appreciate any kind of help!
Just in case, here's my docker-compose.yml file:
version: '3'
services:
  db:
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_cprint.sql:/docker-entrypoint-initdb.d/rails_cprint.sql
  elk:
    image: sebp/elk:623
    ports:
      - 5601:5601
      - 9200:9200
      - 5044:5044
      - 2020:2020
    volumes:
      - elasticsearch:/var/lib/elasticsearch
  app:
    build: .
    depends_on:
      - db
      - elk
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    logging:
      driver: gelf
      options:
        gelf-address: "tcp://localhost:5044"
    links:
      - db
    volumes:
      - "./:/var/www/cprint"
    ports:
      - "3001:3001"
    expose:
      - "3001"
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local
Don't use "localhost" in your configs. In the docker network, the service name is the host name. Use "elk" instead, for example:
config.lograge.logger = LogStashLogger.new(type: :udp, host: 'elk', port: 5044)
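Since the compose file above already passes LOGSTASH_HOST into the app container, a slightly more flexible sketch reads the host from that variable (its value just needs to change from localhost to elk):

config.lograge.enabled = true
config.lograge.logger = LogStashLogger.new(
  type: :udp,
  host: ENV.fetch('LOGSTASH_HOST', 'elk'),
  port: 5044
)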

Sidekiq using the incorrect url for redis

I'm setting up my Docker environment and trying to get sidekiq to start along with my other services with docker-compose up, yet sidekiq throws an error attempting to connect to the wrong redis URL:
redis_1 | 1:M 19 Jun 02:04:35.137 * The server is now ready to accept connections on port 6379
sidekiq_1 | Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)
I'm pretty confident that there are no references in my Rails app that would have Sidekiq connecting to localhost instead of the created redis service in docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - 3000:3000
    depends_on:
      - db
  redis:
    image: redis:3.2-alpine
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - redis:/var/lib/redis/data
  sidekiq:
    depends_on:
      - db
      - redis
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - .:/app
    env_file:
      - .env
volumes:
  redis:
  postgres:
And in config/initializers/sidekiq.rb I have hardcoded the redis url:
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://redis:6379/0' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://redis:6379/0' }
end
At this point I'm stumped. I have completely removed any existing containers, then ran docker-compose build and docker-compose up multiple times with no change.
I've done a global search within my app folder looking for any remaining references to 127.0.0.1:6379 and localhost:6379 and get no hits, so I'm not sure why sidekiq is stuck looking for redis on 127.0.0.1.
I could not find an explanation for why this is happening, but I did notice this in the sidekiq source code:
def determine_redis_provider
  ENV[ENV['REDIS_PROVIDER'] || 'REDIS_URL']
end
In the event that :url is not defined in the config, sidekiq looks at the environment variable REDIS_URL. You could try setting that to your URL for an easy workaround. To make it work with Docker, you should simply be able to add REDIS_URL='redis://redis:6379/0' to your compose file. Details can be found here.
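Combining the two keeps the initializer and the environment in sync; a minimal sketch that prefers REDIS_URL and falls back to the compose service name:

# config/initializers/sidekiq.rb
redis_config = { url: ENV.fetch('REDIS_URL', 'redis://redis:6379/0') }

Sidekiq.configure_server { |config| config.redis = redis_config }
Sidekiq.configure_client { |config| config.redis = redis_config }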

Docker Rails app with searchkick/elasticsearch

I'm porting my rails app from my local machine into a docker container and running into an issue with elasticsearch/searchkick. I can get it working temporarily, but I'm wondering if there is a better way. Basically, the port for elasticsearch isn't matching up with the default localhost:9200 that searchkick uses. I have used docker inspect on the elasticsearch container to get its actual IP, then set the ENV['ELASTICSEARCH_URL'] variable as the searchkick docs say, and it works. The problem is that this is a pain: if I restart or change the containers, the IP sometimes changes and I have to go through the whole process again. Here is my docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/living-recipe
    ports:
      - '3000:3000'
    env_file:
      - .env
    depends_on:
      - postgres
      - elasticsearch
  postgres:
    image: postgres
  elasticsearch:
    image: elasticsearch
Use elasticsearch:9200 instead of localhost:9200. Docker Compose exposes the container via its name.
Here is the docker-compose.yml that is working for me. Docker Compose exposes the container via its name, so you can set the ELASTICSEARCH_URL: http://elasticsearch:9200 environment variable in your Rails application container:
version: "3"
services:
db:
image: postgres:9.6
restart: always
volumes:
- /tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
volumes:
- .:/app
ports:
- 9200:9200
environment:
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
api:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- ".:/app"
ports:
- "3001:3000"
depends_on:
- db
environment:
DB_HOST: db
DB_PASSWORD: password
ELASTICSEARCH_URL: http://elasticsearch:9200
You don't want to try to map the IP address for elasticsearch manually, as it will change.
Swap out depends_on for links. This will create the same dependency, but also allows the containers to be reached via service name.
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they determine the order of service startup.
Docker Compose File Reference - Links
Then in your rails app where you're setting ENV['ELASTICSEARCH_URL'], use elasticsearch instead.
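Searchkick picks up ENV['ELASTICSEARCH_URL'] when it first builds its client, so no other code changes are needed; as a hedged fallback for running the app outside Compose, a hypothetical initializer could default the variable:

# config/initializers/searchkick.rb (hypothetical file)
ENV['ELASTICSEARCH_URL'] ||= 'http://elasticsearch:9200'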
