I'm setting up my Docker environment and trying to get Sidekiq to start along with my other services with docker-compose up, yet Sidekiq throws an error attempting to connect to the wrong Redis URL:
redis_1 | 1:M 19 Jun 02:04:35.137 * The server is now ready to accept connections on port 6379
sidekiq_1 | Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)
I'm pretty confident that there are no references in my Rails app that would have Sidekiq connecting to localhost instead of the redis service defined in docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - 3000:3000
    depends_on:
      - db
  redis:
    image: redis:3.2-alpine
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - redis:/var/lib/redis/data
  sidekiq:
    depends_on:
      - db
      - redis
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - .:/app
    env_file:
      - .env
volumes:
  redis:
  postgres:
And in config/initializers/sidekiq.rb I have hardcoded the redis url:
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://redis:6379/0' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://redis:6379/0' }
end
At this point I'm stumped. I have completely removed any existing containers, run docker-compose build and then docker-compose up multiple times, with no change.
I've done a global search within my app folder looking for any remaining references to 127.0.0.1:6379 and localhost:6379 and get no hits, so I'm not sure why sidekiq is stuck looking for redis on 127.0.0.1 at this point.
I could not find an explanation for why this is happening. But I did notice this in the sidekiq source code:
def determine_redis_provider
  ENV[ENV['REDIS_PROVIDER'] || 'REDIS_URL']
end
In the event that :url is not defined in the config, Sidekiq looks at the REDIS_URL environment variable. You could try setting that to your URL for an easy workaround. To make it work with Docker, you should simply be able to add REDIS_URL=redis://redis:6379/0 to the sidekiq service's environment in your compose file. Details can be found here
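That fallback can be sketched in plain Ruby. This is an illustrative standalone version of the lookup quoted above, with the environment passed in as a hash for demonstration (Sidekiq itself reads ENV directly):

```ruby
# Sketch of Sidekiq's fallback: when config has no :url, it reads the
# env var whose *name* is given by REDIS_PROVIDER (defaulting to REDIS_URL).
def determine_redis_provider(env)
  env[env['REDIS_PROVIDER'] || 'REDIS_URL']
end

# With only REDIS_URL set, that variable wins:
puts determine_redis_provider('REDIS_URL' => 'redis://redis:6379/0')
# => redis://redis:6379/0

# With REDIS_PROVIDER set, Sidekiq reads the variable it names instead:
puts determine_redis_provider(
  'REDIS_PROVIDER' => 'REDISTOGO_URL',
  'REDISTOGO_URL'  => 'redis://other:6379/1'
)
# => redis://other:6379/1
```

So if anything in the container's environment sets REDIS_PROVIDER or REDIS_URL, that value silently wins over what you expect.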
Related
As the title says, I have 3 containers running in Docker: one for Rails, one for a Postgres DB, and one for Redis. I'm able to enqueue jobs with Job.perform_async, but for some reason my jobs stay enqueued indefinitely. I checked, and my Redis container is up and running.
My Job:
class HardJob
  include Sidekiq::Job

  def perform(*args)
    puts 'HardJob'
  end
end
The initializer for sidekiq:
Sidekiq.configure_server do |config|
  config.redis = { url: (ENV["REDIS_URL"] || 'redis://localhost:6379') }
end

Sidekiq.configure_client do |config|
  config.redis = { url: (ENV["REDIS_URL"] || 'redis://localhost:6379') }
end
My docker-compose:
version: '3.0'
services:
  web:
    build: .
    entrypoint: >
      bash -c "
      rm -f tmp/pids/server.pid
      && bundle exec rails s -b 0.0.0.0 -p 3000"
    ports:
      - 3000:3000
    volumes:
      - .:/src/myapp
    depends_on:
      - db
      - redis
    links:
      - "db:db"
    environment:
      REDIS_URL: 'redis://redis:6379'
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: 'postgres'
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  redis:
    image: "redis"
volumes:
  db_data:
  redis:
    driver: local
And I also set config.active_job.queue_adapter = :sidekiq in all 3 of my environments.
Any hint of what could be happening here? Thanks in advance.
Update
It seems that running sidekiq -q default in my Rails terminal worked. How can I configure Docker to always run Sidekiq?
Sidekiq is a process of its own and needs to be started on its own, just like the web server process. Add something like the following to your docker-compose file:
sidekiq:
  depends_on:
    - 'db'
    - 'redis'
  build: .
  command: bundle exec sidekiq
  volumes:
    - .:/src/myapp
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/1
Or, when you are able to use the latest version of Sidekiq (>= 7.0), you might want to try out the new Sidekiq embedded mode that runs Sidekiq together with your Puma web server.
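A minimal sketch of embedded mode in config/puma.rb, assuming Sidekiq >= 7; the queue list and concurrency here are illustrative values, not recommendations:

```ruby
# config/puma.rb -- illustrative sketch of Sidekiq's embedded mode (>= 7.0).
# Runs a small Sidekiq instance inside the Puma worker process.
embedded = nil

on_worker_boot do
  embedded = Sidekiq.configure_embed do |config|
    config.queues = %w[default]
    config.concurrency = 2 # keep this small; it shares the web process
  end
  embedded.run
end

on_worker_shutdown do
  embedded&.stop
end
```

Note that embedded mode trades isolation for simplicity: a heavy job now competes with web requests for the same process.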
Sidekiq is looking for the wrong queue name for some reason. Try adding this
to your config/sidekiq.yml file.
:queues:
  - default
I'm running a docker compose setup which consists of a web worker, a postgres database, and a redis sidekiq worker. I created a background job to process images after users upload them. ActiveStorage is used to store the images. Normally without Docker, in local development, the images are stored in a temporary storage folder to simulate cloud storage. I'm fairly new to Docker, so I'm not sure how storage works; I believe storage in Docker works a bit differently. The sidekiq worker seems fine, it just seems to be complaining about not being able to find a place to store images. Below is the error that I get from the sidekiq worker.
WARN: Errno::ENOENT: No such file or directory @ rb_sysopen - /myapp/storage
And here is my docker-compose.yml
version: '3'
services:
  setup:
    build: .
    depends_on:
      - postgres
    environment:
      - RAILS_ENV=development
    command: "bin/rails db:migrate"
  postgres:
    image: postgres:10-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecurepass
      - POSTGRES_DB=myapp_development
      - PGDATA=/var/lib/postgresql/data
  postgres_data:
    image: postgres:10-alpine
    volumes:
      - /var/lib/postgresql/data
    command: /bin/true
  sidekiq:
    build: .
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    command: "bin/bundle exec sidekiq -C config/sidekiq.yml"
  redis:
    image: redis:4-alpine
    ports:
      - "6379:6379"
  web:
    build: .
    depends_on:
      - redis
      - postgres
      - setup
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    environment:
      - REDIS_URL=redis://localhost:6379
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
Perhaps you need to add the myapp volume to sidekiq as well, like this:
sidekiq:
  volumes:
    - .:/myapp
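Merged into the compose file above, the sidekiq service would then look roughly like this (a sketch assuming the same build context; without the shared volume, the worker container has its own isolated filesystem and never sees the files the web container writes under /myapp/storage):

```yaml
sidekiq:
  build: .
  environment:
    - REDIS_URL=redis://redis:6379
  depends_on:
    - redis
  volumes:
    - .:/myapp   # share the app dir (including storage/) with the web service
  command: "bin/bundle exec sidekiq -C config/sidekiq.yml"
```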
I am trying to move a working Rails app to docker environment.
Following the UNIX(/docker) philosophy I would like to have each service in its own container.
I managed to get redis and postgres working fine, but I am struggling to get solr and rails talking to each other.
In app/models/spree/sunspot/search_decorator.rb, when the line
@solr_search.execute
executes, the following error appears on the console:
Errno::EADDRNOTAVAIL (Cannot assign requested address - connect(2) for "localhost" port 8983):
While researching for a solution I have found people just installing solr in the same container as their rails app. But I would rather have it in a separate container.
Here are my config/sunspot.yml
development:
  solr:
    hostname: localhost
    port: 8983
    log_level: INFO
    path: /solr/development
and docker-compose.yml files
version: '2'
services:
  db:
    (...)
  redis:
    (...)
  solr:
    image: solr:7.0.1
    ports:
      - "8983:8983"
    volumes:
      - solr-data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
    networks:
      - backend
  app:
    build: .
    env_file: .env
    environment:
      RAILS_ENV: $RAILS_ENV
    depends_on:
      - db
      - redis
      - solr
    ports:
      - "3000:3000"
    tty: true
    networks:
      - backend
volumes:
  solr-data:
  redis-data:
  postgres-data:
networks:
  backend:
    driver: bridge
Any suggestions?
Your config/sunspot.yml should have the following:
development:
  solr:
    hostname: solr # since our solr instance is linked as solr
    port: 8983
    log_level: WARNING
    solr_home: solr
    path: /solr/mycore
    # this path comes from the last command of our entrypoint as
    # specified in the last parameter for our solr container
If you see
Solr::Error::Http (RSolr::Error::Http - 404 Not Found
Error: Not Found
URI: http://localhost:8982/solr/development/select?wt=json
Create a new core using the admin interface at:
http://localhost:8982/solr/#/~cores
or using the following command:
docker-compose exec solr solr create_core -c development
I wrote a blog post on this: https://gaurav.koley.in/2018/searching-in-rails-with-solr-sunspot-and-docker
Hopefully that helps those who come here at a later stage.
When you declare services in a docker-compose file, containers get their service name as hostname. So your solr service will be available, inside the backend network, as solr.
What I'm seeing from your error is that the Ruby code is trying to connect to localhost:8983, while it should connect to solr:8983.
You'll probably also need to change the hostname inside config/sunspot.yml, but I don't work with solr so I'm not sure about this.
I have a Ruby on Rails project which I want to place into containers (there are database, redis, and web (the Rails project) containers). I want to add a search feature, so I added a sphinx container to my compose file
docker-compose.yml
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    **- sphinx**
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
**sphinx:
  image: centurylink/sphinx**
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
docker-compose build works fine, but when I run docker-compose up I get
ERROR: Cannot start container 096410dafc86666dcf1ffd5f60ecc858760fb7a2b8f2352750f615957072d961: Cannot link to a non running container: /metartaf_sphinx_1 AS /metartaf_web_1/sphinx_1
How can i fix this ?
According to https://hub.docker.com/r/centurylink/sphinx/ the Sphinx container needs some configuration files to run properly. See *Daemonized usage (2)*. You need data source files and a configuration.
In my test, it fails to start as is with error:
FATAL: no readable config file (looked in /usr/local/etc/sphinx.conf, ./sphinx.conf)
Your docker-compose.yml shouldn't have those * characters in it.
If you want the latest sphinx version, you can do this:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    - sphinx
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
sphinx:
  image: centurylink/sphinx:latest
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
If you want a specific version, you write it this way: centurylink/sphinx:2.1.8
I am following https://semaphoreci.com/community/tutorials/dockerizing-a-ruby-on-rails-application to create a sample Rails app from the Rails Docker image. The idea is to dockerize a Rails application. I have created a .drkiq.env file in the Rails app root directory in Docker's recommended KEY=value format, as given below
SECRET_TOKEN=asecuretokenwouldnormallygohere
WORKER_PROCESSES=1
LISTEN_ON=0.0.0.0:8000
DATABASE_URL=postgresql://drkiq:yourpassword@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000
CACHE_URL=redis://redis:6379/0
JOB_WORKER_URL=redis://redis:6379/0
I am reading the environment file from my docker-compose.yml file (also residing in the app root directory)
postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: drkiq
    POSTGRES_PASSWORD: yourpassword
  ports:
    - '5432:5432'
  volumes:
    - drkiq-postgres:/var/lib/postgresql/data
redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - drkiq-redis:/var/lib/redis/data
drkiq:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - .:/drkiq
  ports:
    - '8000:8000'
  env_file:
    - .drkiq.env
sidekiq:
  build: .
  command: bundle exec sidekiq -C config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - .:/drkiq
  env_file:
    - .drkiq.env
Inside my Dockerfile (residing in the app root directory), I am running the unicorn server
CMD bundle exec unicorn -c config/unicorn.rb
But when I run command
docker-compose up
and access http://my-host:8000/, it gives me a "RuntimeError at /
Missing secret_token and secret_key_base for 'development' environment, set these values in config/secrets.yml" error. I am not sure what I am missing here.
My bad. In my Rails app I was actually looking for the SECRET_KEY_BASE variable, but in the .drkiq.env file (as pasted above) I was setting SECRET_TOKEN. I replaced SECRET_TOKEN with SECRET_KEY_BASE, restarted Docker, and everything was shiny and warm.
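A cheap guard against this class of naming mismatch is to fail fast at boot when an expected variable is absent. This is a hypothetical helper (not part of Rails); the env is passed as a hash here only so the example is self-contained:

```ruby
# Hypothetical helper: raise at boot when a required env var is missing,
# so a mismatch like SECRET_TOKEN vs SECRET_KEY_BASE surfaces immediately
# instead of as a runtime error on the first request.
def require_env!(name, env = ENV)
  env.fetch(name) do
    raise KeyError, "Missing required environment variable: #{name}"
  end
end

puts require_env!('SECRET_KEY_BASE', { 'SECRET_KEY_BASE' => 'abc123' })
# => abc123
```

Calling it early in an initializer for each variable the app depends on turns a confusing mid-request failure into an obvious startup error.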