Sidekiq not running enqueued jobs (Rails 7 + Redis + Sidekiq + Docker)

As the title says, I have 3 containers running in Docker: one for Rails, one for a Postgres DB, and one for Redis. I'm able to enqueue jobs with Job.perform_async, but for some reason my jobs stay enqueued indefinitely. I checked, and my Redis container is up and running.
My Job:
class HardJob
  include Sidekiq::Job

  def perform(*args)
    puts 'HardJob'
  end
end
The initializer for sidekiq:
Sidekiq.configure_server do |config|
  config.redis = { url: (ENV["REDIS_URL"] || 'redis://localhost:6379') }
end

Sidekiq.configure_client do |config|
  config.redis = { url: (ENV["REDIS_URL"] || 'redis://localhost:6379') }
end
My docker-compose:
version: '3.0'
services:
  web:
    build: .
    entrypoint: >
      bash -c "
      rm -f tmp/pids/server.pid
      && bundle exec rails s -b 0.0.0.0 -p 3000"
    ports:
      - 3000:3000
    volumes:
      - .:/src/myapp
    depends_on:
      - db
      - redis
    links:
      - "db:db"
    environment:
      REDIS_URL: 'redis://redis:6379'
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: 'postgres'
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  redis:
    image: "redis"
volumes:
  db_data:
  redis:
    driver: local
And I also set config.active_job.queue_adapter = :sidekiq in all three environments.
Any hint about what could be happening here? Thanks in advance.
Update
It seems that running sidekiq -q default in my Rails terminal worked. How can I configure Docker to always run Sidekiq?

Sidekiq is a process of its own and needs to be started on its own, just like the web server process. Add something like the following to docker-compose.yml:
sidekiq:
  depends_on:
    - 'db'
    - 'redis'
  build: .
  command: bundle exec sidekiq
  volumes:
    - .:/src/myapp
  environment:
    - REDIS_URL=redis://redis:6379
Or – when you are able to use the latest version of Sidekiq (>= 7.0) – you might want to try out the new Sidekiq embedded mode, which runs Sidekiq together with your Puma web server.
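For illustration, embedded mode is a few lines in config/puma.rb. This is a sketch based on Sidekiq 7's embedding API; the queue list and concurrency are placeholders to adjust for your app:

```ruby
# config/puma.rb (sketch) — run a small Sidekiq instance inside Puma.
# Keep concurrency low: it shares the web process's resources.
embedded_sidekiq = nil

on_worker_boot do
  require "sidekiq/embedded"
  embedded_sidekiq = Sidekiq.configure_embed do |config|
    config.queues = %w[default]
    config.concurrency = 2
  end
  embedded_sidekiq.run
end

on_worker_shutdown do
  embedded_sidekiq&.stop
end
```

Embedded mode is intended for small workloads; a dedicated sidekiq service in docker-compose remains the usual choice.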

Sidekiq is looking for the wrong queue name for some reason. Try adding this
to your config/sidekiq.yml file.
:queues:
  - default
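For context, a fuller config/sidekiq.yml might look like this (the concurrency value and the extra queue are illustrative, not from the original post). Queue names here must match the queues your jobs are pushed to — ActiveJob defaults to "default" — or the jobs will sit in Redis untouched:

```yaml
# config/sidekiq.yml (sketch)
:concurrency: 5
:queues:
  - default
  - mailers   # add any other queues your jobs are pushed to
```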

Related

Using Sidekiq with multi APIs, but Sidekiq server executed wrong API code

I've built a Rails app with docker-compose as shown below.
For example:
API A created job A1, which was pushed to Redis by Sidekiq client SA.
And API B created job B1, which was pushed to Redis by Sidekiq client SB.
But when these jobs were executed, they both pointed to the application code of API A only.
So job B1 failed, because it was executed by API A.
I know that because an uninitialized constant error was raised.
I also used redis-namespace, but it still pointed to the wrong API.
Can you help me understand how the Sidekiq server executes jobs, and how it points to the right API that the job belongs to?
Many thanks.
config_redis = {
  url: ENV.fetch('REDIS_URL_SIDEKIQ', 'redis://localhost:6379/0'),
  namespace: ENV.fetch('REDIS_NAMESPACE_SIDEKIQ', 'super_admin')
}

Sidekiq.configure_server do |config|
  config.redis = config_redis
end

Sidekiq.configure_client do |config|
  config.redis = config_redis
end
initializer/sidekiq.rb
config_redis = {
  url: ENV.fetch('REDIS_URL_SIDEKIQ', 'redis://localhost:6379/0'),
  namespace: ENV.fetch('REDIS_NAMESPACE_SIDEKIQ', 'ignite')
}

Sidekiq.configure_server do |config|
  config.redis = config_redis
end

Sidekiq.configure_client do |config|
  config.redis = config_redis
end
docker-compose.yml
version: "3.9"
services:
  ccp-ignite-api-gmv: # ----------- IGNITE SERVER
    build: ../ccp-ignite-api-gmv/.
    entrypoint: ./entrypoint.sh
    command: WEB 3001
    # command: MIGRATE # Uncomment this if you want to run db:migrate only
    ports:
      - "3001:3001"
    volumes:
      - ../ccp-ignite-api-gmv/.:/src
    depends_on:
      - db
      - redis
    links:
      - db
      - redis
    tty: true
    stdin_open: true
    environment:
      RAILS_ENV: ${RAILS_ENV}
      REDIS_URL_SIDEKIQ: redis://redis:6379/ignite
      REDIS_NAMESPACE_SIDEKIQ: ignite
  ccp-super-admin-api-gmv: # ----------- SUPER ADMIN API SERVER
    build: ../ccp-super-admin-api-gmv/.
    entrypoint: ./entrypoint.sh
    command: WEB 3005
    # command: MIGRATE # Uncomment this if you want to run db:migrate only
    ports:
      - "3005:3005"
    volumes:
      - ../ccp-super-admin-api-gmv/.:/src
    depends_on:
      - db
      - redis
    links:
      - db
      - redis
    tty: true
    stdin_open: true
    environment:
      RAILS_ENV: ${RAILS_ENV}
      REDIS_URL_SIDEKIQ: redis://redis:6379/super_admin
      REDIS_NAMESPACE_SIDEKIQ: super_admin
  db:
    image: mysql:8.0.22
    volumes:
      - ~/docker/mysql:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: password
    ports:
      - "3307:3306"
  redis:
    image: redis:5-alpine
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - ~/docker/redis:/data
  sidekiq_ignite:
    depends_on:
      - db
      - redis
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/src
    environment:
      - REDIS_URL_SIDEKIQ=redis://redis:6379/0
      - REDIS_NAMESPACE_SIDEKIQ=ignite
  sidekiq_super_admin:
    depends_on:
      - db
      - redis
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/src
    environment:
      - REDIS_URL_SIDEKIQ=redis://redis:6379/0
      - REDIS_NAMESPACE_SIDEKIQ=super_admin
Thanks to st.huber for reminding me:
The confusion came from my wrong docker-compose config in the two sidekiq services.
I had pointed "build" and "volumes" at the wrong folder.
Before:
sidekiq_ignite:
  depends_on:
    - db
    - redis
  build: .
  command: bundle exec sidekiq
  volumes:
    - .:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=ignite
sidekiq_super_admin:
  depends_on:
    - db
    - redis
  build: .
  command: bundle exec sidekiq
  volumes:
    - .:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=super_admin
Fixed:
sidekiq_ignite:
  depends_on:
    - db
    - redis
  build: ../ccp-ignite-api-gmv/.
  command: bundle exec sidekiq
  volumes:
    - ../ccp-ignite-api-gmv/.:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=ignite
sidekiq_super_admin:
  depends_on:
    - db
    - redis
  build: ../ccp-super-admin-api-gmv/.
  command: bundle exec sidekiq
  volumes:
    - ../ccp-super-admin-api-gmv/.:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=super_admin
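The failure mode makes sense once you know how a Sidekiq server executes jobs: only the job class *name* (plus arguments) is serialized into Redis, and the worker process constantizes that name at execution time. So whichever application code is loaded in the Sidekiq container is what runs — there is no routing by "origin API". A minimal self-contained sketch of that mechanism (IgniteOnlyJob is a hypothetical class name, deliberately undefined here):

```ruby
require 'json'

# Sidekiq stores roughly this payload in Redis: the class *name*, not the code.
payload = JSON.generate('class' => 'IgniteOnlyJob', 'args' => [])

# The server that pops the job constantizes the name. If the worker
# container was built from the wrong application, the class is missing:
job = JSON.parse(payload)
begin
  Object.const_get(job['class']).new.perform(*job['args'])
rescue NameError => e
  puts e.message # => "uninitialized constant IgniteOnlyJob"
end
```

That is exactly the "uninitialized constant" error above: both sidekiq services were built from the same (wrong) folder, so jobs from API B landed in a worker that only had API A's classes.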

How to setup anycable with docker(ruby on rails)?

How can I set up the AnyCable (Action Cable) port on Docker?
This is my Dockerfile for AnyCable:
FROM ruby:2.6.3-alpine3.10
WORKDIR /home/app
COPY . /home/app/
EXPOSE 50051
CMD ["anycable"]
And this is my docker-compose.yml:
version: "3"
services:
  app:
    build:
      context: .
      dockerfile: ./dockers/app/Dockerfile
    container_name: out_app
    restart: unless-stopped
    volumes:
      - .:/app
      - /app/node_modules
      - /app/public/assets
      - /app/public/packs
    ports:
      - 3000:3000
  db:
    build:
      context: .
      dockerfile: ./dockers/postgis/Dockerfile
    container_name: out_db
    environment:
      POSTGRES_USER: ${DOCKER_DB_USER}
      POSTGRES_PASSWORD: ${DOCKER_DB_PASSWORD}
      POSTGRES_DB: ${DOCKER_DB_NAME}
    volumes:
      - /docker_data/giggle/postgres:/var/lib/postgresql/data
    ports:
      - 5435:5432
  nginx:
    build:
      context: .
      dockerfile: ./dockers/web/Dockerfile
    container_name: out_web
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    depends_on:
      - app
    volumes:
      - ./dockers/web/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  redis:
    image: redis
    volumes:
      - ../../tmp/db:/var/lib/redis/data
  delayed_job:
    build:
      context: .
      dockerfile: ./dockers/delayed_job/Dockerfile
    container_name: out_delayed_job
    command: bundle exec rails jobs:work
    depends_on:
      - db
    volumes:
      - .:/app
  anycable:
    image: 'anycable/anycable-go:edge-mrb'
    ports:
      - "3334"
    environment:
      ANYCABLE_HOST: 0.0.0.0
      REDIS_URL: redis://redis:6379/1
      ANYCABLE_RPC_HOST: 0.0.0.0:3334
      ANYCABLE_DEBUG: 1
    command: bundle exec anycable
  anycable:
    build:
      context: .
      dockerfile: ./dockers/anycable/Dockerfile
    container_name: anycable
    command: bundle exec anycable
You provided the anycable-go configuration. To set a custom port for the anycable-go server, add ANYCABLE_PORT: <your port> to the anycable-go image environment, or expose the image port like ports: ['<your_port>:8080'].
Check the AnyCable configuration page (it contains env variable info): https://docs.anycable.io/#/anycable-go/configuration
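For example, a compose service with a custom port might look like this. The service name and port 9090 are arbitrary choices, and ANYCABLE_RPC_HOST is assumed to point at wherever the RPC server from your Dockerfile (EXPOSE 50051) is reachable:

```yaml
anycable-go:
  image: anycable/anycable-go:edge-mrb
  ports:
    - "9090:9090"          # host:container — pick your port
  environment:
    ANYCABLE_HOST: 0.0.0.0
    ANYCABLE_PORT: 9090
    ANYCABLE_RPC_HOST: app:50051   # RPC server address (assumption)
    REDIS_URL: redis://redis:6379/1
```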
You need to set up anycable-rails by adding the anycable-rails gem to your Gemfile:
gem "anycable-rails", "~> 1.1"
and, when using the Redis broadcast adapter:
gem "redis", ">= 4.0"
(and don't forget to run bundle install).
Then, run the interactive configuration wizard via Rails generators:
bundle exec rails g anycable:setup
Configuration
Next, update your Action Cable configuration:
# config/cable.yml
production:
  # Set adapter to any_cable to activate AnyCable
  adapter: any_cable
Install WebSocket server and specify its URL in the configuration:
For development it's likely localhost:
# config/environments/development.rb
config.action_cable.url = "ws://localhost:8080/cable"
For production it's likely to have a sub-domain and a secure connection:
# config/environments/production.rb
config.action_cable.url = "wss://ws.example.com/cable"
Now you can start AnyCable RPC server for your application:
$ bundle exec anycable
#> Starting AnyCable gRPC server (pid: 48111)
#> Serving Rails application from ./config/environment.rb
Don't forget to provide Rails env in production
$ RAILS_ENV=production bundle exec anycable
NOTE: you don't need to specify the -r option (see CLI docs); your application will be loaded from config/environment.rb.
And, finally, run AnyCable WebSocket server, e.g. anycable-go:
$ anycable-go --host=localhost --port=8080
INFO 2019-08-07T16:37:46.387Z context=main Starting AnyCable v0.6.2-13-gd421927 (with mruby 1.2.0 (2015-11-17)) (pid: 1362)
INFO 2019-08-07T16:37:46.387Z context=main Handle WebSocket connections at /cable
INFO 2019-08-07T16:37:46.388Z context=http Starting HTTP server at localhost:8080
You can store AnyCable-specific configuration in YAML file (similar to Action Cable one):
# config/anycable.yml
development:
  redis_url: redis://localhost:6379/1
production:
  redis_url: redis://my.redis.io:6379/1

Why is Guard not detecting file changes after dockerizing Rails app?

I am trying to "dockerize" an existing Rails development app. This is my first time experimenting with Docker.
I want to set up Guard to listen for file changes and run relevant specs.
The Guard service appears to be running correctly, and the logs show:
guard_1 | 16:35:12 - INFO - Guard is now watching at '/app'
But when I edit/save spec files Guard is not running any tests.
This is an existing app that I'm moving into Docker. It has a guardfile that works outside of Docker.
I've searched and read a number of posts (e.g. this one), but I'm not sure where to start debugging this. Can anyone point me in the right direction to get Guard listening to file changes?
My docker-compose.yml looks like this:
version: '3'
services:
  postgres:
    ports:
      - "5432:5432"
    volumes:
      - $HOME/postgres-data:/var/lib/postgresql
    image: postgres:9.6.9
  redis:
    ports:
      - "6379:6379"
    depends_on:
      - postgres
    image: redis:5.0-rc
  web:
    build: .
    ports:
      - "3000:3000"
    command: /bin/sh -c "rails s -b 0.0.0.0 -p 3000"
    depends_on:
      - postgres
      - redis
    env_file:
      - .env
  guard:
    build: .
    env_file:
      - .env
    command: bundle exec guard --no-bundler-warning --no-interactions
  sidekiq:
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    depends_on:
      - postgres
      - redis
    env_file:
      - .env
volumes:
  redis:
  postgres:
  sidekiq:
  guard:
Guardfile
guard 'spring', bundler: true do
  watch('Gemfile.lock')
  watch(%r{^config/})
  watch(%r{^spec/(support|factories)/})
  watch(%r{^spec/factory.rb})
end

guard :rspec, cmd: "bundle exec rspec" do
  require "guard/rspec/dsl"
  dsl = Guard::RSpec::Dsl.new(self)

  # RSpec files
  rspec = dsl.rspec
  watch(rspec.spec_files)

  # Ruby files
  ruby = dsl.ruby
  dsl.watch_spec_files_for(ruby.lib_files)

  # Rails files
  rails = dsl.rails(view_extensions: %w(erb haml slim))
  dsl.watch_spec_files_for(rails.app_files)
  dsl.watch_spec_files_for(rails.views)

  watch(rails.controllers) do |m|
    [
      rspec.spec.call("routing/#{m[1]}_routing"),
      rspec.spec.call("controllers/#{m[1]}_controller"),
      rspec.spec.call("acceptance/#{m[1]}")
    ]
  end

  # Rails config changes
  watch(rails.spec_helper) { rspec.spec_dir }
  watch(rails.routes) { "#{rspec.spec_dir}/routing" }
  watch(rails.app_controller) { "#{rspec.spec_dir}/controllers" }

  # Capybara features specs
  watch(rails.view_dirs) { |m| rspec.spec.call("features/#{m[1]}") }
  watch(rails.layouts) { |m| rspec.spec.call("features/#{m[1]}") }

  # Turnip features and steps
  watch(%r{^spec/acceptance/(.+)\.feature$})
  watch(%r{^spec/acceptance/steps/(.+)_steps\.rb$}) do |m|
    Dir[File.join("**/#{m[1]}.feature")][0] || "spec/acceptance"
  end

  ignore %r{^spec/support/concerns/}
end

guard 'brakeman', :run_on_start => true do
  watch(%r{^app/.+\.(erb|haml|rhtml|rb)$})
  watch(%r{^config/.+\.rb$})
  watch(%r{^lib/.+\.rb$})
  watch('Gemfile')
end
I'm assuming you're making changes on your local filesystem and expect Guard, inside the container, to trigger.
If so, the missing link is your docker-compose.yml file.
guard:
  build: .
  env_file:
    - .env
  command: bundle exec guard --no-bundler-warning --no-interactions
  volumes:
    - .:/app
You need to mount your Rails root directory as a volume inside the container so that the changes are reflected. Without this line, your container(s) only see what was available at build time, not the changes you make afterwards.
I faced the exact same problem and had been wondering why existing solutions didn't really work. The key to your problem (if you are on OSX, of course) is to understand the difference between "Docker Toolbox" and "Docker for Mac".
This article gives a lot of insight: https://docs.docker.com/docker-for-mac/docker-toolbox/
TL;DR
If you are on Mac, you need to use Docker for Mac to have the benefits of osxfs. If you do this you will not need docker-sync!
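If the volume is mounted and Guard still misses events (filesystem notifications often don't cross the VM boundary on Docker Toolbox and similar setups), Guard's polling mode is a blunt but reliable fallback — a sketch, reusing the guard service above:

```yaml
guard:
  build: .
  env_file:
    - .env
  volumes:
    - .:/app
  command: bundle exec guard --no-bundler-warning --no-interactions --force-polling
```

Polling is CPU-heavier than native file events, so prefer it only when notifications genuinely don't arrive.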

Rails Active Storage in Docker

I'm running a docker-compose setup which consists of a web worker, a Postgres database, and a Redis Sidekiq worker. I created a background job to process images after uploading user images. Active Storage is used to store the images. Normally, without Docker, in local development the images are stored in a temporary storage folder to simulate cloud storage. I'm fairly new to Docker, so I'm not sure how storage works; I believe storage in Docker works a bit differently. The sidekiq worker seems fine, it just seems to complain about not being able to find a place to store images. Below is the error that I get from the sidekiq worker:
WARN: Errno::ENOENT: No such file or directory # rb_sysopen - /myapp/storage
And here is my docker-compose.yml
version: '3'
services:
  setup:
    build: .
    depends_on:
      - postgres
    environment:
      - RAILS_ENV=development
    command: "bin/rails db:migrate"
  postgres:
    image: postgres:10-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecurepass
      - POSTGRES_DB=myapp_development
      - PGDATA=/var/lib/postgresql/data
  postgres_data:
    image: postgres:10-alpine
    volumes:
      - /var/lib/postgresql/data
    command: /bin/true
  sidekiq:
    build: .
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    command: "bin/bundle exec sidekiq -C config/sidekiq.yml"
  redis:
    image: redis:4-alpine
    ports:
      - "6379:6379"
  web:
    build: .
    depends_on:
      - redis
      - postgres
      - setup
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    environment:
      - REDIS_URL=redis://localhost:6379
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
Perhaps you need to add the myapp volume for sidekiq as well, like this:
sidekiq:
  volumes:
    - .:/myapp
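With that bind mount, both web and sidekiq share the host's storage/ directory, so files written by one are visible to the other. If you'd rather not mount the whole source tree into the worker, a named volume shared by both services also works — a sketch, assuming the app lives at /myapp inside both containers:

```yaml
services:
  web:
    volumes:
      - storage_data:/myapp/storage
  sidekiq:
    volumes:
      - storage_data:/myapp/storage
volumes:
  storage_data:
```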

Sidekiq using the incorrect url for redis

I'm setting up my Docker environment and trying to get sidekiq to start along with my other services with docker-compose up, yet sidekiq is throwing an error in an attempt to connect to the wrong redis URL:
redis_1 | 1:M 19 Jun 02:04:35.137 * The server is now ready to accept connections on port 6379
sidekiq_1 | Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)
I'm pretty confident that there are no references in my Rails app that would have Sidekiq connecting to localhost instead of the created redis service in docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - 3000:3000
    depends_on:
      - db
  redis:
    image: redis:3.2-alpine
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - redis:/var/lib/redis/data
  sidekiq:
    depends_on:
      - db
      - redis
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - .:/app
    env_file:
      - .env
volumes:
  redis:
  postgres:
And in config/initializers/sidekiq.rb I have hardcoded the redis url:
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://redis:6379/0' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://redis:6379/0' }
end
At this point I'm stumped. I have completely removed any existing containers, then ran docker-compose build and docker-compose up multiple times with no change.
I've done a global search within my app folder looking for any remaining references to 127.0.0.1:6379 and localhost:6379 and got no hits, so I'm not sure why sidekiq is stuck looking for Redis on 127.0.0.1 at this point.
I could not find an explanation for why this is happening. But I did notice this in the sidekiq source code:
def determine_redis_provider
  ENV[ENV['REDIS_PROVIDER'] || 'REDIS_URL']
end
In the event that :url is not defined in the config, sidekiq looks at the environment variable REDIS_URL. You could try setting that to your URL as an easy workaround. To make it work with Docker, you should simply be able to add REDIS_URL='redis://redis:6379/0' to the sidekiq service's environment in your compose file. Details can be found here
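To make that lookup concrete, here is a small self-contained sketch of the resolution logic, using a plain hash standing in for ENV (the variable names in the second call are made up for illustration):

```ruby
# Sketch of how Sidekiq resolves the Redis URL from the environment.
# REDIS_PROVIDER (if set) names *another* env var that holds the URL;
# otherwise REDIS_URL itself is read. If neither is set, the result is
# nil and Sidekiq falls back to its default of 127.0.0.1:6379.
def determine_redis_provider(env)
  env[env['REDIS_PROVIDER'] || 'REDIS_URL']
end

puts determine_redis_provider('REDIS_URL' => 'redis://redis:6379/0')
# => redis://redis:6379/0

# With REDIS_PROVIDER indirection:
puts determine_redis_provider('REDIS_PROVIDER' => 'MY_REDIS',
                              'MY_REDIS' => 'redis://other:6379')
# => redis://other:6379
```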