Docker stuck on 'Attaching to' when running rake task that uses EventMachine - ruby-on-rails

I'm trying to dockerize a rake task that runs perfectly fine outside Docker; it creates subscribers to a pub/sub endpoint. If I just use bundle exec rake mytask:torun, it works just fine.
But as soon as I run it in a Docker container, it breaks. If I remove the EventMachine.run part, it does print "Initializing task" and the container exits. If I put it back, nothing happens at all.
Here's the task:
namespace :mytask do
  task torun: :environment do
    puts "Initializing task"
    EventMachine.run do
      client = Restforce.new(...) # args obfuscated, not useful in context
      client.subscription "/topic/TestTopic", replay: -1 do |message|
        puts "EM Received message: #{message.inspect}"
      end
    end
  end
end
My docker-compose:
runevents:
  # restart: unless-stopped
  build: .
  command: rails mytask:torun
  env_file:
    - "dev.env"
  volumes:
    - '.:/app'
Because I'm running on a Mac and know there are volume performance issues, I've also tried removing the volumes: part of the docker-compose file, but no luck.
Is there anything I'm missing? Why does the container just not start?
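One thing worth checking, hedged since it is not confirmed by the thread: Ruby buffers stdout when it is not attached to a TTY, so a blocking EventMachine.run can keep even the earlier puts from ever reaching docker logs, which looks exactly like a container stuck on "Attaching to". A minimal guard, assuming nothing beyond plain Ruby:

# Put this before the first puts: flush output immediately,
# since containers run without a TTY by default.
$stdout.sync = true

Alternatively, adding tty: true to the runevents service in docker-compose forces line-buffered output.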

Related

Does docker-compose support init container?

Init containers are a great feature in Kubernetes, and I wonder whether docker-compose supports them. They allow me to run some command before launching the main application.
I came across this PR https://github.com/docker/compose-cli/issues/1499 which mentions support for init containers, but I can't find any related doc in their reference.
This was a discovery for me, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the PR you linked in your question.
As I write these lines, though, the feature does not yet seem to have found its way into the documentation.
You can define a dependency on another container with a condition that is basically "when that other container has successfully finished its job". This leaves room to define containers that run any kind of script and exit when they are done, before another, dependent container is launched.
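In compose-file terms, the key piece is a depends_on entry carrying that condition. A minimal sketch with placeholder names (prepare and app are hypothetical services):

services:
  prepare:
    image: busybox
    command: /bin/true # stands in for any one-shot setup script
  app:
    image: my-app # hypothetical application image
    depends_on:
      prepare:
        condition: service_completed_successfully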
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up, and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container, so this example is more an illustration than a rock-solid typical workflow.
The complete example is available in a public github repo so I will only show the key points in this answer.
Let's start with the compose file:
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv

  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started

  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database, which is the first to start
The init container, which starts only once db is started. It only runs a script (see below) that exits once everything is initialized
The application container, which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a db/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container, allowing time for it to start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ ! $s -eq 0 ]; then exit $s; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (adding a mysqli driver for php) and can be found in the example repo, as well as the php script to test that the init was successful by visiting http://localhost:9999 in your browser.
The interesting part is observing what's going on when launching the services with docker-compose up -d.
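For instance, something along these lines shows the ordering (assuming the compose file above):

docker-compose up -d
docker-compose ps           # init-db should show an exit state of 0 once its script is done
docker-compose logs init-db # the retry loop and the SQL it ran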
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.

Docker wait until a service is completely ready

I'm dockerizing my existing Django application.
I have an entrypoint.sh script which is run as the entrypoint by the Dockerfile:
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
It contains logic to run migrations when an environment variable tells it to:
#!/bin/sh
#set -e

# Run the command and exit with a custom message when the command fails to run
safeRunCommand() {
  cmnd="$*"
  echo cmnd="$cmnd"
  eval "$cmnd"
  ret_code=$?
  if [ $ret_code != 0 ]; then
    printf "Error : [code: %d] when executing command: '%s'\n" $ret_code "$cmnd"
    exit $ret_code
  else
    echo "Command run successfully: $cmnd"
  fi
}

runDjangoMigrate() {
  echo "Migrating database"
  cmnd="python manage.py migrate --noinput"
  safeRunCommand "$cmnd"
  echo "Done: Migrating database"
}

# Run Django migrate command.
# The command is run only when environment variable `DJANGO_MANAGE_MIGRATE` is set to `on`.
if [ "x$DJANGO_MANAGE_MIGRATE" = 'xon' ] && [ ! "x$DEPLOYMENT_MODE" = 'xproduction' ]; then
  runDjangoMigrate
fi

# Accept other commands
exec "$@"
Now, in the docker-compose file, I have these services:
version: '3.7'

services:
  database:
    image: mysql:5.7
    container_name: 'qcg7_db_mysql'
    restart: always

  web:
    build: .
    command: ["./wait_for_it.sh", "database:3306", "--", "./docker_start.sh"]
    volumes:
      - ./src:/app
    depends_on:
      - database
    environment:
      DJANGO_MANAGE_MIGRATE: 'on'
But when I build the image using
docker-compose up --build
It fails to run the migration command from the entrypoint script, with the error:
(2002, "Can't connect to MySQL server on 'database' (115)")
This is due to the fact that the database server has not yet started.
How can I make the web service wait until the database service has completely started and is ready to accept connections?
Unfortunately, there is not a native way in Docker to wait for the database service to be ready before the Django web app attempts to connect. depends_on will only ensure that the web app is started after the database container is launched.
Because of this limitation, you will need to solve this problem in how your container runs. The easiest solution is to modify entrypoint.sh to sleep for 10-30 seconds so that your database has time to initialize before any additional commands execute. The official MySQL entrypoint.sh shows an example of how to block until the database is ready.
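A more targeted alternative to a fixed sleep is to poll the database port before moving on. A minimal sketch that could sit at the top of entrypoint.sh, relying only on the Python that a Django image already ships, with host and port taken from the compose file above:

# Wait until MySQL accepts TCP connections on database:3306; give up after 30 tries.
i=0
until python -c "import socket; socket.create_connection(('database', 3306), 2)" 2>/dev/null; do
  i=$((i + 1))
  [ "$i" -ge 30 ] && echo "database never became ready" >&2 && exit 1
  sleep 2
done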

Gitlab CI Config for Rails System Tests with Selenium and Headless Chrome

I'm trying to set up GitLab continuous integration for a very simple Rails project and, despite all my searching, cannot find any workable solution for getting system tests to work using headless Chrome.
Here's my .gitlab-ci.yml file:
image: 'ruby:2.6.3'

before_script:
  - curl -sL https://deb.nodesource.com/setup_11.x | bash -
  - apt-get install -y nodejs
  - apt-get install -y npm
  - gem install bundler --conservative
  - bundle install
  - npm install -g yarn
  - yarn install

stages:
  - test

test:
  stage: test
  variables:
    MYSQL_HOST: 'mysql'
    MYSQL_DATABASE: 'cwrmb_test'
    MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    SYSTEM_EMAIL: 'test@example.com'
    REDIS_URL: 'redis://redis:6379/'
    SELENIUM_URL: "http://selenium__standalone-chrome:4444/wd/hub"
  services:
    - redis:latest
    - selenium/standalone-chrome:latest
    - name: mysql:latest
      command: ['--default-authentication-plugin=mysql_native_password']
  script:
    - RAILS_ENV=test bin/rails db:setup
    - bin/rails test:system
Here's my application_system_test_case.rb:
require 'test_helper'

def selenium_options
  driver_options = {
    desired_capabilities: {
      chromeOptions: {
        args: %w[headless disable-gpu no-sandbox disable-dev-shm-usage]
      }
    }
  }
  driver_options[:url] = ENV['SELENIUM_URL'] if ENV['SELENIUM_URL']
  driver_options
end

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400], options: selenium_options
end
However, this configuration yields the following error for every system test:
Selenium::WebDriver::Error::UnknownError: java.net.ConnectException: Connection refused (Connection refused)
I don't believe there are any other errors (to do with Redis or MySQL) in this configuration file, because as soon as I omit the system tests, everything works perfectly.
By the way, if anyone has any better configuration files for achieving the same goal, I would love to see what others do. Thanks in advance.
In "How services are linked to the job" and "Accessing the services", the GitLab docs say that if you start a tutum/wordpress container (via a services stanza):
tutum/wordpress will be started and you will have access to it from your build container under two hostnames to choose from:
tutum-wordpress
tutum__wordpress
Note: Hostnames with underscores are not RFC valid and may cause problems in 3rd party applications
So here's how I'd proceed:
try with http://selenium-standalone-chrome:4444/wd/hub (see the sketch after this list), although this seems like a low-probability solution.
output SELENIUM_URL in your test driver. Is it getting set correctly?
review the logs as described in "How the health check of services works". Is standalone-chrome coming up?
add a ping or nslookup in there somewhere. Is selenium-standalone-chrome (or the alternative) resolving? It seems like it does, otherwise we'd get a "hostname unknown" rather than the "connection refused", but you can never be too sure.
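For the first point, the change amounts to swapping the double-underscore alias for the hyphenated one in the job variables; a sketch of just the relevant lines:

test:
  variables:
    # hyphenated service alias instead of selenium__standalone-chrome
    SELENIUM_URL: "http://selenium-standalone-chrome:4444/wd/hub"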

Set up Mina to use Passenger and Docker

I am trying to set up Mina's deploy.rb file. However, I am running into several hurdles.
My Rails 5 app uses Docker for the database (PostgreSQL) and Redis for background jobs. I am also using Phusion Passenger (Nginx) for the webserver.
This is what my deploy.rb file looks like at the moment:
require 'mina/rails'
require 'mina/git'
require 'mina/rvm'

set :application_name, 'App'
set :domain, 'my_app.com'
set :deploy_to, '/var/www/my_app.com'
set :repository, 'git@github.com:MyUser/my_app.git'
set :branch, 'master'

# Username in the server to SSH to.
set :user, 'myuser'

# Shared dirs and files will be symlinked into the app-folder by the
# 'deploy:link_shared_paths' step.
# Not sure if this is necessary
set :shared_dirs, fetch(:shared_dirs, []).push('log', 'tmp/pids', 'tmp/sockets', 'public/uploads')
set :shared_files, fetch(:shared_files, []).push('config/database.yml', 'config/secrets.yml', 'config/puma.rb')

task :environment do
  invoke :'rvm:use', 'ruby-2.4.1@default'
end

task :setup do
  %w(database.yml secrets.yml puma.rb).each { |f| command %[touch "#{fetch(:shared_path)}/config/#{f}"] }
  comment "Be sure to edit #{fetch(:shared_path)}/config/database.yml, secrets.yml and puma.rb."
end
desc "Deploys the current version to the server."
task :deploy do
  deploy do
    comment "Deploying #{fetch(:application_name)} to #{fetch(:domain)}:#{fetch(:deploy_to)}"
    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'

    comment 'Cleaning up Docker builds'
    command 'docker stop $(docker ps -qa)'
    command 'docker rm $(docker ps -qa)'

    comment 'Stopping Docker'
    command 'docker-compose stop'

    comment 'Starting Docker'
    command 'docker-compose up -d; sleep 5'

    invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile'
    invoke :'deploy:cleanup'
  end
end
I came up with this file by looking a bit here and a bit there. It seems to run properly. However, here are the problems I am facing:
I am not able to run passenger-config restart-app $(pwd) or passenger-config restart-app $(#{fetch(:current_path)}), so for some reason I am having to restart Passenger by logging into the server and running the command manually (see the sketch at the end of this question). This kind of defeats the purpose of using Mina, which should automate the deploy process.
Once I start Passenger, I am seeing a database error like:
F, [2017-08-21T08:42:40.145292 #29048] FATAL -- : [28d9fb0f-f187-4d16-b3bc-f947c4ec726f]
F, [2017-08-21T08:42:40.145378 #29048] FATAL -- : [28d9fb0f-f187-4d16-b3bc-f947c4ec726f] ActiveRecord::StatementInvalid (PG::UndefinedTable: ERROR: relation "subscriptions" does not exist
LINE 8: WHERE a.attrelid = '"subscriptions"'::regclas...
But I am sure this does not error in dev, so I think it might have to do with how I am deploying Docker in production.
Does anyone have an idea of how I can get a proper Docker + Passenger setup with Mina?
Just for the extra info, my docker-compose.yml file looks like this:
version: "2"

services:
  postgres:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: "${DATABASE_NAME}"
      POSTGRES_PASSWORD: "${DATABASE_PASSWORD}"
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:latest
    ports:
      - "6379:6379"

volumes:
  postgres-data:
    driver: local
Thanks in advance!
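A side note on the first problem above: passenger-config restart-app $(#{fetch(:current_path)}) mixes shell command substitution with Ruby string interpolation, which is likely why it never works from the deploy script. A hedged sketch of what could go at the end of the deploy block instead, using Passenger's standard restart.txt convention (untested):

# Restart Passenger by touching restart.txt in the released app;
# plain Ruby interpolation here, no shell substitution needed.
command %[mkdir -p #{fetch(:current_path)}/tmp]
command %[touch #{fetch(:current_path)}/tmp/restart.txt]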

Docker-compose - Redis at 0.0.0.0 instead of 127.0.0.1

I have migrated my Rails app (on my local dev machine) to Docker Compose. All is working except that the worker Rails instance (batch) cannot connect to Redis.
Completed 500 Internal Server Error in 40ms (ActiveRecord: 2.3ms)
Redis::CannotConnectError (Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)):
In my docker-compose.yml
redis:
  image: redis
  ports:
    - "6379:6379"

batch:
  build: .
  command: bundle exec rake environment resque:work QUEUE=*
  volumes:
    - .:/app
  links:
    - db
    - redis
  environment:
    - REDIS_URL=redis://redis:6379
I think the Redis instance is available via the IP of the Docker host.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.10.0
Accessing via 0.0.0.0 doesn't work
$ curl 0.0.0.0:6379
curl: (7) Failed to connect to 0.0.0.0 port 6379: Connection refused
Accessing via the docker-machine IP does seem to work:
$ curl http://192.168.99.100:6379
-ERR wrong number of arguments for 'get' command
-ERR unknown command 'Host:'
EDIT
After installing redis-cli in the batch instance, I was able to hit the redis server using the 'redis' hostname. I think the problem is possibly in the Rails configuration itself.
Facepalm!!!
The docker containers were communicating just fine; the problem was I hadn't told Resque (the app using Redis) where to find it. Thank you to "The Real Bill" for pointing out I should be using redis-cli.
For anyone else using Docker and Resque, you need this in your config/initializers/resque.rb file:
Resque.redis = Redis.new(host: 'redis', port: 6379)
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }
If you run
docker-compose run --rm batch env | grep REDIS
you will get the env variables that your container has (the links entry in the compose file will auto-generate some).
Then all you need to do is look for one along the lines of _REDIS_1_PORT... and use the correct one. I have never had luck connecting my Rails app to another service in any other way. But luckily these env variables are always generated on start, so they will be up to date even if the container IP happens to change between startups.
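Since the compose file in the question already injects REDIS_URL, a variant of the initializer above that reads it rather than hard-coding the hostname might look like this (a sketch):

# config/initializers/resque.rb
# Falls back to the compose hostname when REDIS_URL is not set.
Resque.redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://redis:6379'))
Resque.after_fork = proc { ActiveRecord::Base.establish_connection }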
You should use the hostname redis to connect to the service, although you may need to wait for redis to start.
