How to run elasticsearch server in rspec - ruby-on-rails

Running on Linux Mint 16, I followed this guide: http://pivotallabs.com/rspec-elasticsearchruby-elasticsearchmodel/ to set up Elasticsearch with a Ruby on Rails application.
When I run rspec and it hits this block in spec_helper.rb:
config.before :each, elasticsearch: true do
  Elasticsearch::Extensions::Test::Cluster.start(port: 9200) unless Elasticsearch::Extensions::Test::Cluster.running?
end
I received the following error:
Starting 2 Elasticsearch nodes..sh: 1: elasticsearch: not found
I thought it might be a path issue, so I added the following to ~/.bashrc:
export PATH="/etc/init.d:$PATH"
since running sudo /etc/init.d/elasticsearch start starts the elasticsearch service.
Then I ran source ~/.bashrc.
This got rid of the sh: 1: elasticsearch: not found message; instead, the error triggered in spec_helper.rb was:
............Starting 2 Elasticsearch nodes..
[!!!] Process failed to start (see output above)
F........
Below is the config block in my spec_helper.rb file:
config.before :each, elasticsearch: true do
  Article.__elasticsearch__.client = Elasticsearch::Client.new host: 'http://localhost:9200'
  Article.__elasticsearch__.create_index!(force: true)
  Article.__elasticsearch__.refresh_index!
  Elasticsearch::Extensions::Test::Cluster.start(port: 9200) unless Elasticsearch::Extensions::Test::Cluster.running?
end

config.after :suite do
  Elasticsearch::Extensions::Test::Cluster.stop(port: 9200) if Elasticsearch::Extensions::Test::Cluster.running?
end
Any ideas on what the issue might be?
EDIT: If I change to port 9250 as suggested by a commenter below:
config.before :each, elasticsearch: true do
  Article.__elasticsearch__.client = Elasticsearch::Client.new host: 'http://localhost:9250'
  Article.__elasticsearch__.create_index!(force: true)
  Article.__elasticsearch__.refresh_index!
  Elasticsearch::Extensions::Test::Cluster.start(port: 9250) unless Elasticsearch::Extensions::Test::Cluster.running?
end

config.after :suite do
  Elasticsearch::Extensions::Test::Cluster.stop(port: 9250) if Elasticsearch::Extensions::Test::Cluster.running?
end
I get this new error:
An error occurred in an after hook
Faraday::ConnectionFailed: Connection refused - connect(2) for "localhost" port 9250
occurred at /home/nona/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:879:in `initialize'
F........

I remember having a similar problem, and I solved it with the following line in my test_helper:
ENV["TEST_CLUSTER_NODES"] = "1" # need to set so we trigger correct ES defaults
I figured it out by looking at the Elasticsearch::Extensions::Test::Cluster source and realizing that Elasticsearch::Extensions::Test::Cluster.running? was returning true (and therefore not starting the cluster) unless that TEST_CLUSTER_NODES value is set.
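Putting that together, a minimal spec_helper.rb sketch (assuming the elasticsearch-extensions gem and port 9250; adjust to your setup) might look like the following. Note that the cluster is started before the index is created, since create_index! needs a reachable server:

```ruby
# spec_helper.rb -- a sketch, not a drop-in file; assumes the
# elasticsearch-extensions gem and a test cluster on port 9250.

# Set BEFORE any Cluster calls, so running? checks against the right defaults.
ENV["TEST_CLUSTER_NODES"] = "1"

RSpec.configure do |config|
  config.before :each, elasticsearch: true do
    # Start the cluster first, then point the model at it and build the index.
    unless Elasticsearch::Extensions::Test::Cluster.running?
      Elasticsearch::Extensions::Test::Cluster.start(port: 9250)
    end
    Article.__elasticsearch__.client = Elasticsearch::Client.new host: 'http://localhost:9250'
    Article.__elasticsearch__.create_index!(force: true)
    Article.__elasticsearch__.refresh_index!
  end

  config.after :suite do
    Elasticsearch::Extensions::Test::Cluster.stop(port: 9250) if Elasticsearch::Extensions::Test::Cluster.running?
  end
end
```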

Related

bad URI(is not URI?): 'redis://redis.xxx.ng.0001.apse1.cache.amazonaws.com:6379/1' URI::InvalidURIError Rails with aws redis

I am experiencing a strange issue with a Passenger Docker image (Ruby 2.3) and AWS Redis:
bad URI(is not URI?): 'redis://redis.xxx.ng.0001.apse1.cache.amazonaws.com:6379/1' (URI::InvalidURIError)
/usr/local/rvm/rubies/ruby-2.3.8/lib/ruby/2.3.0/uri/rfc3986_parser.rb:67:in `split'
/usr/local/rvm/rubies/ruby-2.3.8/lib/ruby/2.3.0/uri/rfc3986_parser.rb:73:in `parse'
/usr/local/rvm/rubies/ruby-2.3.8/lib/ruby/2.3.0/uri/common.rb:227:in `parse'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq/redis_connection.rb:97:in `log_info'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq/redis_connection.rb:31:in `create'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq.rb:126:in `redis_pool'
/usr/local/rvm/gems/ruby-2.3.8/gems/sidekiq-5.2.7/lib/sidekiq.rb:94:in `redis'
My sidekiq config:
# ENV['REDIS_URL']= redis://redis.xxx.ng.0001.apse1.cache.amazonaws.com:6379/1
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDIS_URL'] }
end
However if I run:
docker exec -it container_id bash
and then rails console everything seems to work just fine.
I also tried this:
redis_url = ENV['REDIS_URL']
# The statement below parses successfully, so redis_url is correct
uri = URI.parse(redis_url)
redis_options = {
  host: uri.host,
  port: 6379,
  db: uri.path[1..-1]
}

Sidekiq.configure_server do |config|
  config.redis = redis_options
end

Sidekiq.configure_client do |config|
  config.redis = redis_options
end
But it raised the exact same error. I could run the Docker container locally connected to a local Redis just fine. I am wondering whether there might be something wrong with the ENV['REDIS_URL'] value.
Is there anyone experiencing this issue or any clues?
My env is
- passenger-docker ruby 2.3.8
- aws elastic cache redis: 5.0.5
- sidekiq 5.2.7
After many hours, I realized that the REDIS_URL in the container pointed to a Redis cluster I had deleted in AWS. My first launch of Redis defaulted to an r5 16 GB instance type, which is very big and costly for testing, so I launched a new t2.small and deleted the r5 one to save some money. However, I also had REDIS_URL set in the Passenger config, and Nginx overrode that stale value over the one I set in the ECS task.
The error raised by the Ruby Redis client is not straightforward, and having two places to set the ENV (the Nginx config and the ECS task) made debugging painful.

Sentry InvalidURIError with DSN url

I configured my Ruby on Rails application to use Sentry for error reporting, but it shows me this error:
URI::InvalidURIError:
bad URI(is not URI?): 'http://9ba0c50c55c94603a488a55516d5xxx:xxxx6d6468a4cb892140c1f86a9f228#sentry.myaddres.com/24'
When I remove the 9ba0c50c55c94603a488a55516d5xxx:xxxx6d6468a4cb892140c1f86a9f228# part of the address everything works fine, but the Sentry documentation says:
Raven.configure do |config|
  config.dsn = 'http://public:secret#example.com/project-id'
end
How can I solve this problem?
I was using an ENV var to set the Sentry DSN:
# .env
SENTRY_DSN_URL='http://public:secret#example.com/project-id'
and in initializer
Raven.configure do |config|
  config.dsn = ENV['SENTRY_DSN']
end
The problem is the quotation marks. To solve it, just remove them:
# .env
SENTRY_DSN_URL=http://public:secret#example.com/project-id
Works fine.
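You can reproduce the difference with plain URI.parse. If the env-loading mechanism passes the quote characters through literally (as the asker's did), they become part of the string and parsing fails. The DSN below is a placeholder in the standard user:password@host shape:

```ruby
require "uri"

# The value as a quote-preserving loader delivers it -- the literal
# quote characters are part of the string, so it is not a valid URI.
quoted = "'http://public:secret@example.com/project-id'"

begin
  URI.parse(quoted)
rescue URI::InvalidURIError => e
  puts "quoted value raises: #{e.class}"
end

# Without the quotes, the same DSN parses fine.
uri = URI.parse("http://public:secret@example.com/project-id")
puts "unquoted host: #{uri.host}"
```

Note that some loaders (the dotenv gem, for example) strip surrounding quotes themselves, which is why the same .env line can work in one environment and break in another.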

"Refused to connect" using ChromeDriver, Capybara & Docker Compose

I'm trying to make the move from PhantomJS to headless Chrome and have run into a bit of a snag. For local testing, I'm using Docker Compose to get all dependent services up and running. To provision Google Chrome, I'm using an image that bundles both it and ChromeDriver together while serving it on port 4444. I then link it to my app container as follows in this simplified docker-compose.yml file:
web:
  image: web/chrome-headless
  command: [js-specs]
  stdin_open: true
  tty: true
  environment:
    - RACK_ENV=test
    - RAILS_ENV=test
  links:
    - "chromedriver:chromedriver"

chromedriver:
  image: robcherry/docker-chromedriver:latest
  ports:
    - "4444"
  cap_add:
    - SYS_ADMIN
  environment:
    CHROMEDRIVER_WHITELISTED_IPS: ""
Then, I have a spec/spec_helper.rb file that bootstraps the testing environment and associated tooling. I define the :headless_chrome driver and point it to ChromeDriver's local binding, http://chromedriver:4444. I'm pretty sure the following is correct:
Capybara.javascript_driver = :headless_chrome

Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.register_driver :headless_chrome do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w[headless disable-gpu window-size=1440,900] },
  )
  Capybara::Selenium::Driver.new app,
    browser: :chrome,
    url: "http://chromedriver:4444/",
    desired_capabilities: capabilities
end
We also use VCR, but I've configured it to ignore any connections to the port used by ChromeDriver:
VCR.configure do |c|
  c.cassette_library_dir = 'spec/vcr_cassettes'
  c.default_cassette_options = { record: :new_episodes }
  c.ignore_localhost = true
  c.allow_http_connections_when_no_cassette = false
  c.configure_rspec_metadata!
  c.ignore_hosts 'codeclimate.com'
  c.hook_into :webmock, :excon
  c.ignore_request do |request|
    URI(request.uri).port == 4444
  end
end
I start the services with Docker Compose, which triggers the test runner. The command is pretty much this:
$ bundle exec rspec --format progress --profile --tag 'broken' --tag 'js' --tag '~quarantined'
After a bit of waiting, I encounter the first failed test:
1) Beta parents code redemption: redeeming a code on the dashboard when the parent has reached the code redemption limit does not display an error message for cart codes
   Failure/Error: fill_in "code", with: "BOOK-CODE"
   Capybara::ElementNotFound:
     Unable to find field "code"
   # ./spec/features/beta_parents_code_redemption_spec.rb:104:in `block (4 levels) in <top (required)>'
All specs fail with the same error. So, I shell into the container to run the tests myself manually and capture the HTML they're testing against. I save it locally and open it up in my browser, only to be greeted by a Chrome "refused to connect" error page. It would seem ChromeDriver isn't evaluating the spec's HTML because it can't reach it, so it attempts to run the tests against this error page.
Given the above information, what am I doing wrong here? I appreciate any and all help as moving away from PhantomJS would solve so many headaches for us.
Thank you so much in advance. Please, let me know if you need extra information.
The issue you're having is that Capybara, by default, starts the app under test (AUT) bound to 127.0.0.1 and then tells the driver to have the browser request from the same address. In your case, however, 127.0.0.1 isn't where the app is running (from the browser's perspective), since the app is on a different container than the browser. To fix that, you need to set Capybara.server_host to the external interface of the "web" container (which is reachable from the "chromedriver" container). That will cause Capybara to bind the AUT to that interface and tell the driver to have the browser make requests to it.
In your case that probably means you can specify 'web':
Capybara.server_host = 'web'
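A sketch of the relevant Capybara settings for this docker-compose setup (the service name 'web' comes from the compose file above; the fixed server port is an assumption, chosen so the chromedriver container has a predictable address to reach):

```ruby
# spec/spec_helper.rb -- sketch only; service names match the compose file above.

# Bind the app under test to an interface the chromedriver container can reach,
# instead of the default 127.0.0.1.
Capybara.server_host = 'web'

# A fixed port (hypothetical value) so the browser container always hits the
# same address; without it Capybara picks a random free port on each run.
Capybara.server_port = 3001
```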

Seahorse::Client::NetworkingError: Connection refused - connect(2) for "localhost" port 8000

I am building a Rails application and I use DynamoDB (via dynamoid) as the database. For testing, I use dynamodb-local.
When I get into the test database from the command line, I get the following error:
Seahorse::Client::NetworkingError: Connection refused - connect(2) for
"localhost" port 8000
config/initializers/dynamoid.rb
AWS_CONFIG = YAML.load_file("#{Rails.root}/config/aws.yml")[Rails.env]

Dynamoid.configure do |config|
  config.adapter = 'aws_sdk_v2'
  config.namespace = AWS_CONFIG['namespace']
  config.warn_on_scan = false # Output a warning to the logger when you perform a scan rather than a query on a table.
  config.read_capacity = 5    # Read capacity for tables, setting low
  config.write_capacity = 5   # Write capacity for your tables
end

if Rails.env.test? || ENV['ASSET_PRECOMPILE'].present?
  p "Comes here"
  Aws.config[:ssl_verify_peer] = false
  Aws.config.update({
    credentials: Aws::Credentials.new('xxx', 'xxx'),
    endpoint: 'https://localhost:8000',
    region: 'us-west-2'
  })
end
Rakefile:
task :start_test_dynamo do
  FileUtils.cd('rails-root') do
    sh "rake dynamodb:local:test:start"
    sh "rake dynamodb:seed"
  end
end
Check two things:
Is your dynamodb-local already loaded?
Is your endpoint correct?
Change https://localhost:8000 to http://localhost:8000 (remove the s).
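In this case the second check applies: dynamodb-local serves plain HTTP on port 8000, so an https:// endpoint gets its connection refused. A sketch of the corrected block in config/initializers/dynamoid.rb (credentials remain the placeholders from the question):

```ruby
# config/initializers/dynamoid.rb -- corrected endpoint (no TLS for dynamodb-local)
Aws.config.update({
  credentials: Aws::Credentials.new('xxx', 'xxx'),
  endpoint: 'http://localhost:8000', # was https://localhost:8000
  region: 'us-west-2'
})
```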

FATAL: You must set node['postgresql']['password']['postgres'] in chef-solo mode when using postgresql cookbook

So I included the cookbook 'postgresql', {} in my Cheffile. Now I have the box downloaded and installed with Vagrant, but when I run vagrant provision, it gives me the error:
FATAL: You must set node['postgresql']['password']['postgres'] in chef-solo mode
I saw somewhere that I should add this line:
default['postgresql']['password']['postgres'] = "myPassword"
in my default.rb file which is in the postgresql cookbook.
But if I add this and do vagrant provision again, the line gets deleted and I run into the same error again.
What is the problem here?
You can set node data in Vagrantfile using chef.json. For example:
Vagrant.configure("2") do |config|
  # ...
  config.vm.provision "chef_solo" do |chef|
    # ...
    chef.json = {
      postgresql: {
        password: {
          postgres: "myPassword"
        }
      }
    }
  end
end
See the Vagrant docs for more information.
I fixed the problem: I had two chef.json blocks defined in my Vagrantfile. Merging everything under one fixed it.