Rails 6.0 Resque Redis and Auth - ruby-on-rails

I'm using Rails 6.0 with Resque and Redis, and I can't get the Rails server to start because I've got AUTH issues. I get the following when I start the Rails server:
vendor/bundle/ruby/2.7.0/gems/redis-4.6.0/lib/redis/client.rb:162:in `call': NOAUTH Authentication required. (Redis::CommandError)
I've tried putting the following into config/initializers/resque.rb, and I've also tried it in environment.rb:
Resque.redis = Redis.new(:password => 'myresquepassword')
If I fire up the CLI and put the password in, I can run commands no bother.
Any ideas?

The answer is that Resque 2.2.1 does not support authentication, so it's not possible.
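For reference, the kind of initializer being attempted looks roughly like this (a sketch; host, port, and password are placeholders):
# config/initializers/resque.rb
Resque.redis = Redis.new(
  host:     'localhost',
  port:     6379,
  password: 'myresquepassword'
)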

Related

Understanding rails s conflict with Puma and Capybara. How do I properly install puma in the Gemfile?

I am inheriting a code base where the tests have been abandoned for a little over a year. I am trying to clean up the test suite as much as possible, but I am having a difficult time teasing out my integration tests with (1) Capybara, (2) Puma, (3) Selenium, and (4) starting the Rails server with rails s.
Here is my initial setup and problem with the Rails 4.2 app. Without doing anything from the get-go, when I execute rails testing, I get the following error:
Capybara is unable to load puma for its server, please add puma to your project or specify a different server via something like Capybara.server = :webrick. (LoadError).
I want to point out that I have a separate file named start-dev with the following contents:
rails s -b 0.0.0.0
When I execute this command with ./start-dev, I am able to view my development app at a URL defined in my /etc/hosts (127.0.0.1 secure.ssl.local).
Now here is where I start to run into trouble. After reading some GitHub forums regarding Capybara and Puma, I start by adding puma to my Gemfile and running bundle install, but now I am unable to see my development app in the browser at secure.ssl.local.
This is the error I get:
/usr/lib/ruby/2.6.0/uri/rfc3986_parser.rb:67:in `split': bad URI(is not URI?): "tcp://0.0.0.0\r:3000" (URI::InvalidURIError)
I noticed in the startup, though, that when I execute ./start-dev (which, if you remember, runs rails s -b 0.0.0.0), I see this:
Booting Puma
Rails 4.2.11.1 application starting in development on https://0.0.0.0:3000
So I am confused by this error. Is Puma blocking port 3000, meaning that I have to change Puma's port? What makes this even more confusing is that if, instead of running ./start-dev in the terminal, I simply run rails s -b 0.0.0.0, it magically works, except it only works if I navigate to localhost:3000 and not secure.ssl.localhost. This is important because on secure.ssl.localhost I have and need a certificate, and on localhost I don't.
And finally, to add one more layer of confusion, when I run the tests with the puma gem installed via rake test, I get this from Puma:
Capybara starting Puma...
* Version 4.2.1, codename: Distant Airhorns
* Min threads: 0, max threads: 4
* Listening on tcp://127.0.0.1:36608
And the tests take ages to load. There is a lot going on, but I guess my question can be summarised as follows: when I install Puma into my Rails application, do I need to put it on a specific port so it does not conflict with my app? Without puma, my tests can't run, but my ./start-dev file works. With puma, my tests are kind of working, but my ./start-dev file isn't anymore. Surely there must be a standard way to configure Puma. Thank you.
Install puma only for the test environment by putting it into the test group:
group :test do
  gem "puma"
end
Or do as Capybara proposes and put Capybara.server = :webrick into your spec/rails_helper.rb.
FYI: Rails uses Puma as the default web server if puma is installed.
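For reference, the second option might look like this (a sketch; the file path assumes RSpec):
# spec/rails_helper.rb
require 'capybara/rails'

# Tell Capybara to boot WEBrick for feature specs instead of looking for Puma.
Capybara.server = :webrick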

Rails 5: Using Capybara with Phusion Passenger

I am trying to run specs on a Rails 5 app. However, I am seeing the following LoadError:
Failure/Error: raise LoadError, 'Capybara is unable to load puma for
its server, please add puma to your project or specify a different
server via something like Capybara.server = :webrick.'
I am using the passenger gem for the production server. Is there a way in which I can use passenger for the Capybara tests as well?
Thanks in advance
If you really want to use Passenger to run your app while testing, you will need to start up your app separately with Passenger using the Rails test environment (so it uses the test DB, etc.) and then set
Capybara.run_server = false
Capybara.app_host = "http://<wherever the app is accessible>"
for your tests. This tells Capybara not to bother with running the app itself and to just connect to the app you're already running. You will also need to use something like database_cleaner to handle database resetting between tests, and be very careful to make sure you don't have any leftover requests running at the end of each test.
When running the tests with Puma or WEBrick, none of that is required (database_cleaner is generally needed for Rails < 5.1 but not 5.1+), because the web server runs in the same process (on different threads) as the tests, which allows Capybara to know when requests are still being processed and lets Rails share the DB connection with the tests. Overall, you're just going to have a much smoother experience if you stick with Puma or WEBrick for your tests.
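Putting that together, the test setup might look something like this (a sketch; the URL is a placeholder for wherever Passenger is serving the test-environment app):
# spec/rails_helper.rb
require 'capybara/rails'

# Don't boot an in-process server; drive the app already running under Passenger.
Capybara.run_server = false
Capybara.app_host   = "http://localhost:3001"  # placeholder address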
I don't think you can do this.
The Capybara documentation says:
This block takes a rack app and a port and returns a rack server listening on that port
From the Passenger repo:
This is fundamentally incompatible with Phusion Passenger's model. Such a Rack handler implies a single application and a single process. Phusion Passenger runs multiple processes which can even serve multiple applications.

Michael Hartl's tutorial - ssl configuration - using elastic beanstalk - how to revert back to http?

I am going over Michael Hartl's amazing tutorial; however, I am using Elastic Beanstalk instead of Heroku.
In chapter 7 we change the production.rb file like so:
SampleApp::Application.configure do
  # Force all access to the app over SSL, use Strict-Transport-Security,
  # and use secure cookies.
  config.force_ssl = true
end
This then doesn't work with Elastic Beanstalk; the browser cannot connect to the server. I have tried commenting out the line again. I have also tried setting
config.force_ssl = false
And now I cannot get the app to work anymore. Even reverting back to a version prior to the SSL change does not work.
Clearly some other file has changed. How do I get it back to a working app? I do not want to add the SSL certificate at this time (maybe later).
Thanks,
Sam
EDIT -- I can totally get to the app on my local machine. I cannot get to the version deployed on Elastic Beanstalk.
I'm assuming that you are unable to get to the app on your local machine through rails s. Some people have overcome this issue by clearing their browser's cache. I had this issue, and the only fix for me was to use the thin server instead.
Add thin to your Gemfile:
group :development, :test do
  # ...
  gem 'thin'
end
Install it:
bundle install
And then instead of running rails s use:
thin start --ssl
You should be able to access your app on your local machine again.
I fixed the issue by commenting out
config.force_ssl = true
and rebuilding the environment on Elastic Beanstalk. Rebuilding the environment should be done with caution, since it kills the database instance.
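For reference, the relevant block in config/environments/production.rb then looks something like this (a sketch based on the snippet above):
# config/environments/production.rb
SampleApp::Application.configure do
  # Disabled until an SSL certificate is set up on Elastic Beanstalk;
  # re-enable to force HTTPS again.
  # config.force_ssl = true
end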
Happy to have it fixed!

Dalli Server Not Found When Running Delayed_Job

When running a job with delayed_job, I get errors when trying to access the Dalli Cache.
The Dalli cache is initialized in environment.rb with the following...
CACHE = Dalli::Client.new(['localhost:11211'], {:namespace=>my_namespace_method})
When the code executes CACHE.get 'my_key' in my Rails app, everything runs fine.
When it runs in a delayed_job worker, it errors with No Server Available.
Other info that might help:
I have confirmed that the memcached server is running and is accessible via telnet on localhost:11211
We're running Ruby 1.9.2 under rvm
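For context, a minimal job that reproduces the behaviour described above might look like this (the job class is hypothetical):
# A hypothetical job used only to reproduce the problem described above.
class CacheProbeJob
  def perform
    # Works when called from the Rails app, but inside the delayed_job worker
    # this raises the "No Server Available" error mentioned above.
    CACHE.get 'my_key'
  end
end

# Enqueue it so a delayed_job worker picks it up.
Delayed::Job.enqueue(CacheProbeJob.new)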

Rails authenticate_or_request_with_http_basic not working on SSL + Nginx

I've got an action in my Rails 3 app that I'm password-protecting with authenticate_or_request_with_http_basic. It works fine on my development machine, but it's not prompting for the HTTP Basic user/password on the production server.
The entire production app runs over https/SSL on nginx.
Where do I look to resolve this? Does HTTP Basic auth not work over SSL? Or is there an Nginx setting I need to look at?
TIA
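For context, the kind of action described might be protected like this (a sketch; the controller name, realm, and credentials are hypothetical):
# app/controllers/admin/reports_controller.rb (hypothetical controller)
class Admin::ReportsController < ApplicationController
  before_filter :require_basic_auth   # Rails 3 uses before_filter

  def index
    # password-protected action
  end

  private

  def require_basic_auth
    # Prompts the browser for HTTP Basic credentials; this should behave the same
    # over HTTPS as long as the Authorization header reaches Rails.
    authenticate_or_request_with_http_basic("Admin") do |username, password|
      username == "admin" && password == ENV["ADMIN_PASSWORD"]
    end
  end
end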
Not sure if this is related to Rails 3.
I just recently had problems running Mongrel 1.1.5 and Rails 2.3.8.
Apparently, there is a bug in this setup where our production machine does not prompt for the user name and password (but it works locally, because we are using WEBrick).
In the mongrel.log we keep getting this error:
Error calling Dispatcher.dispatch #<NoMethodError: private method `split' called for nil:NilClass>
/usr/local/rvm/gems/ruby-1.8.7-p174/gems/actionpack-2.3.8/lib/action_controller/cgi_process.rb:52:in `dispatch_cgi'
/usr/local/rvm/gems/ruby-1.8.7-p174/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:101:in `dispatch_cgi'
...
I found the monkey patch needed to fix this for Mongrel 1.1.5 and Rails 2.3.8, and it worked for me.
The German site that led to the solution is http://railssprech.de/, with two links for 2.3.8 and 2.3.9.
Here is the 2.3.8 version: http://www.pcoder.net/error-calling-dispatcher-dispatch/#axzz1RknBQso2
The patch explains why this error was occurring. Check the Rails 3 CGIHandler.dispatch_cgi method and see if it has the same bug. You may need to extract the Rails 3 version and monkey patch it.
Hope this helps.
BTW: Mongrel 1.1.5 and Rails 2.3.5 work!
