I recently enabled GZIP compression on my Rails 4 app following this Thoughtbot blog post, and I have also added use Rack::Deflater to my config.ru file as suggested by this post. My Rails app seems to be serving compressed content, but when I test for it using RSpec the test fails because response.headers['Content-Encoding'] is nil.
Here is my application.rb:
module MyApp
  class Application < Rails::Application
    # Turn on GZIP compression
    config.middleware.use Rack::Deflater
  end
end
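For reference, a config.ru with that line added would look roughly like this (a sketch based on the standard Rails 4 config.ru; the middleware only needs to be registered in one of the two places, not both):

# config.ru (sketch) -- the alternative location mentioned above
require ::File.expand_path('../config/environment', __FILE__)

use Rack::Deflater  # compress responses before they leave the Rack stack
run Rails.application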
Here is my spec:
require 'rails_helper'

describe GeneralController, type: :controller, focus: true do
  it "a visitor has a browser that supports compression" do
    ['deflate', 'gzip', 'deflate,gzip', 'gzip,deflate'].each do |compression_method|
      get 'about', {}, { 'HTTP_ACCEPT_ENCODING' => compression_method }
      binding.pry
      expect(response.headers['Content-Encoding']).to be
    end
  end

  it "a visitor's browser does not support compression" do
    get 'about'
    expect(response.headers['Content-Encoding']).to_not be
  end
end
When I run curl --head -H "Accept-Encoding: gzip" http://localhost:3000/ I get the following output:
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Ua-Compatible: chrome=1
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Content-Encoding: gzip
Etag: "f7e364f21dbb81b9580cd39e308a7c15"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 3f018f27-40ab-4a87-a836-67fdd6bd5b6e
X-Runtime: 0.067748
Server: WEBrick/1.3.1 (Ruby/2.0.0/2014-02-24)
When I load the site and look at the Network tab of the inspector I can see that the response size is smaller than before, but my test still fails. I'm not sure if I'm missing a step here with my test or if there is an issue with my implementation of Rack::Deflater.
As @andy-waite pointed out, RSpec controller specs are not aware of middleware; that's exactly why, since RSpec 2.6, we have request specs.
Request specs are, according to the docs:
designed to drive behavior through the full stack
Therefore, using request specs (RSpec 2.6 or later), your code should look like this:
require 'rails_helper'

describe GeneralController, type: :request, focus: true do
  it "a visitor has a browser that supports compression" do
    ['deflate', 'gzip', 'deflate,gzip', 'gzip,deflate'].each do |compression_method|
      get '/about', {}, { 'HTTP_ACCEPT_ENCODING' => compression_method }
      expect(response.headers['Content-Encoding']).to be
    end
  end

  it "a visitor's browser does not support compression" do
    get '/about'
    expect(response.headers['Content-Encoding']).to_not be
  end
end
RSpec controller specs are wrapped around Rails functional tests, which are not aware of middleware:
Making Rails tests aware of Rack middleware outside Rails's internal chain
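If you only want to exercise the compression middleware itself, you can also drive the full Rack stack with Rack::Test instead of a request spec. This is just a minimal sketch (the '/about' path and Rails.application as the app under test are assumptions based on the question):

require 'rails_helper'
require 'rack/test'

describe 'Rack::Deflater' do
  include Rack::Test::Methods

  # The complete Rails app, middleware included (unlike controller specs).
  def app
    Rails.application
  end

  it 'compresses the response when the client accepts gzip' do
    get '/about', {}, 'HTTP_ACCEPT_ENCODING' => 'gzip'
    expect(last_response.headers['Content-Encoding']).to eq('gzip')
  end
end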
I have a simple Rails 6 app with ActiveStorage. I use local disk storage. When I inspect responses from a representation URL like this
http://localhost:3000/rails/active_storage/disk/some-long-hash/IMG_0951.jpeg?content_type=image%2Fjpeg&disposition=inline%3B+filename%3D%22IMG_0951.jpeg%22%3B+filename%2A%3DUTF-8%27%27IMG_0951.jpeg
I see the header Cache-Control: max-age=0, private, must-revalidate.
The question is: how do I make Rails set a public caching header with some max-age?
The #show method for ActiveStorage::DiskController is difficult to override but it can be done.
A simpler approach is to add an after_action callback for the existing #show method to insert the Cache-Control header when it is called:
# config/initializers/active_storage.rb
require 'active_storage/current'

ActiveStorage::Current.url_options = { host: 'localhost', port: 3000 }

require 'active_storage/set_current'
require 'active_storage/base_controller'
require 'active_storage/file_server'
require 'active_storage/disk_controller'

class ActiveStorage::DiskController < ActiveStorage::BaseController
  after_action do
    response.set_header('Cache-Control', 'max-age=3600, public') if action_name == 'show'
  end
end
Requesting an ActiveStorage URL then returns the custom Cache-Control header value in the response:
HTTP/1.1 200 OK
Cache-Control: max-age=3600, public
...
I don't know how to solve the 400 Bad Request that the API service gives back to me. All I need is to receive JSON responses from the API service, and the API service should be able to communicate with many kinds of devices.
I'm building a JSON-only API service following the Grape gem tutorials on GitHub.com https://github.com/ruby-grape/grape and GitHub.io http://intridea.github.io/grape/docs/index.html . The API is built with Grape 0.8.0 inside a new, empty Rails 4.1.1 project. After I send a JSON request to the API service at localhost:3000/specified_vegetables using curl, the API service gives back this response:
* Trying ::1...
* connect to ::1 port 3000 failed: Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET /specified_vegetables/ HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: application/json
> Content-Type:application/json
> Content-Length: 55
>
* upload completely sent off: 55 out of 55 bytes
< HTTP/1.1 400 Bad Request
< Content-Type: application/json
< Content-Length: 56
< Cache-Control: no-cache
< X-Request-Id: 87231177-b8f8-4084-b0ef-19445fa42fab
< X-Runtime: 0.220101
< Server: WEBrick/1.3.1 (Ruby/2.2.1/2015-02-26)
< Date: Thu, 01 Oct 2015 13:35:46 GMT
< Connection: Keep-Alive
<
* Connection #0 to host localhost left intact
{"error":"transaction_date is missing, name is missing"}
My curl request command: curl -H 'Accept: application/json' -H 'Content-Type:application/json' -X GET http://localhost:3000/specified_vegetables/ -d '{"transaction_date" : "20131031", "name" : "金針筍"}' -v
In my database, there is one record that matches the conditions I've queried.
Here is the Rails server log, which also includes the curl request:
=> Booting WEBrick
=> Rails 4.1.1 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Notice: server is listening on all interfaces (0.0.0.0). Consider using 127.0.0.1 (--binding option)
=> Ctrl-C to shutdown server
[2015-10-01 21:35:43] INFO WEBrick 1.3.1
[2015-10-01 21:35:43] INFO ruby 2.2.1 (2015-02-26) [x86_64-darwin10]
[2015-10-01 21:35:43] INFO WEBrick::HTTPServer#start: pid=27004 port=3000
Started GET "/specified_vegetables/" for 127.0.0.1 at 2015-10-01 21:35:46 +0800
The following are my environment settings and code.
The directory structure of API files in rails app:
RailsApp.root/app/api/agriculture_transaction/base.rb
RailsApp.root/app/api/agriculture_transaction/v1/specified_vegetables.rb
Code of base.rb:
module AgricultureTransaction
  class Base < Grape::API
    format :json
    version 'v1'

    mount AgricultureTransaction::V1::SpecifiedVegetables
    mount AgricultureTransaction::V1::OverviewVegetables
  end
end
Code of specified_vegetables.rb:
module AgricultureTransaction
  module V1
    class SpecifiedVegetables < Grape::API
      format :json
      default_format :json
      version 'v1', using: :header, vendor: 'twitter'

      resource :specified_vegetables do
        desc "Get all transaction prices of all items today."
        get :today do
          SpecifiedVegetable.where(transaction_date: Date.today).find_in_batches do |vegetables|
            vegetables
          end
        end

        desc "Get transaction prices of delegated item in a delegated day."
        params do
          requires :transaction_date, type: Date, desc: 'code'
          requires :name, type: String
        end
        get do
          SpecifiedVegetable.where(transaction_date: params[:transaction_date], name: params[:name]).find_in_batches do |vegetables|
            vegetables
          end
        end
      end
    end
  end
end
The only model in the Rails app is SpecifiedVegetable:
class SpecifiedVegetable < ActiveRecord::Base
  self.table_name = "specified_vegetable"
end
In the config/application.rb file, I added two lines to the Rails Application class:
config.paths.add "app/api", glob: "**/*.rb"
config.autoload_paths += Dir["#{Rails.root}/app/api/*"]
In the config/routes.rb file, I added one line inside Rails.application.routes.draw:
mount AgricultureTransaction::Base => '/'
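(For context on the error message above: on a GET endpoint, Grape's params validation generally reads the required params from the query string and ignores a JSON request body. A purely illustrative sketch of hitting the mounted API with Rack::Test and query-string parameters:)

require 'rack/test'

# Sketch only: drive the mounted Grape API directly, passing the required
# params as query-string parameters rather than a JSON body.
session = Rack::Test::Session.new(Rack::MockSession.new(AgricultureTransaction::Base))
session.get '/specified_vegetables', transaction_date: '20131031', name: '金針筍'

puts session.last_response.status  # expected 200 once both required params are seen
puts session.last_response.body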
That is all the information I can provide. Please help me solve this problem, or give me some keywords to search for. I've found some articles, but they did not help.
Good afternoon,
I've run into some issues trying to combine HTTP caching with Rack::Cache and action caching (on my Heroku-hosted app).
Used individually, each seems to work. With action caching enabled, page loading is snappy and the log suggests it is caching. With HTTP caching in the controllers (etag, last_modified and fresh_when) the proper headers appear to be set.
However, when I try to combine the two, action caching still happens, but the HTTP headers always come back as max-age=0, must-revalidate. Why is this? Am I doing something wrong?
For example, here's the code in my "home" action:
class StaticPagesController < ApplicationController
  layout 'public'

  caches_action :about, :contact, ......, :home, .....

  ......

  def home
    last_modified = File.mtime("#{Rails.root}/app/views/static_pages/home.html.haml")
    fresh_when last_modified: last_modified, public: true, etag: last_modified
    expires_in 10.seconds, :public => true
  end
For all intents and purposes, this should produce a public Cache-Control header with max-age=10, no?
$ curl -I http://myapp-staging.herokuapp.com/
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: text/html; charset=utf-8
Date: Thu, 24 May 2012 06:50:45 GMT
Etag: "997dacac05aa4c73f5a6861c9f5a9db0"
Status: 200 OK
Vary: Accept-Encoding
X-Rack-Cache: stale, invalid
X-Request-Id: 078d86423f234da1ac41b418825618c2
X-Runtime: 0.005902
X-Ua-Compatible: IE=Edge,chrome=1
Connection: keep-alive
Config Info:
# Use a different cache store in production
config.cache_store = :dalli_store

config.action_dispatch.rack_cache = {
  :verbose => true,
  :metastore => "memcached://#{ENV['MEMCACHE_SERVERS']}",
  :entitystore => "memcached://#{ENV['MEMCACHE_SERVERS']}"#,
}
In my mind, you should be able to use action caching as well as a reverse proxy, correct? I know that they do fairly similar things (if the page changes, both the proxy cache and the action cache become invalid and need to be regenerated), but I feel I should be able to have both in there. Or should I get rid of one?
UPDATE
Thanks for the answer below! It seems to work. But to avoid having to write set_XXX_cache_header methods for every controller action, do you see any reason why this wouldn't work?
before_filter :set_http_cache_headers

.....

def set_http_cache_headers
  expires_in 10.seconds, :public => true
  last_modified = File.mtime("#{Rails.root}/app/views/static_pages/#{params[:action]}.html.haml")
  fresh_when last_modified: last_modified, public: true, etag: last_modified
end
When you use action caching, only the response body and content type are cached. Any other changes to the response will not happen on subsequent requests.
However, action caching will run any before filters even when the action itself is cached.
So, you can do something like this:
class StaticPagesController < ApplicationController
  layout 'public'

  before_filter :set_home_cache_headers, :only => [:home]
  caches_action :about, :contact, ......, :home, .....

  ......

  def set_home_cache_headers
    last_modified = File.mtime("#{Rails.root}/app/views/static_pages/home.html.haml")
    fresh_when last_modified: last_modified, public: true, etag: last_modified
    expires_in 10.seconds, public: true
  end
I have a method in my controller which uses send_data like this:
def show
  expires_in 10.hours, :public => true
  send_data my_image_generator, :filename => "image.gif", :type => "image/gif"
end
Using expires_in results in headers being sent like this:
HTTP/1.1 200 OK
Connection: close
Date: Fri, 25 Jun 2010 10:41:22 GMT
ETag: "885d75258e9306c46a5dbfe3de44e581"
Content-Transfer-Encoding: binary
X-Runtime: 143
Content-Type: image/gif
Content-Disposition: inline; filename="image.gif"
Content-Length: 1277
Cache-Control: max-age=36000, public
What I would like to do is add a header like Expires: (some exact date) to keep the user agent from revalidating, but I don't see how to make send_data set that header.
I guess I could set it explicitly in the response.headers hash, but surely there must be a wrapper for that (or something)?
I came across this syntax and I like it :-)
response.headers["Expires"] = 1.year.from_now.httpdate
Apparently there is no way to pass an expiry date to send_data; instead you must set it yourself in response.headers and take care of formatting the date appropriately:
response.headers["Expires"] = CGI.rfc1123_date(Time.now + period)
Note that the max-age directive in the Cache-Control header overrides the Expires header if both are present. See RFC2616 Section 14.9.3 for more details.
The code in your question should actually work on recent Rails:
`expires_in 10.hours, :public => true`
I'm attempting to get Rails to play nice with the Digg API's OAuth. I'm using the oauth gem (ruby one, not the rails one).
My code looks approximately like this:
@consumer = OAuth::Consumer.new(API_KEY, API_SECRET,
  :scheme => :header,
  :http_method => :post,
  :oauth_callback => "http://locahost:3000",
  :request_token_url => 'http://services.digg.com/1.0/endpoint?method=oauth.getRequestToken',
  :access_token_url => 'http://services.digg.com/1.0/endpoint?method=oauth.getAccessToken',
  :authorize_url => 'http://digg.com/oauth/authorize')

@request_token = DiggController.consumer.get_request_token({
  :oauth_callback => "http://xx.xxx.xxx.x:3000/digg/callback"
}, {
  'Content-Type' => 'application/x-www-form-urlencoded'
})

session[:request_token] = @request_token.token
session[:request_token_secret] = @request_token.secret

redirect_to @request_token.authorize_url
This is by-the-book in terms of what the gem documentation gave me. However, Digg spits a "400 Bad Request" error back at me when @consumer.get_request_token is called. I can't figure out what I'm doing wrong. Any ideas?
Edit: Code updated and Wireshark output added. My error is now "401 Authorization Required".
Output from Wireshark:
POST /1.0/endpoint?method=oauth.getRequestToken HTTP/1.1
Accept: */*
Connection: close
User-Agent: OAuth gem v0.3.6
Content-Type: application/x-www-form-urlencoded
Authorization: OAuth oauth_nonce="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
oauth_callback="http%3A%2F%2Fxx.xxx.xxx.x%3A3000%2Fdigg%2Fcallback",
oauth_signature_method="HMAC-SHA1",
oauth_timestamp="1268687137",
oauth_consumer_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
oauth_signature="xxx%2Bxxxxxxxxxxxxxxx%2Fxxxxxxx%3D", oauth_version="1.0"
Content-Length: 48
Host: services.digg.com
Content-Type=application%2fx-www-form-urlencoded
HTTP/1.1 401 Authorization Required
Date: Mon, 15 Mar 2010 21:05:37 GMT
Server: Apache
X-Powered-By: PHP/5.2.9-digg8
Cache-control: private
X-RateLimit-Current: 1
X-RateLimit-Max: 1000
X-RateLimit-Reset: 3600
X-Digg-Api-Version: 1.0
Accept-Ranges: bytes
Content-Length: 111
Keep-Alive: timeout=5, max=9998
Connection: Keep-Alive
Content-Type: text/xml;charset=utf-8
<?xml version="1.0" encoding="UTF-8"?>
<error code="5001" message="Invalid signature" timestamp="1268687137"/>
Incidentally, the callback parameter should not be localhost:3000 but rather your public IP address (making sure to also open up port 3000 for external connections in your computer and/or router firewall(s)), or it should be left at the default (out-of-band).
Examine the contents of the OAuth::Unauthorized exception that gets thrown (or use a sniffer such as tcpdump or Wireshark) to get additional details about the HTTP 400 error (they are probably having issues with some of your parameters).
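Something along these lines should surface the provider's error body. This is only a sketch; it assumes the oauth gem (0.3.x/0.4.x) behaviour where OAuth::Unauthorized carries the failed HTTP response in its #request attribute:

begin
  @request_token = @consumer.get_request_token(
    :oauth_callback => "http://xx.xxx.xxx.x:3000/digg/callback"
  )
rescue OAuth::Unauthorized => e
  # Despite the name, e.request holds the Net::HTTP response returned by Digg.
  Rails.logger.error "#{e.request.code} #{e.request.message}"
  Rails.logger.error e.request.body  # e.g. the <error code="5001" .../> XML above
end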