Extremely slow PUT request in Redmine with plugins - ruby-on-rails

I have the following tech stack:
Debian 8
Ruby 2.3
Nginx + Passenger
Redmine 3.3 issue tracker application
Agile plugin and Easy Gantt plugin for Redmine
Both the Agile plugin and the Easy Gantt plugin have a drag-and-drop UI, and dropping an item triggers a PUT request. I don't know whether Redmine uses PUT requests anywhere apart from these plugins.
The problem is that these PUT requests are extremely slow for no obvious reason: they take ~120,000 ms (120 s, 2 minutes) to complete.
Here's the log from log/production.log:
Started PUT "/agile/board" for 127.0.0.1 at 2016-10-23 20:08:03 +0300
Processing by AgileBoardsController#update as */*
Parameters: {"issue"=>{"status_id"=>"3"}, "positions"=>{"2"=>{"position"=>"0"}}, "id"=>"2"}
Current user: admin (id=1)
Rendered mailer/_issue.text.erb (4.8ms)
Rendered mailer/issue_edit.text.erb within layouts/mailer (8.2ms)
Rendered mailer/_issue.html.erb (1.3ms)
Rendered mailer/issue_edit.html.erb within layouts/mailer (4.3ms)
Rendered plugins/redmine_agile/app/views/agile_boards/_issue_card.html.erb (35.0ms)
Completed 200 OK in 120519ms (Views: 33.0ms | ActiveRecord: 41.0ms)
The date here is the moment the PUT request was initiated. So what was Rails doing for those 2 minutes, if it rendered the views in 33 ms and spent 41 ms in ActiveRecord? What other activity could it be performing, and where can I look for it?
Here are the request headers from Firebug:
Host: redmine.local
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
Accept: */*
Accept-Language: ru,ru-RU;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
X-CSRF-Token: ZKCHb1NeenfN6EVttTPHMiGItsTsKWDJPm5Q2VqaiYRRn420TH67pnwfRpWo/mQdDOWDhZNe1snDy+eP327PfQ==
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://redmine.local/projects/myproject/agile/board
Content-Length: 60
Cookie: _redmine_session=MjI1Q <... lots of data ...>f
X-Compress: 1
Proxy-Authorization: c7f14568a48248797f198ea6e3c7d7c4f39185ce12aeac08439a9d6726a4cfd5612d4cef98c0ca43
Connection: keep-alive
Here is the response from the server:
Cache-Control: max-age=0, private, must-revalidate
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
Date: Sun, 23 Oct 2016 17:10:03 GMT
Etag: W/"0ec2e6c508ed9d99f224fc23e1fd3dbf"
Server: nginx/1.10.1 + Phusion Passenger 5.0.30
Set-Cookie: _redmine_session=VlZkakNTT <... lots of data ...>1f08a39ea5c63; path=/; HttpOnly
Transfer-Encoding: chunked
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 30b3a5f8-edb0-419a-b5f1-c6603439438a
X-Runtime: 120.523361
X-XSS-Protection: 1; mode=block
status: 200 OK
x-powered-by: Phusion Passenger 5.0.30
In the server's response, the Date field is 2 minutes after the moment the PUT request started, so I guess the server only starts building the response after those 2 minutes of... what?
In Firebug's timings panel there are three zeroes for DNS resolution, connecting, and sending (I am connecting to a local server on the same machine), then 120,525 ms of waiting and 8 ms of receiving.
Here's the log record for this request from Nginx access.log:
127.0.0.1 - - [23/Oct/2016:20:10:03 +0300] "PUT /agile/board HTTP/1.1" 200 920 "http://redmine.local/projects/myproject/agile/board" "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0"
I tried replacing the Nginx+Passenger stack with Apache2+Passenger, but the problem persists. Even when I start WEBrick, the 2-minute delay is there, so I believe it's something in Rails, not the web server.
I have no prior experience writing Ruby on Rails applications, only installing them.

The problem was that an after_save callback was sending e-mails through sendmail, and sendmail was stuck on the relatively well-known error "My unqualified host name (your hostname here) unknown; sleeping for retry". After I googled this error and made sure the correct FQDN for my hostname was in /etc/hosts, the 2-minute delay became a 2-second delay:
127.0.0.1 localhost localhost.localdomain my-host-name
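To see whether your machine has the same misconfiguration, you can check that the local hostname resolves via /etc/hosts, which is where sendmail looks when qualifying the host name. A minimal sketch; the helper name is mine, not from the original post:

```ruby
require 'resolv'
require 'socket'

# Hypothetical helper (not from the original post): returns true if a
# hostname can be resolved from /etc/hosts. A miss here for the local
# hostname is what makes sendmail sleep and retry.
def in_hosts_file?(hostname)
  Resolv::Hosts.new.getaddress(hostname)
  true
rescue Resolv::ResolvError
  false
end

# "localhost" should always resolve:
puts in_hosts_file?("localhost")        # => true
# After fixing /etc/hosts, the machine's own name should too:
puts in_hosts_file?(Socket.gethostname)
```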

Related

rails 4 http caching returning 200 instead of 304, even with the same ETag and last_modified

I'm quite new to caching, so I've been trying different ways of caching my website. I've settled on HTTP caching, because it's the most appropriate for a site with sporadic updates and lots of users perusing the same pages over and over.
I'm struggling to get it working, however. The site shows different content depending on whether you're logged in, so I have to invalidate the cache based on current_user as well as the latest update to the collection of models.
In Chrome's inspector the ETag and If-Modified-Since values are the same, but the server returns a 200 instead of a 304. My code works in the development environment, so I'm not sure how to troubleshoot this. A different page that invalidates only based on the collection of models (similarly on the latest update) does work as expected.
Code from the controller:
def index
  ... # some code

  # HTTP caching:
  last_mod = @scraps.order("updated_at").last.updated_at
  user = current_user ? current_user.id : 0
  fresh_when etag: user.to_s, last_modified: last_mod, public: false
end
Output from Chrome's inspector:
Response Headers:
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Status: 200 OK
Last-Modified: Sun, 23 Jul 2017 20:40:53 GMT
Cache-Control: max-age=0, private, must-revalidate
ETag: W/"6e92592bdb6c3cf610020e2b076e64b4"
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Runtime: 3.187090
X-Request-Id: c698c0c6-8a0d-44ba-8ca9-3f162b766478
Date: Mon, 24 Jul 2017 14:49:38 GMT
Set-Cookie: ... [edited out]; path=/; HttpOnly
X-Powered-By: Phusion Passenger 5.0.30
Server: nginx/1.10.1 + Phusion Passenger 5.0.30
Content-Encoding: gzip
Request Headers:
GET /scraps?page=3&price_max=100&price_min=0&producer=silk+scraps HTTP/1.1
Host: www.picture-scraps.com
Connection: keep-alive
Accept: text/html, application/xhtml+xml, application/xml
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36
X-XHR-Referer: https://www.picture-scraps.com/scraps?page=4&price_max=100&price_min=0&producer=silk+scraps
Referer: https://www.picture-scraps.com/scraps?page=4&price_max=100&price_min=0&producer=silk+scraps
Accept-Encoding: gzip, deflate, br
Accept-Language: nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4,af;q=0.2
Cookie: ... [edited out]
If-None-Match: W/"6e92592bdb6c3cf610020e2b076e64b4"
If-Modified-Since: Sun, 23 Jul 2017 20:40:53 GMT
I imagine some additional information may be needed; please ask and I'll add it to the question.
Figured it out today. This post provides the answer. I saw that the server used weak ETags, while in the development environment strong ETags were used. The latter is as expected, since weak ETags were only introduced in Rails 5.
If you use Nginx with Rails 4 you might experience the same problem. Installing the rails_weak_etags gem solved it for me.
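For context: a weak ETag is a strong one prefixed with W/, and nginx's gzip filter downgrades strong ETags to weak (older versions dropped them entirely), so a Rails 4 app behind nginx can get back an If-None-Match that no longer strongly matches what it generated. A minimal sketch of the two comparison rules from RFC 7232, with hypothetical helper names:

```ruby
# Illustrative helpers (not Rails internals) for RFC 7232 ETag comparison.
# Weak comparison ignores the W/ prefix; strong comparison requires both
# tags to be strong and byte-identical.
def weak_compare(a, b)
  strip = ->(etag) { etag.sub(/\AW\//, "") }
  strip.call(a) == strip.call(b)
end

def strong_compare(a, b)
  a == b && !a.start_with?("W/")
end

etag = 'W/"6e92592bdb6c3cf610020e2b076e64b4"'  # as seen in the response above
puts weak_compare(etag, etag)    # => true
puts strong_compare(etag, etag)  # => false
```

This is why emitting weak ETags from Rails in the first place (as the gem does) makes the validator survive nginx's gzip filter unchanged.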

ODK collect can't GET forms from a rails API

I have a custom Rails application to store and aggregate forms from ODK Collect. I can submit a form, but when I try to get forms from the server, ODK Collect reports no forms even though forms exist.
When I test my API endpoint with curl, everything looks fine, since I render XML with an array of forms and Content-Type: text/xml. Here is the output of:
curl --head -X GET localhost:3000/formList
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Content-Type: text/xml; charset=utf-8
Etag: W/"950d3122ec123f00905885e0e57d8f1a"
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: d2b51a32-2a72-4735-b60a-efd7e0419b93
X-Runtime: 0.012544
Server: WEBrick/1.3.1 (Ruby/2.2.1/2015-02-26)
Date: Tue, 01 Sep 2015 16:57:34 GMT
Content-Length: 380
Connection: Keep-Alive
And curl -X GET localhost:3000/formList returns:
<?xml version="1.0" encoding="UTF-8"?>
<survey-xmls type="array">
<survey-xml>
<id type="integer">2</id>
<survey-xml>
<url>/uploads/survey_xml/survey_xml/2/CMS_1_.xml</url>
</survey-xml>
<created-at type="dateTime">2015-09-01T16:35:48+03:00</created-at>
<updated-at type="dateTime">2015-09-01T16:35:48+03:00</updated-at>
</survey-xml>
</survey-xmls>
Here is the server output when I run GET FORMS from ODK Collect.
Started GET "/formList" for ::1 at 2015-09-01 19:57:45 +0300
Processing by SurveyXmlsController#getforms as */*
SurveyXml Load (0.4ms) SELECT "survey_xmls".* FROM "survey_xmls"
Completed 200 OK in 3ms (Views: 2.3ms | ActiveRecord: 0.4ms)
Not sure if you ever figured it out, but I recently had problems getting a custom form list to pull into ODK Collect. The one piece of information I was missing was including the OpenRosa version in the response headers:
X-OpenRosa-Version: 1
Without this header, ODK Collect wouldn't pull in the list, but it also didn't show an error, which made it difficult to debug. This may be an obvious solution to people familiar with OpenRosa, but I was just starting to learn all this.
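One way to make sure the header is present on every response is a small Rack-style middleware; the class name and placement below are illustrative, not from the original answer, and the version value "1" is the one quoted above:

```ruby
# Illustrative Rack-style middleware that stamps every response with the
# header ODK Collect (an OpenRosa client) expects.
class OpenRosaHeaders
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers["X-OpenRosa-Version"] = "1"
    [status, headers, body]
  end
end

# No web server needed to demonstrate the idea -- a lambda stands in
# for the Rails app serving the form list:
form_list = ->(env) { [200, { "Content-Type" => "text/xml" }, ["<survey-xmls/>"]] }
status, headers, _body = OpenRosaHeaders.new(form_list).call({})
puts headers["X-OpenRosa-Version"]  # prints "1"
```

In a real Rails app the equivalent would be registering such a middleware in the stack, or setting the header in the controller action itself.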

Error code: ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_LENGTH

I have a Rails 2.3.8 application that worked just fine until I noticed that I can no longer download files from it, due to multiple Content-Length headers.
Interestingly, when I run the app in development mode everything works fine, but when I restart it in production mode the header is set twice.
Here are two different response headers:
When on production:
HTTP/1.1 200 OK
Date: Thu, 25 Jul 2013 07:33:42 GMT
Server: Mongrel 1.1.5
Status: 200 OK
X-Sendfile: filename.pdf
Content-length: 386742
Content-Transfer-Encoding: binary
Cache-Control: no-cache
Content-Disposition: attachment; filename="6301 OCCUPANT EMERGENCY PROCEDURES.pdf"
Content-Type: application/pdf
Content-Length: 1
Set-Cookie: *******
And on dev mode
HTTP/1.1 200 OK
Date: Thu, 25 Jul 2013 07:58:05 GMT
Server: Mongrel 1.1.5
Status: 200 OK
Content-Transfer-Encoding: binary
Cache-Control: private
Content-Disposition: attachment; filename="6301 OCCUPANT EMERGENCY PROCEDURES.pdf"
Content-Type: application/pdf
Content-Length: 386742
Set-Cookie: bssonline=f7d1552a46e499430af3367a0144267e; path=/
So in development mode Content-Length appears only once, whereas in production mode it appears twice, which is why I'm unable to download any files.
Any idea how to solve this issue?
Thanks
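Note that HTTP header names are case-insensitive, so "Content-length: 386742" and "Content-Length: 1" in the production response are the same header sent twice (most likely once by the X-Sendfile handling and once by the app, given the X-Sendfile header present only in production). A quick, hypothetical way to spot such duplicates in a header dump:

```ruby
# Header names are case-insensitive (RFC 7230), so group them by their
# downcased name to find duplicates. The pairs below are taken from the
# production response in the question.
prod_headers = [
  ["X-Sendfile",     "filename.pdf"],
  ["Content-length", "386742"],
  ["Content-Type",   "application/pdf"],
  ["Content-Length", "1"],
]

duplicates = prod_headers
  .group_by { |name, _| name.downcase }
  .select { |_, pairs| pairs.size > 1 }

p duplicates.keys  # => ["content-length"]
```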

redmine: download wiki.html instead of the wiki page

Since I wasn't able to register on the redmine.org forum, I am posting my question here.
I am running Redmine 2.1.4 on Ruby 1.8.7 and Rails 3.2.8.
It is served by Apache 2.2.1 on Debian Linux using Phusion Passenger 2.2.15.
When I click on the Wiki of a project, I get a "download Wiki.html" behavior rather than the wiki page itself.
The downloaded Wiki.html has the content of the main Wiki page in a simple HTML format.
Here are my response headers, with Content-Disposition clearly provoking this behaviour:
HTTP/1.1 200 OK
Date: Tue, 04 Dec 2012 01:17:33 GMT
Server: Apache/2.2
X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15
x-ua-compatible: IE=Edge,chrome=1
content-transfer-encoding: binary
X-Rack-Cache: miss
Content-Disposition: attachment; filename="Wiki.html"
Cache-Control: private
Status: 200
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 535
Keep-Alive: timeout=5, max=500
Connection: Keep-Alive
Content-Type: text/html
This is from the Redmine log:
Started GET "/projects/inventory/wiki.html" for 10.98.107.47 at Tue Dec 04 02:17:33 +0100 2012
Processing by WikiController#show as HTML
Parameters: {"project_id"=>"inventory"}
Current user: admin (id=4)
Rendered wiki/export.html.erb (1.7ms)
Rendered text template (0.0ms)
Sent data Wiki.html (0.4ms)
Completed 200 OK in 13ms (Views: 0.3ms | ActiveRecord: 3.2ms)
I was able to repair my installation by commenting out this line in redmine/public/.htaccess:
RewriteRule ^([^.]+)$ $1.html [QSA]
Is this directive an important part of Redmine, or of Rails? I actually don't know what put all this configuration into .htaccess, or why.
Redmine seems to be working fine, but does anybody know whether removing this directive will cause problems elsewhere in Redmine?
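That rule appears to come from the default public/.htaccess that Rails 2 shipped for page caching: it rewrites an extensionless URL to a cached .html file. Rather than deleting it outright, an alternative (untested, and not part of Redmine's stock .htaccess; the RewriteCond is my addition) is to guard it so it only fires when the cached file actually exists:

```apache
# Only serve the cached .html file if it actually exists on disk;
# otherwise fall through to the Rails application.
RewriteCond %{DOCUMENT_ROOT}/$1.html -f
RewriteRule ^([^.]+)$ $1.html [QSA]
```

With the guard in place, /projects/inventory/wiki would only be rewritten if a cached wiki.html had been written out, which a stock Redmine does not do.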

Using chunked encoding in a POST request to an asmx web service on IIS 6 generates a 404

I'm using a CXF client to communicate with a .net web service running on IIS 6.
This request (anonymised):
POST /EngineWebService_v1/EngineWebService_v1.asmx HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: "http://.../Report"
Accept: */*
User-Agent: Apache CXF 2.2.5
Cache-Control: no-cache
Pragma: no-cache
Host: uat9.gtios.net
Connection: keep-alive
Transfer-Encoding: chunked
followed by 7 chunks of 4089 bytes and one of 369 bytes, generates the following output after the first chunk has been sent:
HTTP/1.1 404 Not Found
Content-Length: 103
Date: Wed, 10 Feb 2010 13:00:08 GMT
Connection: Keep-Alive
Content-Type: text/html
Anyone know how to get IIS to accept chunked input for a POST?
Thanks
Chunked encoding should be enabled by default. You can check your setting with:
C:\Inetpub\AdminScripts>cscript adsutil.vbs get /W3SVC/AspEnableChunkedEncoding
The 404 makes me wonder if it's really a problem with the chunked encoding. Did you triple-check the URL?
You may well have URLScan running on your server. By default, URLScan is configured to reject requests that have a Transfer-Encoding: header, and URLScan sends 404 errors rather than a more telling server error:
UrlScan v3.1 failures result in 404 errors and not 500 errors.
Searching for 404 errors in your W3SVC log will include failures due
to UrlScan blocking.
You will need to look at the file located at C:\Windows\System32\inetsrv\URLScan\URLScan.ini (the path may differ). Somewhere in there you will find a [DenyHeaders] section that will look a bit like this (it will probably have more headers listed):
[DenyHeaders]
transfer-encoding:
Remove transfer-encoding: from this list and it should fix your problem.