PhantomJS loads pages slowly - connection issue

I'm new to PhantomJS, trying it on a standard CentOS server (with httpd etc. installed, but no modified settings apart from the nameservers being set to 8.8.8.8 and 8.8.4.4).
I'm using the default loadspeed.js file (albeit renamed). However, page loads appear to be extremely slow. Here's an example:
$ phantomjs phantomjs.js http://www.google.com/
starting
Loading time 90928 msec
$ phantomjs phantomjs.js http://173.194.67.138/ # (one of Google's public IPs)
starting
Loading time 30204 msec
When I load any URL hosted on the server itself (such as http://something.be), the load time is 141 msec:
$ phantomjs phantomjs.js http://something.be
starting
Loading time 141 msec
Does anyone have a clue what causes my connection to be this slow? The connection itself is fine; wget takes only seconds to download a file of several MB.
Also, when I run the exact same script locally on OS X against Google, this is the output:
phantomjs phantomjs.js http://google.com/
starting
Loading time 430 msec
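For reference, the script being timed is essentially PhantomJS's stock loadspeed.js example. A rough sketch of it (reconstructed from the PhantomJS examples, not the exact file; the extra "starting" line is presumably just a console.log added to the renamed copy):

var page = require('webpage').create(),
    system = require('system'),
    t, address;

if (system.args.length === 1) {
    console.log('Usage: loadspeed.js <some URL>');
    phantom.exit(1);
}

console.log('starting');
t = Date.now();
address = system.args[1];
page.open(address, function (status) {
    if (status !== 'success') {
        console.log('FAIL to load the address');
    } else {
        t = Date.now() - t;
        console.log('Loading time ' + t + ' msec');
    }
    phantom.exit();
});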

Found it - seems like IPv6 was the culprit.
I disabled it temporarily by running the following:
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
Testing confirms:
$ phantomjs phantomjs.js http://google.com
starting
Loading time 230 msec

Well, in my case, the page was waiting on some GET requests that could not reach their server, so it kept waiting for a long time. I could only figure this out when I used the remote debugger option.
phantomjs --remote-debugger-port=9000 loadspeed.js <some_url>
and, inside the loadspeed.js file, adding handlers to log every request and response:
page.onResourceRequested = function (req) {
    console.log('requested: ' + JSON.stringify(req, undefined, 4));
};
page.onResourceReceived = function (res) {
    console.log('received: ' + JSON.stringify(res, undefined, 4));
};
Then, loading localhost:9000 in any WebKit browser (Safari/Chrome) and watching the console logs, I could see that it was waiting a long time for some failed requests.
To bypass this, reduce the resource timeout:
page.settings.resourceTimeout = 3000; // in milliseconds, i.e. 3 seconds
and things were very quick after that. Hope this helps.
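A minimal sketch of combining resourceTimeout with PhantomJS's onResourceTimeout callback, so you can also see which resources are being aborted (the URL is just a placeholder):

var page = require('webpage').create();

// abort any resource that takes longer than 3 seconds
page.settings.resourceTimeout = 3000; // milliseconds

// log which resources hit the timeout
page.onResourceTimeout = function (request) {
    console.log('timed out: ' + request.url);
};

page.open('http://example.com/', function (status) {
    console.log('status: ' + status);
    phantom.exit();
});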

Related

mod_fcgid: read data timeout in 45 seconds and Premature end of script headers: index.php

One of my website clients had issues while placing orders.
When I checked my error log I could see this:
[warn] mod_fcgid: read data timeout in 45 seconds, referer: https://myDomain/cart
[error] Premature end of script headers: index.php, referer: https://myDomain/cart
What does this error mean? What should I do to eliminate it? Are there any settings to be changed in the Plesk control panel? Will it be solved if I change 'max_execution_time' in 'PHP Settings' to 3600?
I am using Plesk 12.0.18 on CentOS 5.11.
The error means that the website code in the index.php file fails to finish executing within the time limit set for the Apache FastCGI module and/or PHP.
Most likely, there is an error in index.php that makes it hang and never complete. In this case, you should increase the PHP error reporting level in Plesk > Domains > example.com > PHP Settings and review the script itself.
Less likely, the script is genuinely meant to take a long time to execute. In that case, you may simply increase the timeouts via Plesk. To allow 120 seconds instead of the default 45, do the following:
1. Set max_execution_time to 120 in Plesk > Domains > example.com > PHP settings.
2. Increase the FastCGI timeout by adding the following Apache directives in Plesk > Domains > example.com > Apache & Nginx settings > Additional Apache directives:
<IfModule mod_fcgid.c>
    FcgidIOTimeout 120
</IfModule>

Grunt tasks are slow in a Yeoman application

I have an Angular project built with Yeoman, talking to a Rails API backend.
Everything works fine, except that the Grunt tasks are very slow.
When I run grunt server --verbose:
Execution Time (2014-01-15 13:37:55 UTC)
loading tasks 14.3s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 26%
server 1ms 0%
preprocess:multifile 11ms 0%
clean:server 13ms 0%
concurrent:server 34.3s ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 63%
autoprefixer 1ms 0%
autoprefixer:dist 369ms ▇ 1%
connect:livereload 17ms 0%
watch 5.8s ▇▇▇▇▇▇▇▇▇ 11%
Total 54.8s
Some of my Gruntfile:
'use strict';
module.exports = function (grunt) {
  require('time-grunt')(grunt);
  require('load-grunt-tasks')(grunt);
  grunt.initConfig({
    ...
  });
  grunt.loadNpmTasks('grunt-preprocess');
  grunt.registerTask('server', function (target) {
    if (target === 'dist') {
      return grunt.task.run(['build', 'connect:dist:keepalive']);
    }
    grunt.task.run([
      'preprocess:multifile',
      'clean:server',
      'concurrent:server',
      'autoprefixer',
      'connect:livereload',
      'watch'
    ]);
  });
  grunt.registerTask('test', [
    'clean:server',
    'concurrent:test',
    'autoprefixer',
    'connect:test'
    //'karma'
  ]);
  grunt.registerTask('build', [
    'preprocess:multifile',
    'clean:dist',
    'useminPrepare',
    'concurrent:dist',
    'autoprefixer',
    'concat',
    'copy:dist',
    'cdnify',
    'ngmin',
    'cssmin',
    'uglify',
    'rev',
    'usemin'
  ]);
  grunt.registerTask('default', [
    'jshint',
    'test',
    'build'
  ]);
};
Size of project:
vagrant@vm ~/code/myapp/app/scripts
$> find -name "*.js" | xargs cat | wc -l
10209
I am running on Mac OS X 10.8 with an i7 processor, 16GB RAM, and an SSD... Is it normal that it takes so long? What makes the Grunt run (and especially "loading tasks") so slow?
Note: I am ssh'd into a Vagrant machine and running the grunt commands from there. If I run the grunt command on my native system, it's much faster (loading tasks takes 1.6s instead of 14.3s).
So the shared filesystem might be an issue. But why...
I had exactly the same problem with Vagrant and Yeoman's angular generator. After running grunt serve it took almost 30 seconds to compile Sass, restart the server, etc.
I was already using NFS but it was still slow. Then I tried jit-grunt, a just-in-time Grunt task loader. I replaced load-grunt-tasks with jit-grunt and everything is now a lot faster.
Here's a good article about JIT-Grunt:
https://medium.com/written-in-code/ced193c2900b
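A rough sketch of the swap in a Gruntfile like the one above (the useminPrepare mapping is the standard example from the jit-grunt README; adjust the mappings to the plugins you actually use):

module.exports = function (grunt) {
  require('time-grunt')(grunt);
  // load each task's plugin lazily, only when the task is actually run
  require('jit-grunt')(grunt, {
    useminPrepare: 'grunt-usemin' // tasks whose name differs from the package need a mapping
  });

  grunt.initConfig({
    // ... unchanged ...
  });
};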
I am using Grunt inside a Vagrant VirtualBox VM (Ubuntu 12.04). My files live on my host machine (OS X). Because the Grunt tasks are IO-intensive and run through file sharing, this makes them quite slow.
This can be improved by switching the synced folder to NFS (http://docs.vagrantup.com/v2/synced-folders/nfs.html). This makes Vagrant share files over NFS instead of the default Vagrant file sharing. It will be a bit faster, but not much.
For comparison, on my machine:
For running the "loading tasks" subtask:
natively: 1.2s
with NFS: 4s
Vagrant file sharing: 16s
If only a specific task is taking a lot of time, this specific task might be the problem. To troubleshoot, use time-grunt: https://npmjs.org/package/time-grunt.
I've had issues as well, and have found
nospawn: true
(spawn: false in newer versions of grunt-contrib-watch) to be the fastest option. I went from ~20s to ~1s to concat, minify, and uglify JS.
I had the same problem with Yeoman's ngbp generator and Vagrant. Even with NFS, a simple change to a template took about 30s to show up in the browser.
Using jit-grunt reduced that to 10s. After adding spawn: false, even though there was no reduction on the first load, changes took less than 1s (0.086s) to propagate to the browser! (Yes!)
The changes I made to Gruntfile.js (see the sketch after this list):
1. I commented out all the grunt.loadNpmTasks calls except grunt.loadNpmTasks('grunt-contrib-watch') (that's because of a task rename ngbp does later on);
2. I added require('jit-grunt')(grunt); after grunt.loadNpmTasks('grunt-contrib-watch');
3. I added spawn: false to delta: { options: { livereload: true, spawn: false } ... }.
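A minimal sketch of what the relevant watch configuration ends up looking like (file paths here are placeholders, and in ngbp the corresponding config block is named delta rather than watch):

module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-contrib-watch');
  require('jit-grunt')(grunt);

  grunt.initConfig({
    watch: {
      options: {
        livereload: true,
        spawn: false // reuse the same Grunt process instead of spawning one per change
      },
      scripts: {
        files: ['src/**/*.js'],
        tasks: [] // livereload only here; add your own build tasks as needed
      }
    }
  });
};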

How to debug/fix randomly occurring Redis::TimeoutError?

I have a Rails app running which uses Redis quite a lot; however, I'm seeing quite a few Redis::TimeoutError exceptions here and there. There is no pattern in the circumstances. They occur both in the web app and in the background jobs (which are processed using Sidekiq), not often but from time to time.
Now I have no idea how to track down the root cause of this and hence no idea how to fix it.
Here is a little background on my setup:
The Redis instance is running on a separate physical server which is connected to both my web server and my background server over a private local 1Gbit network. All servers are running Ubuntu 12.04. The Redis version is 2.6.10. I'm connecting from my Rails app (which is 3.2) using an initializer like so:
require 'redis'
require 'redis/objects'
REDIS = Redis.new(:url => APP_CONFIG['REDIS_URL'])
Redis.current = REDIS
This is the output of redis-cli INFO:
# Server
redis_version:2.6.10
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 3.2.0-38-generic x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.6.3
process_id:28475
run_id:d89bbb1b81d3169c4228cf23c0988ae437d496a1
tcp_port:6379
uptime_in_seconds:14913365
uptime_in_days:172
lru_clock:1507056
# Clients
connected_clients:233
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:19
# Memory
used_memory:801637360
used_memory_human:764.50M
used_memory_rss:594706432
used_memory_peak:4295394784
used_memory_peak_human:4.00G
used_memory_lua:31744
mem_fragmentation_ratio:0.74
mem_allocator:jemalloc-3.3.0
# Persistence
loading:0
rdb_changes_since_last_save:23166
rdb_bgsave_in_progress:0
rdb_last_save_time:1378219310
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:4
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
# Stats
total_connections_received:932395
total_commands_processed:3088408103
instantaneous_ops_per_sec:837
rejected_connections:0
expired_keys:31428
evicted_keys:3007
keyspace_hits:124093049
keyspace_misses:53060192
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:17651
# Replication
role:master
connected_slaves:1
slave0:192.168.0.2,6379,online
# CPU
used_cpu_sys:54000.21
used_cpu_user:73692.52
used_cpu_sys_children:36229.79
used_cpu_user_children:420655.84
# Keyspace
db0:keys=1498962,expires=1310
In my Redis config I have the following set:
daemonize yes
pidfile /var/run/redis/redis-server.pid
timeout 0
loglevel notice
databases 1
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
slave-serve-stale-data yes
slave-read-only yes
slave-priority 100
maxclients 1000
maxmemory 4GB
maxmemory-policy volatile-lru
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
This could come from many issues:
because you use the SAVE command (it is set up in your conf), generating a lot of I/O and hammering the server, especially if you use EBS volumes on Amazon;
because you have a Redis slave (same as before, doing a SAVE before mirroring);
because you use KEYS *, which is very slow when there are a lot of keys.
Try the SLOWLOG command on the Redis server to see if there are slow queries.
Write some logs when the TimeoutError happens, to see whether the offending Redis command shows up in the slow log.
Adjust your timeout setting on the client side.
It might be a problem on the client side even if the server performs normally. Each Redis client instance, not just the server, also has a timeout setting, and the default can be quite short. If the server does not respond within that time, the client raises a Redis::TimeoutError.
The first thing you can try is to set a longer timeout value and see if things get better.
redis_url = 'redis://user:password@host:port/'
redis = Redis.connect(:url => redis_url, :timeout => 0.7)
Even with a longer timeout setting there is no guarantee that timeouts will never happen, but then it would be a problem with the design of your system.
Are you rolling your own code to connect to Redis or just letting Sidekiq handle it? I think you should really just design your connection code to reconnect if the connection has been lost. You can rescue Redis::BaseConnectionError and reconnect.

CouchDB 1.3.1 on CentOS 6.4

I compiled and installed CouchDB. It seems to work great, except when I use views on the database: then it just spins and nothing happens, the CPU load spikes to 100%, and it slowly eats all memory and starts to swap heavily, which in turn increases the CPU load further.
I have tried both the js-1.70-12 that comes with CentOS 6.4 and building and installing my own js-1.85-1. All Erlang packages are installed from EPEL:
erlang-crypto-R14B-04.2.el6.x86_64
erlang-syntax_tools-R14B-04.2.el6.x86_64
erlang-mnesia-R14B-04.2.el6.x86_64
erlang-ssl-R14B-04.2.el6.x86_64
erlang-cosProperty-R14B-04.2.el6.x86_64
erlang-asn1-R14B-04.2.el6.x86_64
erlang-cosEventDomain-R14B-04.2.el6.x86_64
erlang-eunit-R14B-04.2.el6.x86_64
erlang-erl_docgen-R14B-04.2.el6.x86_64
erlang-toolbar-R14B-04.2.el6.x86_64
erlang-debugger-R14B-04.2.el6.x86_64
erlang-tools-R14B-04.2.el6.x86_64
erlang-typer-R14B-04.2.el6.x86_64
erlang-megaco-R14B-04.2.el6.x86_64
erlang-oauth-1.1.1-1.el6.x86_64
erlang-stdlib-R14B-04.2.el6.x86_64
erlang-hipe-R14B-04.2.el6.x86_64
erlang-kernel-R14B-04.2.el6.x86_64
erlang-runtime_tools-R14B-04.2.el6.x86_64
erlang-snmp-R14B-04.2.el6.x86_64
erlang-public_key-R14B-04.2.el6.x86_64
erlang-inets-R14B-04.2.el6.x86_64
erlang-ibrowse-2.2.0-4.el6.x86_64
erlang-cosEvent-R14B-04.2.el6.x86_64
erlang-cosNotification-R14B-04.2.el6.x86_64
erlang-edoc-R14B-04.2.el6.x86_64
erlang-otp_mibs-R14B-04.2.el6.x86_64
erlang-cosFileTransfer-R14B-04.2.el6.x86_64
erlang-cosTransactions-R14B-04.2.el6.x86_64
erlang-inviso-R14B-04.2.el6.x86_64
erlang-jinterface-R14B-04.2.el6.x86_64
erlang-erl_interface-R14B-04.2.el6.x86_64
erlang-diameter-R14B-04.2.el6.x86_64
erlang-gs-R14B-04.2.el6.x86_64
erlang-tv-R14B-04.2.el6.x86_64
erlang-appmon-R14B-04.2.el6.x86_64
erlang-odbc-R14B-04.2.el6.x86_64
erlang-wx-R14B-04.2.el6.x86_64
erlang-et-R14B-04.2.el6.x86_64
erlang-observer-R14B-04.2.el6.x86_64
erlang-sasl-R14B-04.2.el6.x86_64
erlang-dialyzer-R14B-04.2.el6.x86_64
erlang-common_test-R14B-04.2.el6.x86_64
erlang-os_mon-R14B-04.2.el6.x86_64
erlang-examples-R14B-04.2.el6.x86_64
erlang-compiler-R14B-04.2.el6.x86_64
erlang-erts-R14B-04.2.el6.x86_64
erlang-xmerl-R14B-04.2.el6.x86_64
erlang-orber-R14B-04.2.el6.x86_64
erlang-cosTime-R14B-04.2.el6.x86_64
erlang-ssh-R14B-04.2.el6.x86_64
erlang-docbuilder-R14B-04.2.el6.x86_64
erlang-percept-R14B-04.2.el6.x86_64
erlang-parsetools-R14B-04.2.el6.x86_64
erlang-ic-R14B-04.2.el6.x86_64
erlang-pman-R14B-04.2.el6.x86_64
erlang-webtool-R14B-04.2.el6.x86_64
erlang-test_server-R14B-04.2.el6.x86_64
erlang-reltool-R14B-04.2.el6.x86_64
erlang-R14B-04.2.el6.x86_64
erlang-mochiweb-1.4.1-5.el6.x86_64
Everything configures, makes, and installs as expected. You can dump data into the database, create documents and all that. But I cannot run any view, temporary or not.
The only error I see in the logs is like this one, and there are a lot of them:
[Sun, 18 Aug 2013 23:10:38 GMT] [error] [<0.124.0>] {error_report,<0.30.0>,
{<0.124.0>,crash_report,
[[{initial_call,
{mochiweb_socket_server,init,['Argument__1']}},
{pid,<0.124.0>},
{registered_name,[]},
{error_info,
{exit,eaddrinuse,
[{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}},
{ancestors,
[couch_secondary_services,couch_server_sup,
<0.31.0>]},
{messages,[]},
{links,[<0.93.0>]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,987},
{stack_size,24},
{reductions,459}],
[]]}}
But I have no idea what they mean.
Do I need to compile and install Erlang as well? All the above packages or just Erlang?
Your compilation and installation look fine. Your error (note the eaddrinuse in the traceback) means that some other process is already listening on the same address and port that your CouchDB is trying to use. Check for other listening processes with the netstat -anp command, or change CouchDB's listen port to a different one.

How to solve "mod_fastcgi.c.2566 unexpected end-of-file (perhaps the fastcgi process died)" when calling .php that takes long time to execute?

In my PHP application I restore DB2 databases. It works fine, but there is one huge one (2.9GB) that finishes with 500 - Internal Server Error.
I use exec() to run Unix shell commands (cp, db2, etc.) from the .php script. The same error happens whether it is triggered from Firefox or from a Ruby script.
I have to copy the backup image file first, which takes a few minutes. Then I call db2 to restore the image. For this particular database the PHP process finishes with the above error, and I then find this in the error log file:
2012-08-02 10:25:18: (mod_fastcgi.c.2566) unexpected end-of-file (perhaps the fastcgi process died): pid: 0 socket: tcp:127.0.0.1:9090
2012-08-02 10:25:18: (mod_fastcgi.c.3352) response not received, request sent: 2758 on socket: tcp:127.0.0.1:9090 for /wrational/tools/rationalTest.php?mode=restore&database=RATIONAL&from_database=dbb&dbbackuptype=weekly, closing connection
I set both default_socket_timeout and max_execution_time to 5660 in php.ini and confirmed via phpinfo() that they are set, but it doesn't look like it helped.
Any idea how I can make this one work?
update
It looks like it dies after 40 minutes.
The corresponding line in the access.log file looks like:
"GET /rational/tools/rationalTest.php?mode=restore&from_database=dbb&dbbackuptype=weekly HTTP/1.1" 500 369 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:11.0) Gecko/20100101 Firefox/11.0"
Looks like the request_terminate_timeout option in php-fpm.ini was causing the trouble. It was set to 30 minutes. I changed it to 0 and it looks good so far. I need to do more testing though.
; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
