CircleCI tests passing but then sitting there until timing out

I'm running Jasmine tests, which pass, but then nothing happens until the eventual timeout. What am I missing?
PhantomJS 2.1.1 (Linux 0.0.0): Executed 49 of 61 SUCCESS (0 secs / 0.539 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 50 of 61 SUCCESS (0 secs / 0.542 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 51 of 61 SUCCESS (0 secs / 0.546 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 52 of 61 SUCCESS (0 secs / 0.549 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 53 of 61 SUCCESS (0 secs / 0.553 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 54 of 61 SUCCESS (0 secs / 0.562 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 55 of 61 SUCCESS (0 secs / 0.567 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 56 of 61 SUCCESS (0 secs / 0.573 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 57 of 61 SUCCESS (0 secs / 0.575 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 58 of 61 SUCCESS (0 secs / 0.583 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 59 of 61 SUCCESS (0 secs / 0.588 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 60 of 61 SUCCESS (0 secs / 0.593 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 61 of 61 SUCCESS (0 secs / 0.621 secs)
PhantomJS 2.1.1 (Linux 0.0.0): Executed 61 of 61 SUCCESS (0.446 secs / 0.621 secs)
command ((npm :test)) took more than 10 minutes since last output

Support solved the issue:
node_modules/karma/bin/karma start --log-level=debug --single-run
Running with --single-run causes PhantomJS to shut down cleanly. This is apparently a special "Continuous Integration" mode for Karma: http://karma-runner.github.io/1.0/config/configuration-file.html
So all you should have to do is add singleRun: true to your karma.conf.js file:

// Continuous Integration mode
// if true, Karma captures browsers, runs the tests and exits
singleRun: true,
Set singleRun to true in your Karma config file. When the run ends, Karma exits with 0 or 1 depending on whether the tests passed or failed, instead of watching the spec files for changes indefinitely.
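For context, here is a minimal karma.conf.js with the setting in place (everything apart from singleRun is illustrative boilerplate, not taken from the question):
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    browsers: ['PhantomJS'],
    // Continuous Integration mode: run the tests once and exit
    singleRun: true
  });
};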

The default in the Karma config is to run tests in watch mode.
If you would like to run ng test as a single run, execute this command:
ng test --watch false

Related

How to analyze unsuccessful builds in the analysis phase?

A Bazel binary that I am building fails during the analysis phase. What flags and tools can I use to debug why it fails during analysis?
Currently, clean builds return the following output:
ERROR: build interrupted
INFO: Elapsed time: 57.819 s
FAILED: Build did NOT complete successfully (133 packages loaded)
If I retry the build after the failure, I receive the following output:
ERROR: build interrupted
INFO: Elapsed time: 55.514 s
FAILED: Build did NOT complete successfully (68 packages loaded)
What flags can I use to identify
what packages are being loaded,
what package the build is being interrupted on, and
whether the interruption is coming from a timeout or an external process?
Essentially, something similar to --verbose_failures, but for the analysis phase rather than the execution phase.
So far I have run my build through the build profiler and have not been able to glean any insight. Here is the output of my build:
WARNING: This information is intended for consumption by Blaze developers only, and may change at any time. Script against it at your own risk
INFO: Loading /<>/result
INFO: bazel profile for <> at Mon Jun 04 00:10:11 GMT 2018, build ID: <>, 49405 record(s)
INFO: Aggregating task statistics
=== PHASE SUMMARY INFORMATION ===
Total launch phase time 9.00 ms 0.02%
Total init phase time 91.0 ms 0.16%
Total loading phase time 1.345 s 2.30%
Total analysis phase time 57.063 s 97.53%
Total run time 58.508 s 100.00%
=== INIT PHASE INFORMATION ===
Total init phase time 91.0 ms
Total time (across all threads) spent on:
Type Total Count Average
=== LOADING PHASE INFORMATION ===
Total loading phase time 1.345 s
Total time (across all threads) spent on:
Type Total Count Average
CREATE_PACKAGE 0.67% 9 3.55 ms
VFS_STAT 0.69% 605 0.05 ms
VFS_DIR 0.96% 255 0.18 ms
VFS_OPEN 2.02% 8 12.1 ms
VFS_READ 0.00% 5 0.01 ms
VFS_GLOB 23.74% 1220 0.93 ms
SKYFRAME_EVAL 24.44% 3 389 ms
SKYFUNCTION 36.95% 8443 0.21 ms
SKYLARK_LEXER 0.19% 31 0.29 ms
SKYLARK_PARSER 0.68% 31 1.04 ms
SKYLARK_USER_FN 0.03% 5 0.27 ms
SKYLARK_BUILTIN_FN 5.91% 349 0.81 ms
=== ANALYSIS PHASE INFORMATION ===
Total analysis phase time 57.063 s
Total time (across all threads) spent on:
Type Total Count Average
CREATE_PACKAGE 0.30% 138 3.96 ms
VFS_STAT 0.05% 2381 0.03 ms
VFS_DIR 0.19% 1020 0.35 ms
VFS_OPEN 0.04% 128 0.61 ms
VFS_READ 0.00% 128 0.01 ms
VFS_GLOB 0.92% 3763 0.45 ms
SKYFRAME_EVAL 31.13% 1 57.037 s
SKYFUNCTION 65.21% 32328 3.70 ms
SKYLARK_LEXER 0.01% 147 0.10 ms
SKYLARK_PARSER 0.03% 147 0.39 ms
SKYLARK_USER_FN 0.20% 343 1.08 ms
As for my command, I am running:
bazel build src:MY_TARGET --embed_label MY_LABEL --stamp --show_loading_progress
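For reference, phase summaries like the one above come from Bazel's profiler; a sketch of that workflow, assuming a placeholder output path:
bazel build src:MY_TARGET --embed_label MY_LABEL --stamp --profile=/tmp/build.profile
bazel analyze-profile /tmp/build.profile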
Use the --host_jvm_debug startup flag to debug Bazel itself during a build.
From https://bazel.build/contributing.html:
Debugging Bazel
Start creating a debug configuration for both C++ and
Java in your .bazelrc with the following:
build:debug -c dbg
build:debug --javacopt="-g"
build:debug --copt="-g"
build:debug --strip="never"
Then you can rebuild Bazel with bazel build --config debug //src:bazel and use your favorite debugger to start debugging.
For debugging the C++ client you can just run it from gdb or lldb as
you normally would. But if you want to debug the Java code, you must
attach to the server using the following:
Run Bazel with debugging option --host_jvm_debug before the command (e.g., bazel --batch --host_jvm_debug build //src:bazel).
Attach a debugger to port 5005. With jdb, for instance, run jdb -attach localhost:5005. From within Eclipse, use the remote
Java application launch configuration.
Our IntelliJ plugin has built-in debugging support.
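Put together, a sketch of the attach workflow described above, reusing the target from the question:
# terminal 1: start the build with the JVM debug agent listening
bazel --host_jvm_debug build src:MY_TARGET --embed_label MY_LABEL --stamp
# terminal 2: attach jdb to the default port 5005
jdb -attach localhost:5005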

Rails server command bug

I started a new Ruby on Rails project called "myrubyblog", changed directories into my project, then launched the rails server command, but the terminal outputs this after an enormous number of lines of information I don't understand:
-- Other runtime information -----------------------------------------------
* Loaded script: bin/rails
* Loaded features:
0 enumerator.so
1 thread.rb
2 rational.so
3 complex.so
4 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/armv7l-linux-eabihf/enc/encdb.so
5 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/armv7l-linux-eabihf/enc/trans/transdb.so
6 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/armv7l-linux-eabihf/rbconfig.rb
7 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/compatibility.rb
8 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/defaults.rb
9 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/deprecate.rb
10 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/errors.rb
11 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/version.rb
12 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/requirement.rb
13 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/platform.rb
14 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/basic_specification.rb
15 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/stub_specification.rb
16 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/util/list.rb
17 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/armv7l-linux-eabihf/stringio.so
18 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/rfc2396_parser.rb
19 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/rfc3986_parser.rb
20 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/common.rb
21 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/generic.rb
22 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/ftp.rb
23 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/http.rb
24 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/https.rb
25 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/ldap.rb
26 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/ldaps.rb
27 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri/mailto.rb
28 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/uri.rb
29 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/specification.rb
30 /home/pi/.rvm/rubies/ruby-2.5.1/lib/ruby/2.5.0/rubygems/exceptions.rb
... (up to 320 lines)
[NOTE]
You may have encountered a bug in the Ruby interpreter or extension libraries.
Bug reports are welcome.
For details: http://www.ruby-lang.org/bugreport.html
Aborted
What is that supposed to mean?

travis " Segmentation fault " but works fine locally

I ran into a 'Segmentation fault' error when using Travis CI for my project: IPython-Dashboard.
There is no error message, and it works fine locally, which I find confusing. Can anyone give me an idea for fixing this? Thanks.
Here is the Travis build log on the cloud:
$ nosetests --with-coverage --cover-package=dashboard
../home/travis/build.sh: line 45: 3187 Segmentation fault (core dumped)
nosetests --with-coverage --cover-package=dashboard
The command "nosetests --with-coverage --cover-package=dashboard" exited with 139.
Here is the build log locally [OS X]:
taotao@mac007:~/Desktop/github/IPython-Dashboard$ sudo nosetests --with-coverage --cover-package=dashboard
.../Users/chenshan/Desktop/github/IPython-Dashboard/dashboard/tests/testCreateData.py:78: Warning: Can't create database 'IPD_data'; database exists
conn.cursor().execute('CREATE DATABASE IF NOT EXISTS {};'.format(config.sql_db))
/Library/Python/2.7/site-packages/pandas/io/sql.py:599: FutureWarning: The 'mysql' flavor with DBAPI connection is deprecated and will be removed in future versions. MySQL will be further supported with SQLAlchemy engines.
warnings.warn(_MYSQL_WARNING, FutureWarning)
...
Name Stmts Miss Cover Missing
---------------------------------------------------------------------
dashboard.py 13 0 100%
dashboard/client.py 1 0 100%
dashboard/client/sender.py 11 3 73% 26-27, 33
dashboard/conf.py 0 0 100%
dashboard/conf/config.py 29 0 100%
dashboard/server.py 0 0 100%
dashboard/server/resources.py 0 0 100%
dashboard/server/resources/dash.py 35 10 71% 36, 55-56, 67-69, 86-89
dashboard/server/resources/home.py 40 12 70% 25, 28-30, 83-91
dashboard/server/resources/sql.py 27 11 59% 30, 52-75
dashboard/server/resources/status.py 8 1 88% 19
dashboard/server/resources/storage.py 13 5 62% 26-28, 43-47
dashboard/server/utils.py 79 18 77% 20-24, 78-80, 82-83, 86, 96, 99-100, 126-127, 140-142
dashboard/server/views.py 21 1 95% 16
---------------------------------------------------------------------
TOTAL 277 61 78%
----------------------------------------------------------------------
Ran 6 tests in 4.600s
OK
taotao@mac007:~/Desktop/github/IPython-Dashboard$
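One way to get more detail out of a segfault in CI, sketched on the assumption that the build environment permits core dumps (the core file's name and location vary by system):
ulimit -c unlimited                                    # allow the crashing process to write a core file
nosetests --with-coverage --cover-package=dashboard    # reproduce the crash
gdb $(which python) core                               # then 'bt' shows the C-level backtrace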

Rails performance test warmup time

I am using a Rails performance test run as rake test:benchmark. The result gives me the warmup time.
I can't find the meaning of the 211 ms warmup time. Some of the tests take a longer warmup time. I know what wall_time, user_time, etc. mean.
.ApiTest#test_license_pool (211 ms warmup)
wall_time: 167 ms
user_time: 47 ms
memory: 6.2 MB
gc_runs: 0
gc_time: 0 ms

Why are Ruby processes at 100% CPU on Passenger

I have a Rails app (2.3.5) running on a VPS with 4 cores @ 2 GHz and 4 GB memory. I am running nginx (0.7.61) and Phusion Passenger (2.2.14) on Ruby Enterprise (1.8.7-2010.01) with the max pool size set at 30. My problem is that it seems as if every Ruby process that is executing a Rails request runs at near 100% CPU. If I run top, they drop off every time the display refreshes, so they are not hung, but they are still running at 100%.
Is there any way I can bring this down? Or at least figure out what portion of code is spiking the CPU? Is this normal behavior?
Here is the top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2427 psadmin 25 0 91904 76m 2696 R 100 1.9 739:05.96 Rails: /var/www/apps/main_rails_app/current
3457 psadmin 25 0 98180 82m 2532 R 100 2.0 711:21.91 Rails: /var/www/apps/main_rails_app/current
2415 psadmin 25 0 93952 77m 2708 R 99 1.9 727:49.31 Rails: /var/www/apps/main_rails_app/current
3455 psadmin 25 0 99204 83m 2528 R 69 2.0 726:04.70 Rails: /var/www/apps/main_rails_app/current
2791 psadmin 16 0 98044 81m 2492 S 31 2.0 0:10.16 Rails: /var/www/apps/main_rails_app/current
8034 psadmin 15 0 8160 3656 1772 S 1 0.1 0:35.39 nginx: worker process
8035 psadmin 15 0 8324 3696 1732 S 0 0.1 0:31.34 nginx: worker process
2588 psadmin 15 0 197m 183m 2712 S 0 4.5 1:02.16 Rails: /var/www/apps/main_rails_app/current
Thanks!
Edit: Tried strace with follow forks as mentioned below. This is the output that is dumped over and over:
sudo strace -f -p 3455
clock_gettime(CLOCK_MONOTONIC, {394577, 508326476}) = 0
select(0, [], [], [], {0, 0}) = 0 (Timeout)
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
sigreturn()
Check your logs for suspicious behavior. In general, Rails does use a lot of CPU, though. You could also try pointing strace at the offending PIDs. (For what it's worth, the loop above of SIGVTALRM plus a zero-timeout select is characteristic of Ruby 1.8's green-thread scheduler timer, so it does not by itself point at the hot code.)
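As a concrete first pass, Passenger's bundled tools can be combined with the strace approach above (the PID is taken from the top output in the question):
passenger-status           # pool usage: which application processes are active
passenger-memory-stats     # per-process memory breakdown
sudo strace -f -p 3455     # attach to one of the pegged Rails processes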
