highcharts-export-server always produces blank PNG

I'm trying to use highcharts-export-server in command-line/batch mode. I've installed the latest version (2.0.28). I'm running Windows 10 with Node 12.
I'm running it from the command line using the following command:
highcharts-export-server --nologo 1 --logLevel 4 --options chart.json --outfile chart.png --type png --width 500
The console output says:
starting highcharts export server v2.0.28...
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] attaching exit listeners to the process..
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] Pool started:
maxWorkers: 1
initialWorkers: 1
workLimit: 60
listening to process exit: true
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] phantom 1 - spawning worker
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] starting export
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] attempting to export from raw input
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] phantom - received work, finding available worker
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] phantom - found available worker
Fri Sep 25 2020 15:07:48 GMT+0100 (British Summer Time) [verbose] phantom 1 - starting work
Fri Sep 25 2020 15:07:52 GMT+0100 (British Summer Time) [notice] phantom worker 1 finished work ??? in 3562 ms
Fri Sep 25 2020 15:07:52 GMT+0100 (British Summer Time) [notice] phantom worker 1 - process was closed
Fri Sep 25 2020 15:07:52 GMT+0100 (British Summer Time) [notice] terminating, killing all running phantom processes
...but although a chart.png file is created and has the correct width, it's blank (transparent).
This is the content of the chart.json file, which I took from one of the examples on the Highcharts website:
{
  chart: {
    type: 'bar'
  },
  title: {
    text: 'Fruit Consumption'
  },
  xAxis: {
    categories: ['Apples', 'Bananas', 'Oranges']
  },
  yAxis: {
    title: {
      text: 'Fruit eaten'
    }
  },
  series: [{
    name: 'Jane',
    data: [1, 0, 4]
  }, {
    name: 'John',
    data: [5, 7, 3]
  }]
}
Note: I'll mention that when I first tried to install highcharts-export-server I hit a different problem: I couldn't run it at all (it failed with uncaughtException: TypeError: "file" argument must be a non-empty string). After some googling I found blog posts suggesting that I needed to install PhantomJS first and to use the --unsafe-perm option. So, the commands I actually used to install it were:
npm install phantomjs --unsafe-perm
npm install highcharts-export-server --unsafe-perm
Oh, and it wasn't until some time later that I realised I might have to install Highcharts itself :-) So I did that too (npm install highcharts), but it didn't actually seem to make a difference.

In case this helps anyone else, here is the full conversation about this issue on the Highcharts Forum.
In summary, it turned out that the NPM package was improperly installed (due to a firewall issue).
This wasn't apparent from the installation logs, but the effect was that, in the node_modules/highcharts/export-server/phantom/export.html file, a script block that should have contained minified JavaScript instead contained only lines saying undefined;.
The fix in my case (to work around the firewall issue) was to specify an http URL for the CDN when installing Highcharts Export Server, rather than the default URL which uses https. Once I'd done that, the export.html did contain the minified JS code, and the tool worked correctly.
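For reference, a sketch of what that looks like on Windows (as far as I can tell, HIGHCHARTS_CDN is the environment variable the 2.x installer reads for the CDN URL; check your version's README for the exact name):
set HIGHCHARTS_CDN=http://code.highcharts.com
npm install highcharts-export-server --unsafe-perm
With the http CDN the installer could download the Highcharts scripts through the firewall, which is what populates the script block in export.html.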

Related

How to build an ML model for text classification when the text contains non-'natural language' content?

I am looking for a model for text classification for our log notes analytics.
The challenge is that each note may contain non-'natural language' text. For example, some notes are thread backtrace output with symbols, and some are logging information from source code. Among these notes, the ones that describe how a customer is using our product are the ones we want to classify.
Is there any ML model or approach that I could use for this text classification?
Below are some examples of different notes (I changed some content so no company-confidential material is shown):
Backtrace info a developer pasted for bug analysis:
func118 4563453 344 = SYSTEM_FUNC_1 0x00000efa34343 0x0000000009f333a0 0xffe3ebdfd700 <<<<<
Total of 1 API working thread(s)
(gdb) thread find 0x123456
Thread 670 has target id 'Thread 0x123456 (LWP 443)'
(gdb) t 670
[Switching to thread 670 (Thread 0x123456 (LWP 443))]
#0 0x35353453563abcd in __lock_func1_ ()
from /disks/folder1/xxx/xxx_folder1/info_folder/info2_dir/lib64/libpthread.so.0
(gdb) ebt
#0 __lock_func1_()
#1 _LOCK_F_10()
#2 func_mod_4()
#3 func_mod_5()
#4 ModCon::disconnect()
#5 ModCon::abort()
#6 ModServ::disconnect()
#7 ModServManager::disconnect()
#8 mod1::func1()
#9 mod1::func2()
Product log for issue analysis:
cpu/MOD/MOD2/log/
start_mod.log:
Thu Dec 24 00:01:12 UTC 2019 FUN: HG: FILE_A: stopping
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: stopping, timeout -22-
Thu Dec 24 00:01:12 UTC 2019 system-state: cleared FILE_A_start_complete
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: run thread still running: con_b.pl FUN_run 0
Thu Dec 24 00:01:12 UTC 2019 FUN: FILE_A: calling con_b.pl FUN_cleanup 0, time left: -160-
Thu Dec 24 00:01:12 2019 cli: con_a.pl: FUN_cleanup for FILE_A
Thu Dec 24 00:01:12 2019 cmd: con_a.pl: sp got xxx error, will try to act_xxx
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 1 complete
Thu Dec 24 00:01:13 UTC 2019 FUN: FILE_A: action 2
Customer-related information about configuration (these are the notes I am most interested in classifying and retrieving from all the notes):
Customer xxx has created func_xxx to protect their data,
they also perform daily backup of their data by using func_xxx2.
They totally created xxx3 objects in each node...

Lighthouse in GitLab CI

I'm trying to use Lighthouse in GitLab CI to run a scan against a remote website after a deploy. The job keeps throwing an error.
My job configuration looks like this:
lighthouse:
  stage: scan
  image: markhobson/node-chrome
  script:
    - npm install -g lighthouse lighthouse-plugin-field-performance --unsafe-perm
    - lighthouse $URL --plugins=lighthouse-plugin-field-performance --chrome-flags=”--headless --no-sandbox” --verbose
I've also tried with image: buildkite/puppeteer. In both instances I get a similar error when I try to invoke Lighthouse, which looks like this:
Wed, 09 Oct 2019 20:22:42 GMT ChromeLauncher:verbose created /tmp/lighthouse.KXhqWF0
Wed, 09 Oct 2019 20:22:42 GMT ChromeLauncher:verbose Launching with command:
"/usr/bin/google-chrome-stable" --disable-translate --disable-extensions --disable-background-networking --disable-sync --metrics-recording-only --disable-default-apps --mute-audio --no-first-run --remote-debugging-port=44495 --disable-setuid-sandbox --user-data-dir=/tmp/lighthouse.KXhqWF0 about:blank
Wed, 09 Oct 2019 20:22:42 GMT ChromeLauncher:verbose Chrome running with pid 36 on port 44495.
Wed, 09 Oct 2019 20:22:42 GMT ChromeLauncher Waiting for browser.
Wed, 09 Oct 2019 20:22:42 GMT ChromeLauncher Waiting for browser...
Wed, 09 Oct 2019 20:22:43 GMT ChromeLauncher Waiting for browser.....
Wed, 09 Oct 2019 20:22:43 GMT ChromeLauncher Waiting for browser.......
Wed, 09 Oct 2019 20:22:44 GMT ChromeLauncher Waiting for browser.........
Wed, 09 Oct 2019 20:22:44 GMT ChromeLauncher Waiting for browser...........
etc
Wed, 09 Oct 2019 20:23:07 GMT ChromeLauncher:error connect ECONNREFUSED 127.0.0.1:44495
Wed, 09 Oct 2019 20:23:07 GMT ChromeLauncher:error Logging contents of /tmp/lighthouse.KXhqWF0/chrome-err.log
Wed, 09 Oct 2019 20:23:07 GMT ChromeLauncher:error
(google-chrome-stable:36): Gtk-WARNING **: cannot open display:
[1009/202244.656645:ERROR:nacl_helper_linux.cc(310)] NaCl helper process running without a sandbox!
Most likely you need to configure your SUID sandbox correctly
Unable to connect to Chrome
I'm not entirely sure what I need to do at this point. I'm wondering whether to try a more basic Node image and install what I need manually; I tried that originally and found that managing Chrome/Chromium alongside Lighthouse was not quite as straightforward as I'd hoped. Any thoughts or suggestions?
You could try using this image, which has everything installed and ready to run a report: https://hub.docker.com/r/femtopixel/google-lighthouse/
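If you go that route, a minimal job could look something like the sketch below (untested on my side; the entrypoint override, output flags and report path are my own assumptions, so adjust them to the image's documentation):
lighthouse:
  stage: scan
  image:
    name: femtopixel/google-lighthouse
    entrypoint: [""]
  script:
    - lighthouse $URL --chrome-flags="--headless --no-sandbox" --output html --output-path ./lighthouse-report.html
  artifacts:
    paths:
      - lighthouse-report.html
Keeping the report as a job artifact makes it easy to download from the pipeline page afterwards.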

Jenkins - changes exist in SVN, but Jenkins shows no changes, even though the build runs

I had scheduled polling in Jenkins every 5 minutes: */5 * * * *.
I committed changes to SVN, and I can see them in the SVN history (logs).
Jenkins starts the build, but it shows: Revision: x
No changes. And everything that was configured runs.
After 5 minutes, Jenkins starts another run with the message: Revision: x+1
Changes
just for test Jenkins deploy (detail)
by UserName
Afterwards, for testing purposes, I changed it to * * * * * to run every minute, and these were the results:
Jenkins runs:
Success > Console Output #176 Nov 29, 2018 2:14 PM
Success > Console Output #175 Nov 29, 2018 2:13 PM
Success > Console Output #174 Nov 29, 2018 2:11 PM
Success > Console Output #173 Nov 29, 2018 2:10 PM
Success > Console Output #172 Nov 29, 2018 2:09 PM
Success > Console Output #171 Nov 29, 2018 2:08 PM
Success > Console Output #170 Nov 29, 2018 2:07 PM
Success > Console Output #169 Nov 29, 2018 2:06 PM
---Commit goes here
Success > Console Output #168 Nov 29, 2018 1:01 PM
From 2:06 onwards Jenkins saw that there were changes and ran the job, but it did not actually show what those changes were; only at 2:14 did it display the SVN commit message.
Also in the Recent Changes log:
Changes
176 (Nov 29, 2018 2:14:19 PM)
just for test Jenkins deploy — UserName / detail
168 (Nov 29, 2018 1:01:36 PM)
This is strange behavior; does anyone have an idea where the issue could be?
Based on the conversation, we found that the culprit was the time difference between the two servers (Jenkins and SVN), which was approximately 7-8 minutes.
Why does this happen?
Jenkins realized that a new revision was available, but couldn't check it out as long as its own clock time was earlier than the new revision's timestamp.
A similar issue is described here:
Why up-to-date files committed to SVN will not be immediately pulled out by Hudson to build
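If you hit the same thing, a quick way to confirm it is to compare the clocks on the two machines and keep them synced with NTP. For example, on a typical Linux host (assuming the usual ntp/systemd tools are installed, and using pool.ntp.org purely as an example server):
# query an NTP server without changing the clock; the reported offset is in seconds
ntpdate -q pool.ntp.org
# on systemd hosts, check whether the clock is currently NTP-synchronised
timedatectl status
Run the same check on both the Jenkins and the SVN server and a 7-8 minute skew should show up immediately.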

NodeMCU traceback on reboot

I have an embedded application running NodeMCU that is not connected to a console as the UART has been repurposed to obtain serial data from an attached device.
During testing the application ran for about 15 hours then rebooted 5 times in a row before "settling" and continuing to run correctly.
Is it possible to log to a file a traceback of what caused the reboots? I am assuming some kind of PANIC error caused the reboot. I don't think it is a memory issue as the application reports the heap size (via http to a local server) every 30 seconds. Here is a log extract:
Wed May 18 00:46:37 2016 -> '{"s":"1782","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:08 2016 -> '{"s":"1783","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:47:39 2016 -> '{"s":"1784","i":"1afe34d26348", "d":"heap=12408
Wed May 18 00:48:19 2016 -> '{"s":"1785","i":"1afe34d26348", "d":"heap=11432
Wed May 18 00:50:06 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:51:25 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:52:45 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14560
Wed May 18 00:54:04 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14584
Wed May 18 00:55:24 2016 -> '{"s":"0","i":"1afe34d26348", "d":"heap=14608
Wed May 18 00:55:55 2016 -> '{"s":"1","i":"1afe34d26348", "d":"heap=12608
Wed May 18 00:56:26 2016 -> '{"s":"2","i":"1afe34d26348", "d":"heap=12600
Wed May 18 00:56:56 2016 -> '{"s":"3","i":"1afe34d26348", "d":"heap=12624
Wed May 18 00:57:27 2016 -> '{"s":"4","i":"1afe34d26348", "d":"heap=12600
In the above log, "s" is a sequential counter that is reset to 0 when the device reboots, and "d" is the heap size (you can ignore the "i" entry; it is just the MAC address of the device sending the data).
xpcall won't work in the case of a PANIC device reset.
I tried logging node.bootreason() to a file on reboot, but it doesn't contain a traceback to where the error occurred.
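For reference, the logging looks roughly like this (a sketch using the module-style file API; the exact fields returned by node.bootreason() and the file API differ between firmware versions):
-- init.lua sketch: append the boot reason and current heap to a log file at startup
local rawcode, reason = node.bootreason()
if file.open("bootlog.txt", "a+") then
  file.writeline("boot rawcode=" .. tostring(rawcode) .. " reason=" .. tostring(reason) .. " heap=" .. node.heap())
  file.close()
end
That confirms the device restarted, and why at a coarse level, but as noted it gives no traceback of the error that triggered the panic.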
Is there some method for troubleshooting NodeMCU applications that aren't connected to a console?

ConnectionFailure using mongo in rails 3.1

I have an app setup with Rails 3.1, Mongo 1.4.0, Mongoid 2.2.4.
What I am experiencing is this:
Mongo::ConnectionFailure: Failed to connect to a master node at localhost:27017
I've had this problem before, but it went away after a computer restart... this time it doesn't.
I don't understand; I didn't do anything. I just put my computer into sleep mode, went home, woke it up, and there it was.
Here is the output of sudo mongod
Fri Nov 25 21:47:14 [initandlisten] MongoDB starting : pid=1963 port=27017 dbpath=/data/db/ 64-bit host=xxx.local
Fri Nov 25 21:47:14 [initandlisten] db version v2.0.0, pdfile version 4.5
Fri Nov 25 21:47:14 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
Fri Nov 25 21:47:14 [initandlisten] build info: Darwin erh2.10gen.cc 9.6.0 Darwin Kernel Version 9.6.0: Mon Nov 24 17:37:00 PST 2008; root:xnu-1228.9.59~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
Fri Nov 25 21:47:14 [initandlisten] options: {}
Fri Nov 25 21:47:14 [initandlisten] journal dir=/data/db/journal
Fri Nov 25 21:47:14 [initandlisten] recover : no journal files present, no recovery needed
Fri Nov 25 21:47:15 [websvr] admin web console waiting for connections on port 28017
Fri Nov 25 21:47:15 [initandlisten] waiting for connections on port 27017
And I am able to connect with mongo in the terminal.
After 2 hours of Googling, I'm hoping the SO community can figure this out.
Please, if you need more information about my app-setup just ask.
Thanks!
What you see is that the connection times out... that happens either after a long period of inactivity, or if you put your computer to sleep.
You can change/increase the timeout value, but that won't stop the connection from eventually timing out.
Some MongoDB drivers allow you to set :timeout => false, but Mongoid still seems to have problems with that (see the last 3 links in the list below).
Hope this helps.
See also:
Mongodb server goes down, how to prevent Rails app from timing out?
MongoDB: What is connection pooling and timeout?
https://github.com/mongodb/mongo-ruby-driver
How can I query mongodb using mongoid/rails without timing out?
http://groups.google.com/group/mongoid/browse_thread/thread/b5c94e7047b42f8a
https://github.com/mongoid/mongoid/issues/455
Try to change localhost to 127.0.0.1!
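If you go that route with Mongoid 2.x, the host lives in config/mongoid.yml; a minimal sketch (the database name is a placeholder):
development:
  host: 127.0.0.1
  port: 27017
  database: my_app_development
Restart the Rails server after changing it so the new connection settings are picked up.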
