Local PhantomJS/Highcharts export server on Tomcat

I have searched everywhere on this site and the internet without getting a clear understanding.
I have successfully installed PhantomJS and Highcharts on CentOS 6.7 per the "Setting Up the Export Server" instructions.
Here are the required .js files in
"/software/phantomjs/highcharts/highcharts-export-server-master/phantomjs":
highcharts-convert.js highcharts-more.js highstock.js highmaps.js
d3-funnel.js gauge.min.js exporting.js jquery.1.9.1.min.js
I'm very new to PhantomJS and especially Highcharts. What I am looking to do is provide a way for batch programs running on one server (Server B) to send POST requests to the export server on Server A and get back .png or .pdf files.
The .war is deployed on Tomcat, 10 separate PhantomJS servers are running starting at port 7777, and a standalone PhantomJS server is also running at 127.0.0.1:3003, per the following command and configuration file:
phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003
# phantomjs properties
# the host and port phantomjs listens to
host = 127.0.0.1
port = 7777
# location of the phantomjs executable, could be for example /usr/local/bin/phantomjs
exec = /software/phantomjs/phantomjs
# specify here an alternative location (the whole path!) for the script that
# starts a PhantomJS server, e.g. /home/bert/scripts/my-highcharts-convert.js
# Leave empty if you're using the script bundled with the export-server.
script =
# connect properties used to connect with phantomjs running as HTTP-server
# all values in milliseconds
# specifies the timeout when reading from phantomjs when a connection is established
readTimeout = 6000
# timeout to be used when opening a communications link to the phantomjs server
connectTimeout = 1000
# the whole request to the phantomjs server is scheduled; the max timeout can
# last up to this value, because in Java you can't rely on the above two timeouts
maxTimeout = 6500
# Pool properties
# number of phantomjs servers you can run in the pool
poolSize = 10
# The pool is implemented as a BlockingQueue. When asking for a phantom server
# connection and nothing is available, it waits for the number of milliseconds
# defined by maxWait
maxWait = 6000
# Keep files in the temp folder for a certain retentionTime, defined in milliseconds
retentionTime = 300000
I can hit the http://my-server/highcharts-export-web/ demo page and it works fine from a browser.
THE QUESTIONS I HAVE:
What URL do I want to use for my remote batch program?
Is //my-server/highcharts-export-web/ supposed to work for my remote calls?
Is the webapp designed to receive direct requests from non-browser clients?
What process calls the 10 servers in the server pool?
Can someone provide an example of how you would set up remote calls to the export server (they will run multiple times per day) and return .png's or .pdf's from a batch program?
Thanks
Brian

We have had success! Part of the problem was calling a hosted server over HTTP from an HTTPS page (JavaScript doesn't like it).
We are also a ColdFusion shop, so this first example is a straight server-side call with a mostly minified basic chart.
Note: we did have success calling HTTP from HTTPS server-side through the cfhttp tag (which is similar to PHP's cURL functions)...
<!--- first we need to create a small chart --->
{
"xAxis":{
"categories":["Jan","Feb","Mar"]
},
"series":[{
"data":[29.9,71.5,106.4]
}]
}
<!--- let's go a little bit larger (I've pasted in something bigger below) --->
<cfsavecontent variable="stringItForMe">
<cfprocessingdirective suppressWhiteSpace="true">
{xAxis: {categories: ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']},series: [{data: [29.9, 71.5, 106.4, 129.2, 144.0, 176.0, 135.6, 148.5, 216.4, 194.1, 95.6, 54.4]}]}
</cfprocessingdirective>
</cfsavecontent>
<!--- next we URL-encode the variable. The parameter names would normally have double quotes for cleanliness, but Highcharts export works without them. --->
<cfset stepTwo = URLEncodedFormat(stringItForMe)>
<!---
setting my URL target...
--->
<cfset urlMaker = "http://targetServer:10001/highcharts-export-web/">
<cfhttp url="#urlMaker#" result="r" method="post" timeout="1">
<cfhttpparam type="HEADER" name="Content-Type" value="application/x-www-form-urlencoded; charset=UTF-8">
<cfhttpparam name="data" type="body" value="async=true&type=jpeg&width=800&options=#stepTwo#">
</cfhttp>
<cfset targetImage = urlMaker&r.fileContent>
<cfoutput><img src="#targetImage#"/></cfoutput>
Now this second one is JavaScript/jQuery. It was pulled from a jsfiddle example provided by Highsoft, but I don't remember if I found it on the forum or the export server page. We called the Highcharts hosted export server over HTTPS and it worked (however, our in-house rendering looked much better; I think we have some additional dependencies, but either way, success on both).
We had to debug this JavaScript to get the ColdFusion version of it to function.
<br><br>
<button id='b'>Run Code</button>
<div id="container"></div>
<script>
$(function () {
$("#b").click(testPOST);
//var exportUrl = 'http://targetServer:10001/highcharts-export-web/';
var exportUrl = 'https://export.highcharts.com/';
function testPOST() {
var optionsStr = JSON.stringify({
"xAxis": {
"categories": ["Jan", "Feb", "Mar"]
},
"series": [{
"data": [29.9, 71.5, 106.4]
}]
}),
dataString = encodeURI('async=true&type=jpeg&width=400&options=' + optionsStr);
if (window.XDomainRequest) {
var xdr = new XDomainRequest();
xdr.open("post", exportUrl+ '?' + dataString);
xdr.onload = function () {
console.log(xdr.responseText);
$('#container').html('<img src="' + exportUrl + xdr.responseText + '"/>');
};
xdr.send();
} else {
$.ajax({
type: 'POST',
data: dataString,
url: exportUrl,
success: function (data) {
console.log('get the file from relative url: ', data);
$('#container').html('<img src="' + exportUrl + data + '"/>');
},
error: function (err) {
debugger;
console.log('error', err.statusText)
}
});
}
}
});
</script>
I think with these two working examples someone could port this to PHP or other languages with little trouble. Just keep in mind to have your console open if in JavaScript, and debugging on in ColdFusion :)
Lastly, one of the most frustrating parts of this discovery was hitting the server, talking to the server, but pulling back the default export example page in data (or fileContent for ColdFusion). It had us scratching our heads because we didn't know how to get past that part and just get our file.

Related

Cannot POST with ESP8266 (espruino)

I cannot make a POST request (GET works fine) with Espruino.
I've already checked the documentation and my code seems to match it.
Here is my code:
let json = JSON.stringify({v:"1"});
let options = {
host: 'https://******',
protocol: 'https',
path: '/api/post/*****',
method: 'POST',
headers:{
"Content-Type":"application/json",
"Content-Length":json.length
}
};
let post = require("http").request(options, function(res){
res.on('data', function(data){
console.log('data: ' + data);
});
res.on('close', function(data){
console.log('Connection closed');
});
});
post.end(json);
The Espruino console only returns the 'Connection closed' console.log.
The Node.js server console (hosted on Heroku and tested with Postman) doesn't return anything.
Obviously the ESP8266 is connected to the network.
What you're doing looks fine (an HTTP POST example is here); however, Espruino doesn't support HTTPS on ESP8266 at the moment (there isn't enough memory on the chips for JS and HTTPS).
So Espruino will be ignoring the https in the URL and going via HTTP. It's possible that your server supports HTTP GET requests, but POST requests have to be made via HTTPS, which is why it's not working?
If you did need to use HTTPS with Espruino then there's always the official Espruino WiFi boards, or I believe ESP32 supports it fine too.
Also, you're using a package called "http" and then trying to send a request over HTTPS. You should also log out 'data' in the res.on('close') handler so you can get some errors to work with.
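Putting both points together, here is a small sketch of how the request options could be normalized before calling require("http").request. The regex-based scheme stripping is my illustration, not an Espruino API:

```javascript
// Normalize request options for ESP8266, where only plain HTTP is available:
// the host must be a bare hostname (no scheme), and the protocol must be http.
function normalizeForEsp8266(options) {
  return Object.assign({}, options, {
    host: options.host.replace(/^https?:\/\//, ''), // strip any scheme prefix
    protocol: 'http',                               // HTTPS unsupported on ESP8266
  });
}
```

Pass the normalized options to require("http").request as in the question; the path, method, and headers are carried through unchanged.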

Ajax calls to TFS 15 Rest Api stopped working after upgrade

In TFS 2015 Update 3 everything worked without issues. I used to consume all APIs using the npm package request without any problems.
Using jQuery, the following call would also always complete correctly:
//this used to work in 2015 Update 3
var request = {
url: "https://my_server/tfs/DefaultCollection/_apis/projects?api-version=2.0",
type:'GET',
contentType: "application/json",
accepts: "application/json",
dataType: 'json',
data: JSON.stringify(data),
beforeSend: function (xhr) {
xhr.setRequestHeader("Authorization", "Basic " + btoa("my_username:my_password"));
}
};
$.ajax(request);
After upgrading to TFS 15 RC2, the above mechanism no longer works. The server always returns a 401 Unauthorized error.
Testing the same call via curl, everything worked out well:
//this works well
curl -u my_username:my_password https://my_server/tfs/DefaultCollection/_apis/projects?api-version=2.0
But again failed when I tried to send the credentials in the header, something like this:
//this will fail
curl https://my_server/tfs/DefaultCollection/_apis/projects?api-version=2.0 \
-H "Accept: application/json" \
-H "Authorization: Basic eNjPllEmF1emEuYmFuNppOUlOnVuZGVmaW5lZA=="
Same 401 Unauthorized error.
I tried to set up my personal access token, since it is included in TFS 15 RC2, and do a test as indicated here:
$( document ).ready(function() {
$.ajax({
url: 'https://my_server/defaultcollection/_apis/projects?api-version=2.0',
dataType: 'json',
headers: {
'Authorization': 'Basic ' + btoa("" + ":" + myPatToken)
}
}).done(function( results ) {
console.log( results.value[0].id + " " + results.value[0].name );
});
});
and it also fails. However, after replacing myPatToken with my actual password and passing my username as well, the request completed successfully:
//everything works correctly with the headers like this
headers: {
'Authorization': 'Basic ' + btoa("my_username:my_password")
}
In a nutshell, something goes wrong when I set up the header like this (using jQuery):
//this fails
beforeSend: function (xhr) {
xhr.setRequestHeader("Authorization", "Basic " + btoa("my_username:my_password"));
}
And it looks like the npm package request, which is the one I'm using, probably uses the beforeSend property or something similar and is failing too.
//this used to work with 2015 Update 3, not anymore after upgrading to 15 RC2
var options = {
url: 'https://my_server/defaultcollection/_apis/projects?api-version=2.0',
method:'GET',
headers: {
'Authorization': "Basic " + btoa("my_username:my_password")
},
json: data
};
request(options, (error, response, body) => {
if (!error && response.statusCode == 200) {
console.log(response);
} else {
console.log(error);
}
});
It makes me think it is probably something in the IIS configuration, but Basic Authentication is properly configured. Is there a way to get this working using the request package?
Did something change in the IIS configuration after the upgrade?
The problem got solved after restarting the server, so I guess my case was an isolated situation, not related to TFS itself.
Now sending the request using the request package seems to work well.
However, it is strange that testing it in the browser using jQuery still fails. I noticed that
var username = "my_username",
password = "my_password";
btoa(my_username + ":" + my_password); //generates wrong encoded string
generates a different encoded string than simply
btoa("my_username:my_password") //generates right encoded string
The right one is a few characters shorter, this is an example:
eXVuaWQuYmZGV12lpmaW5lZA== //wrong
eXVuaWQuYmZGVsbEAxMw== //correct
No idea why, though.
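One plausible explanation (an assumption on my part, not confirmed in the original post): one of the variables in the concatenated version was undefined at call time, and JavaScript silently stringifies undefined during concatenation, so the "wrong" value is actually the Base64 of a credential ending in ":undefined". This is easy to reproduce in Node, using Buffer as a btoa stand-in:

```javascript
// Node's Buffer provides the same Base64 encoding btoa does in the browser.
const b64 = (s) => Buffer.from(String(s), 'binary').toString('base64');

let password;                                      // declared, never assigned
const wrong = b64('my_username' + ':' + password); // encodes "my_username:undefined"
const right = b64('my_username:my_password');

// Decoding the "wrong" value shows what the server actually received:
const decoded = Buffer.from(wrong, 'base64').toString('binary');
console.log(decoded); // "my_username:undefined"
```

So the two strings differ not because btoa misbehaves, but because the inputs differ; the server then rejects the "undefined" password with a 401.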

Connect to Socket.io server from a Rails server

[update 1]
I found out that io.connect('http://xxxxx.herokuapp.com') actually sends out a request to xxxxx.herokuapp.com on port 3000:
The connection works if the request is WITHOUT the port number. I didn't specify a port in my io.connect, so how can I get rid of it?
Request URL:http://xxxxx.herokuapp.com:3000/socket.io/1/?t=1360853265439
Request Headers
Cache-Control:max-age=0
Origin:http://localhost:3000
Referer:http://localhost:3000/about
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17
Query String Parameters
t:1360853265439
[update 2]
for comparison, here is the HEAD of a successful connect when I run the JavaScript from the local file system:
Request URL:http://xxxxx.herokuapp.com/socket.io/1/?t=1360854705943
Request Method:GET
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:xxxxx.herokuapp.com
Origin:null
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17
[update 3]
setting the port to 80 seems to work:
var socket = io.connect('http://xxxxx.herokuapp.com',{port:80});
[Question] I intend to have Socket.io set up on one host to talk to a Rails app on another host. It all works fine when the client-side JavaScript hosted on localhost:3000 connects to Socket.io hosted on a Node.js server on localhost:5000. It also works when both the client-side JavaScript and Socket.io are hosted on Heroku on the same port.
socket_server.js hosted on XXXX.herokuapp.com
var app = require('http').createServer(handler)
, io = require('socket.io').listen(app)
, fs = require('fs')
, i = 0
io.configure(function () {
io.set("origin = *");
io.set("transports", ["xhr-polling"]);
io.set("polling duration", 100);
});
var port = process.env.PORT || 5000; // Use the port that Heroku provides or default to 5000
app.listen(port, function() {
console.log(">>>>>>socket server up and running on port: "+port);
});
function handler (req, res) {
fs.readFile(__dirname + '/socket.html',
function (err, data) {
if (err) {
res.writeHead(500);
return res.end('Error loading socket.html');
}
res.writeHead(200);
res.end(data);
});
}
io.sockets.on('connection', function (socket) {
console.log(">>>>>>client connected through socket");
socket.emit('news', '>>>>>>server say hello to client', i);
console.log('>>>>>>server say hello to client' +'['+i+']')
socket.on('my other event', function (data) {
socket.emit('news', i);
i++;
console.log(data +'['+i+']');
});
});
If I put the client-side JavaScript in socket.html on XXXX.herokuapp.com, same as socket.io, it behaves as expected.
<div id='sandbox'>sandbox</div>
<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io.connect('window.location.hostname');
$('#sandbox').append('<div> lora </div>');
socket.on('news', function (data, index) {
$('#sandbox').append('<div>' + data + ' ' + index + '</div>');
console.log(data);
socket.emit('my other event', { my: 'data' });
});
</script>
However, if I put the client-side JavaScript on the Rails server on YYYY.herokuapp.com and try to connect to the Socket.io server on xxxxx.herokuapp.com, it doesn't work. It manages to retrieve socket.io.js from the server, but io.connect('http://xxxxx.herokuapp.com') doesn't get any response from the server.
<div id='sandbox'>sandbox</div>
<script src="http://xxxxx.herokuapp.com/socket.io/socket.io.js"></script>
<script>
var socket = io.connect('http://xxxxx.herokuapp.com');
$('#sandbox').append('<div> lora </div>');
socket.on('news', function (data, index) {
$('#sandbox').append('<div>' + data + ' ' + index + '</div>');
console.log(data);
socket.emit('my other event', { my: 'data' });
});
</script>
I read a few posts pointing to the solution of setting io.set("origins = *"), but this seems not to work in this case either.
Origins are set to * by default in socket.io. The call would be io.set("origins", "*");, but as this is the default value, you can just skip it.
I do notice that you also have var socket = io.connect('window.location.hostname') in your client-side code; you probably meant var socket = io.connect(window.location.hostname), without the single quotes. The Heroku docs do recommend the use of io.set("polling duration", 10);, but you set it to 100 instead of the advised 10. Other than that, I don't see anything wrong with your code, and it should just work.
If these fixes don't help, then I would blame it on your hosting. Heroku is a really restrictive platform when it comes to deploying real-time applications, so it could be that they are messing something up here. I would suggest deploying somewhere else, like nodejitsu.com, which also supports WebSockets and does not impose any limits on polling, to see if it works there for you.
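For reference, here is how the corrected configure calls from this answer could look. A stub io object stands in for the real socket.io 0.9 server here so the snippet runs standalone; the key names are assumed from socket.io 0.9:

```javascript
// Stub standing in for the socket.io 0.9 server object, so this runs alone.
const settings = {};
const io = { set: (key, value) => { settings[key] = value; } };

// The original io.set("origin = *") passed a single string and was silently
// ignored; io.set takes a key and a value (and "origins" defaults to "*").
io.set('origins', '*');
io.set('transports', ['xhr-polling']);
io.set('polling duration', 10); // the value the Heroku docs advise
```

With the real library, these calls go inside io.configure(function () { ... }); exactly as in the question's server code.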

Uploading files via trigger.io forge

I'm using the Forge file module to try to upload an image from the gallery. Forge is running on Android 2.3, and the image selection/capture bit works fine. But when I try to send the file with forge.request.ajax() I get a Forge exception.
I've dumped the output from the Catalyst log below:
Request URL:forge.request.ajax
Request Method:undefined
Status Code:400 error
{ url: 'http://example.com/',
username: null,
password: null,
data: null,
headers: { Accept: '*/*', 'Content-Type': 'image/jpg' },
timeout: 60000,
type: 'POST',
boundary: null,
files:
[ { uri: 'content://media/external/images/media/212#Intent;end',
name: 'Image',
height: 500,
width: 500 } ],
fileUploadMethod: 'raw' } // <- got this from a blog post,
And this is what I get in return
{ type: 'UNEXPECTED_FAILURE',
message: 'Forge Java error: FileNotFoundException: http://example.com/' }
I've checked the server side and confirmed there is no problem there (I made a test script that posts to it). The app posts to the server fine if I remove the file-attach calls.
I've looked at the sample code posted here, but it seems to be using the old API and I can't find some of the methods - https://github.com/trigger-corp/photo-log/blob/master/photolog.js
Am I doing anything wrong in the file call?
There are no obvious problems with your Catalyst output: the FileNotFoundException just indicates something went wrong on the server side. In this case, I'd guess example.com wasn't expecting a multipart-encoded POST.
We pushed some code live yesterday which makes our request.ajax error messages much clearer: I'd suggest you rebuild and re-run your app and see if you can tell what the server-side problem is.

Xhr upload support on Cross Domain requests

I have a proxy that uploads a file to the Amazon S3 server. This proxy is made using Node.js, and my webpage is hosted on a Tomcat server, so to make an XHR upload I had to use Nginx to solve the cross-domain issues, as both servers are on the same machine.
But using Nginx has a lot of issues, so my boss asked if I could do the same thing using a cross-domain policy. I've made it, but there are some things that I'm not able to do. Here is my code:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function(e) {};
xhr.open('POST', self.basePath + '/upload' + file.name, true, null, null, null, true, true);
xhr.setRequestHeader("Accept", "application/json, text/javascript, */*; q=0.01");
xhr.setRequestHeader("Connection", "close");
xhr.sendAsBinary(file.getAsBinary());
This works on a Cross Domain Request reaching the Server with the file and returning the response, but when I try to set a progress event like these:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function(e) {};
xhr.upload.onprogress = function(){console.info('on progress');};
xhr.open('POST', self.basePath + '/upload' + file.name, true, null, null, null, true, true);
xhr.setRequestHeader("Accept", "application/json, text/javascript, */*; q=0.01");
xhr.setRequestHeader("Connection", "close");
xhr.sendAsBinary(file.getAsBinary());
The cross-domain request isn't fired (the request uses an OPTIONS method instead of POST, and it never reaches the server). But, as you may guess, I need the progress event to show progress to the user. Does anyone know what is happening?
PS: All of the code above works perfectly on a same-domain request.
PS2: I've tried xhr.onprogress, but it is never fired (on cross- or same-domain requests).
PS3: I've tried FF4+ and Chrome 12+.
Thanks A LOT.
Thiago
I faced a similar problem before.
Solution: add
<AllowedHeader>*</AllowedHeader>
to your CORS configuration on S3, and register the upload progress listener like this:
xhr.upload.addEventListener("progress", yourUpdateProgressFunction, false);
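A minimal S3 CORS configuration carrying that wildcard header rule might look like the following (the allowed origin and methods here are illustrative; match them to what your page actually sends):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```

Attaching an upload.onprogress listener is one of the things that turns a "simple" cross-origin request into one requiring a preflight, which is why you see the OPTIONS request; the S3 CORS rule has to permit it.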
