I am displaying the contents of a large database in a Highcharts chart, and I would like to capture an image of the graph automatically, say at 9pm daily, and save the image to a file. I have tried to install the server-side export server solution, but I have been unable to make it work on the Raspberry Pi.
As an alternative, I have captured the JSON used to generate the graph in a file. Can I send the file to Highcharts and get a PNG image in return? How, exactly, should I do this?
Thanks.
The JSON file contains the following. This is exactly the same JSON that is passed to the browser and results in a proper graph:
[{
"name": "Consumed",
"data": [
[1521612002000, 668],
[1521612182000, 668],
[1521634338000, 487]
]
}, {
"name": "Paid For",
"data": [
[1521612002000, 668],
[1521612182000, 668],
[1521634338000, 75]
]
}, {
"name": "Produced",
"data": [
[1521612002000, 0],
[1521612182000, 0],
[1521634338000, 412]
]
}, {
"name": "Paid For kWh",
"data": [
[1521612002000, null],
[1521615425000, 0.6118],
[1521634338000, null]
]
}]
I have tried the following curl command:
curl -H "Content-Type: application/json" -X POST -d '{"infile": combined_readings.json}' http://export.highcharts.com > file.png
It resulted in the following error:
SyntaxError: Unexpected token c in JSON at position 11
    at Object.parse (native)
    at parse (/var/app/current/node_modules/highcharts-export-server/node_modules/body-parser/lib/types/json.js:88:17)
    at /var/app/current/node_modules/highcharts-export-server/node_modules/body-parser/lib/read.js:116:18
    at invokeCallback (/var/app/current/node_modules/highcharts-export-server/node_modules/raw-body/index.js:262:16)
    at done (/var/app/current/node_modules/highcharts-export-server/node_modules/raw-body/index.js:251:7)
    at IncomingMessage.onEnd (/var/app/current/node_modules/highcharts-export-server/node_modules/raw-body/index.js:307:7)
    at emitNone (events.js:86:13)
    at IncomingMessage.emit (events.js:185:7)
    at endReadableNT (_stream_readable.js:974:12)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
With the help of Highcharts support I was able to resolve several issues:
- the curl command requires a JSON string (as noted in the comments)
- the JSON string needs to be reformatted from the form the browser requires
Here is the reformatted JSON:
{
"series": [{
"name": "Consumed",
"data": [
[1521612002000, 668],
[1521612182000, 668],
[1521634338000, 487]
]
}, {
"name": "Paid For",
"data": [
[1521612002000, 668],
[1521612182000, 668],
[1521634338000, 75]
]
}, {
"name": "Produced",
"data": [
[1521612002000, 0],
[1521612182000, 0],
[1521634338000, 412]
]
}, {
"name": "Paid For kWh",
"data": [
[1521612002000, null],
[1521615425000, 0.6118],
[1521634338000, null]
]
}]
}
The curl command required looks like this:
curl -H "Content-Type: application/json" -X POST -d "{\"infile\": \"{series: [{name: 'Consumed',data: [[1521612002000, 668],[1521612182000, 668],[1521634338000, 487]]}, {name: 'Paid For',data: [[1521612002000, 668],[1521612182000, 668],[1521634338000, 75]]}, {name: 'Produced',data: [[1521612002000, 0],[1521612182000, 0],[1521634338000, 412]]}, {name: 'Paid For kWh',data: [[1521612002000, null],[1521615425000, 0.6118],[1521634338000, null]]}]}\"}" export.highcharts.com > chart.png
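For a scheduled daily capture, the same POST can be scripted. Below is a minimal Python sketch (stdlib only): it wraps the browser-side series list under "series" and posts it to the export server. The payload fields ("infile", "type") and the endpoint are taken from the curl command above; the file names and scheduling are illustrative assumptions.

```python
# Sketch: POST the reformatted chart options to export.highcharts.com and
# save the returned PNG. Payload shape mirrors the working curl command.
import json
import urllib.request

def build_payload(series, chart_type="png"):
    """Wrap the raw series list under "series" and serialise it as the
    "infile" string the export server expects."""
    options = {"series": series}
    return {"infile": json.dumps(options), "type": chart_type}

def export_chart(series, out_path="chart.png",
                 url="http://export.highcharts.com/"):
    body = json.dumps(build_payload(series)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as out:
        out.write(resp.read())

# Example (needs combined_readings.json and network access):
#   with open("combined_readings.json") as f:
#       export_chart(json.load(f))
```

Saved as, say, export_chart.py, it could then be run daily at 9pm with a crontab entry such as `0 21 * * * python3 export_chart.py`.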
Related
I've been using the YouTube Analytics API to fetch different reports, but for the past few days the API has not been returning complete data: it only returns data up to 3 days before the current date.
The following is a request for a video views report from 2020-04-20 until 2020-04-28. The response returns data only until 2020-04-25, yet I can see in my channel that we have video views on 2020-04-26 and 2020-04-27.
curl \
'https://youtubeanalytics.googleapis.com/v2/reports?dimensions=day%2CinsightTrafficSourceType&endDate=2020-04-28&filters=video%3D%3DoQqVWPfSfe8&ids=channel%3D%3DMINE&metrics=views%2CestimatedMinutesWatched&sort=day%2C-views&startDate=2020-04-20&key=[YOUR_API_KEY]' \
--header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
--header 'Accept: application/json' \
--compressed
Response:
{
"kind": "youtubeAnalytics#resultTable",
"columnHeaders": [
{
"name": "day",
"columnType": "DIMENSION",
"dataType": "STRING"
},
{
"name": "insightTrafficSourceType",
"columnType": "DIMENSION",
"dataType": "STRING"
},
{
"name": "views",
"columnType": "METRIC",
"dataType": "INTEGER"
},
{
"name": "estimatedMinutesWatched",
"columnType": "METRIC",
"dataType": "INTEGER"
}
],
"rows": [
[
"2020-04-20",
"ADVERTISING",
23429,
12417
],
[
"2020-04-20",
"NO_LINK_OTHER",
31,
5
],
[
"2020-04-20",
"EXT_URL",
5,
0
],
[
"2020-04-21",
"ADVERTISING",
46469,
24522
],
[
"2020-04-21",
"NO_LINK_OTHER",
54,
9
],
[
"2020-04-21",
"EXT_URL",
5,
1
],
[
"2020-04-22",
"ADVERTISING",
40020,
21132
],
[
"2020-04-22",
"NO_LINK_OTHER",
43,
9
],
[
"2020-04-22",
"EXT_URL",
7,
2
],
[
"2020-04-23",
"ADVERTISING",
22944,
12127
],
[
"2020-04-23",
"NO_LINK_OTHER",
32,
6
],
[
"2020-04-23",
"EXT_URL",
3,
0
],
[
"2020-04-24",
"ADVERTISING",
8549,
4524
],
[
"2020-04-24",
"NO_LINK_OTHER",
42,
9
],
[
"2020-04-24",
"EXT_URL",
7,
3
],
[
"2020-04-25",
"ADVERTISING",
820,
432
],
[
"2020-04-25",
"NO_LINK_OTHER",
30,
3
],
[
"2020-04-25",
"EXT_URL",
8,
1
]
]
}
Has something changed in the API? Are there any new restrictions?
Thank you for your support.
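For reference, the request above can also be built programmatically. The sketch below (stdlib only) reconstructs the same URL with parameterised dates; the endpoint and query parameters are copied from the curl call, and the API key and Authorization header are omitted placeholders.

```python
# Sketch: build the YouTube Analytics report URL shown above, with the
# date window as parameters. Key/token handling is left out.
import urllib.parse

BASE = "https://youtubeanalytics.googleapis.com/v2/reports"

def build_report_url(start_date, end_date, video_id):
    """Return the report URL for daily views by traffic source."""
    params = {
        "dimensions": "day,insightTrafficSourceType",
        "startDate": start_date,
        "endDate": end_date,
        "filters": "video==" + video_id,
        "ids": "channel==MINE",
        "metrics": "views,estimatedMinutesWatched",
        "sort": "day,-views",
    }
    return BASE + "?" + urllib.parse.urlencode(params)
```

When issuing the request, attach the `Authorization: Bearer` header and the `key` parameter as in the curl command.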
As a marketer, I'm going through the EmailOctopus (email service provider) API docs (https://emailoctopus.com/api-documentation) and have trouble combining multiple requests in one.
Goal: Get all campaign reports for all campaigns exported to a CSV.
Step 1: Get all campaign IDs. This works.
curl "https://emailoctopus.com/api/1.5/campaigns?api_key={APIKEY}"
Step 2: Get the report for a single campaign. This works too.
curl "https://emailoctopus.com/api/1.5/campaigns/{CAMPAIGNID}/reports/summary?api_key={APIKEY}"
Step 3: Combine step 1 and 2 and export to a CSV. No idea how to proceed here.
Output step 1:
{
"data": [
{
"id": "00000000-0000-0000-0000-000000000000",
"status": "SENT",
"name": "Foo",
"subject": "Bar",
"to": [
"00000000-0000-0000-0000-000000000001",
"00000000-0000-0000-0000-000000000002"
],
"from": {
"name": "John Doe",
"email_address": "john.doe@gmail.com"
},
"content": {
"html": "<html>Foo Bar</html>",
"plain_text": "Foo Bar"
},
"created_at": "2019-10-30T13:46:46+00:00",
"sent_at": "2019-10-31T13:46:46+00:00"
},
{
"id": "00000000-0000-0000-0000-000000000003",
"status": "SENT",
"name": "Bar",
"subject": "Foo",
"to": [
"00000000-0000-0000-0000-000000000004",
"00000000-0000-0000-0000-000000000005"
],
"from": {
"name": "Jane Doe",
"email_address": "jane.doe@gmail.com"
},
"content": {
"html": "<html>Bar Foo</html>",
"plain_text": "Bar Foo"
},
"created_at": "2019-11-01T13:46:46+00:00",
"sent_at": "2019-11-02T13:46:46+00:00"
}
],
"paging": {
"next": null,
"previous": null
}
}
Output step 2:
{
"id": "00000000-0000-0000-0000-000000000000",
"sent": 200,
"bounced": {
"soft": 10,
"hard": 5
},
"opened": {
"total": 110,
"unique": 85
},
"clicked": {
"total": 70,
"unique": 65
},
"complained": 50,
"unsubscribed": 25
}
How can I get all campaign reports in one go and exported to a CSV?
Maybe these URLs will be helpful:
Merging two json in PHP
How to export to csv file a PHP Array with a button?
https://www.kodingmadesimple.com/2016/12/convert-json-to-csv-php.html
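Concretely, steps 1 and 2 can be combined in one small script: list the campaigns, fetch each summary report, flatten the nested counters, and write one CSV row per campaign. The sketch below is in Python rather than PHP (the links above show the same idea in PHP); the endpoints and field names are exactly those quoted in the question, and APIKEY is a placeholder. Pagination is ignored here; follow paging.next if you have more than one page of campaigns.

```python
# Sketch: campaigns list -> per-campaign summary report -> reports.csv
import csv
import json
import urllib.request

API = "https://emailoctopus.com/api/1.5"

def get_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def flatten_report(report):
    """Flatten the nested summary report (step 2 output) into a flat row."""
    return {
        "id": report["id"],
        "sent": report["sent"],
        "bounced_soft": report["bounced"]["soft"],
        "bounced_hard": report["bounced"]["hard"],
        "opened_total": report["opened"]["total"],
        "opened_unique": report["opened"]["unique"],
        "clicked_total": report["clicked"]["total"],
        "clicked_unique": report["clicked"]["unique"],
        "complained": report["complained"],
        "unsubscribed": report["unsubscribed"],
    }

def export_reports(api_key, out_path="reports.csv"):
    campaigns = get_json(f"{API}/campaigns?api_key={api_key}")["data"]
    rows = [flatten_report(get_json(
                f"{API}/campaigns/{c['id']}/reports/summary?api_key={api_key}"))
            for c in campaigns]
    if not rows:
        return
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```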
I have an Excel Online spreadsheet. It sits on my company's OneDrive.
Is it possible to get the value of a specific cell (with a formula in it) using Microsoft Graph and simple curl from bash?
The answer is yes: it is possible to get the value of a specific cell (formula included) using Microsoft Graph and simple curl from bash.
Try the following endpoint:
GET /{version}/me/drive/items/{item-id}/workbook/worksheets/{worksheet-id}/range(address='A1:B2')
authorization: Bearer {access-token}
workbook-session-id: {session-id}
My test request endpoint:
https://graph.microsoft.com/v1.0/me/drive/root:/test.xlsx:/workbook/mysheet/range(address='c1')
Response
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#workbookRange",
"@odata.type": "#microsoft.graph.workbookRange",
"@odata.id": "/me/drive/root/workbook/worksheets(guid)/range(address=%27c1%27)",
"address": "Sheet1!C1",
"addressLocal": "Sheet1!C1",
"cellCount": 1,
"columnCount": 1,
"columnHidden": false,
"columnIndex": 2,
"formulas": [
[
"=SUM(D1,E1)"
]
],
"formulasLocal": [
[
"=SUM(D1,E1)"
]
],
"formulasR1C1": [
[
"=SUM(RC[1],RC[2])"
]
],
"hidden": false,
"numberFormat": [
[
"General"
]
],
"rowCount": 1,
"rowHidden": false,
"rowIndex": 0,
"text": [
[
"3"
]
],
"values": [
[
3
]
],
"valueTypes": [
[
"Double"
]
]
}
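The same call, and the extraction of the formula and computed value from the workbookRange response above, can be sketched in a few lines of Python (stdlib only). The URL shape follows my test endpoint; the file path, sheet name, and token are placeholders you must supply.

```python
# Sketch: build the Graph range request and read formula/value out of the
# workbookRange JSON shown above.
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def range_request(drive_path, sheet, address, token):
    """Build the GET request for one cell range; token is your OAuth bearer token."""
    url = (f"{GRAPH}/me/drive/root:/{drive_path}:"
           f"/workbook/worksheets/{sheet}/range(address='{address}')")
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})

def cell_summary(range_json):
    """Extract (address, formula, value) from a single-cell workbookRange."""
    return (range_json["address"],
            range_json["formulas"][0][0],
            range_json["values"][0][0])

# Usage (needs a valid token and network access):
#   with urllib.request.urlopen(range_request("test.xlsx", "mysheet", "c1", TOKEN)) as r:
#       print(cell_summary(json.load(r)))
```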
- Full details of your operating system (or distribution), e.g. 64-bit Ubuntu 14.04: running InfluxDB/Kapacitor/Chronograf as Docker containers on macOS, latest Docker.
- The version of Kapacitor you are running: latest, 1.4.
- Whether you installed it using a pre-built package, or built it from source: official Docker container.
We are running into an issue with TICKscript and its groupBy behaviour.
We have two sets of measurements, indoor_temperatures and outdoor_temperatures, which we query with a batch.
The queries look as follows:
var out_temp = batch
|query('SELECT mean(temperature) FROM yyyy')
.every(10s)
.period(120d)
.groupBy(time(1h))
.fill(0)
var in_temp = batch
|query('SELECT mean(temperature) FROM xxxx')
.every(10s)
.period(120d)
.groupBy(time(1h))
.fill(0)
If we HTTP out both of them, they create the following sets of data:
{
"series": [
{
"name": "outdoor_temperatures",
"columns": [
"time",
"mean"
],
"values": [
[
"2017-09-20T17:00:00Z",
0
],
[
"2017-09-20T18:00:00Z",
11.5
]
... the rest
]
}
]
}
{
"series": [
{
"name": "indoor_measurements",
"columns": [
"time",
"mean"
],
"values": [
[
"2017-09-20T17:00:00Z",
585.44012944984
],
[
"2017-09-20T18:00:00Z",
592.94890510949
]
... the rest
]
}
]
}
Now we do a full join of them, which gives us the expected results:
out_temp
|join(in_temp)
.as('out_temp_mean', 'in_temp_mean')
.tolerance(5m)
.fill(0)
httpOut:
{
"series": [
{
"name": "outdoor_temperatures",
"columns": [
"time",
"in_temp_mean.mean",
"out_temp_mean.mean"
],
"values": [
[
"2017-09-20T17:00:00Z",
586.10175438596,
0
],
[
"2017-09-20T18:00:00Z",
592.94890510949,
11.5
]
... the rest
]
}
]
}
Which looks perfect. The issue arises when we want to round out_temp_mean.mean down and group by it.
So we go ahead and extend the script:
out_temp
|join(in_temp)
.as('out_temp_mean', 'in_temp_mean')
.tolerance(5m)
.fill(0)
|eval(lambda: string(floor("out_temp_mean.mean")))
.as('bucket')
.tags('bucket')
.keep('out_temp_mean.mean', 'in_temp_mean.mean')
After which the output STILL looks as it should:
{
"series": [
{
"name": "outdoor_temperatures",
"columns": [
"time",
"in_temp_mean.mean",
"out_temp_mean.mean",
"bucket"
],
"values": [
[
"2017-09-20T17:00:00Z",
586.99190283401,
0,
"0"
],
[
"2017-09-20T18:00:00Z",
592.94890510949,
11.5,
"11"
]
]
}
]
}
Now the only thing left is to group the values by the new bucket tag:
out_temp
|join(in_temp)
.as('out_temp_mean', 'in_temp_mean')
.tolerance(5m)
.fill(0)
|eval(lambda: string(floor("out_temp_mean.mean")))
.as('bucket')
.tags('bucket')
.keep('out_temp_mean.mean', 'in_temp_mean.mean')
|groupBy('bucket')
After which everything goes awry and we are greeted with series: null
{
"series": null
}
Is this expected behaviour? A bug? Or something else?
Also filed this as https://github.com/influxdata/kapacitor/issues/1765 in case anyone is wondering.
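For clarity, here is the grouping we expect |groupBy('bucket') to produce, expressed in plain Python (an illustration of intent only, not of Kapacitor internals); the sample points are the joined values shown earlier.

```python
# Sketch: assign each joined point a bucket = floor(outdoor mean), as the
# eval node does, then group points by that bucket tag.
import math
from collections import defaultdict

def group_by_bucket(points):
    """points: (time, in_mean, out_mean) tuples -> {bucket: [points]}."""
    groups = defaultdict(list)
    for t, in_mean, out_mean in points:
        groups[str(math.floor(out_mean))].append((t, in_mean, out_mean))
    return dict(groups)
```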
Can someone explain how to create an API with the APIC toolkit?
I would like to use this API to work with a Cloudant DB on IBM Bluemix or a local CouchDB to create, read and update GeoJSON data.
Below is a simple example of typical data storing the name and coordinates of points of interest.
[{
"type": "Feature",
"properties": {
"name": "Nice Place 1"
},
"geometry": {
"type": "Point",
"coordinates": [16.45961, 48.23896]
}
}, {
"type": "Feature",
"properties": {
"name": "Nice Place 2"
},
"geometry": {
"type": "Point",
"coordinates": [16.34561, 49.89612]
}
}]
LoopBack supports the GeoPoint datatype (i.e. Point in GeoJSON).
Considering your example, let's say you have a model named Feature; to use GeoPoint, your Feature.json should look like:
{
"name": "Feature",
"base": "PersistedModel",
"idInjection": true,
"options": {
"validateUpsert": true
},
"properties": {
"name": {
"type": "string"
},
"geometry": {
"type": "geopoint"
}
},
"validations": [],
"relations": {},
"acls": [],
"methods": {}
}
Now this Feature model, having PersistedModel as its base, will have common CRUD methods exposed as REST endpoints, and you can store data, for example, using curl:
curl -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d "{
\"name\": \"Nice Place 1\",
\"geometry\": {
\"lat\": 16.20,
\"lng\": 48.23
}
}" "http://0.0.0.0:3000/api/Features"
Hope that helps with creating an API that supports GeoPoint.
Re: Cloudant DB, I am not sure if it supports geospatial data out of the box; however, there seems to be support for it: https://cloudant.com/product/cloudant-features/geospatial/
I tried the model above with a LoopBack app (using Cloudant as the datasource) and its explorer:
Create with sample data:
{
"name": "string",
"geometry": {
"lat": 12,
"lng": 13
}
}
And retrieved it successfully from GET /myGeoModels:
[
{
"name": "string",
"geometry": {
"lat": 12,
"lng": 13
},
"id": "f08301abe833ad427c9c61ffd30df8ef"
}
]
APIC should have the same behaviour as LoopBack.
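One detail when loading the original GeoJSON features into this model: GeoJSON positions are [longitude, latitude] (per RFC 7946), while GeoPoint takes named lat/lng fields, so the order has to be swapped. A small Python sketch of the conversion (the function name is mine):

```python
# Sketch: convert a GeoJSON Feature into the {name, geometry: {lat, lng}}
# body the Feature model above accepts via POST /api/Features.
def feature_to_geopoint(feature):
    lng, lat = feature["geometry"]["coordinates"]  # GeoJSON order: [lng, lat]
    return {"name": feature["properties"]["name"],
            "geometry": {"lat": lat, "lng": lng}}
```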