How can we pull Runscope data into Grafana?

We have Grafana 7.x ready and now want to pull data from the Runscope API into Grafana.
We want dashboards to monitor API test performance, failures, and any other issues.
Thanks
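As far as I know there is no native Runscope data source for Grafana, so the usual pattern is to poll the Runscope API on a schedule and write the results into a store Grafana already supports (Graphite, InfluxDB, Prometheus). Below is a minimal sketch; the endpoint path, token handling, and response fields (test_id, result, finished_at, assertions_failed) are assumptions and may need adjusting against the Runscope API docs:

```python
import json
import time
import urllib.request

API = "https://api.runscope.com"

def fetch_latest_result(token, bucket_key, test_id):
    """Fetch the latest test result (assumed endpoint; check the API docs)."""
    url = f"{API}/buckets/{bucket_key}/tests/{test_id}/results/latest"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

def result_to_metrics(result, prefix="runscope"):
    """Map one result payload to (metric_path, value, timestamp) tuples
    ready for a Graphite/InfluxDB writer."""
    ts = int(result.get("finished_at", time.time()))
    name = result["test_id"]
    return [
        (f"{prefix}.{name}.passed", 1 if result["result"] == "pass" else 0, ts),
        (f"{prefix}.{name}.assertions_failed",
         result.get("assertions_failed", 0), ts),
    ]
```

Run it from cron (or a small loop), push the tuples into your time-series store, and build the pass/fail and latency panels in Grafana on top of that.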

Related

How do we poll data from Lighthouse into InfluxDB or Graphite to visualize it in Grafana?

I am setting up a Grafana dashboard to visualize Lighthouse / Google PageSpeed Insights data. How do I poll the results from Lighthouse / PSI into InfluxDB / Graphite so that I can see the data in a Grafana dashboard?
Lighthouse doesn't natively export to Graphite; however, there are some npm packages that I've come across and used with varying degrees of success.
https://github.com/aykut-rocks/lighthouse-to-graphite
This utility was easy to set up: edit the configuration file with information about your Graphite server and optionally add a prefix to the metric names. It generated the following Graphite metrics during a test run on metricfire.com:
metricfire-com.bootup-time
metricfire-com.critical-request-chains
metricfire-com.dom-size
metricfire-com.mainthread-work-breakdown
metricfire-com.redirects
metricfire-com.time-to-first-byte
metricfire-com.total-byte-weight
metricfire-com.unminified-css
metricfire-com.unminified-javascript
metricfire-com.unused-css-rules
metricfire-com.uses-long-cache-ttl
metricfire-com.uses-optimized-images
metricfire-com.uses-request-compression
metricfire-com.uses-responsive-images
metricfire-com.uses-webp-images
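For reference, metrics in this form travel over Graphite's plaintext protocol: one `path value timestamp` line per metric, sent to Carbon's line receiver (port 2003 by default). A minimal sketch, with host and port as assumptions for your setup:

```python
import socket
import time

def plaintext_metric(path, value, ts=None):
    """Format one line of Graphite's plaintext protocol: 'path value timestamp'."""
    return f"{path} {value} {int(ts if ts is not None else time.time())}"

def send_metric(line, host="localhost", port=2003):
    """Send a single metric line to Carbon's plaintext listener."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((line + "\n").encode())
```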
Another option is https://www.npmjs.com/package/lighthouse-graphite
Here's a useful Grafana dashboard that you can import which should support the metrics generated by Lighthouse.
Alternatively, instead of using Chrome's audit tool, you could use something like Sitespeed.io
I hope this helps!

Docker image statistics from hub.docker.com

I have a Docker image on hub.docker.com. Is there a way to find out who is using my Docker image or who is pulling it? Are there any statistics that hub.docker.com can provide?
You can get the total pull count and star count from the API:
https://hub.docker.com/v2/repositories/$1/$2
For example:
curl -s https://hub.docker.com/v2/repositories/library/ubuntu/ | jq -r ".pull_count"
At the moment, only the pull count statistic can be retrieved. You can then use Google Apps Script to record the number of pulls periodically and store it in a Google Sheet. You can find more about that here: https://www.gasimof.com/blog/track_docker_image_pulls
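If you'd rather keep the recording local than use a Google Sheet, the same periodic sampling can be sketched in a few lines of Python and scheduled with cron; the CSV filename and repository below are illustrative:

```python
import csv
import json
import time
import urllib.request

def fetch_pull_count(repo):
    """Fetch the current pull_count for e.g. 'library/ubuntu' from the Docker Hub v2 API."""
    url = f"https://hub.docker.com/v2/repositories/{repo}/"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["pull_count"]

def record_row(repo, pulls, ts=None):
    """Build one CSV row: unix timestamp, repository, pull count."""
    return [int(ts if ts is not None else time.time()), repo, pulls]

# Usage (one sample per run; schedule with cron):
#   with open("pulls.csv", "a", newline="") as f:
#       csv.writer(f).writerow(record_row("library/ubuntu",
#                                         fetch_pull_count("library/ubuntu")))
```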
Since Docker Hub does not have an out-of-the-box way to see the pull trend, I ended up implementing a Prometheus exporter myself and adding a dashboard to my Grafana.
The dashboard graphs the PromQL query: docker_hub_pulls{repo="$repo"}.
Here is the GitHub link to my project: predatorray/docker-hub-prometheus-exporter.
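A toy version of such an exporter fits in standard-library Python. This is a sketch, not predatorray's implementation: the metric name matches the docker_hub_pulls query above, while the port and repository list are illustrative:

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REPOS = ["library/ubuntu"]  # illustrative; list your own repositories here

def pull_count(repo):
    """Fetch the current pull count from the Docker Hub v2 API."""
    url = f"https://hub.docker.com/v2/repositories/{repo}/"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["pull_count"]

def render(counts):
    """Render a {repo: pulls} dict in Prometheus text exposition format."""
    lines = ["# TYPE docker_hub_pulls gauge"]
    for repo, pulls in sorted(counts.items()):
        lines.append(f'docker_hub_pulls{{repo="{repo}"}} {pulls}')
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render({r: pull_count(r) for r in REPOS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("", 9200), MetricsHandler).serve_forever()
```

Point a Prometheus scrape job at the chosen port and the graph comes for free in Grafana.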
It's an old thread; just stumbling on this.
I also used an exporter (a different one from predatorray's), which uses a Prometheus database. It worked fine, except that the last-updated date was in Unix format and I couldn't get it to display as a regular, more readable date. It was written in Go.
After having modified a script to store Pi-hole stats in InfluxDB, I thought I'd modify that script to get the information from hub.docker.com into my InfluxDB. After a bit of testing I managed, and I'm running it in a Docker container now: https://github.com/pluim003/dockerhub_influx

Can I run Grafana outside of a firewall?

I'll have InfluxDB storing Arduino sensor data. I need to visualize this.
I want admins (around 100 people) to go to their browser, type www.example.com, fill in their username and password, and see the 10 Grafana visualizations that belong to them out of 1000 visualizations. Is this possible with Grafana, or should I use something else?
In short yes.
To put it simply, Grafana only knows how to plot data; what you need to do is provide that data.
Grafana is commonly used with Graphite, but you're absolutely not forced to use it; you in fact have various alternatives for your data source, and InfluxDB is one of them. If you haven't done so yet, I suggest you read more at http://docs.grafana.org/datasources/influxdb/
From your Grafana instance, click on Dashboards and select Data Sources; there you'll be able to select InfluxDB 0.9.x or InfluxDB 0.8.x and continue by providing the required details.
What you'll need to do is configure your architecture to give both your Arduino application(s) (to push/store data) and Grafana (to pull data) access to InfluxDB.
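To give an idea of the push side, here is a sketch of writing a sensor reading to InfluxDB 0.9.x over its HTTP write API using the line protocol; the host, port, database name, measurement, and field names are all illustrative (the 0.8.x API differs):

```python
import time
import urllib.parse
import urllib.request

def line_protocol(measurement, tags, fields, ts=None):
    """Build one InfluxDB line-protocol entry, e.g.
    'temperature,sensor=a1 value=21.5 1700000000'."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    point = f"{measurement},{tag_str} {field_str}"
    if ts is not None:
        point += f" {ts}"
    return point

def write_point(point, host="localhost", port=8086, db="sensors"):
    """POST one point to InfluxDB's /write endpoint."""
    url = f"http://{host}:{port}/write?" + urllib.parse.urlencode({"db": db})
    req = urllib.request.Request(url, data=point.encode(), method="POST")
    urllib.request.urlopen(req)
```

The Arduino side can make the same HTTP POST directly, or relay through a small gateway script like this one.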

Find a free, clustering InfluxDB

I'm going to use InfluxDB to store a lot of IoT data from sensors.
As the last clustered version of InfluxDB, v0.11, is not ready for production use, and InfluxDB Relay HA is still too young, is there another way to scale out InfluxDB?
For example:
How mature is the last clustered version of InfluxDB, v0.11? Should I customize v0.11 or try another cost-saving approach?
How about using Kafka in front of InfluxDB to buffer data when InfluxDB goes down?
How about sharding? Is there any detailed documentation about sharding in InfluxDB (https://influxdata.com/high-availability/)?
Anyway, I just want to find a free, clustering InfluxDB.
Other than InfluxDB Relay there isn't a free way to scale out InfluxDB.
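On the Kafka-buffering idea from the question: the role Kafka plays there is a durable queue in front of the database, so writes survive an InfluxDB outage and are replayed afterwards. The pattern can be sketched with an in-memory buffer (Kafka would make this durable and distributed; the class and names below are illustrative):

```python
from collections import deque

class BufferedWriter:
    """Queue points while the backend is down and flush once writes
    succeed again. A real deployment would use Kafka (or InfluxDB Relay)
    instead of this in-memory deque, which loses data on process restart."""

    def __init__(self, write_fn, max_buffer=100000):
        self.write_fn = write_fn          # callable that writes one point
        self.buffer = deque(maxlen=max_buffer)

    def write(self, point):
        self.buffer.append(point)
        return self.flush()

    def flush(self):
        """Drain the buffer in order; stop at the first failed write.
        Returns the number of points still buffered."""
        while self.buffer:
            point = self.buffer[0]
            try:
                self.write_fn(point)
            except OSError:
                return len(self.buffer)   # backend down; keep buffering
            self.buffer.popleft()
        return 0
```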

Capture / Monitor system data of application server in Graphite

I am using a Graphite server to capture my metrics data and turn it into graphs. I have 4 application servers behind a load balancer. My aim is to capture system data such as CPU usage, memory usage, disk load, etc., for all 4 application servers. I set up a Graphite environment on a separate server, and I want to push the system data from all the application servers to Graphite and have it displayed as graphs. I don't know what needs to be done to feed system data into Graphite. My thinking was to install StatsD on all application servers and feed the system data to Graphite, but it looks like StatsD handles application data rather than system data.
Can anyone help me get on the right track? Thanks in advance.
Running collectd with a Graphite agent would be an excellent start to gather the information you're after.
There is an almost unlimited number of ways to get your data into Graphite.
You can find a list of tools that are known to work very well with Graphite on the readthedocs.org page: http://graphite.readthedocs.org/en/0.9.10/tools.html
There is also an example script in the carbon project that gathers the load average from the system: example-client.py
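In the same spirit as example-client.py, a minimal sketch that reads the system load averages and formats them for Carbon's plaintext listener; the metric prefix, host, and port are illustrative:

```python
import os
import socket
import time

def loadavg_lines(prefix="system.loadavg", ts=None):
    """Format the 1/5/15-minute load averages as Graphite plaintext lines."""
    ts = int(ts if ts is not None else time.time())
    one, five, fifteen = os.getloadavg()
    return [
        f"{prefix}.1min {one} {ts}",
        f"{prefix}.5min {five} {ts}",
        f"{prefix}.15min {fifteen} {ts}",
    ]

def push(lines, host="localhost", port=2003):
    """Send the formatted lines to Carbon's plaintext port (2003 by default)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(("\n".join(lines) + "\n").encode())
```

Run it per-host from cron with a per-host prefix (e.g. servers.app1.loadavg) and each application server shows up as its own series, though for full CPU/memory/disk coverage collectd remains the better fit.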
