I've been using amCharts (a Flash component) to produce charts from within my Rails application.
I'm curious: is there a gem or plugin that lets me include a charting component in my web app so users can mix any data sets they want and produce basic charts on their own? It would take me ages to script such a tool...
Ideally, I'd like it to read a bunch of XML (or whatever... perhaps data right out of my database) that has multi-variable data, and the user could use the component to customize his/her own chart with several series, or however they want. A "dumbed down" version of Excel, delivered over the web :)
So you're looking for a graph image generator?
There is Gruff, which allows that, but I personally don't really like the look of the graphs it generates.
There is also something language-agnostic: the Google Charts API, which lets you generate charts by calling a specific Google URL built from your encoded data.
So you take your data from wherever you want (database, XML, ...), call one of those two libraries, and you get your graph.
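For example, with the Google Charts image API the chart is literally just a URL built from your data. A minimal sketch in Python (the parameter names follow the classic Image Charts API, so treat the exact URL format as an assumption and check the current docs):
import urllib  # Python 2; in Python 3 use urllib.parse.urlencode

# Data series to plot, e.g. pulled straight from your database
values = [10, 25, 40, 32, 56]

params = {
    'cht': 'lc',                                     # chart type: line chart
    'chs': '400x200',                                # chart size in pixels
    'chd': 't:' + ','.join(str(v) for v in values),  # text-encoded data series
}
chart_url = 'https://chart.googleapis.com/chart?' + urllib.urlencode(params)
print chart_url  # embed this URL in an <img> tag and Google renders the chart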
I too have been searching for charting libraries with Ruby support. Below are some additional charting resources I haven't seen mentioned in this post but have come across in the past.
jQuery Sparklines
ProtoChart
XML/SWF Charts
FusionCharts Free (there is also a paid version)
Smashing Apps article on charting resources
Enjoy!
In an XPages application I need to build a label with a specific layout, analogous to the layout of a ticket. Searching around, I've found that the most common practice is to use OpenOffice to design the ODT template and, on the Java side, to use the JODReports library. Do you advise following this route, or do you have any other suggestions?
I would concur with Marcus. The way forward is PDF output. There are a couple of ways to do this, depending on your constraints.
If the user must design every aspect of the ticket, using OpenOffice is a suitable approach; however, you need a headless OpenOffice install for the rendering.
If everything can be code, then PDFBox is a good way to go. Wrap your code into a managed bean.
The middle path would be XSL-FO and Apache FOP. It allows you to alter the layout by providing a different style sheet. I wrote an article series outlining that approach.
Let us know what works for you!
There is also the POI4XPages plugin. You could design your form with Word and then use placeholders to populate the document and output it as a PDF.
See https://poi4xpages.openntf.org/main.nsf/project.xsp?r=project/POI%204%20XPages/releases/E80C4FC9FB07E1E4852580E3006E02C7
Download the latest version (1.4) at http://p2.openntf.org/repository.nsf/home.xsp/poi4xpages/snapshots
Howard
I was able to solve my problem: I discovered that here at the company we have the ABCpdf software. Through a web service that uses this software's APIs, I pass in the HTML code of the ticket and the web service returns the PDF document as an array of bytes. I created a managed Java bean to consume the web service and display the PDF in the browser.
Thanks to all who have contributed in some way with suggestions.
So I need to develop an app using PhoneGap that creates a graphical display of solar wind data (exciting stuff, I know...) from this website: http://services.swpc.noaa.gov/text/ace-swepam.txt, with a separate graph for 'ion temperature', 'bulk speed' and 'proton density'. However, I'm clueless as to where to begin. I'm assuming I need to make use of the Chart.js library or something similar, and that I can make a variable for the axis since the data will be changing over time, but I'm more stuck on how to pull data from this website to include in my charts. Any info on this would be greatly appreciated!
Thanks,
Gerrit
Call the web service to get the data, then pass it to the charting library you're using. You'll use AJAX (XMLHttpRequest) to get the data. There are all sorts of options out there for simplifying this (jQuery and other libraries make AJAX easy).
The service you're using gives you the data as a text file. This can work, but you'll have to parse the data client-side, which is not fun (or a good use of your phone's capabilities). Look for a service that returns the data as a JSON object; then you'll have the data in a format that can be more easily passed to the charting library.
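If you do end up with the plain-text feed, the parsing itself is mostly skipping the comment header and splitting columns. A rough sketch of that step, shown in Python for brevity (the same splitting logic applies client-side in JavaScript); the column positions are an assumption based on the file's header comments, so verify them against the live feed:
import urllib2  # Python 2

url = 'http://services.swpc.noaa.gov/text/ace-swepam.txt'
series = {'density': [], 'speed': [], 'temperature': []}

for line in urllib2.urlopen(url):
    line = line.strip()
    # Comment/header lines in the file start with '#' or ':'
    if not line or line.startswith('#') or line.startswith(':'):
        continue
    cols = line.split()
    # Assumed layout: the last three columns are proton density,
    # bulk speed and ion temperature
    series['density'].append(float(cols[-3]))
    series['speed'].append(float(cols[-2]))
    series['temperature'].append(float(cols[-1]))

print series['speed'][:10]  # feed these arrays to your charting library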
I'm scripting with VB.NET (and sometimes with C#) within Grasshopper (a plug-in for a 3D modeling program called Rhino), and I'd like to interact with Google Docs, specifically with the spreadsheet app.
I want to be able to send data from Grasshopper to populate Google spreadsheets.
The data is always either numerical or string.
I'd also like to generate charts from the data.
There is a solution to this at the bottom of this thread on the GH website.
... And this is a solution for reading, which should work in Python; to do it in C# you would need additional libraries.
You need to publish the spreadsheet as a CSV first.
import urllib2  # Python 2; in Python 3 use urllib.request

# URL of the spreadsheet published via "File > Publish to the web" with CSV output
myUrl = "https://docs.google.com/spreadsheet/pub?key=0AgIWT_wqd-VmdE1NekRSWFZoUnBQdWJhYUhwcU1vclE&single=true&gid=0&output=csv"
response = urllib2.urlopen(myUrl)
print response.read()  # raw CSV text of the published sheet
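If you then need the values as rows and columns rather than raw text, Python's csv module can parse the downloaded data directly (a minimal sketch reusing the same published-CSV URL as above):
import csv
import urllib2  # Python 2, matching the snippet above

myUrl = "https://docs.google.com/spreadsheet/pub?key=0AgIWT_wqd-VmdE1NekRSWFZoUnBQdWJhYUhwcU1vclE&single=true&gid=0&output=csv"

# csv.reader accepts any iterable of lines, so the response object works as-is
rows = list(csv.reader(urllib2.urlopen(myUrl)))

for row in rows:
    print row  # each row is a list of cell strings; rows[0] is the header row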
Here's a working GH implementation
[this should probably be a comment, but I can't comment yet]
I'm looking for an open source news feed engine to use in an app I'm developing.
The engine needs to be able to aggregate news (items) from multiple sources a user is following and also optionally group them by news source or news type. A scalable solution in Java or with Java interface would be great.
I have already developed a very simple one, but I would prefer to use a robust and reliable solution instead.
Do you have any suggestion?
I created a backend for this in Java building on Neo4j: it is independent of the number of users and news sources one follows and depends only linearly on the number of items you want to display.
You can find an explanation of how it works, plus benchmarks on social networks with up to 2 million users, at http://www.rene-pickhardt.de/graphity-an-efficient-graph-model-for-retrieving-the-top-k-news-feeds-for-users-in-social-networks/
The source code is also available: https://github.com/renepickhardt/graphity-evaluation
Check out Rome
http://rometools.org/
In case you also use .NET, the Argotic Syndication Framework is definitely the best:
http://argotic.codeplex.com/
Yahoo Pipes is a very good RSS tool which lets you create your own feed aggregator with custom filters.
Note: Pipe2py is a Python implementation of Yahoo Pipes.
Another offering from Yahoo is Dapper.
There are a few more online tools for creating custom feeds which you might want to look at.
FeedDistiller is a free service for aggregating news feeds by subject.
I want to know if there is a better way of extracting info from a web page than parsing the HTML for what I'm searching for, e.g. extracting the movie rating from 'imdb.com'.
I'm currently using the Indy HTTP components to get the page and StrUtils to parse the text, but this approach is limited.
I've found plain simple regexes to be highly intuitive and simple when dealing with good websites, and IMDB is a good website.
For example, the movie rating on IMDB's movie HTML page is in a <DIV> with class="star-box-giga-star". That's VERY easy to extract using a regular expression. The following regular expression will extract the movie rating from the raw HTML into capture group 1:
star-box-giga-star[^>]*>([^<]*)<
It's not pretty, but it does the job. The regex looks for the "star-box-giga-star" class id, then it looks for the > that terminates the DIV, and then captures everything until the following <. To create a new regex like this you should use a web browser that allows inspecting elements (for example Chrome or Opera). With Chrome you can simply look at the web page, right-click on the element you want to capture and do Inspect element, then look around for easily identifiable elements that can be used to create a good regex. In this case the "star-box-giga-star" class is obviously easily identifiable! You'll usually have no problem finding such identifiable elements on good websites, because good websites use CSS, and CSS requires IDs or classes to style the elements properly.
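As a concrete illustration, here is that same regex applied in Python (the asker's Delphi setup would use TRegEx or a similar regex class in the same way; the example title URL is just for testing):
import re
import urllib2  # Python 2

html = urllib2.urlopen('http://www.imdb.com/title/tt0111161/').read()

# Capture group 1 holds the text between the closing '>' of the DIV and the next '<'
m = re.search(r'star-box-giga-star[^>]*>([^<]*)<', html)
print m.group(1).strip() if m else 'rating not found'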
Processing an RSS feed would be more convenient.
As of the time of posting, the only RSS feeds available on the site are:
Born on this Date
Died on this Date
Daily Poll
Still, you can request that a new one be added by getting in touch with the help desk.
Resources on RSS feed processing:
Relevant post here on SO.
Super Object
Wikipedia.
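For what it's worth, once a suitable feed exists, consuming it is only a few lines in any language with an RSS library. A sketch with Python's feedparser (the feed URL below is purely illustrative, not one of IMDB's real feed addresses):
import feedparser  # third-party library: pip install feedparser

# Hypothetical feed URL, for illustration only
feed = feedparser.parse('http://example.com/imdb/born-on-this-date.rss')
for entry in feed.entries[:5]:
    print entry.title, '-', entry.link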
When scraping websites, you cannot rely on the availability of the information. IMDB may detect your scraping and attempt to block you, or they may frequently change the format to make it more difficult.
Therefore, you should always try to use a supported API or RSS feed, or at least get permission from the website to aggregate their data, and ensure that you're abiding by their terms. Often, you will have to pay for this type of access. Scraping a website without permission may open you up to liability on a couple of legal fronts (Denial of Service and Intellectual Property).
Here's IMDB's statement:
You may not use data mining, robots, screen scraping, or similar
online data gathering and extraction tools on our website.
To answer your question, the better way is to use the method provided by the website. For non-commercial use, and if you abide by their terms, you can download the IMDB database directly and use the data from there instead of scraping their site. Simply update your database frequently, and it's a better solution than scraping the site. You could even wrap your own web API around it. Ratings are available as a standalone table.
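As a sketch of what "using the data from there" can look like, assume you have fetched IMDB's downloadable ratings table and unpacked it locally as a tab-separated file. The file name and column names below follow the current title.ratings dataset and are an assumption; the older plain-text .list files use a different layout:
import csv

ratings = {}
with open('title.ratings.tsv') as f:  # assumed local copy of the ratings table
    reader = csv.DictReader(f, delimiter='\t')
    for row in reader:
        # tconst is the IMDB title id, e.g. 'tt0111161'
        ratings[row['tconst']] = float(row['averageRating'])

print ratings.get('tt0111161', 'unknown')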
Use HTML Tidy to convert any HTML to valid XML and then use an XML parser, maybe using XPath or developing your own code (which is what I do).
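A compact illustration of that approach in Python, where lxml plays the tidier-plus-parser role and XPath does the extraction (the class name is the same one mentioned in the regex answer above and may of course change):
import urllib2  # Python 2
from lxml import html  # third-party: pip install lxml; parses messy HTML into a tree

page = urllib2.urlopen('http://www.imdb.com/title/tt0111161/').read()
doc = html.fromstring(page)

# XPath query against the parsed tree; normalize-space() returns the DIV's text
rating = doc.xpath('normalize-space(//div[contains(@class, "star-box-giga-star")])')
print rating if rating else 'rating not found'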
All the answers posted cover your generic question well. I usually follow a strategy similar to the one detailed by Cosmin: I use WinInet and regexes for most of my web-extraction needs.
But let me add my two cents on the specific sub-question of extracting the IMDB rating. IMDBAPI.COM provides a query interface returning JSON, which is very handy for this type of search.
So a very simple command-line program for getting an IMDB rating would be...
program imdbrating;

{$APPTYPE CONSOLE}

uses
  htmlutils; // provides HttpGet and UrlEncode

// Extracts the value of a "parm":"value" pair from the raw JSON text.
function ExtractJsonParm(parm, h: string): string;
var
  r: integer;
begin
  r := Pos('"' + parm + '":', h);
  if r <> 0 then
    // Skip past '"parm":"' and copy everything up to (but not including) the closing '",'
    Result := Copy(h, r + Length(parm) + 4,
      Pos(',', Copy(h, r + Length(parm) + 4, Length(h))) - 2)
  else
    Result := 'N/A';
end;

var
  h: string;
begin
  // The movie title is taken from the first command-line parameter
  h := HttpGet('http://www.imdbapi.com/?t=' + UrlEncode(ParamStr(1)));
  writeln(ExtractJsonParm('Rating', h));
end.
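Compile it and run it from the command line with the movie title as the first parameter, e.g. imdbrating "True Grit"; it prints just the rating (or N/A if the field isn't found).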
If the page you are crawling is valid XML, I use SimpleXML to extract info. It works pretty well.
Resource:
Download link.