Create an offline version of a web application - ruby-on-rails

Question:
Where should I start in writing an application which can work without an internet connection? Exactly like this
Explanation:
Say we have a web application which is already deployed. Since internet connectivity is not great in India, I would like to create an offline version of the same web application which users can access without internet access as well. I want them to experience a similar web interface without many changes.
One idea that came to my mind is to create a tarball of the application's contents and ship it to users, who would then install and configure it on their machines. What the tarball should contain is also debatable: Apache, the rest of the technology stack, etc.
I will be happy to write more in case I have not been precise. My question is not tied to any particular technology stack, so it might be of interest to everyone. Since I am not sure which tag is right here, can anybody from the Stack Overflow team help pick the right tag? :)
My application is actually in RoR, so I am tagging the Ruby on Rails community. Maybe they can help here?

As long as your web application contains only flat files (HTML, CSS, JS, text data, etc.) and does not depend on any components that need to be installed, you can simply distribute those files in an archive (.zip will be more cross-platform-friendly), and the user can open the application by opening the front page in a browser. To make it easier for the user, you could also include a small application which invokes the user's browser with the local URI.
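To make that concrete, the launcher can be a few lines of script. Here is a minimal sketch in Python, assuming the front page is named index.html (serving over a local HTTP server avoids the restrictions some browsers place on JS under file:// URIs; the port is arbitrary):

    import http.server
    import threading
    import webbrowser

    PORT = 8000  # assumption: any free local port will do

    # Serve the current directory (the unpacked archive) over localhost.
    server = http.server.HTTPServer(
        ("127.0.0.1", PORT), http.server.SimpleHTTPRequestHandler
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Open the user's default browser at the app's front page.
    webbrowser.open(f"http://127.0.0.1:{PORT}/index.html")
    input("App is running; press Enter to quit.\n")

Shipped next to the flat files, this gives users a double-clickable entry point instead of having to locate index.html themselves.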

Related

How to stop embedded google drive on my website from linking to Google when clicked on

After having embedded Google Drive files on my website (an awesome feature), I found a minor drawback.
When a viewer clicks one of the folders in the list, it redirects them from my page to the Google Drive site. However, I want to keep the viewer on my page and have the folder open within my own website.
I also want folders within those folders to open within the borders of my website, and so on and so forth.
The code used is simple:
The website is TYPO3-based.
Does anyone have a solution for this problem?
Thank you very much in advance; all replies and suggestions are highly appreciated!
After a quick search, it seems to me this is more of a hack than an official Google feature, so there is probably no easy way to alter the behaviour of the content inside the iframe. I would instead recommend setting an outbound link and accepting the fact that you're hosting the files at Google.
In the future, there might (or might not) be a File Abstraction Layer adapter for Drive coming up: http://wiki.typo3.org/FAL_Adapters. Well, probably not so soon. But there is one for Dropbox!

Download entire website

I want to be able to download the entire contents of a website and use the data in my app. I've used NSURLConnection to download files in the past, but I don't believe it is capable of downloading all the files from an entire website. I'm aware of the app SiteSucker, but I don't think there is a way to integrate its functionality into my app. I looked into AFNetworking and ASIHTTPRequest, but didn't see anything useful to me. Any ideas or thoughts? Thanks.
I doubt there is anything out of the box that you can use, but existing libraries that you mentioned (AFNetworking & ASIHttpRequest) will get you pretty far.
The way this works is: you load the main page, then go through the source and find any resources that the page uses to display its contents or to link to other pages. You then need to recursively download the contents of those resources, as well as their resources (a sketch follows the caveats below).
As you can imagine, there are a few caveats to this approach:
You will only be able to download files that are mentioned in the source code. Hidden files, or files that aren't used by any page, will not be downloaded, as the app doesn't know of their existence.
Be aware of relative and absolute paths: ./image.jpg, /image.jpg, http://website.com/image.jpg, www.website.com/image.jpg, etc. could all link to the same image.
Keep in mind that page1.html could link to page2.html and vice versa. If you don't put any checks in place, this could lead to an infinite loop.
Check for pages that link to external websites; you probably don't want to download those, as many websites link to the outside world, and you could end up downloading the entire Internet onto an iPhone with 8 GB of storage.
Any dynamic pages (the ones that use a server-side scripting language, such as PHP) will become static, because they lose the server backend that provides them with dynamic data.
Those are the ones I could come up with, but I'm sure there are more.
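The recursive walk and its bookkeeping can be sketched briefly. This is a language-agnostic illustration in Python rather than the question's Objective-C; the start URL is a placeholder, and saving the fetched data to local storage is left as a stub:

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        # Collects href/src attributes: pages, images, scripts, styles.
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.links.append(value)

    def crawl(url, host, visited):
        # The visited set breaks page1 <-> page2 loops; the host check
        # keeps the crawl from wandering off to external websites.
        if url in visited or urlparse(url).netloc != host:
            return
        visited.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            return
        # ... save `html` to local storage here ...
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            # urljoin normalises ./image.jpg, /image.jpg and absolute URLs.
            crawl(urljoin(url, link), host, visited)

    start = "http://website.com/"  # placeholder
    crawl(start, urlparse(start).netloc, set())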

I need to take screenshots and upload to a webserver, what technology should I use?

I am building a piece of software that needs to allow a user to take a screenshot of his/her computer, which will then be uploaded to a web server.
What technology should I use? I don't think JavaScript has access to the appropriate resources, but I would like to keep it browser-based. Help?
Nope, JavaScript won't work.
You need a library which takes a screenshot and another one which does the upload.
In .NET there are built-in libraries for both (taking screenshots, uploading via FTP).
Edit:
Choose a technology which is able to create a screenshot and send it to an external resource via POST (or upload it via FTP). For that you will need some access to the local file system. Well, what would you think if you, as a novice user, were prompted to allow access to your local file system (or network resources)?
Edit 2:
As far as I know, Silverlight supports taking screenshots, and there would be some FTP/POST capability included as well.
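For illustration, here is that screenshot-plus-POST route sketched in Python instead of .NET or Silverlight, using the third-party Pillow and requests packages (pip install pillow requests); the upload URL is hypothetical:

    import io

    import requests
    from PIL import ImageGrab

    # Grab the full screen (supported on Windows and macOS).
    shot = ImageGrab.grab()

    # Encode the capture as PNG in memory.
    buf = io.BytesIO()
    shot.save(buf, format="PNG")
    buf.seek(0)

    # Send it to the server as a multipart/form-data file upload.
    resp = requests.post(
        "https://example.com/upload",  # hypothetical endpoint
        files={"screenshot": ("shot.png", buf, "image/png")},
    )
    resp.raise_for_status()

Note this is a desktop application, not something that runs in the browser, which is exactly the limitation both answers here point out.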
JavaScript by itself cannot handle this, nor should any web-page-based technology: if a website could covertly upload files from its visitors' computers, it would have great hacking capability. The same goes for automatically taking a screenshot; that is too much control for a user to give up to an unknown website.
So, any means of uploading a file will require direct user interaction. Given that, a simple HTML form with an <input type="file" /> element should be all that is needed.

Parsing a website

I want to make a program that takes a website address as user input. The program then goes to that website, downloads it, and parses the information inside, then outputs a new HTML file using the information from the website.
Specifically, the program will take certain links from the website and put those links in the output HTML file, discarding everything else.
Right now I just want to make it work for websites that don't require a login, but later on I want it to work for sites where you have to log in, so it will have to be able to deal with cookies.
Later on I'll also want the program to be able to explore certain links and download information from those other sites.
What are the best programming languages or tools to do this?
Beautiful Soup (Python) comes highly recommended, though I have no experience with it personally.
Python.
It's fairly easy to write a simple crawler using Python's standard libraries, and you'll also be able to find existing Python crawler libraries on the web.
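To make the Beautiful Soup suggestion concrete, here is a minimal sketch that downloads a page, keeps only the links of interest, and writes them to a new HTML file. The URL and the link filter are placeholders, and beautifulsoup4 is a third-party package (pip install beautifulsoup4):

    from urllib.request import urlopen

    from bs4 import BeautifulSoup

    url = "http://example.com/"  # placeholder: the user-supplied address
    soup = BeautifulSoup(urlopen(url).read(), "html.parser")

    # Keep only the links we care about; this filter is a placeholder.
    links = [a for a in soup.find_all("a", href=True) if "article" in a["href"]]

    # Write just those links to the output file, discarding everything else.
    with open("output.html", "w", encoding="utf-8") as out:
        out.write("<html><body>\n")
        for a in links:
            out.write(f'<a href="{a["href"]}">{a.get_text()}</a><br>\n')
        out.write("</body></html>\n")

For the later login requirement, a requests.Session (or the standard library's http.cookiejar) can carry cookies across requests.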

Get country location of an IP with native PHP

Read on before you say this is a duplicate; it's not (as far as I could see).
I want to get the country code in PHP from the client's IP address.
Yes, I know you can do this using external sites or with the likes of geoip_record_by_name, but I don't want to be dependent on an external site, and I can't install PEAR for PHP as I'm using shared DreamHost hosting.
I thought I could just do something like this:
    $output = shell_exec('whois '.$ip.' -H | grep country | awk \'{print $2}\'');
    echo "<pre>$output</pre>";
But DreamHost seems to have an old version of whois (4.7.5), so I get this error on a lot of IPs:
Unknown AS number or IP network. Please upgrade this program.
So unless someone knows how to get a binary of a newer version of whois onto DreamHost, I'm stuck.
Or is there another way I could get the country code from the client who is loading the page?
whois is just a client for the WHOIS service, so technically you are still relying on an outside site. For the queries that fail, you could try falling back to another site, such as hostip.info, which happens to have a decent API and seems friendly:
http://api.hostip.info/country.php?ip=4.2.2.2
returns
US
Good luck,
--jed
EDIT: @Mint: Here is the link to the API on hostip.info: http://www.hostip.info/use.html
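A sketch of that fallback in Python, using only the standard library and assuming the endpoint still responds with a bare country code as shown above:

    from urllib.request import urlopen

    def country_from_hostip(ip):
        # hostip.info returns the two-letter code as plain text, e.g. "US".
        url = f"http://api.hostip.info/country.php?ip={ip}"
        return urlopen(url).read().decode("ascii").strip()

    print(country_from_hostip("4.2.2.2"))  # expected: US

(The same logic works in PHP with file_get_contents; the point is to query the remote service only when the local route fails.)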
MaxMind provides a free PHP GeoIP country lookup class (there is also a free country+city one).
The bit you want is what is mentioned under "Pure PHP module". This doesn't require you to install anything or be dependent on them, nor does it need any special PHP modules installed. Just save the GeoIP data file somewhere, then use their provided class to interact with it.
Can you just install a copy of whois into your home directory and pass the full path into shell_exec? That way you're not bound to their upgrade schedule.
An alternative, somewhat extreme solution to your problem would be to:
Download the CSV version of MaxMind's country database.
Strip out the information you don't need from the CSV with a script, and generate a standard PHP file containing a data structure that maps IP ranges to country codes (a generator sketch follows below).
Include the resulting file in your usual project files, and you now have a completely internal IP => country code lookup table.
The disadvantage is that you would regularly need to regenerate the PHP file from the latest version of the database. Also, it's a pretty nasty way of doing it in general, and performance might not be the best :)
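As a sketch of the strip-and-generate step, here is a hypothetical Python converter that turns the CSV into a pure-data PHP file. The column layout (start IP, end IP, start number, end number, country code, country name) and the file names are assumptions based on the legacy GeoIP Country CSV format:

    import csv

    INPUT = "GeoIPCountryWhois.csv"  # hypothetical input file name
    OUTPUT = "ip_to_country.php"

    rows = []
    with open(INPUT, newline="") as f:
        for rec in csv.reader(f):
            # Keep only the numeric range and the country code.
            start_num, end_num, code = rec[2], rec[3], rec[4]
            rows.append(f"    [{start_num}, {end_num}, '{code}'],")

    # Emit a pure-data PHP file: [start, end, code] ranges, already sorted.
    # PHP code can `include` it and binary-search on ip2long($ip).
    with open(OUTPUT, "w") as out:
        out.write("<?php\nreturn [\n")
        out.write("\n".join(rows))
        out.write("\n];\n")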
Consider ipcountryphp (my site, my code, my honour), as it provides a local database with free updates for its lifetime. It's fast and fully self-contained, and pluggable into anything running PHP 5.3 with SQLite3 and beyond. Very fast seeks and no performance penalties.
Enough with shameless self-promotion, let's get serious:
Relying on real-time queries to remote services to get the visitor's country can become a major bottleneck for your site, depending on the response speed of the queried server. As a rule of thumb, you should never query external services for real-time site functionality (like page loading). Using APIs in the background is great, but when you need to know the country of each visitor before the page is rendered, you open yourself up to a world of pain. And do keep in mind that you're not the only one abusing free services :)
So queries to third-party services should stay in the background, while only local functionality that relies on no third party goes into the layers where users interact. Just my slightly performance-paranoid take on this :)
PS: The above-mentioned script I wrote has IPv6 support too.
Here is a site with a script I just used. The only problem is that you would probably need to regenerate the IP data yourself every now and then, which might be a pain, and that's why everyone is telling you to use an external API. But for me that wasn't a solution, as I was pulling around 50 IPs at once, which means I would probably get banned. So the solution was to use my own script or to save the results to a DB, but I was again pulling images from external sites. Anyway, here is the site I found the script on:
http://coding-talk.com/f29/country-flag-script-8882/
Here are a few:
http://api.hostip.info/get_html.php?ip=174.31.162.48&position=true
http://geoiplookup.net/geoapi.php?output=json&ipaddress=174.31.162.48
http://ip-api.com/json/174.31.162.48?callback=yourfunction
http://ipinfo.io/174.31.162.48
All return slightly different results.
Here is also one of them; just change the IP to your variable:
http://api.codehelper.io/ips/?callback=codehelper_ip_callback&ip=143.3.87.193
