How could I make an app log in to a website and get info in the background? - ios

I think I am mostly struggling with this problem because I do not know what to search for.
I want to make an app that allows the user to enter their gift card number and use that number to log in to this website:
https://www.getmybalance.com
I have no idea how to do this without control over the website. Is it even possible to do so?
I don't want to use a UIWebView to show the page.

You should read up on NSURLConnection. You're going to have to execute a POST request to log in, then determine whether or not the login succeeded, probably by parsing the returned page. NSURLConnection will handle storing the login cookie the site returns. After you've logged in, you'll probably need to execute another POST request to query their system, and once again you'll probably have to parse the result out of the HTML page that is returned.
NSURLConnection:
https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/nsurlconnection_Class/Reference/Reference.html
NSURLConnection Delegate Protocol:
https://developer.apple.com/library/mac/#documentation/Foundation/Reference/NSURLConnectionDelegate_Protocol/Reference/Reference.html#//apple_ref/occ/intf/NSURLConnectionDelegate
This all of course assumes that this website doesn't have an API you can use.
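For illustration, here is a minimal sketch of that flow using URLSession (the modern successor to NSURLConnection). The login path, the cardNumber form field, and the "Your balance" success marker are all assumptions; you would need to inspect the site's actual login form to find the real names.

import Foundation

// Sketch of a form-encoded login POST. The endpoint path and field name are
// placeholders -- take the real ones from the site's login form.
func logIn(cardNumber: String, completion: @escaping (Bool) -> Void) {
    let url = URL(string: "https://www.getmybalance.com/login")!   // hypothetical path
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
    request.httpBody = "cardNumber=\(cardNumber)".data(using: .utf8)   // hypothetical field name

    // URLSession.shared stores any session cookie the server sets, so a
    // follow-up request to query the balance is already authenticated.
    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data, let html = String(data: data, encoding: .utf8) else {
            completion(false)
            return
        }
        // Crude success check: look for a marker that only appears after login.
        completion(html.contains("Your balance"))
    }.resume()
}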

Looks like you need to programmatically POST over HTTPS to the server; you will then get back a DOM document, or JSON, or some weird thing, which you then parse.
POSTing from iOS is pretty easy; look at something like LRResty https://github.com/lukeredpath/LRResty or similar.
When you get the data back, the first thing to do is look at it with NSLog. Then, if the data is HTML, you will need to wade into it to extract the result.
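As a rough illustration of that parsing step: once you have logged the raw response, a quick-and-dirty way to pull a single value out of the HTML is a regular expression. The "Balance: $..." markup below is an assumption about what the page might contain; adjust the pattern after looking at the real output.

import Foundation

// Quick-and-dirty extraction of a dollar amount from an HTML string.
// The "Balance: $..." text is assumed markup -- check the real page first.
func extractBalance(from html: String) -> String? {
    let pattern = "Balance:\\s*\\$([0-9]+\\.[0-9]{2})"
    guard let regex = try? NSRegularExpression(pattern: pattern, options: []) else { return nil }
    let range = NSRange(html.startIndex..., in: html)
    guard let match = regex.firstMatch(in: html, options: [], range: range),
          let amountRange = Range(match.range(at: 1), in: html) else { return nil }
    return String(html[amountRange])
}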
The problem with that approach is that the company hosting the page may change it at any time. You could ask them either never to change anything (if they want to change something, they can add a new page and leave the old one working) or, better, to support a simple REST API, which would also help them build nice AJAX/HTML5 web sites in the future.

Related

Preventing robots from going to www.domain.com/thishash when a link is posted to Twitter or Facebook

I'm building a service where people get notified (by mail) when someone follows a link with the format www.domain.com/this_is_a_hash. The people that use this service can share the link in different places like Twitter, Tumblr, Facebook and more...
The main problem I'm having is that as soon as the link is shared on any of these platforms, a lot of requests to www.domain.com/this_is_a_hash start hitting my server. The problem is that each time one of these requests hits my server, a notification is sent to the owner of the this_is_a_hash, and of course this is not what I want. I only want to get notifications when real people visit this resource.
I found a very interesting article here that talks about the huge amount of requests a server receives when posting to Twitter...
So what I need is to prevent search engines from hitting the "resource" URL... the www.mydomain.com/this_is_a_hash
Any idea? I'm using rails 3.
Thanks!
If you don’t want these pages to be indexed by search engines, you could use a robots.txt to block these URLs.
User-agent: *
Disallow: /
(That would block all URLs for all user agents. You may want to put these pages under a common path prefix and block only that path. You could also add the forbidden URLs dynamically as they get created; however, some bots cache robots.txt for a while, so they might not notice that a new URL should be blocked, too.)
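For example, if the hash pages all lived under a single (hypothetical) /h/ path prefix, the rule could be narrowed to just that prefix:
User-agent: *
Disallow: /h/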
It would, of course, only hold back those bots that are polite enough to follow the rules of your robots.txt.
If your users copy and paste HTML, you could make use of the nofollow link relation type:
<a rel="nofollow" href="http://www.domain.com/this_is_a_hash">cute cat</a>
However, this would not be very effective, as even some of those search engines that support this link type still visit the pages.
Alternatively, you could require JavaScript to be able to click the link, but that’s not very elegant, of course.
But I assume they only copy&paste the plain URL, so this wouldn’t work anyway.
So the only chance you have is to decide whether it's a bot or a human after the link has been clicked.
You could check for user-agents. You could analyze the behaviour on the page (e.g. how long it takes for the first click). Or, if it’s really important to you, you could force the users to enter a CAPTCHA to be able to see the page content at all. Of course you can never catch all bots with such methods.
You could use analytics on the pages, like Piwik. They try to differentiate users from bots, so that only users show up in the statistics. I’m sure most analytics tools provide an API that would allow sending out mails for each registered visit.

Forcing a page to POST

This may be a very unusual question, but basically there's a page on another domain (that I can view, but can't edit/change) that has a button. When that button is clicked it generates some unique keys.
I need to pull those unique keys with my web service (using ASP.NET MVC3). I can get the initial HTML of the page, but how can I force the page to "click" the button so that I can get the values after the POST?
Normally, I'd reuse the code to generate keys myself, but I don't have access to the logic.
I hope this makes sense.
Use e.g. Firebug to see what POST parameters are sent with the form, and then make the same POST from your code.
For this you can use WebRequest or WebClient.
See these SO questions, which will help you do it:
HTTP request with post
Send POST request in C# like a web page does?
How to simulate browser HTTP POST request and capture result in C#
Then just parse the response with the technology of your choice (I would use regular expressions via Regex, or LINQ to XML if the response is well-formed XML).
Note: keep in mind that your code will depend on a service you do not maintain, so you can run into problems if the service becomes unavailable or is discontinued, or if the format of the POSTed form or the response changes.
This really depends on the technology of the targeted site.
If the page is a simple HTML form, then you can easily send a POST. You will need to send the data the form expects, and then you can parse the response.
If it's not so straightforward, you will need to look into ways to automate the click. Check Selenium. You might also need to employ scraping if the results page is a mess.

Utterly confused about OAuth and Google Calendar Gadget

I'm working on a Google Calendar Gadget and need to load data for the user from a remote server. It's simple stuff, like favorite color, but I need the user's ID. Using makeRequest works in general, but I need to send the account name, or a hash of it, or any sort of identifier to my server so it gets the right data. What's the easiest way to get that info? Currently it asks the user via HTML form, every single time it loads, which is pretty lame.
I've been looking at OAuth stuff, trying examples, and nothing works... I got an OAuth client key but don't know how/where to use it (or whether I even use it with a Gadget). I found the Calendar feed/scope URI, but I'm not really sure if that's the right one just to get a user identifier; maybe I should use accounts instead. Half the examples are for OAuth 1.0... it's really frustrating.
Does anyone know a way to do this, or a good example/tutorial that explains how, for a Gadget? I think Gadgets are different since they run on Google's servers...but don't really know how this makes them different in this context.
See the answer to this: osapi.people.get() returns 404 in google calendar sidebar gadget. Then associate the google user ID with your internal ID, if different.

Login to a site using the cocoa framework

I am creating an iOS app that needs to download an HTML page and extract some information from it. To get to the page I also need to log in. I have looked everywhere for some code on how to log in to a site using the Cocoa framework, but every answer I see only seems to answer half the question. Here is the login site: romres.ist-asp.com. I need some code that writes something in the first field (the other two are left blank), submits the form, and then lets me see the next page. I believe apps like Facebook use some of the same technology, where you log in to Facebook and then you can see the contents of your profile.
Basically what you want to do is called scraping.
Scraping is really easy for sites that don't require authentication, but in your case you should inspect the POST request made when logging in to the site you're interested in (and try to understand how the service responds), as well as the request made, once already logged in, to retrieve each page.
The purpose of all of this is to later be able to simulate, in code, the regular HTTP requests that would normally come from a browser.
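As a rough sketch of what simulating those requests can look like on iOS with URLSession: the paths and the username field below are placeholders, not the site's real form parameters; take the actual names from inspecting the login form in your browser's network tools.

import Foundation

// Step 1: replay the login form POST. The paths and field name are
// placeholders -- inspect the real form to find the actual ones.
let loginURL = URL(string: "https://romres.ist-asp.com/login")!
var loginRequest = URLRequest(url: loginURL)
loginRequest.httpMethod = "POST"
loginRequest.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
loginRequest.httpBody = "username=myuser".data(using: .utf8)

let session = URLSession.shared   // keeps the session cookie between requests

session.dataTask(with: loginRequest) { _, _, _ in
    // Step 2: with the cookie stored, fetch the protected page and parse it.
    let pageURL = URL(string: "https://romres.ist-asp.com/overview")!
    session.dataTask(with: pageURL) { data, _, _ in
        if let data = data, let html = String(data: data, encoding: .utf8) {
            print(html)   // extract the information you need from this HTML
        }
    }.resume()
}.resume()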
If you have any doubt ask in the comments.

ajax request changing url

I have a pager on a table using Ajax, and I would like each such request to also change the browser's URL, so that when I hit the refresh button I won't skip back to the first page. I was fighting the Url parameter of AjaxOptions, but it keeps winning over me. Please help.
Trim
You can safely change the URL past the hash mark without redirecting the page. However, the user can (in most browsers) navigate through these changes with the Back and Forward buttons. This technique is usually called "history".
Because the technique is difficult to get working in all browsers, you'll want to use a framework. Take a look at http://www.mikage.to/jquery/jquery_history.html.
I can also recommend ExtJS's history support. Take a look at this example:
http://www.extjs.com/deploy/dev/examples/history/history.html#main-tabs:tab2
Again, notice that not only does the URL change when the user does stuff, but changing the URL (via Back and Forward) also affects the page. This is good, awesome even, but means it must be done very carefully.
There is not really a quick and easy way to do this; here is an article on the topic. The problem is that not only does the Ajax code have to generate the URLs, it also has to take those URLs into account when loading the page, so it can show the appropriate content.
