Check for NSURL redirects - iOS

I have a table view that lists job postings for an app pulling from an RSS feed. Unfortunately, quite a few of the job postings that have expired are still left in the feed. The only piece of information my app could use to know that a job is no longer posted is what happens when you tap the link: when the UIView is pushed, the page that shows up is an expired notice with a message stating, "We apologize for the inconvenience, but this position's status has recently changed." Older posts (typically ones whose links start with docs.sitename.doc) don't load at all and just show a "File not found" message with no redirect URL noted. Is there any direct method worth looking into for filtering these out, or at least for marking the positions as expired?

OK, there is a lot to process here.
Redirect Detection: If you are using NSURLConnection, then look at Handling Redirects and Other Request Changes.
Process the RSS: As you load the data from the RSS into your table, test the links. If you do this synchronously, it will cause longer load times. If you do this asynchronously, you will end up displaying some bad results, which then disappear as the URLs are processed (much like a Core Data update).
As an alternative, see if you can preprocess the RSS. I don't know what server tools you have access to, but it may be worth your while to have a server read the RSS feed and strip out invalid entries. This would create a new RSS feed (on your server) that contains only valid entries.
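If you do end up preprocessing on a server, a minimal link check might look like the sketch below. This is Java purely as an example language, since the question doesn't name one, and treating any redirect as "expired" is an assumption based on the behavior described above:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class JobLinkChecker {
    /**
     * Returns true if the job link still resolves normally:
     * no redirect (3xx) and no client/server error (4xx/5xx).
     * A redirect is treated as "position expired", per the
     * behavior described in the question.
     */
    static boolean isLinkAlive(String link) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
            conn.setInstanceFollowRedirects(false); // surface the 3xx instead of following it
            conn.setRequestMethod("HEAD");          // we only need the status code
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            int code = conn.getResponseCode();
            return code >= 200 && code < 300;
        } catch (Exception e) {
            return false; // unreachable links (e.g. docs.sitename.doc posts) count as dead
        }
    }
}
```

Anything that fails this check would be dropped before the filtered feed is re-published on your server.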

Related

Requesting input on conceptual ideas for disguising browser history

I am working with a Domestic Violence support organisation to build a website and have been asked to provide a "Quick Exit" function.
The purpose is to enable the user to exit the site quickly without closing the browser. I have seen such buttons on similar sites and the normal scenario is that they simply cause a Google search page to be shown. (easy but doesn't hide history)
I am looking for ideas to improve on this function to hide/disguise the history stored in the browser as this is currently a fairly significant flaw with the Quick Exit buttons I've seen to date.
I had a concept but I am looking for input on either fleshing out my concept, or other alternative directions to consider.
My concept was to have two domains: let's call them dv-site.com and decoy-site.com. The former is the source of domestic violence support information; the latter is some random content, could be anything, let's just say weather information for the sake of the conversation.
If a user navigates directly to dv-site.com, the server redirects to decoy-site.com but also attaches some session-specific, or perhaps single-use, query string or similar.
decoy-site.com validates the query string and, if valid, loads dv-site.com within an iframe or something like that, so from the user's perspective they are just looking at dv-site.com, though the domain recorded in history is decoy-site.com.
Links within the iframe-loaded site would similarly be redirected with the same or a new query string.
If a user were to click on the browser history and go directly to decoy-site.com, it would not be able to validate the query string and would just load the decoy site like a normal site, i.e. just showing the weather information that exists on that site.
Domestic violence is a serious systemic issue and I would love some input from anyone who has more technical knowledge than I do on fleshing out this concept.
Other aspects I am unsure of how to tackle:
ensuring that dv-site.com can get crawled and ranked by search engines, even though users are all redirected, as it is imperative that it appears in search results so it can be found
technical aspects of a redirect that does not appear in history.
I'm unsure if it's possible to do this without all content and engagement being attributed to the decoy-site.
For the redirect, I believe that HTTP redirects do not get stored in history, so you can use a 302 redirect for that. HTTP also has a Set-Cookie header that lets you record a cookie; coupled with the headers here, you can give the decoy site access without recording anything in history. Then delete the cookie.
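As a rough illustration of that flow (the framework and all names are my own assumptions; sketched as a Java servlet, though the question never specifies a stack):

```java
import java.io.IOException;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical entry point on dv-site.com: mint a single-use token, then send
// a 302 so that only decoy-site.com lands in the browser history.
public class DvEntryServlet extends HttpServlet {
    // Sketch only: a real deployment needs token storage visible to both
    // domains (both sites must see the same token set), with automatic expiry.
    static final ConcurrentHashMap<String, Long> TOKENS = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.sendRedirect("https://decoy-site.com/?t=" + mintToken()); // 302 Found
    }

    static String mintToken() {
        String token = UUID.randomUUID().toString();
        TOKENS.put(token, System.currentTimeMillis());
        return token;
    }

    // decoy-site.com calls this: a valid token is consumed on first use, so a
    // later visit straight from history falls through to the plain weather site.
    static boolean consumeToken(String token) {
        Long issued = token == null ? null : TOKENS.remove(token);
        return issued != null
                && System.currentTimeMillis() - issued < TimeUnit.MINUTES.toMillis(5);
    }
}
```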
As far as PageRank goes, you could add a line to robots.txt as described here (the last point) to have the bot crawl using a query parameter. Then, in the backend, return the dv site only if that parameter is passed, and redirect otherwise. If Googlebot removes query params when publishing, it will work out; otherwise, it might fail.
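That crawl-parameter branch could look something like the hypothetical sketch below (the parameter name is an assumption; it reuses the mintToken helper from the sketch above):

```java
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical front controller on dv-site.com: serve the real page only when
// the crawl parameter that robots.txt advertises is present; everyone else is
// redirected, so regular visits record nothing for dv-site.com.
public class DvCrawlGateServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        if ("1".equals(req.getParameter("crawl"))) {
            resp.setContentType("text/html");
            resp.getWriter().write("<html><!-- full dv-site content for the bot --></html>");
        } else {
            resp.sendRedirect("https://decoy-site.com/?t=" + DvEntryServlet.mintToken());
        }
    }
}
```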
Best of luck.

Twitter - public Stream handling deletion notices

I am using the Twitter public stream API to search for some keywords. I am writing my script in Java and therefore use twitter4j. Now I have stumbled upon the information about status deletion notices:
Status deletion notices (delete)
These messages indicate that a given Tweet has been deleted. Client
code must honor these messages by clearing the referenced Tweet from
memory and any storage or archive, even in the rare case where a
deletion message arrives earlier in the stream than the Tweet it
references.
https://dev.twitter.com/docs/streaming-apis/messages#Status_deletion_notices_delete
So I created methods to remove records from my database when such a notice occurs. Unfortunately such a notice never occurs. I searched to find out what I am doing wrong and found some posts in the twitter developer section concerning the same problem:
https://dev.twitter.com/discussions/17393
https://dev.twitter.com/discussions/19943
https://dev.twitter.com/issues/1355
https://dev.twitter.com/discussions/12836
but unfortunately none of these discussions got an answer. So it seems to me that I made no mistake in my code, but twitter4j never sends me a deletion notice.
I want to respect the privacy of the twitter users - at least for legal reasons. So my question is:
What can I do to respect the privacy of the users ?
What do I have to do to satisfy my legal duties ?
One alternative seems to be to periodically iterate through all saved Tweets in my database and request them from Twitter to see whether I get a result back or not (if not, they were deleted). But this doesn't seem to be a practicable approach, because the data will keep growing, and at some point I will hit limitations (in time, allowed Twitter requests, ...). So what should I do?
Thanks in advance! Your help is greatly appreciated.
Ludwig
twitter4j v.3.0.6
Given the sheer volume of tweets, it's unreasonable to assume that you could check whether all the tweets are still there. You should make sure that you properly act on a delete notice from Twitter; the onus is on them to actually send the delete notification.
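For reference, a minimal twitter4j 3.x listener wired to honor deletion notices when they do arrive (the persistence calls are hypothetical placeholders, and credentials are assumed to be configured via twitter4j.properties):

```java
import twitter4j.FilterQuery;
import twitter4j.StallWarning;
import twitter4j.Status;
import twitter4j.StatusDeletionNotice;
import twitter4j.StatusListener;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;

public class DeletionAwareStream {
    public static void main(String[] args) {
        TwitterStream stream = new TwitterStreamFactory().getInstance();
        stream.addListener(new StatusListener() {
            @Override public void onStatus(Status status) {
                saveTweet(status); // hypothetical persistence call
            }
            @Override public void onDeletionNotice(StatusDeletionNotice notice) {
                // Honor the notice: purge the referenced Tweet everywhere.
                deleteTweetById(notice.getStatusId()); // hypothetical
            }
            @Override public void onTrackLimitationNotice(int numberOfLimitedStatuses) {}
            @Override public void onScrubGeo(long userId, long upToStatusId) {}
            @Override public void onStallWarning(StallWarning warning) {}
            @Override public void onException(Exception ex) { ex.printStackTrace(); }
        });
        stream.filter(new FilterQuery().track(new String[]{"keyword"}));
    }

    static void saveTweet(Status status) { /* write to your DB */ }
    static void deleteTweetById(long statusId) { /* remove from your DB */ }
}
```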
That being said, I do receive delete notifications from Twitter. However, we aren't using the public stream; we are using site streams, which rely on authorizing specific social accounts and streaming all updates for those accounts (e.g. favorites, follows, blocks, tweets, retweets, etc.) to us in realtime.
If you are consuming a stream with filters, for example, it's probably not feasible (or at least very taxing) for Twitter to run all deleted items through the same pipeline as new items, or to guess which ones you were sent based on the times your filter was running.
As noted in the issue you linked to, the public streaming API will not necessarily send them out. I'd endeavor to handle them, and possibly provide a tool to manually remove tweets if a request comes in through another channel, but not worry too much about it, given that Twitter doesn't provide the proper facility to be notified of such instances.

Handling data request in app, best practice

I am trying to build an iOS-based news app. I went through some of the best news apps and found that when I tap a menu item like Home (for example), they request the Home data only once; the next time I tap Home, I think they display cached data, because I don't see any sign of a data request, which keeps the app fast.
So how do they keep the app's data recent? If cached data is always displayed, the data may have already changed on the server without being reflected in the app. What is the best way to handle data requests in apps: should I request data on every tap of a menu button, or should I maintain some timer to request recent data from the server and display cached data the rest of the time?
Use Core Data to cache the news and store a timestamp as well; before displaying the data to the user, check the timestamp. If the last updated time is older than 'x' minutes, get the data from the server.
Also, you can store the last updated time of the news articles on the server and create an API that just returns the article IDs and their timestamps. Then, in your app, first query for the timestamps and fetch only those articles which are missing from your DB or have older timestamps.
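Sketched in Java purely as pseudocode for the logic (all names are assumptions; on iOS this would live in your Objective-C networking layer), the two checks look roughly like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public class NewsCachePolicy {
    static final long MAX_AGE_MS = TimeUnit.MINUTES.toMillis(10); // the "x minutes" threshold

    // Part 1: serve the cache unless it is older than the threshold.
    static boolean cacheIsFresh(long lastFetchEpochMs) {
        return System.currentTimeMillis() - lastFetchEpochMs < MAX_AGE_MS;
    }

    // Part 2: given the server's {articleId -> lastModified} index and the
    // locally cached one, list only the articles worth re-downloading.
    static List<String> articlesToFetch(Map<String, Long> serverIndex,
                                        Map<String, Long> localIndex) {
        List<String> stale = new ArrayList<>();
        for (Map.Entry<String, Long> e : serverIndex.entrySet()) {
            Long local = localIndex.get(e.getKey());
            if (local == null || local < e.getValue()) {
                stale.add(e.getKey()); // missing locally, or server copy is newer
            }
        }
        return stale;
    }
}
```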
The simplest and most popular way is to use a great HTTP library like AFNetworking or ASIHTTPRequest.
These libraries provide support for caching in the recommended way: by setting a simple cachePolicy you can easily achieve your purpose.
It's not just about caching, either; these libraries handle many hidden HTTP complexities on their own (cookies, HTTPS authentication, the Not-Modified HTTP header, and more).
I strongly recommend going this way, as I have already built some iOS news-reading apps like this.

Searching for a song while using multiple APIs

I'm going to attempt to create an open project which compares the most common MP3 download providers.
This will require a user to enter a track/album/artist name, e.g. Deadmau5; this will then pull the relevant prices from the APIs.
I have a few questions that some of you may have encountered before:
Should I have one server-side page that requests all the data so it is all loaded simultaneously? If so, how would you deal with timeouts or any other problems that may arise? Or should the page load first, and then each price get pulled in one by one (ajax)? What are your experiences when running a comparison check?
The main feature will be to compare prices, but how can I be sure that the products are the same? I was thinking of running time and track numbers, but I would still have to set one source as my primary.
I'm making this a wiki; please add and edit any issues that you can think of.
Thanks for your help. Look out for a future blog!
I would check Amazon first. They will give you a SKU (the barcode on the back of the album; I think Amazon calls it an EAN). If the other providers use this, you can make sure they are looking at the same item.
I would cache all results into a database, and expire them after a reasonable time. This way when you get 100 requests for Britney Spears, you don't have to hammer the other sites and slow down your application.
You should also make sure you are parallelizing whatever requests you are doing server side. cURL, for instance, allows you to pull multiple URLs at once and assign a user-defined callback. I'd have the callback send back some data so you can update your page as the results come in: something like GETTUNES => the curl callback returns some data for each URL while the connection is open, and you parse it on the client side.
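The question doesn't name a stack, but as a sketch of the same fan-out-with-timeouts pattern in Java (the provider URLs are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class PriceFanOut {
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    public static void main(String[] args) {
        // Hypothetical provider endpoints for one search term.
        List<String> urls = List.of(
                "https://provider-a.example/search?q=Deadmau5",
                "https://provider-b.example/search?q=Deadmau5");

        // Fire all requests at once; each completes (or times out) on its own,
        // so one slow provider never holds up the whole comparison.
        List<CompletableFuture<String>> futures = urls.stream()
                .map(url -> CLIENT.sendAsync(
                                HttpRequest.newBuilder(URI.create(url))
                                        .timeout(Duration.ofSeconds(8))
                                        .build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body)
                        .exceptionally(ex -> null)) // a timeout just drops that provider
                .collect(Collectors.toList());

        for (CompletableFuture<String> f : futures) {
            String body = f.join();
            if (body != null) {
                // parse prices here and push a partial update to the page
            }
        }
    }
}
```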

How Can I Verify A Download Has Been Completed?

We're using ASP.NET MVC and our action does this:
pull records from DB
mark records as downloaded
push zipped download to browser
Now the problem comes when the download doesn't complete for some reason - maybe the user clicks "Cancel" or IE pops up that download security bar. I'm wondering if there's an alternative solution.
Could we push the download to the user and then only mark records as downloaded when we're sure they've received the right number of bytes? I have to say that I'm struggling with this one and a solution which is as easy for end users as possible would be fantastic.
There isn't any reliable way to do this without a process running on the client which can verify the transfer completed. Of course, the only process we can reasonably expect the user to already have, or be willing to install, is Flash.
Only Flash 10 supports saving files directly to disk as the user requests. (Previous versions had a "shared object" which was kind of like a very large cookie space more than anything else - not for transferring files but saving reusable application data). Read up here for info on how to interact with the end-user's filesystem via Flash 10.
Essentially there is a method call save() which will push data to a location of the user's choosing. The specific location is hidden from your code; for obvious security reasons, you merely push the file into a black box and Flash handles the rest.
The only real bit of info missing here is how to get your file into the Flash player, but anyone with a little Flash experience should have no trouble figuring that out with a few minutes of research. Without Flash experience you should still have it working in under a day.
Rather than simply redirecting the user to the resource to be downloaded (thereby causing the "Would you like to download a file?" popup), you might try two things. Push the resource out of a page as a byte array. Once the download has completed, redirect the download page to another page; there you can add to your workflow by asking whether the download went OK or not. Also, if they got this far, you could assume (ass-u-me) that it worked. I don't think actually tracking how far the download got is doable, as you have nothing on the other end monitoring bytes received.
I don't believe there is. If this is necessary, you may need to utilize a Silverlight (or Flash) control in conjunction with your application.
Basically the approach with either one would be to open a socket connection to the HTTP url and save it to the appropriate path on the User's drive. Once the download is complete you could have the control generate a hash value from the file and send that back to some ASP page. If the hash value is never submitted or is incorrect you know they didn't finish the file.
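The control itself would be written in ActionScript or Silverlight's C#, but the verification step is language-neutral; here it is sketched in Java (the file path and the reporting endpoint are assumptions):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class DownloadVerifier {
    // Hash the completed file exactly as the client-side control would,
    // so the server can compare it against the hash of what it served.
    static String sha256Hex(Path file) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                digest.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        // POST this value back to the server; a missing or mismatched hash
        // means the download never completed intact.
        return hex.toString();
    }
}
```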
Even checking that all the bytes were sent doesn't really guarantee anything:
The user might still cancel the download before saving it, or their browser might crash, etc.
The recipient might not be the user. It might be a proxy server with a virus scanner that decides to block the transfer, etc.
