iOS app with Django

So we currently have a website that was created using Django. Now, we would like to create a native iOS app that uses the same backend, so we don't have to re-code the whole thing. From my understanding, there are two alternative routes:
1) Call Django URLs directly, which then invoke view functions. Within each function, create an HttpResponse with encoded JSON data and send that back.
2) Create a REST service on the Django server with something like Tastypie. However, aside from doing straightforward GET calls to an object, I don't see how we can call custom functions in our Django models from Tastypie. Can we even do that?
I find it surprising that there is not a lot of information about consuming a web service from iOS with existing backends like Django or RoR. For example, I know that Instagram uses Django, but how do they communicate from iOS to their servers?!
Thanks a lot!

I am currently working on an iOS app for iPhone, with Django / Tastypie on the backend. We do both 1 and 2. The resources are offered REST-style (after auth) via Tastypie, and any custom function calls (for example, creating a new user) are handled by views.py at various REST endpoints, which return JSON.
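A minimal sketch of such an endpoint (the names and payload fields here are illustrative, not from our actual app):

    # views.py -- a plain Django view acting as an RPC-style JSON endpoint.
    import json

    from django.contrib.auth.models import User
    from django.http import HttpResponse
    from django.views.decorators.csrf import csrf_exempt
    from django.views.decorators.http import require_POST

    @csrf_exempt  # the app authenticates differently; adjust to your scheme
    @require_POST
    def create_user(request):
        data = json.loads(request.body)
        user = User.objects.create_user(
            username=data['username'],
            password=data['password'],
        )
        payload = {'id': user.pk, 'username': user.username}
        return HttpResponse(json.dumps(payload), content_type='application/json')

Hook it up to a URL such as /api/users/create/ and the iOS client simply POSTs JSON to it.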

When you can, you should use a common way of doing things instead of reinventing the wheel. REST is a standard architectural style for distributed systems, and it works very well when you work with entities/objects.
If you have an API where you interact with entities, a REST interface is recommended, as you propose in 2). In Python you have Tastypie or the newer Django REST Framework, either of which does almost all the work.
If you have an API where you interact with services, like a login, then you should build an RPC service: basically a function with remote access, as you describe in 1).
Normally you will need both approaches in a robust application. And YES, it is possible to do that. I agree with #sampson-chen; we are doing the same: a REST interface with Tastypie, and other methods done with custom RPC services.
The performance in our case is still good, but it mostly depends on the methods you call inside your services, for example a DB query. You have a lot of ways to improve speed, for example using Celery to queue heavy jobs.
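For instance, a heavy call can be moved out of the request cycle with a task along these lines (a minimal sketch; the task name and body are hypothetical):

    # tasks.py -- a hypothetical heavy job pushed onto the Celery queue.
    from celery import shared_task

    @shared_task
    def rebuild_user_report(user_id):
        # expensive DB queries / aggregation go here
        ...

The service endpoint then just calls rebuild_user_report.delay(user_id) and returns immediately.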
Hope it helps.

REST APIs, while very useful, limit you to the GET, POST, PUT, and DELETE actions, which are performed on resources. This can make it difficult to express other kinds of actions, such as sending an email. There are a few ways I've found to handle this within Django/Tastypie:
Issue a PUT/PATCH request on an existing resource, setting a flag that lets your backend know to trigger an action. You can detect whether a flag was set inside post_save signal handlers (use django-model-utils' FieldTracker to see if a field was changed from False to True); this also helps make sure your application logic works the same outside your REST API (such as changes via the admin site, a Celery task, an HTML-based view, or the Python shell).
Create a non-ORM Resource (e.g. /api/v1/email/) and override the post_list() method, calling your function there (see the sketch after this list).
As mentioned elsewhere, create a subordinate resource (/api/v1/myresource/send/).
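To make the non-ORM approach concrete, here is a minimal sketch of such a resource; EmailResource and send_email are made-up names, and the auth class is a placeholder:

    # api.py -- a non-ORM Tastypie resource whose POST triggers an action.
    import json

    from django.http import HttpResponse
    from tastypie.authentication import Authentication
    from tastypie.resources import Resource

    def send_email(to, subject):
        """Stand-in for your real action (e.g. django.core.mail.send_mail)."""

    class EmailResource(Resource):
        class Meta:
            resource_name = 'email'        # exposed at /api/v1/email/
            allowed_methods = ['post']     # an action endpoint, not an entity
            authentication = Authentication()  # replace with your auth scheme

        def post_list(self, request, **kwargs):
            data = json.loads(request.body)
            send_email(to=data['to'], subject=data['subject'])
            return HttpResponse(status=202)  # accepted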

IMAP Server Facade - how to make one?

I have implemented a custom email server and web client. The server is just a REST API (similar to Google's Gmail API) that uses a third party (SendGrid) for sending and receiving. The emails are stored in a database. The web client just talks to the REST API for sending and receiving.
The problem with this approach is that it doesn't implement IMAP anywhere, which makes it impossible for standard clients (Outlook, iPhone, etc.) to connect to and use our email API. This limits customers to using only our client for email.
What I need is some sort of IMAP Server "facade" that will manage the connections to clients and make calls to my REST API for actually handling the requests (get email, send email, etc.).
How can an IMAP facade be implemented? Is there maybe a way to take an existing mail server, gut it, and point all its "events" to making calls to my API?
tl;dr: write your gateway in Perl; use Net::IMAP::Server; override Net::IMAP::Server::Mailbox; and use one of the many Perl REST clients to talk to your server.
Your best bet for doing this quickly, while maintaining a reasonable amount of code security, is with Perl. You'll need two Perl modules. The first is Net::IMAP::Server, and here is the GitHub repository for that module. This is a standards-compliant RFC 3501 server that was purposely designed to have a configurable mail store. You will override the default Net::IMAP::Server::Mailbox implementation with your own code that talks to your custom email backend.
For your second module, choose your favorite Perl module(s) for speaking to your REST server. Your choice depends on how much fine-grained control you want over the construction and delivery of the REST messages.
Fortunately, here you have tons of choices. One possibility is Eixo::REST, which has a GitHub repository here. Eixo::REST seems to deal well with asynchronous vs. synchronous REST API calls, but it doesn't provide a lot of control over X509 key management. Depending on how googley your API is, there's also the REST::Google module. Interestingly, this family also has a REST::Google::Apps::EmailSettings module, specifically for setting Gmail-specific funkiness like labels and languages. Lastly, the REST::Consumer module seems to encapsulate a lot of HTTPS-specific things like timeout and authentication as parameters to Perl object instantiation.
If you use these existing frameworks, then about 90% of the necessary code should already be done for you.
Don't do this by hacking Dovecot or any other mail server written in C or C++. If you hack together a mail server quickly using a compiled language, your server will sooner or later experience all the joy of buffer overflows and stack smashing and everything else that the Internet does to fuck over mail servers. Get it working safely first, then optimize later.
(This is basically my comment again, but elaborated quite a bit more.)
Some IMAP servers, most notably Dovecot, are structured such that the file access is in a separate module with a defined interface. Dovecot isn't the only one, but it's by far the most popular and its backend interface is known to be appropriate, so I'd take that absent specific concerns.
There already exist non-file modules such as imapc, which proves that it can be done. When a client opens a mailbox backed by imapc: Dovecot parses the client's IMAP commands and calls the message access functions in imapc; imapc issues new IMAP commands to the backend server and parses the responses into C structs for Dovecot; Dovecot then fashions new IMAP responses and returns them to the client.
I suggest that you take the Dovecot source, look at src/lib-storage/inbox/index/imapc and the other backends in that directory, and implement one that speaks your REST API as a client.
Since you're familiar with .NET, I would suggest hacking either of the following implementations of IMAPv4 servers to your liking:
Lumisoft Mail Server - a very old project indeed (let's call it "mature", huh?). Don't be too turned off by the decade-old website and the lack of a GitHub link - the source is provided under "other downloads".
McNNTP - also an older project and with a major focus on NNTP (as the name says) but very close to what you're trying to achieve in terms of the IMAP component. Take a look, you'll probably find this a good starting point.

Node.js service poller

I have an MVC web app that uses a jQuery web request to generate the user's notifications in a perceived-async way. The notifications are built on request by each user on the site.
However, I have been asked to make the notifications readily available as they happen.
Traditionally I would do this using a Windows service that called the same web method over HTTP, but I was thinking this might be good candidate functionality for Node.js.
Is there any example code for calling an HTTP method in a loop, and would that scale well?
I found this package, and it seems to do what I needed:
https://npmjs.org/package/node-cron-jobs
I also came across "forever", which runs Node as a child process that can be re-spawned if there are errors.
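For reference, the core of such a poller is just a timer wrapped around an HTTP request. Sketched here in Python for brevity (in Node the same shape is setInterval plus an HTTP client; the URL and interval are made up):

    # poller.py -- call a web method in a loop and hand off the results.
    import time

    import requests  # pip install requests

    POLL_URL = 'https://example.com/notifications/check'  # hypothetical endpoint

    while True:
        try:
            resp = requests.get(POLL_URL, timeout=10)
            resp.raise_for_status()
            print(resp.json())          # dispatch notifications here
        except requests.RequestException as exc:
            print('poll failed:', exc)  # log and keep going
        time.sleep(60)                  # poll once a minute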
If you're already running on ASP.NET MVC, take a look at SignalR. It was written by Microsoft, is supported by them, and provides functionality similar to the Socket.IO/Node.js stack.

Communication between Rails apps

I have built two Rails apps that need to communicate and send files between each other. For example, one Rails app would send a request to view a table in the other app's database. The other app would then render JSON of that table and send it back. I would also like one app to send a text file stored in its public directory to the other app's public directory.
I have never done anything like this so I don't even know where to begin. Any help would be appreciated. Thanks!
Your requirement is common to almost all web apps, irrespective of Rails; communicating with each other is required by most modern web apps. But there is one small point you need to get hold of:
Web apps should not directly access each other's internal data (such as tables), even if they are built in the same language (in this case Rails) by the same developer.
That is where web services come into play: you should expose your data through web services, so that not only your Rails application but any app that knows how to consume a web service can benefit.
Coming back to your question: Rails supports REST web services out of the box, so do some googling about web services and REST web services with Rails.
HTH
As a starting point, look at ActiveResource (see the Railscast and the docs).
Message queuing systems such as RabbitMQ may be used to communicate things internally between different apps, such as a "mailer" app and a main "hub" application.
Alternatively, you can use a shared connection to something like Redis: stick things onto a "queue" in one app and read them for processing from the other.
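A minimal sketch of that queue idea, shown in Python for brevity (the Ruby redis gem exposes the same LPUSH/BRPOP commands; the queue name and payload are made up):

    import redis  # pip install redis

    r = redis.Redis()

    # App A: push a job onto the shared queue.
    r.lpush('mailer:jobs', '{"to": "user@example.com", "template": "welcome"}')

    # App B: block until a job arrives, then process it.
    queue_name, payload = r.brpop('mailer:jobs')
    print(payload)  # b'{"to": "user@example.com", "template": "welcome"}'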
In recent Rails versions, it is rather easy to develop API-only applications. In Rails core master there was even a special application type for these apps briefly (until it got yanked again). But it is still available as a plugin and will probably one day actually become part of Rails core again. See http://blog.wyeworks.com/2012/4/20/rails-for-api-applications-rails-api-released for more information.
To actually develop and maintain the API of the backend service, and to make sure both backend and frontend have the same understanding of the resources, you can use ROAR, which is a great way to build APIs.
Generally, you should fully define your backend application with an API. Trying to be clever and to skip some of the design steps will only bring you headaches in the long run...
Check out Morpheus. It lets you create RESTful services and use familiar ActiveRecord syntax in the client.

Communicating between Node.js and an ASP.NET MVC Application

I have an existing complex website built using ASP.NET MVC, including a database backend, a data layer, and the web UI layer. Rebuilding this website in another language is not a feasible option.
There are some UI elements on some views (client side) that would benefit from live interactivity, involving both push and pull, so rather than implement some kind of custom long-polling or websocket server in ASP.NET, I am looking to leverage Node.js for Windows and Socket.IO.
My problem is that I need two-way communication between both applications. Each user should only be able to receive data once they are authorised on the ASP.NET website, so I first need communication for this. Secondly, once certain events occur on the ASP.NET website, I want to immediately push this data to the Node server, to be broadcast to specific users or groups of users. Thirdly, I would like any data sent to the Node.js server to be pushed to the ASP.NET website for processing, as this is where all our business logic lies. The sole reason for adding Node.js is to gain the ability to push data directly to the client; I do not want to build any business logic into it (or as little as possible).
I would like to know the fastest method of two-way push communication between Node.js and ASP.NET. The only good option I'm aware of so far is to create a special listener on a specific port on the Node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method? I also know that you could use a database in between, but surely this would need to be polled and would be less efficient? Both applications will be running on the same machine under a Visual Studio project.
Many thanks for any help you can provide.
I'm not an ASP.NET expert, but I think there are multiple ways you can achieve this:
1) As you said, you could make Node listen on a specific port for data and then react based on the data received (TCP)
2) You can make POST requests to Node.js (HTTP) and also send an auth key in the process to be extra-secure. As in 1), Node would react to the data you send.
3) Use something like Redis for pub/sub: send messages from ASP.NET (pub) and get them on the Node.js side (sub). This is even better if you want to scale your app across multiple machines, etc.
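A minimal sketch of option 3, shown in Python for brevity (on the actual stack this would be a Redis client on each side, e.g. StackExchange.Redis for ASP.NET and node_redis for Node; the channel name is made up):

    import redis  # pip install redis

    # Publisher side (the ASP.NET app in this architecture):
    redis.Redis().publish('user-events', '{"user_id": 42, "event": "login"}')

    # Subscriber side (the Node app in this architecture):
    sub = redis.Redis().pubsub()
    sub.subscribe('user-events')
    for message in sub.listen():
        if message['type'] == 'message':
            print(message['data'])  # broadcast to the right socket clients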
"The only good option I'm aware of so far is to create a special listener on a specific port on the Node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method?"
You can look at the Redis pub/sub model, where the ASP.NET MVC application and Node.js communicate through separate channels to achieve full-duplex communication. You can also try CouchDB change notifications.
"I also know that you could use a database in between, but surely this would need to be polled and would be less efficient?"
The former techniques do not require you to poll for changes; instead, they notify you when a change happens or a channel message arrives.

How to provide your app with a network API

I am going to write a Ruby application that implements a video conversion workflow consisting of multiple audio and video encoding/processing steps.
The application interface has two core features:
queueing new videos
monitoring the progress for each video
The user can access these features using a website written in Ruby on Rails.
The challenge is this: I want to make the workflow app a self-sufficient application, not dependent on the existence of the web view.
To enable this separation I think that adding a network API to the workflow application is a good solution because this allows the workflow app to reside on a different server than the web server.
My question is: Which solution do you suggest for such a network API?
A few options are:
implement a simple TCP server and invent my own string-based API
use some sort of REST API (I don't know if this is appropriate for this situation)
some sort of web-services solution (SOAP, XML-RPC)
another existing framework
Feel free to share your thoughts on this.
I would suggest two things:
First, use REST as your API. This allows you to write one core application with both a user interface and an API for outside applications to use.
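As a rough illustration of that shape, sketched in Python/Flask for brevity (a Sinatra or Rails equivalent is analogous; the endpoints and fields are made up):

    # A toy REST API covering the two core features: queueing a new video
    # and monitoring its progress. Job storage is an in-memory stand-in.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    jobs = {}

    @app.route('/videos', methods=['POST'])
    def queue_video():
        job_id = len(jobs) + 1
        jobs[job_id] = {'source': request.json['source'], 'progress': 0}
        return jsonify({'id': job_id}), 201   # queue a new video

    @app.route('/videos/<int:job_id>', methods=['GET'])
    def video_status(job_id):
        return jsonify(jobs[job_id])          # monitor its progress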
Second, take a look at PandaStream. It's a Merb application that encodes videos from multiple formats into Flash. It has a REST API, and there's even a Rails plugin so you can integrate it with your application. It might be a good example codebase, or even a replacement for the one you're trying to build.
Hope my answer helped,
Mike
