Using Pylons as a Web Backend

I am using Pylons for two things:
1) Serving API requests (returning JSON describing my SQLAlchemy models)
2) Running a script 24/7 that fetches flight information from the internet (over HTTP) and pushes it into my DB (again using my models).
I am NOT using Pylons as a front end, but as a back end.
What would be the best way for my script to make HTTP requests? Is urllib / urllib2 my best option here?
How would I run my script constantly, rather than on a request-serving basis? Are Celery / cron jobs what I am looking for here?
Thanks!

Regarding your first question: yes, urllib/urllib2 is probably your best bet. It has very solid functionality for making HTTP requests to other servers.
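For instance, a minimal fetch sketch (Python 2, since urllib2 is a Python 2 module; the URL and the response shape are placeholders):

import urllib2
import json

def fetch_flights(url):
    # urllib2 raises urllib2.HTTPError / urllib2.URLError on failure,
    # so the calling loop can log and retry as needed.
    request = urllib2.Request(url, headers={'User-Agent': 'flight-fetcher/0.1'})
    response = urllib2.urlopen(request, timeout=30)
    try:
        return json.loads(response.read())
    finally:
        response.close()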
Regarding your second question: Use your database. It's not super-scalable, but it's easy to implement a system where you have a flag in the database that is, essentially, an on-off switch for the application. Once that exists, make a page (with whatever security precautions you think prudent) that sets the flag and starts the application on a loop that continues indefinitely as long as the flag is set. A second page clears the flag, should you need to stop the HTTP requests without killing the entire server process. Instead of 'pages,' they could also be shell scripts or short standalone scripts. The important part is that you can implement this without requiring Celery or cron (although if you're already familiar with either, have at).
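A minimal sketch of that loop, assuming SQLAlchemy and a hypothetical one-row Setting table acting as the switch (all names here are illustrative):

import time
from sqlalchemy import Column, Boolean, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Setting(Base):
    # Hypothetical key/value table; the 'fetcher_on' row is the switch.
    __tablename__ = 'settings'
    name = Column(String(64), primary_key=True)
    value = Column(Boolean, nullable=False, default=False)

def run_fetcher(Session, fetch_and_store):
    # Loop for as long as the flag in the database stays set.
    while True:
        session = Session()
        try:
            flag = session.query(Setting).get('fetcher_on')
            should_run = flag is not None and flag.value
        finally:
            session.close()
        if not should_run:
            break              # the 'off' page cleared the flag
        fetch_and_store()      # one round of HTTP fetching + DB writes
        time.sleep(60)         # pause between rounds

The 'on' page would set the flag and kick off run_fetcher in a background thread or separate process; the 'off' page just clears the flag.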

Related

How to organize a RoR server to do some side work without blocking the main REST functionality?

I need to make HTTP requests to certain URLs every 30 minutes. Currently my RoR server handles a bunch of simple routes, and I don't want my sending service to block the main routing; in other words, it should make requests from the queue only when there is no other job. I have no idea how Ruby handles threads (I came from the JS world) and don't want to dig deep; I just want to know whether I should care about this or not. Thanks.

Mock API Requests Xcode 7 Swift Automated UI Testing

Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, the UI tests should be independent of other functionality. Is there a way to mock the response from server requests in order to test the behaviour of the UI dependent on the response? For example, if the server is down, the UI tests should still run. A quick example for login: if the password check fails, the UI should show an alert; if the login is successful, the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the map local tool you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into your local file containing the response you want it to return. Per your example:
Your login hits the endpoint yoursite.com/login.
In Charles, using the map local tool, you can route the calls hitting that endpoint to a file saved on your computer, e.g. mappedlocal.txt.
mappedlocal.txt contains the following text:
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint, your response will come back with a 404 error.
You can also use another option in Charles called "map remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.

Reading pending work from PhantomJS

I am building a pool of PhantomJS instances, and I am trying to make it so that each instance is autonomous (it fetches the next job to be done).
My concern is to choose between these two:
1. Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So I could do an HTTP GET call from PhantomJS to my Rails app, and Rails would respond with a URL that is pending to be done (most likely Rails would get that from a queue).
2. I am thinking of building a standalone Redis server that PhantomJS would access via Webdis, so Rails would push the jobs there, and the PhantomJS instances would fetch them from it directly.
I am trying to work out which would be the correct decision in terms of performance: PhantomJS hitting the Rails server (so Rails needs to get the job from the queue and send it to PhantomJS), or having PhantomJS access a Redis server directly.
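For what it's worth, option 2 boils down to one HTTP call per job, since Webdis maps Redis commands onto URLs. A sketch of the polling side in Python (PhantomJS would issue the same request from JavaScript; the host and queue name are placeholders):

import urllib2
import json

def next_job(webdis='http://localhost:7379', queue='jobs'):
    # Webdis turns GET /LPOP/jobs into the Redis command LPOP jobs
    # and returns JSON like {"LPOP": "http://url-to-parse"} or {"LPOP": null}.
    response = urllib2.urlopen('%s/LPOP/%s' % (webdis, queue))
    try:
        return json.loads(response.read())['LPOP']
    finally:
        response.close()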
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means less stuff to go through.
I'd consider developing whatever is easier to maintain. What's the ballpark req/minute? What sort of company (how funded / resource-strapped are you)?
There are also more OOTB solutions, like IronMQ, that may ease the pain.

Best way to serve files?

I'm a novice web developer with some background in programming (mostly Python).
I'm looking for some basic advice on choosing the right technology.
I need to serve files over the internet (mp3's), but I need to implement some control over access:
1. Files will be accessible only to authorized users.
2. I need to keep track of how many times a file was loaded, by whom, etc.
What might be the best technology to implement this? That is, should I learn Apache, or maybe Django? Or maybe something else?
I'm looking for a 'pointer' in the right direction.
Thanks!
R
If you need to track/control the downloads, that suggests the MP3 URLs need to be routed through a Rails controller. Very doable. At that point you can run your checks, track your stats, and send the file back.
If it's a lot of MP3's, you'd prefer not to have Rails do the actual sending of the MP3 data, as it's a waste of its time and ties up an instance. Look into X-Sendfile, where Rails can send a response header indicating the file path to send, and Apache will intercept it and do the actual sending.
https://tn123.org/mod_xsendfile/
http://rack.rubyforge.org/doc/classes/Rack/Sendfile.html
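Since you're coming from Python, the same pattern in Django terms might look roughly like this (Track and its fields are placeholder names; Apache needs mod_xsendfile enabled and XSendFile On configured):

from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

from myapp.models import Track  # hypothetical model holding file_path

@login_required
def download(request, track_id):
    track = Track.objects.get(pk=track_id)
    track.downloads.create(user=request.user)  # hypothetical stats logging

    # Empty body: Apache sees the X-Sendfile header, discards our response
    # body, and streams the file itself, freeing the app worker immediately.
    response = HttpResponse(content_type='audio/mpeg')
    response['X-Sendfile'] = track.file_path
    return response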
You could use Django with Lighttpd as the web server. With Lighttpd you can use mod_secdownload, which enables you to generate one-time-only URLs.
More info can be found here: http://redmine.lighttpd.net/projects/1/wiki/Docs_ModSecDownload
You can check for permissions in your Django (or any other) app and then redirect the user to this disposable URL if they pass the permission check.
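The disposable URL is just a hash that lighttpd re-derives and checks against its own clock. A sketch of the generating side, following the scheme described in the mod_secdownload docs (the secret and prefix must match secdownload.secret and secdownload.uri-prefix in lighttpd.conf; the values here are placeholders):

import hashlib
import time

def secdownload_url(rel_path, secret='mysecret', uri_prefix='/dl/'):
    # rel_path must start with '/', e.g. '/song.mp3'.
    timestamp = '%08x' % int(time.time())
    token = hashlib.md5((secret + rel_path + timestamp).encode()).hexdigest()
    # lighttpd rejects the link once the embedded timestamp is older
    # than secdownload.timeout, so the URL expires on its own.
    return '%s%s/%s%s' % (uri_prefix, token, timestamp, rel_path)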

Canceling a request when connection to client is lost

I noticed that in a standard grails environment, a request is always executed to the end, even when the client connection is lost and the result can't be delivered anymore.
Is there a way to configure the environment in such a way that execution of a request is canceled as soon as the client connection is lost?
Update: Thanks for the answers. Yes, most of the problems I am trying to avoid can be avoided by better coding:
caching can make nearly every page fast
a token can help to avoid submitting something twice
but there are some requests which could still consume some time. Let's take a map service as an example. Calculating a route will take some time. One solution to avoid resubmitting the request could be a "calculationInProgress" flag together with a message to the user. But then it is still possible to create a lot of sessions, and thus a lot of requests, in order to mount a DoS attack...
I am still curious: is there no way to configure the server to cancel the request? I used to develop on a system where the server behaved this way and it was great :-)
Probably there is no such way. And I'm sure Grails (and your web container) is designed to:
1. accept the incoming request
2. process it on the server side
3. send the response
If something happens during phase 2, you'll only find out about it in the send-response phase. You can write data to the HttpServletResponse yourself, handle IOException, etc., but that would be a very low-level way to do it, I think. And it will not help you cancel your DB operations while you're preparing the data to send.
Btw, it's a common pattern to use a web frontend like nginx that accepts the incoming request and handles all these problems with cancelled requests, slow requests (I guess that's the real problem?), etc.
According to your comment, it is reloads and multiple clicks that you are trying to avoid. The proper technique is to use Grails' support for handling multiple form submissions:
http://grails.org/doc/2.0.x/guide/theWebLayer.html#formtokens
