I am creating a movie/series focused web application. Users will be able to search through movies/series and subscribe to them. I want to provide the search engine through an external API: my app's database holds no information about movies/series, so when a user types a title in the search field I need to send a request to the API to fetch the movies/series that match the given pattern. The problem is that at this point I don't see a clean way of implementing this functionality. I thought about a separate controller, i.e. Searcher. That's fine, but I would like to put the search field in my app's layout so it is available everywhere, not only for users on a particular URL.
First, do you have an API key for the TMDb API? (https://github.com/18Months/themoviedb-api)
Make it a remote form and catch the results through the Ajax callback ajax:complete.
When a request is made, show a loader in the view while you wait for the API call.
Call your SearchController through its route.
In your controller, search your DB first; if you do not find a match, make an API call, save the result in the DB, and serve it.
(It would be better to call the API in the background and serve results through WebSockets, but that would be over-engineering right now.)
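As a minimal sketch of that DB-first, API-fallback lookup, with the storage and the API client abstracted behind simple interfaces (MovieSearch and the store/api objects are hypothetical names, not the themoviedb-api gem's actual interface):

```ruby
# Sketch of the controller's search logic: check the local DB first,
# fall back to the external API, and cache what comes back.
class MovieSearch
  def initialize(store:, api:)
    @store = store  # anything responding to find_by_title / save
    @api   = api    # anything responding to search(query)
  end

  # Returns cached results when present; otherwise fetches from the
  # API and saves the results locally for next time.
  def search(query)
    cached = @store.find_by_title(query)
    return cached unless cached.nil? || cached.empty?

    results = @api.search(query)
    results.each { |movie| @store.save(movie) }
    results
  end
end
```

In a Rails SearchController action you would call something like this and render the results as JSON for the ajax:complete handler to display.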
While it seems it is not possible for an app to register incoming calls, I wondered if it would be possible to access the call history of the current user?
It looks like there is a Graph API to get information of a specific call by id, I didn't find anything about getting the call history or the last call.
This could be a workaround for our approach: We want to enable the employees to make several notes on incoming calls and reference them with existing items in another web application.
Is there any way to achieve what I'm trying to do?
There seems to be a new preview way for an application to subscribe to call events, though there is still no way to get the full history.
With the Application permission CallRecords.Read.All, you can subscribe to new call records (POST /beta/subscriptions).
There are more details here https://learn.microsoft.com/en-us/graph/cloud-communications-callrecords but it suggests you could capture the incoming call and allow notes as you want.
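As a sketch, the subscription request body would look something like the following (the endpoint and resource path come from the Graph cloud communications docs linked above; the notification URL is a placeholder for a webhook you would have to host yourself, and the exact maximum expiration time should be checked against the docs):

```ruby
require "json"
require "time"

# Builds the JSON body for POST https://graph.microsoft.com/beta/subscriptions.
# changeType "created" fires when a new call record becomes available.
def call_record_subscription(notification_url:)
  {
    "changeType"         => "created",
    "notificationUrl"    => notification_url,            # your webhook endpoint
    "resource"           => "/communications/callRecords",
    "expirationDateTime" => (Time.now.utc + 3600).iso8601 # renew before expiry
  }.to_json
end
```

You would send this with an app-only access token that carries the CallRecords.Read.All permission.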
I am working with the Eventbrite API and I have a need that I am trying to figure out the best approach for. Right now, I have an event that people will be registering for. At the final step of the registration process, I need to ask them some questions that are specific to my event. Sadly, these questions are data-driven from my website, so I am unable to use the packaged surveys in Eventbrite.
In a perfect world, I would use the basic flow detailed in the Website Workflow of the EB documentation, ending upon the "3rd Party Next Steps" step (redirect method).
http://developer.eventbrite.com/doc/workflows/
Upon landing on that page, I would like to be able to access the order data that we just created in order to update my database and to send emails to each person who purchased a seat. This email would contain the information needed to kick off the survey portion of my registration process.
Is this possible in the current API? Does the redirect post any data back to the 3rd party site? I saw a few SO posts that gave a few keywords that could be included in the redirect URL (is there a comprehensive list?). If so, is there a way to use that data to look up order information for that order only?
Right now, my only other alternative is to set up a polling service that would pull EB API data, check for new values, and then kick off the process on intervals. This would be pretty noisy for all parties involved, create delay for my attendees, and I would like to avoid it if possible. Thoughts?
Thanks!
Here are the full set of parameters which we support after an attendee places an order:
http://yoursite.com/?eid=$event_id&attid=$attendee_id&oid=$order_id
It's possible that order_id and attendee_id will not be numeric values, in which case they return a value of "unknown." You'll always have the event_id, though.
If you want to get order-specific data after redirecting an attendee to your site, you can use the event_list_attendees method, along with the modified_after parameter. You'll still have to look through the result set for the new order_id, but the result set will be much smaller and easier to navigate. You can get more information here: http://developer.eventbrite.com/doc/events/event_list_attendees/
You can pass the order_id in your redirect URL in order to solve this.
When you define a redirect URL, Eventbrite will automatically swap in the order_id value in place of the string "$order_id".
http://your3rdpartywebsite.com/welcome_back/?order_id=$order_id
or:
http://your3rdpartywebsite.com/welcome_back/$order_id/
When the user completes their transaction, they will be redirected to your external site, as shown here: http://developer.eventbrite.com/doc/workflows/
When your post-transaction landing page is loaded, grab the order_id from the request URL, and call the event_list_attendees API method to find the order information in the response.
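As a sketch of that landing-page lookup, here is the filtering step once you have the parsed event_list_attendees response and the order_id from the redirect URL (the field names follow the old v1 JSON shape and may differ in your API version):

```ruby
# Given the parsed event_list_attendees response and the order_id taken
# from the redirect URL, pick out the attendees belonging to that order.
def attendees_for_order(response, order_id)
  response.fetch("attendees", [])
          .map { |entry| entry["attendee"] }
          .select { |attendee| attendee["order_id"].to_s == order_id.to_s }
end
```

Calling the API with modified_after set to a recent timestamp keeps the response small, so this filter only has to scan a handful of records.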
Hi, I am a student doing my academic project and I need some guidance in completing it.
My project is based on the Grails framework; it searches for books across 3 different bookstores and gives the price from all 3 stores. I need help with the searching part:
how to direct the search to those bookstores once the user types in the required book.
Thanks in advance.
You need to give more details. By searching bookstores, do you mean searching in a database, or are these online stores like Amazon etc.?
I would find out if these online bookstores have APIs, or if you have a choice, select the online bookstores that do have APIs that you can use to do your searching. For example, Amazon has a "Product Advertising API" that can be used to perform searching of its catalogue (see http://docs.amazonwebservices.com/AWSECommerceService/latest/DG). You usually have to register as an affiliate to get access to these sorts of things.
Once you have several online bookstores that are accessible via APIs, it is relatively easy to write some grails code to call them, and coordinate the results. APIs usually take the form of Web requests, either REST or SOAP (e.g. see Amazon - AnatomyOfaRESTRequest). Groovy's HTTPBuilder can be used to call and consume the bookstores' API web services if you can use simple REST, or I believe there are a couple of Grails plugins (e.g. REST Client builder). For SOAP, consider the Grails CXF Client Grails plugin.
You could do the searches on the APIs one by one, or if you want to get more advanced, you could try calling all 3 APIs at the same time asynchronously using the new servlet 3.0 async feature (see how to use from Grails 2.0.x: Grails Web Features - scroll to "Servlet 3.0 Async Features"). You would probably need to coordinate this via the DB, and perhaps poll through AJAX on your result page to check when results come in.
So the sequence would be as follows:
1. The user submits a search request from a form on a page to the server.
2. The server creates and saves a DB object to track the request, kicks off the API calls asynchronously (i.e. so the request is not blocked), then returns a page back to the user.
3. The "pending results" page is shown to the user, and a periodic AJAX update is used to check the progress of the results.
4. Meanwhile your API calls are executing. When they return, hopefully with results, they update the DB object (or better, a related object) to store the results and the status of the call.
5. Eventually all your results will be in the DB, and your periodic AJAX check to the server, which is querying the results, will be able to return them to the page. It could wait for all of the calls to the 3 bookstores to finish, or it could update the page as and when it gets results back.
6. Your AJAX call updates the page to show the results to the user.
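The answer's stack is Grails/Groovy, but the coordination object those steps describe can be sketched in any language; here is a Ruby version (all names hypothetical) of the DB-backed tracker that each API call reports into and that the periodic AJAX check reads from:

```ruby
# Tracks one user search across several bookstore APIs. Each API call
# reports back via record_result; the AJAX poll reads complete?/results.
class BookSearchRequest
  def initialize(stores)
    @pending = stores.dup
    @results = {}
    @lock    = Mutex.new   # the async calls may finish concurrently
  end

  # Called by each bookstore API call when it finishes.
  def record_result(store, price)
    @lock.synchronize do
      @pending.delete(store)
      @results[store] = price
    end
  end

  # The AJAX endpoint can poll this to decide whether to keep waiting.
  def complete?
    @lock.synchronize { @pending.empty? }
  end

  def results
    @lock.synchronize { @results.dup }
  end
end
```

In the real app this state would live in a domain object in the DB rather than in memory, so that the polling request (a separate HTTP request) can see it.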
Note if your bookstore doesn't have an API, you might have to consider "web scraping" the results straight from bookstore's website. This is a bit harder and can be quite brittle since web pages obviously change frequently. I have used Geb (http://www.gebish.org/) to automate the browsing along with some simple string matching to pick out things I needed. Also remember to check terms & conditions of the website involved since sometimes scraping is specifically not allowed.
Also note that the above is a server oriented method of accomplishing this kind of thing. You could do it purely on the client (browser), calling out to the webservices using AJAX and processing via JavaScript. But I'm a server man :)
I need to develop an application which should help me get all the statuses/messages from different services like Twitter, Facebook etc. into my application, and also, when I post a message, it should get updated in all the services. I am using Authlogic for authentication. Can anyone suggest what gems/plug-ins I can use?
I need API help to get all the tweets/messages displayed in my application, and also ways to post messages to the corresponding services from my application. Can anyone help me from a design point of view?
Walk through what you'd want to do in your head. Imagine the working site, imagine your webapp working before you start. So your user logs in (handled by authlogic) and sees a textbox called "What are you doing right now?". The user fills in a status message and clicks "post". The status message appears at the top of their previously posted messages.
Start with the easy part. Create a class that posts to two services. Use the twitter gem and rfacebook to post to two already defined services. In the future, you'll want to let the user associate services to their account and you would iterate through the associated services and post the message to each. Once you have this working, you can refactor or polish the UI a bit to round out this feature. I personally would do the "add a social media account to my profile" feature towards the end.
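A sketch of that "post to every associated service" class, with the real clients (the twitter gem, rfacebook) hidden behind a common post interface; the adapter shape here is hypothetical, not either gem's actual API:

```ruby
# Fans one status update out to every service associated with the user.
# Each adapter wraps a real client (e.g. the twitter gem) and exposes
# a #name and a #post(message) method.
class StatusBroadcaster
  def initialize(adapters)
    @adapters = adapters
  end

  # Returns a hash of service name => true/false so the UI can show
  # which services accepted the update.
  def broadcast(message)
    @adapters.each_with_object({}) do |adapter, outcome|
      outcome[adapter.name] = begin
        adapter.post(message)
        true
      rescue StandardError
        false  # one failing service shouldn't block the others
      end
    end
  end
end
```

Later, "add a social media account to my profile" just becomes building this adapter list from the user's associated accounts instead of hard-coding two services.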
Harder, strangely enough, is the reading of the data, because you're going to have to figure out how to store it. You could store nothing, but I suspect you'd run into API limits just searching all the time (you could design around this). I would keep a little cache of posts associated with the user's social media account. In this way, the data model would look like this:
A user has many social media accounts.
A social media account has many posts. (cache)
Of course, now you need to schedule the caching of the posts. This could be done manually, based on an event (like when they login) or time based. So when the update happens, you load up the posts for that social media account and the user will see the posts the next time they hit the page. For real-time push to the client's browser while they stare at the screen, use faye (non-trivial) and ajax to pull the new posts to the top of the social media stream view.
The time based one is tricky because you'd either have to have a cron job run or have rails handle it all with a gem like clockwork. But then you have to leave rails running. I've also solved this by having a class in /lib do all the work and a simple web call kicks off the update. But it wasn't in a multi-user use case. So that might not work. In any case, you'll want to have some nice reusable code for these problems since update requests can come from many different sources.
You'll also have to deal with the API limits. When pulling down content from twitter, you won't get everything. That will just have to be known by the user or you'll have to indicate a "break in time" somehow.
The UI should be pretty easy (functionally anyway), because you know which source the post/content is coming from. It'd be easy to throw a little icon next to the post to display which social media site it's coming from.
Anyway, good luck, sounds like a fun project.
How would you achieve this route in Rails 3, and in the last stable version, 2.3.9 or so?
Explained
I don't really care about the followers action. What I'm really after is how to create '!#' in the routing.
Also, What's the point of this. Is it syntax or semantics?
Rails doesn't directly get anything after the #. Instead, the index page checks that value with JavaScript and makes an AJAX request to the server based on the URL after the #. What routes they use internally to handle that AJAX request, I am not sure.
The point is to have a Javascript powered interface, where everyone is on the same "page" but the data in the hashtag allows it to load any custom data on the fly, and without loading a whole new page if you decide to view a different user, for instance.
The hash part is never sent to the server, but it is a common practice to manipulate the hash to maintain history and bookmarking for AJAX applications. The only problem is that by using a hash to avoid page reloads, search engines are left behind.
If you had a site with some links,
http://example.com/#home
http://example.com/#movies
http://example.com/#songs
Your AJAXy JavaScript application sees the #home, #movies, and #songs, and knows what kind of data it must load from the server and everything works fine.
However, when a search engine tries to open the same URL, the hash is discarded, and it always sends them to http://example.com/. As a result the inner pages of your site - home, movies, and songs never get indexed because there was no way to get to them until now.
Google has created an AJAX crawling specification, or more like a contract, that allows sites to take full advantage of AJAX while still reaping the benefits of indexing by search engines. You can read the spec if you want, but the gist of it is a translation process of taking everything that appears after #! and adding it as a querystring parameter.
So if your AJAX links were using #!, then a search engine would translate a URL like,
http://example.com/#!movies
to
http://example.com/?_escaped_fragment_=movies
Your server is supposed to look at this _escaped_fragment_ parameter and respond the same way that your AJAX does.
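That translation can be sketched as a pair of pure functions, framework aside (the fragment-to-content mapping on the server side is up to you; only the _escaped_fragment_ parameter name comes from the spec):

```ruby
# Google's crawler turns http://example.com/#!movies into
# http://example.com/?_escaped_fragment_=movies. The server should answer
# that query with the same content the AJAX view would have loaded.

# What the server reads from the crawler's request parameters.
def fragment_from_params(params)
  params["_escaped_fragment_"]
end

# What the crawler does to a #! URL before requesting it.
def crawler_url_for(hashbang_url)
  base, fragment = hashbang_url.split("#!", 2)
  return hashbang_url if fragment.nil?   # no hashbang, nothing to translate
  separator = base.include?("?") ? "&" : "?"
  "#{base}#{separator}_escaped_fragment_=#{fragment}"
end
```

(The spec also requires the fragment value to be percent-encoded; that step is omitted here for brevity.)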
Note that HTML5's History interface now provides methods to change the address bar path without needing to rely upon the hash fragment to avoid page reloads.
Using the pushState method (with the corresponding popstate event for back/forward navigation),
history.pushState(null, "Movies page", "/movies");
you could directly change the URL to http://example.com/movies without causing a page refresh. Search engines can continue to use the same URL that you would be using in that case.
The part after the # in a URI is called the fragment identifier, and it is interpreted by the client, not the server. You cannot route this, because it will never leave the browser.