I'm currently working on a drilldown filter in MVC, but I don't really know how to make it as fast and flexible as possible. Here is an example of the kind of thing I mean:
click here
Now my question is, how do you think they are doing this?
I really have no idea how to build this kind of drilldown, but it seems they use some kind of hash that they save for quick querying.
Maybe (pseudo)code anyone?
If you're willing to give up a little browser compatibility (it won't work on ancient and some console-only browsers, but then again neither will anything else), jQuery DataTables is a great way to build drilldowns.
Here is the main site, and here is a good example of using a dropdown select to filter.
Basically, all you have to do is throw all the data into a large <table> and use JavaScript on the client side to filter it. The big benefit is that there is no latency when you make a selection, unlike the site you linked.
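To make that concrete in an ASP.NET MVC setting (which the question uses), here's a minimal sketch of the server side of this approach; the Product type and sample data are invented for illustration. The controller hands the entire data set to the view, which renders it as one plain <table> for DataTables to filter in the browser.

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class Product
{
    public string Name { get; set; }
    public string Category { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : Controller
{
    // Illustrative in-memory data; in practice this would come from your database.
    private static readonly List<Product> AllProducts = new List<Product>
    {
        new Product { Name = "Widget", Category = "Tools", Price = 9.99m },
        new Product { Name = "Gadget", Category = "Toys",  Price = 4.50m },
    };

    public ActionResult Index()
    {
        // No server-side filtering: the whole data set is handed to the view,
        // which renders it as one large <table> for DataTables to enhance.
        return View(AllProducts);
    }
}
```

On the client, a single DataTables initialization call on that table then gives you searching, sorting and paging with no further round trips to the server.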
I don't think it's a good idea to put all the data on the client side.
It is more reasonable to leave the filtering to the database server (of course, it depends on your data size).
To speed up delivery of the filtered data, you can store the results in a cache server, keyed by a hash of the query (or by the query text itself); hitting the cache is faster than hitting the database.
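As a rough illustration of that caching idea in .NET terms (the question's stack), here is a hypothetical helper; the class name, the choice of the query text as the cache key, and the ten-minute expiry are assumptions of mine, not anything prescribed by the answer.

```csharp
using System;
using System.Runtime.Caching;   // requires a reference to System.Runtime.Caching

public static class FilterCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Hypothetical helper: the filtered result set is cached under the query
    // text itself (a hash of it would work just as well), so repeated
    // drilldowns skip the database entirely.
    public static T GetOrAdd<T>(string filterQuery, Func<T> loadFromDatabase) where T : class
    {
        var cached = Cache.Get(filterQuery) as T;
        if (cached != null)
            return cached;                               // cache hit: no database round trip

        var result = loadFromDatabase();                 // cache miss: run the real query once
        Cache.Set(filterQuery, result, DateTimeOffset.UtcNow.AddMinutes(10));
        return result;
    }
}
```

Usage would be something like var rows = FilterCache.GetOrAdd(filterQuery, () => RunFilteredQuery(filterQuery)); where RunFilteredQuery stands in for your real database call.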
The answer, after carefully looking at how they do it:
They send a normal HTTP POST to the server with a query string of all the choices.
The server responds with a redirect (an HTTP GET) to a URL that contains a hash of that query.
The server caches the hash together with the query, so the next time the same query is requested it is served faster.
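Translated into a hypothetical ASP.NET MVC controller (the action names, the single choices parameter, and the cache policy are all invented for illustration), that flow might look roughly like this:

```csharp
using System;
using System.Runtime.Caching;
using System.Security.Cryptography;
using System.Text;
using System.Web.Mvc;

public class DrilldownController : Controller
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // 1. The form POSTs the full set of filter choices as a query string.
    [HttpPost]
    public ActionResult Filter(string choices)
    {
        // 2. Hash the choices and remember which query the hash stands for.
        string hash = Hash(choices);
        Cache.Set(hash, choices, DateTimeOffset.UtcNow.AddHours(1));

        // 3. Redirect to a GET URL that carries only the hash.
        return RedirectToAction("Results", new { h = hash });
    }

    [HttpGet]
    public ActionResult Results(string h)
    {
        // 4. Look the query up by its hash; only on a cache miss would you
        //    need to rebuild and re-run it.
        var choices = Cache.Get(h) as string;
        ViewBag.Choices = choices;
        return View();
    }

    private static string Hash(string text)
    {
        using (var sha = SHA1.Create())
            return BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(text))).Replace("-", "");
    }
}
```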
Thanks everyone for your "useful" responses.
I want to fetch the content of a website, but to get the correct content it is necessary to change an HTML input/scroll field on the site.
Any idea how to manage this with Xcode?
Thanks a lot!!
Lars
If you want to retrieve the HTML that you get after filling in an HTML form, you have to identify precisely what the series of requests that fetches the data looks like. And be careful, because it's often not as simple as just looking at the request the form in question generates: unfortunately, it is sometimes a complex series of requests (e.g. retrieving the original HTML often also silently retrieves some critical hidden form fields and/or cookies).
Bottom line: to reverse engineer the required HTTP requests, you often have to pore through the HTML source and/or watch the requests with something like Charles. It often takes quite a bit of time to do this with complicated sites.
Before you invest a lot of time here, though, you should first see if the web site provider's Terms of Service permit such usage. They often strictly prohibit this sort of practice. It's much better to contact the web site provider and see if they provide a web service to retrieve the data. That's far easier and will result in a far more robust interface for your app.
But if you're forced to parse HTML programmatically, I'd refer you to How to Parse HTML on iOS on Ray Wenderlich's site.
Good morning,
I have a Kendo UI data source handling my OData requests. The data source sits client side and, as I filter it, it makes the necessary calls for me, and I have some custom markup displaying the results. I have a limit of ten records being returned and this is working fine.
On the back end I am using Web API 4.0.30506.0 and OData 5.6 (we are stuck with .NET 4). This resolves the queries nicely. No problems here.
The problem is that the client now wants the filtered data exported server side (the data eventually going into a PDF or Excel report). Does anybody have any ideas on how to carry the filter settings over to another call?
I have had some success chucking the whole ODataQueryOptions into the cache for each user, but this feels dirty and inefficient.
All ideas welcome.
After some serious Google bashing I found this NuGet package:
https://www.nuget.org/packages/LinqToQuerystring/
Examples here : http://linqtoquerystring.net/examples.html
I can now pull the filter value out as a string and reapply it as necessary.
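For what it's worth, the way I'd expect that to look in code is roughly the following; the Order type is invented, and the LinqToQuerystring extension-method signature is based on the package's examples page, so double-check it against the version you install.

```csharp
using System.Linq;
using LinqToQuerystring;   // the NuGet package above

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public static class ReportData
{
    // The grid's OData query (e.g. "?$filter=Status eq 'Open'") is kept as a
    // plain string and re-applied to the same IQueryable when the export runs.
    public static IQueryable<Order> ApplySavedFilter(IQueryable<Order> orders, string savedODataQuery)
    {
        return orders.LinqToQuerystring(savedODataQuery);
    }
}
```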
I'm trying to send some data to a form on a site where I'm a member using cURL, but when I look at the headers being sent, they seem to have been encrypted.
Is there a way I can get around this by making the computer/server visit the site, actually add the data to the inputs on the form, and then hit submit, so that it generates the correct data and posts the form?
You have got a few options:
reverse engineer the JavaScript that does the encryption (or possibly just encoding) process
get a browser engine (e.g. the Gecko engine), and add some scripting to it to fill in the forms and push the submit button - of course you would need JavaScript support within the page itself
parse the HTML using an HTML parser, feed the JavaScript in it to a JavaScript runtime with the correct libraries, fill in the "form" and hit the submit button
It's probably easiest to go for the first option. The JavaScript has to be delivered in the clear for the browser to execute it, but it may take some time to reverse engineer, as it is likely obfuscated.
You can use a framework that automates user interaction with web pages, such as Selenium.
That way you don't have to reverse engineer anything.
Selenium has bindings in various languages, including Python and Java.
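Selenium also has C# bindings (the Selenium.WebDriver NuGet package). A minimal, hypothetical sketch, with the URL and element names made up:

```csharp
using OpenQA.Selenium;          // NuGet: Selenium.WebDriver
using OpenQA.Selenium.Chrome;

class FormSubmitter
{
    static void Main()
    {
        // Drive a real browser so the site's own JavaScript performs the
        // encoding/encryption exactly as it would for a human user.
        // The URL and field names below are placeholders.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/member/form");

            driver.FindElement(By.Name("username")).SendKeys("me");
            driver.FindElement(By.Name("message")).SendKeys("hello world");
            driver.FindElement(By.CssSelector("input[type='submit']")).Click();
        }
    }
}
```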
Provided the JavaScript is visible on the website in question, you should be able to simply copy and paste their encryption routines and prepare the headers exactly as they do.
A hacky fix, if you can isolate the function that encodes the data you type into the form, is to use something like PyV8 to execute the JS inside Python.
Use AutoHotkey and have it drive the browser normally. It can read from files and perform repetitive tasks indefinitely. You can also scope it so it only acts within that application, which means you can have the browser minimized and still have it perform the actions.
You seem to be running into the problem of them encrypting the headers and such, so why not use that to your advantage? You're still pushing the same data in, but now you're working around their system, with little to no side effect for you.
I'm trying to implement a jqxGrid, using sorting and paging on the server. I don't have access to the server itself. Taking an example from:
http://www.jqwidgets.com/jquery-widgets-documentation/documentation/phpintegration/php-server-side-grid-paging-and-sorting.htm
I implemented the client side and want to use a mock static file as the response, but I can't figure out what kind of JSON response format is meant to be returned.
How do I 'catch' and edit/format the JSON response from the server? (Where in the code?)
Is there a working example anywhere of a jqxGrid with sorting done on the server that is viewable online? (So I can observe the data structure returned.)
What do you mean exactly? Do you want to edit the data itself or the view of the data? If it is the latter, you can use cellsrendered. For a live demo, look here. You can also change the value here, since you have access to the value field, but it works per column.
Yes, look here.
http://www.jqwidgets.com/jquery-widgets-documentation/
You can find information about the data source there.
http://www.jqwidgets.com/jquery-widgets-documentation/
There (in the right-hand menu) you can find:
PHP Integration
ASP.NET Integration
Those cover whatever you need (sorting, filtering, and so on).
I understand what progressive enhancement is; I'm just fuzzy on some of the details of actually pulling it off. Of course, that could be because I'm looking at it in the wrong way. Let me try to explain my difficulty with a hypothetical:
An ASP.NET MVC site. I have a view with tabbed navigation. Each tab is for a movie category/genre and displays 5-10 links to movies in that category. The movie data is obtained through Netflix's OData service.
My initial thought is to use Ajax to pull and parse the JSON from the proper OData GET requests when each tab is clicked. How would I provide a non-JavaScript version of that? Is it even possible?
For simpler requests where JSON isn't necessary - like, say, having a user log into the system - I see how I could simply set a cookie and dynamically change the page based on it to reflect the change. But what if I need to return and parse JSON? How do I provide an alternative?
The deal with progressive enhancement is that your server side must be fully capable of generating every last bit of HTML that appears in all of your pages. This is obvious, since otherwise (if JS is turned off) there will be no part of your application capable of doing said rendering.
Since the server side must know how to render everything, it doesn't make much sense to generate things (DOM elements/HTML) on the client side from JSON responses the server gives you. Why repeat yourself?
This brings us to the logical conclusion that when doing dynamic updates on the client, you need to get ready-made HTML from the server (since the rendering logic is over there) and insert it into the DOM as appropriate. You are then free to work on the newly inserted elements with jQuery and enhance them all you want.
So: forget about parsing JSON on the client; otherwise you're locking yourself out of progressive enhancement. If you want to call a third party, have the server be your intermediary: call the server with all the information it needs to call the third party, and get ready-made HTML back.
If you do this, then the server can of course provide non-JS versions of everything on your site with no problem. Total non-reliance on JS achieved.
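As a sketch of that in the question's ASP.NET MVC setting (the action and helper names are mine, and the placeholder data stands in for the real Netflix OData call), the server owns both the third-party call and the HTML rendering:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class MoviesController : Controller
{
    // One tab = one URL that works with or without JavaScript.
    public ActionResult Genre(string genre)
    {
        // The server, not the browser, talks to the third-party OData feed.
        IEnumerable<string> titles = GetMoviesForGenre(genre);
        return View(titles);
    }

    private IEnumerable<string> GetMoviesForGenre(string genre)
    {
        // Placeholder standing in for the real OData request and parsing.
        return new[] { "Movie A", "Movie B", "Movie C" };
    }
}
```

Without JavaScript, each tab is an ordinary link to /Movies/Genre?genre=Comedy and the user gets a full page; with JavaScript, a small jQuery handler can request the same URL and inject the returned markup into the tab panel, so the rendering logic lives in exactly one place. Pairing this with Request.IsAjaxRequest(), as mentioned below, lets one action serve either the full page or just the fragment.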
There is no JSON without JS, by definition (JavaScript Object Notation). Without JS you won't be making AJAX calls; your pages will render as-is, just like old-school sites.
If you need to do this progressively, you will have to call the OData service server side, provide .NET objects to the site in ViewData or your view model, and have your views/partials render them.
In ASP.NET MVC actions, the HttpContext available via the controller gives you this.HttpContext.Request.IsAjaxRequest() (an extension method), which you can use to decide whether to return a full view, just JSON data, or whatever type of ActionResult you want. This can be an excellent time saver for building progressive-enhancement-style sites.
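A minimal sketch of that pattern (the controller, view names, and stand-in data are all illustrative):

```csharp
using System.Web.Mvc;

public class CatalogController : Controller
{
    // One action, two presentations: a full page for normal requests,
    // a bare HTML fragment for Ajax requests.
    public ActionResult Genre(string genre)
    {
        var model = new[] { "Movie A", "Movie B", "Movie C" }; // stand-in data

        if (Request.IsAjaxRequest())
        {
            // Progressive enhancement path: return just the tab's markup
            // so client script can inject it into the page.
            return PartialView("_GenreTab", model);
        }

        // No-JS path: render the complete page with layout.
        return View("Genre", model);
    }
}
```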