I have two sites: my main site and a help site with documentation. The main site is Rails, but the other is simply a WordPress-like blog. Currently I pull it into the main site with an iframe, but is there a good way to pull in the HTML from the other site as some sort of cross-domain (sub-domain, actually) partial? Or should I just stick with what works?
If the data sources were all on the same domain, you would be able to use straight AJAX to fetch your supplemental content and graft it onto your page. But since the data comes from a different domain, the same-origin policy built into web browsers will prevent that from working.
A popular workaround is called JSONP, which lets you fetch the data from any cooperating server. Your implementation might look something like this (using jQuery):
    // jQuery replaces the "?" in callback=? with a generated function name
    // and loads the response as a script, side-stepping the same-origin policy
    $.getJSON("http://my.website.com/pageX?callback=?", function(data) {
      $("#help").append(data); // graft the returned help markup into the page
    });
The only hitch is that the data returned by your server must be wrapped in a JavaScript function call. For example, if your data was:

    <h1>Topic Foo</h1>
Then your server must respond to the JSONP request like this, using whatever callback name jQuery passed along in the callback parameter:

    callbackFunction("<h1>Topic Foo</h1>")
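Since the main site is Rails, the responding action might look roughly like this; the controller and action names here are hypothetical:

    # Hypothetical Rails action behind http://my.website.com/pageX
    class HelpController < ApplicationController
      def page_x
        content = "<h1>Topic Foo</h1>" # the HTML fragment to hand back
        # Wrap the payload in the callback name jQuery generated and
        # sent along in the callback parameter
        render js: "#{params[:callback]}(#{content.to_json})"
      end
    end

In a real app you would also want to validate params[:callback] rather than echoing it blindly, since the response is executed as script in the browser.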
Snip.ly nicely checks if the entered web address can be used in an iframe.
I'd like to replicate it in Ruby. Looking through their code, they send an AJAX request to their server, and that's where they do the validation.
Even after extensive googling, I couldn't find anything that could help me accomplish that.
My use case is that we let users add news listings to their page, which are shown in iframes, and we would like to show a listing only if the entered URL can be used in an iframe.
You can figure out some cases by checking the X-Frame-Options header. But as you mentioned in the comments, it does not work all the time.
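Since you want Ruby, here is a minimal sketch of that header check; note it only covers X-Frame-Options, and a Content-Security-Policy frame-ancestors directive can also block framing:

    require "net/http"
    require "uri"

    # Returns false when the response headers clearly forbid framing.
    # A missing header is no guarantee the page will render in an iframe.
    def frameable?(url)
      response = Net::HTTP.get_response(URI(url))
      xfo = response["X-Frame-Options"].to_s.upcase
      !xfo.include?("DENY") && !xfo.include?("SAMEORIGIN")
    end

    frameable?("https://example.com/") # => true or false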
In my experience, it's best to side-step the problem altogether.
If you reverse-proxy your request through your Rails server, then you can display pretty much anything in your iframe, all the time.
Following is an example of the process. I'm assuming that your server is your-server.com and the user wants to list a page on user.com/list. The way it works would be:
1. Set an iframe's src to https://your-server.com/proxy?url=https://user.com/list
2. Intercept the request and extract the url: https://user.com/list
3. Perform an HTTP request on https://user.com/list to fetch the content
4. Return it to the browser as if it came from your own server
This approach works pretty much all the time, but it then has other limitations:
- you should reverse proxy any asset on that page that has a relative url; otherwise the css/images may be broken
- you must handle ajax requests on that page
You can fix these as well by transforming the HTML before step 4; for example, rewriting relative asset URLs, as sketched below.
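The transform for relative URLs might look roughly like this; Nokogiri is an assumption here, and the selector list is deliberately minimal:

    require "nokogiri"
    require "uri"
    require "cgi"

    # Rewrite relative asset URLs so they are fetched through the proxy too
    def rewrite_relative_urls(html, original_base, proxy_base)
      doc = Nokogiri::HTML(html)
      doc.css("img[src], script[src], link[href]").each do |node|
        attr  = node["src"] ? "src" : "href"
        value = node[attr].to_s
        next if value.empty? || value.start_with?("http://", "https://", "//")
        absolute = URI.join(original_base, value).to_s
        node[attr] = "#{proxy_base}?url=#{CGI.escape(absolute)}"
      end
      doc.to_html
    end

The selectors only cover the common cases; CSS files can also contain url(...) references that would need the same treatment.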
You could use https://github.com/waterlink/rack-reverse-proxy for steps 2 and 3, instead of implementing your own reverse proxy.
You could set it up using the following code in config/application.rb:
    config.middleware.insert(0, Rack::ReverseProxy) do
      reverse_proxy_options timeout: 10 # avoids waiting for pages that take forever to load
      reverse_proxy(/proxy\?url=(.*)/, '$1') # reverse proxy on the url parameter
    end
What are the pros and cons of using:
PathLocationStrategy - the default "HTML 5 pushState" style.
HashLocationStrategy - the "hash URL" style.
For instance, using HashLocationStrategy will prevent scrolling to an element by its #id, but some third-party plugins require HashLocationStrategy or the hashbang #! in order to work on AJAX websites.
I would like to know which one offers more for a webapp.
For me, the main difference is that PathLocationStrategy requires configuration on the server side: all the paths configured in @RouteConfig must be redirected to the main HTML page of your Angular 2 application. Otherwise you will get 404 errors when reloading your application in the browser or accessing it via a particular URL.
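For example, assuming the Angular app were served by a Rails backend (the backend choice here is purely hypothetical), the redirect could be a catch-all route:

    # config/routes.rb -- hypothetical Rails backend serving the Angular app
    Rails.application.routes.draw do
      # Any API routes would go first so the catch-all doesn't swallow them.
      # Every other path falls through to the page that bootstraps Angular,
      # so deep links and browser reloads stop returning 404s.
      get "*path", to: "pages#index", format: false
    end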
Here is a question that could give you some hints about this:
When I refresh my website I get a 404. This is with Angular2 and firebase.
Hope it helps you,
Thierry
A # fragment can only be processed on the client; servers just ignore it. This can cause problems with search engines (SEO), and redirects can cause redundant page reloads.
This page https://github.com/browserstate/history.js/wiki/Intelligent-State-Handling has a detailed explanation, though some of the arguments don't apply to Angular applications (for example, "doesn't work with JS disabled").
The "disadvantage" of HTML5 pushState is that it requires server support, as explained by Thierry.
According to the official docs:
When the router navigates to a new component view, it updates the browser's location and history with a URL for that view. This is a strictly local URL. The browser shouldn't send this URL to the server and should not reload the page.
PathLocationStrategy
Modern HTML5 browsers support history.pushState, a technique that changes a browser's location and history without triggering a server page request. The router can compose a "natural" URL that is indistinguishable from one that would otherwise require a page load.
Here's the HTML5 pushState style URL that routes to the xyz component: localhost:4200/xyz/
HashLocationStrategy
Older browsers send page requests to the server when the location URL changes unless the change occurs after a # (called the hash). Routers can take advantage of this exception by composing in-application route URLs with hashes.
Here's a hash style URL that routes to the xyz component: localhost:4200/src/#/xyz/
I would like to know which one offers more for a webapp.
Almost all Angular projects should use the default HTML5 style because:
It produces URLs that are easier for users to understand.
It preserves the option to do server-side rendering later.
Rendering critical pages on the server is a technique that can greatly improve perceived responsiveness when the app first loads. An app that would otherwise take ten or more seconds to start could be rendered on the server and delivered to the user's device in less than a second.
This option is only available if application URLs look like normal web URLs without hashes (#) in the middle.
Stick with the default unless you have a compelling reason to resort to hash routes.
So I'm writing a post on my wall and type a URL into the main body of the post. As soon as I finish the URL, Facebook creates a little section underneath which has the title, description, and an image from the URL I typed.
Without getting too in-depth, how is this done, and what is the best way to make something similar myself?
jQuery (or some other framework that lets you do Ajax easily) to communicate between browser client and webserver
PHP/ASP.NET/Python (or some other scripting framework on the backend) to fetch the url
Facebook also has a metadata specification (the Open Graph protocol) you might be interested in, which lets developers further define what gets shown in the Facebook preview.
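For illustration, here is a rough sketch of the server-side fetch-and-parse step in Ruby; Nokogiri is an assumption, and any HTML parser would do:

    require "net/http"
    require "uri"
    require "nokogiri"

    # Fetch the page and pull out the pieces a link preview needs,
    # preferring Open Graph meta tags and falling back to <title>
    def link_preview(url)
      doc = Nokogiri::HTML(Net::HTTP.get(URI(url)))
      og = lambda do |prop|
        tag = doc.at("meta[property='og:#{prop}']")
        tag && tag["content"]
      end
      title_tag = doc.at("title")
      {
        title:       og.call("title") || (title_tag && title_tag.text),
        description: og.call("description"),
        image:       og.call("image")
      }
    end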
I believe Facebook is written in PHP, and PHP does this easily.
fopen can be used to access files on other sites. There are other functions, but this will get you started. Then it's a matter of parsing the HTML you get from the URL to extract what you want.
http://php.net/manual/en/function.fopen.php
You have a couple of choices: you can fetch it using AJAX from the client, or you can fetch it from your server.
If you are doing it from your server in ASP.NET, you can use HttpWebRequest.
FB does an asynchronous JavaScript call to fetch that data without reloading the window you're on. Look up AJAX; libraries like jQuery make this easy: http://api.jquery.com/category/ajax/
I'm building an ASP.NET MVC2 website that lets users store and analyze data about goods found on various online trade sites. When a user is filling out a form to create or edit an item, he should have a button "Import data" that automatically fills some fields based on data from a third-party website.
The question is: what should this button do under the hood?
I see at least 2 possible solutions.
First: do the import on the client side using AJAX and jQuery's load method.
I tried it in IE8 and received a browser warning popup about insecure script actions. Of course, that is completely unacceptable.
Second: add a method ImportData(string URL) to the ItemController class. It is called via AJAX, does the import and data processing server-side, and returns the result to the client as JSON.
I tried it and received a server exception, (503) Server Unavailable, when loading the HTML data into an XmlDocument. Also, I have a feeling that dealing with HTML that is not well-formed (missing closing tags, etc.) will be a huge pain. Any ideas how to parse such HTML documents?
Unfortunately, you can't do cross-site loading using JavaScript without JSONP. This is a security issue. Your best bet would be to AJAX a request to one of your controller's actions and have it do the web request and return the result to the client.
As far as the 503 Server Unavailable goes, does this happen on every request? It sounds like you're parsing information from WoW Armory. They throttle web requests and will ban you after a certain amount of time.
Use http://htmlagilitypack.codeplex.com/ to process HTML on the server. Or regexes. Or string.IndexOf. Or import MSHTML via an Interop library and use that. Do not load HTML into XML documents unless you're absolutely sure it's pure XHTML.
Also, try to see if the third-party websites provide more direct access to the data: XML, REST, or web services.
Trying to figure out a way where I can pass some data/fields from a web page back into my application. This needs to work on Windows/Linux/Mac, so I can't use a DLL or ActiveX. Any ideas?
Here's the flow:
1. The application gathers some data and then sends it to a web page using POST; the page is either embedded in the app or pops up in a new IE window.
2. The web page performs some services and then needs to relay the results back to the application.
The only way I can think of to do this is to have the page write the results locally, in a cookie or something like that, and have the application monitor for a specific file in that folder.
Alternatively, make a web service that the application hits after passing control to the page; when the page is done, the web service will return the data. This sounds like it might have some performance drawbacks.
Can anyone suggest any better solutions for this?
Thanks
My suggestion:
Break the processing logic out of the web page into a separate assembly. You can then create a Web Service that handles all of the processing without needing to pass control over to a page.
Your application can then call the Web Service directly and then serialize the results and work with the data quite easily.
Update
Since the page is supplied by a third party, you obviously can't break anything out. The next best thing would be to handle the entire web request internally in your application (rather than popping up a new window).
With this method, you can get the raw HTTP response (and page markup) and work with it directly. You can then parse the Response stream and gather the required data from it.
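In rough terms, the flow might look like this (sketched in Ruby; the endpoint is made up):

    require "net/http"
    require "uri"

    # Post the gathered data ourselves instead of popping a browser window,
    # then read the page's response directly
    uri = URI("https://third-party.example.com/service") # hypothetical endpoint
    response = Net::HTTP.post_form(uri, "field1" => "value1", "field2" => "value2")
    markup = response.body # raw page markup, ready to be parsed for the results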
When performing an HTTP request, you should be able to retrieve the text returned by the page. For instance, if your HTTP POST were to hit a Java servlet, the doPost() method would be fired; you would then perform your actions and could use the PrintWriter from the response object (PrintWriter out = response.getWriter();) to write text back to the calling application. I'm not sure if this helps?
The fact that the web page is hosted by a third party, and they need to be doing the processing on their servers, is important to this question.
I like your idea of having the app call a web service after it passes the data to the third-party web page. You can always call the web service asynchronously if you're worried about blocking your application while waiting for the results.
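For instance, a rough sketch of that asynchronous call in Ruby (the endpoint and token below are made up):

    require "net/http"
    require "uri"
    require "json"

    # Poll the results webservice in a background thread so the app
    # stays responsive while the third-party page finishes its work
    poller = Thread.new do
      uri = URI("https://your-server.example.com/results?token=abc123") # hypothetical
      loop do
        response = Net::HTTP.get_response(uri)
        break JSON.parse(response.body) if response.is_a?(Net::HTTPOK)
        sleep 2 # back off between polls
      end
    end

    results = poller.value # join the thread and collect the parsed results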
Another option is for your application to implement an XML-RPC server that can be called from the web page, using PHP, Python, or whatever you use to build the website.
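A minimal sketch of the application side in Ruby (the port and method name are made up):

    require "xmlrpc/server" # stdlib in older Rubies, the "xmlrpc" gem in newer ones

    # The application runs a small XML-RPC server; the web page's backend
    # posts the results to it when processing is done
    server = XMLRPC::Server.new(8080, "localhost")

    server.add_handler("results.submit") do |payload|
      puts "Received: #{payload.inspect}" # payload: whatever fields the page sends
      "ok"
    end

    server.serve # blocks; run it in a background thread in a real app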
A REST server will do the job also...