Asynchronous generation of PDF - error handling - asp.net-mvc

We have some code that uses a third party component to generate a PDF from a URL that we pass it (the URL being a page within our application). In the code, we instantiate the PDF generator and it creates the PDF in an asynchronous fashion.
The problem I have is that if the URL that we pass it has a problem, there is no indication of this from the PDF generator; we just get a PDF that contains a 404 error page, or our custom error page.
I need some way, within my controller, to first call this URL (which is another view) and check that it does not error out, prior to calling the PDF generation. Can anybody point me in the direction of how I might go about doing this?

You can make an HttpWebRequest to the URL first, then check the HttpWebResponse.StatusCode.
If you get a 404 or a 500 (etc.), then you have a problem.

Do an HTTP request first against the URL. I use WatiN for all my URL interactions, which I find hides the details well enough to let me validate a page prior to use. For this, however, you really just need an HttpWebRequest.
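A minimal sketch of that pre-flight check, assuming the controller already knows the report URL (the class and method names here are placeholders, not part of any library):

```csharp
// Sketch only: pre-flight the URL before handing it to the third-party
// PDF component, so a 404/500 page never reaches the generator.
using System;
using System.Net;

public static class PdfPreflight
{
    // Pure helper: is this status code a 2xx success?
    public static bool IsSuccessStatus(int statusCode)
    {
        return statusCode >= 200 && statusCode < 300;
    }

    // Returns true only when the URL answers with a success status.
    // Note: HttpWebRequest throws WebException for 4xx/5xx responses.
    public static bool UrlIsHealthy(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "HEAD"; // we only need the status, not the body
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return IsSuccessStatus((int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // 404, 500 etc. land here; a null Response means a network error
            var failed = ex.Response as HttpWebResponse;
            return failed != null && IsSuccessStatus((int)failed.StatusCode);
        }
    }
}
```

In the controller, call `UrlIsHealthy(reportUrl)` and return an error view instead of invoking the generator when it is false. Some servers reject HEAD requests; if yours does, fall back to a plain GET.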

Related

Generate PDF from Vue template

I need to create downloadable PDFs of a page which is rendered using Vue. The HTML-to-PDF API we're using is DocRaptor.
API built using Rails
Client built using Vue
Two types of approaches seem possible:
1. Passing in a URL to the page, which is then rendered to a PDF. Problems:
   - The page is behind our auth; do I pass in the session token in the header?
   - The page calls our API, meaning the above wouldn't even matter... I assume the page will only be fetched as raw HTML, with no JS run, in the DocRaptor POST request.
2. Passing in the raw HTML in the DocRaptor POST request, with styling. Problems:
   - We don't use server-side rendering, so we don't have access to a nice pre-rendered HTML string.
   - Figuring out how to compile Vue to raw HTML.
Am I way off the mark here?
The two options above seem like the way to go. I would love for option 1 to work, but I don't see how, which leaves me with option 2; however, no amount of googling has given me answers beyond server-side rendering. Can I even do that for single pages? I assume the whole app gets rendered.
Option 1 could work, assuming you have some sort of authentication mechanism in place (for example a short-lived token). DocRaptor does indeed execute your JavaScript, so it should work.
You can render to an invisible element on the client (or maybe even a visible one, letting the user think of it as a preview) and then use good old innerHTML:
let html = document.querySelector('#render-placeholder').innerHTML;
and then POST it to your server, which forwards it to the PDF renderer (keeping your service access tokens secret).

Fill the entire form based on selection (onchange)

I have an ingredient form and I want the user to type only the name and then have the carb, protein and fat values loaded automatically. I know I need an Ajax request, but I don't know the path to learn how to accomplish this.
Can anyone give an example or tell me where I can find one?
Learn jQuery too. jQuery is a widely used JavaScript library, and you will find a lot of help for it as well.
Coming back to the question, the $.ajax method is used to send an Ajax request from your .js file. You will find a lot of documentation on it.
If you want to send an Ajax request using Rails helpers and views, use the
remote: true
option. It will process your request as an XmlHttpRequest. You can check the type of a request by navigating to your browser's console; in the case of an Ajax request it will be of type XmlHttpRequest.
This is not a thorough answer, since you will find a lot of documentation on the internet, but it should get you started.

Transferring Result Data via HTTP Header? - Asp.Net MVC

I am trying to upload file without refreshing the page.
I have 1 form, 1 submit button, 1 file input and 1 iframe (the iframe is there to prevent refreshing).
The form sends its data via the iframe, so my form has a target attribute.
After my C# function finishes its work, I want to return result data, such as a message, isSuccess, etc.
I don't know how to return result data without using an HTTP header.
Maybe it's not even possible with an HTTP header; I don't know. I am here to learn how to do it.
Does transferring result data via an HTTP header make sense? Is it a preferable way?
Does it introduce a vulnerability?
Any other suggestions?
Thanks in advance.
Typically it's hard to get a file over the wire without a form post.
What I've done often is use an invisible iframe for a form post and then have the iframe call a function in the parent page upon load. This assumes you can't just use jQuery or a recent version of Dojo to take care of this for you.
http://viralpatel.net/blogs/ajax-style-file-uploading-using-hidden-iframe/
If you can use jQuery it's much nicer. Edit: this is under the MIT license:
http://blueimp.github.com/jQuery-File-Upload/
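To answer the original question about returning result data: with the iframe approach you don't need HTTP headers at all. The controller action can return a tiny HTML page into the hidden iframe whose script hands the result to the parent page. A sketch of building that payload (the `onUploadComplete` callback name and the JSON shape are assumptions; you would define the callback yourself in the parent page and return the string with `Content(html, "text/html")`):

```csharp
// Sketch: build the HTML an MVC action returns into the hidden iframe.
// The iframe loads it, runs the script, and the parent page receives
// { isSuccess, message } without any custom HTTP headers.
using System;

public static class IframeResult
{
    public static string Build(bool isSuccess, string message)
    {
        // Minimal hand-rolled JSON; escape quotes in the message
        string json = "{\"isSuccess\":" + (isSuccess ? "true" : "false") +
                      ",\"message\":\"" + message.Replace("\"", "\\\"") + "\"}";
        // parent.onUploadComplete is a callback you define in the parent page
        return "<script>parent.onUploadComplete(" + json + ");</script>";
    }
}
```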
Why not take advantage of the HTML5 File API? These links should point you in the right direction:
http://www.html5rocks.com/en/tutorials/file/dndfiles/
http://timothypoon.com/blog/2011/05/10/ajax-uploading-with-html5s-file-api/

Forcing a page to POST

This may be a very unusual question, but basically there's a page on another domain (that I can view, but can't edit/change) that has a button. When that button is clicked it generates some unique keys.
I need to pull those unique keys into my web service (using ASP.NET MVC3). I can get the initial HTML of the page, but how can I force the page to "click" the button so that I can get the values after the POST?
Normally, I'd reuse the code to generate keys myself, but I don't have access to the logic.
I hope this makes sense.
Use e.g. Firebug to see what POST parameters are sent with the form, and then make the same POST from your code.
For this you can use WebRequest or WebClient.
See these SO questions that will help you with how to do it:
HTTP request with post
Send POST request in C# like a web page does?
How to simulate browser HTTP POST request and capture result in C#
Then just parse the response with the technology of your choice (I would use regular expressions - Regex - or LINQ to XML if the response is well-formed XML).
Note: Keep in mind that your code will depend on a service you do not maintain, so you can run into problems if the service is unavailable or discontinued, or if the format of the POSTed form or the response changes.
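The WebClient route above can be sketched as follows. The form field names, the target markup, and the regex are assumptions for illustration; replace them with whatever Firebug's Net panel and the real response show you:

```csharp
// Sketch: replay the button's form POST with WebClient, then pull the
// generated keys out of the returned HTML.
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;
using System.Text.RegularExpressions;

public static class KeyScraper
{
    // Sends the same POST the browser would and returns the response body.
    public static string PostForm(string url, NameValueCollection fields)
    {
        using (var client = new WebClient())
        {
            byte[] raw = client.UploadValues(url, "POST", fields);
            return Encoding.UTF8.GetString(raw);
        }
    }

    // Pure helper: extract a key from markup like <span id="key">XYZ</span>.
    // The span id is hypothetical; adjust the pattern to the real page.
    public static string ExtractKey(string html)
    {
        Match m = Regex.Match(html, "<span id=\"key\">(.*?)</span>");
        return m.Success ? m.Groups[1].Value : null;
    }
}
```

Usage would look like `KeyScraper.PostForm("http://other-domain/page", new NameValueCollection { { "generate", "1" } })`, where "generate" stands in for whatever parameter the real button submits.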
This really depends on the technology on the targeted site.
If the page is a simple HTML form then you can easily send a POST. You will need to send the expected data with the POST; then you can parse the response data.
If it's not so straightforward, you will need to look into ways to automate the click. Check Selenium. You might also need to employ scraping if the results page is a mess.

what IS the og:url metatag?

I am building an iOS app that is supposed to use Open Graph objects for users.
As I see it, I need to:
1. Create object pages for each of these objects that contain all the meta tags that Facebook generates for your created objects in Get Code.
2. Use the iOS app to generate Open Graph requests that involve these objects through a single page, i.e. a PHP file that uses parameters you might send to it, and that would generate links to images, some titles, etc. (am I right?)
The thing is that the PHP file in step 2 is supposed to be the object itself, and my object needs an og:url which is either interpreted as type website (which is wrong, because my type is set to my own custom type!) or it just throws an error saying that the og:url is not valid.
I can see that Facebook scrapes whatever I give it in the og:url, so basically, why is this needed in the first place if all the meta tags are ignored?
You seem to have this correct. Basically, to publish an action against an object using the Open Graph APIs, there needs to be an object URL which, when accessed, does one of the following:
1. Contains the complete set of metadata needed to describe whatever type of object you've created, and serves this to Facebook's crawler.
2. Contains an og:url meta tag, a <link rel="canonical"> tag, or an HTTP 301 redirect pointing to a URL which does (1).
Having a PHP script which takes input parameters and returns metadata based on them is a common approach. The biggest thing to watch out for is that your og:url tag matches the input parameters, so that Facebook's crawler doesn't make a new request out to that URL instead of the one it asked for originally.
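A hypothetical object page for a custom type might serve a head like the following. The domain, paths, and property values are made up; the key point is that og:url echoes back exactly the URL that was requested:

```html
<!-- Served by e.g. object.php?id=42 — the og:url must point at that same URL,
     or the crawler will discard this page and re-fetch the one og:url names. -->
<head prefix="og: http://ogp.me/ns#">
  <meta property="og:url" content="http://example.com/object.php?id=42" />
  <meta property="og:type" content="myapp:mytype" />
  <meta property="og:title" content="Object 42" />
  <meta property="og:image" content="http://example.com/images/42.png" />
</head>
```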
og:url means: Open Graph URL (Uniform Resource Locator).
https://developers.facebook.com/docs/opengraph/tutorial/
