Angular 2 - file upload via POST

I need to implement image upload to my web server in my Angular 2 app. Can anybody give me some guidance on how to achieve this?
These are the prerequisites:
An ASMX web service communicating in JSON.
The POST method used for communication.
JPEG / PNG up to 1 MB in size.
The concept I wanted to follow (but failed):
Load the content of the JPEG into a variable, encode it using Base64, and POST it to the ASMX service, which accepts two parameters (a token for authentication and the encoded data).
What exactly my problem is
The web service was the easy part; it is done and working. But I can't manage to get the file content for encoding. I used this:
component.html
...
<input type="file" (change)="fileChangeEvent($event)" />
...
component.ts
private fileChangeEvent(fileInput: any) {
let image = fileInput.target.files[0] as File;
...
}
As you have probably guessed, the problem is with the File class, because it provides only basic info about the file (name, size, last modified, ...) but not its content. Or at least I don't know how to get it. I also checked other questions here on SO, but all of the answers had something special that did not meet my requirements. And maybe I'm just blind, but I can't see where the content is actually retrieved.
So, is there anybody, who is able to provide me some guidelines to follow?
Thank you very much in advance.

I had left this question open for more experienced folks who might be able to answer it. No answer came, though, and after some research and a modified search phrase I found the answer myself yesterday: there is a FileReader type which can be used to read the content of a file. Here is the source of the answer:
Getting byte array through input type = file
Thanks to the original answer, I now know how to do it.
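For completeness, a minimal sketch of how FileReader can produce the Base64 payload; the upload() helper and its token handling are my own assumptions, not part of the original question:
private fileChangeEvent(fileInput: any) {
  const image = fileInput.target.files[0] as File;
  const reader = new FileReader();
  reader.onload = () => {
    // readAsDataURL yields e.g. "data:image/png;base64,iVBORw0..."
    const dataUrl = reader.result as string;
    const base64Data = dataUrl.split(',')[1]; // strip the "data:...;base64," prefix
    this.upload(base64Data); // hypothetical helper that POSTs { token, data } to the ASMX service
  };
  reader.readAsDataURL(image);
}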

Related

How to get http tag text by id using lua

There is a webpage parser which takes a page containing several tags in a certain structure, where divs are badly nested. I need to extract a certain div element, and copy it and all its content to a new HTML file.
Since I am new to Lua, I may need basic clarification even for things that might seem simple.
Thanks,
The ease of data extraction is going to depend largely on the page itself. If the page uses the exact same tag information throughout its entirety, extraction will be much more difficult than if it has named tags.
If you're able to find a version of the page that returns JSON, you're that much better off. Here's a snippet from something I wrote to grab definitions from a webpage that did not offer JSON:
local actualword, definition = string.match(wayup,"<html.-<td class='word'>%c(.-)%c</td>.-<div class=\"definition\">(.-)</div>")
Essentially, this code searched down the page until it found the class "word", and took the word after it (%c is the pattern for control characters). It continued on to "definition" and captured that, as well.
As you can see, it's a bit convoluted, but I had the luck of having specifically named tags for what I wanted.
This is edited to fit your comment. As a side note I should have mentioned before: if you're familiar with regular expressions, Lua patterns follow a similar model for capturing what you need. In this case, the pattern captures the string in its totality:
local data = string.match(page, "(<div id=\"aa\"><div>.-</div>.-</div>)")
It's rarely the fault of the language, but rather the webpage itself, that makes it hard to mine data. Since webpages can literally have hundreds of lines of code, it's hard to pinpoint exactly what you want without coming across garbage information. That's why I prefer a simplified result such as JSON: JSON modules are available for Lua that can encode/decode it, letting you get at precisely the information you want.

Dart Read support for binary files

There is some sample code for an HTTP server in the dart:io section.
Now I want to distribute images with this server. To achieve this, I read the requested image file and send its content to the client via request.response.write().
The problem is the format of the read data:
I can read the image file either as a (16-bit) String or as a byte array. Neither of them is compatible with the raw 8-bit array which I have to send to the client.
May someone help me?
There are several kinds of write methods on the response class:
write
writeCharCode
add
While write sends the data 'as seen', writeCharCode transforms the data back to raw format. However, writeCharCode prepends a "magic byte" (0xC2), which corrupts the data. (That 0xC2 is a UTF-8 lead byte: the string-oriented write methods encode their output as UTF-8, so any char code above 0x7F becomes two bytes.)
The add(List<int>) method, on the other hand, accepts the result of readAsBytes() unchanged, which is exactly what is needed.
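A minimal sketch of serving an image that way; the file path and content type here are assumptions:
import 'dart:io';

Future<void> serveImage(HttpRequest request) async {
  final file = File('image.png'); // assumed path
  final bytes = await file.readAsBytes();
  request.response.headers.contentType = ContentType('image', 'png');
  request.response.add(bytes); // raw bytes, no string encoding applied
  await request.response.close();
}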
Best regards,
Alex

Efficient design of crawler4J to get data

I am trying to get data from various websites. After searching on Stack Overflow, I am using crawler4j, as many suggested it. Below is my understanding/design:
1. Get sitemap.xml from robots.txt.
2. If sitemap.xml is not available in robots.txt, look for sitemap.xml directly.
3. Now, get the list of all URLs from sitemap.xml.
4. Now, fetch the content of all the above URLs.
5. If sitemap.xml is also not available, then scan entire website.
Now, can you please let me know: is crawler4j able to do steps 1, 2 and 3? If so, can you please guide me on how to do it?
Please suggest a better design if one is available (assuming no feeds are available).
Thanks
Venkat
Crawler4j is not able to perform steps 1, 2 and 3; however, it performs quite well for steps 4 and 5. My advice would be to use a Java HTTP client, such as the one from Apache HttpComponents, to get the sitemap. Parse the XML using any Java XML parser and add the URLs into a collection. Then populate your crawler4j seeds with the list:
for (String url : sitemapUrls) {
    controller.addSeed(url);
}
controller.start(YourCrawler.class, numberOfCrawlers);
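For illustration, a sketch of the fetch-and-parse step; it uses the JDK 11+ built-in HttpClient rather than HttpComponents to stay dependency-free, and the sitemap URL handling and method name are my own assumptions:
import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Fetches sitemap.xml and collects every <loc> entry as a seed URL.
static List<String> fetchSitemapUrls(String sitemapUrl) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(URI.create(sitemapUrl)).build();
    String xml = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

    Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

    List<String> urls = new ArrayList<>();
    NodeList locs = doc.getElementsByTagName("loc");
    for (int i = 0; i < locs.getLength(); i++) {
        urls.add(locs.item(i).getTextContent().trim());
    }
    return urls;
}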
I have never used crawler4j, so take my opinion with a grain of salt:
I think it can be done by the crawler, but it looks like you would have to modify some code. Specifically, take a look at RobotstxtParser.java and HostDirectives.java. You would have to modify the parser to extract the sitemap entry and add a new field in the directives to return sitemap.xml. Step 3 can be done in the fetcher if no directives were returned from robots.txt.
However, I'm not sure exactly what you gain by checking the sitemap: it seems a useless thing to do unless you're looking for something specific.

Youtube API missing field

I have a problem getting statistics information from the YouTube Data API. I make a request to http://gdata.youtube.com/feeds/api/videos?q=video_id&alt=json. It works for some video IDs, but for others the response does not contain 'entry', 'yt$statistics' or 'gd$rating'. For example:
zLcbznigfs is missing 'entry', aVfN6XjACDY is missing 'yt$statistics', and fjhQ9Kf4iHk is missing 'gd$rating'.
After digging around, I found the solution: use &alt=atom instead of &alt=json, which means reading the Atom feed rather than the JSON one (and the Python feedparser module is excellent for doing this). I have checked this with several video IDs and it works fine.
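A minimal sketch of reading the Atom feed with feedparser; the video ID is a placeholder:
import feedparser

# Same endpoint as before, but requesting Atom instead of JSON.
url = "http://gdata.youtube.com/feeds/api/videos?q=VIDEO_ID&alt=atom"
feed = feedparser.parse(url)

for entry in feed.entries:
    print(entry.title)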
Hope that helps. Thanks.

How to parse a .xfa file

Hoping that someone has some info on how to parse an .xfa file. I can parse CSV or XML files just fine, but an .xfa one has come along and I'm not familiar with the format. It looks like a tab-delimited body with column metadata at the top.
Anyone dealt with these before or can give me a steer on how to parse them?
I use VB.NET, but the language of any solution isn't too relevant.
Much appreciated.
Mmm, looks like nobody has a clue. The problem is that .xfa doesn't look like a "standard" extension: after all, anybody can create their own extension names, from .xyz to .something...
I looked around a bit and, unsurprisingly given the 'x', found an XML format with this extension, but not much more.
Indicating where this kind of file comes from, and what kind of data it holds, might help. Or not.
You describe the file as being a simple TSV (tab-separated values) file with a header. That is quite trivial to parse with a tokenizer or some regex, so I am not sure where you are stuck.
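For instance, a minimal sketch in Python (since you said the language isn't too relevant); the file name is a placeholder:
import csv

# Read a tab-delimited file whose first row holds the column metadata.
with open("data.xfa", newline="") as f:
    reader = csv.DictReader(f, delimiter="\t")
    for row in reader:
        print(row)  # each row is a dict keyed by the header columns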
I think you might be talking about this: http://en.wikipedia.org/wiki/XFA_forms
This seemed to be a page that was designed to deal with that template: http://www.w3.org/1999/05/XFA/xfa-template-19990614
That information should be enough to get the ball rolling. If that fails then you can always analyse the file itself for patterns and go from there. I don't see it being too tricky.
Anyway, I hope that helps.
P.S. If you could provide a link to that .xfa we could probably give you more help.
The original post says the content looks like "tab delimited body with column metadata at the top". An XFA form doesn't look anything like that - XFA forms typically use a *.xdp extension and are XML.
Check out the Adobe page:
http://partners.adobe.com/public/developer/xml/index_arch.html
(Adobe XML Forms Architecture, currently 1400 pages)
Let LiveCycle/Acrobat parse it for you.
