Just getting into this using http_client_sync.cpp. I added a little bit of code to use the http::file_body type.
I can't seem to figure out how to get the multipart boundaries inserted. Is there something in particular I need to do to make that happen? When I look at the POST in Wireshark, the entire multipart/form-data payload is in one big part. Relevant (I hope) snippet below, modified from the original http_client_sync.cpp.
// Set up an HTTP POST request message
http::request<http::file_body> req{http::verb::post, target, version};
// use the full host:port here
req.set(http::field::host, fullhost.c_str());
req.set(http::field::user_agent, BOOST_BEAST_VERSION_STRING);
req.set(http::field::accept,"*/*");
req.set(http::field::content_length,contentLen);
req.set(http::field::content_type,"multipart/form-data; boundary=fd1c38d86c0a42ac933e7319e51882fd");
req.body() = std::move(body);
http::request_serializer<http::file_body, http::fields> sr{req};
// Send the HTTP request to the remote host
http::write(socket, sr);
I also tried this, but the result was the same:
size_t len = http::write_header(socket, sr);
while (len != 0)
{
    len = http::write_some(socket, sr);
}
The server I'm talking to is expecting multiple parts.
Many Thanks
OK, it turns out this is possible, and it cured my issue. Probably not the "right" way to do it, but it works.
size_t len = http::write_header(socket, sr);
// where multipartHeader is a string containing the first boundary
// and the content disposition
boost::asio::write(socket, boost::asio::buffer(multipartHeader));
while (len != 0)
{
    len = http::write_some(socket, sr);
}
// where multipartTrailer is a string containing the final boundary
boost::asio::write(socket, boost::asio::buffer(multipartTrailer));
Happy to hear better solutions.
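One possible cleaner alternative (a minimal sketch, not tested against this server): build the whole multipart payload into a string and send it with http::string_body, so the boundary lines are simply part of the body Beast serializes, and no raw writes around the serializer are needed. The helper below is hypothetical; the field name "file" and filename "a.bin" are placeholders, the boundary is the one from the snippet above, and the file is read fully into memory, so it only suits uploads that fit in RAM.

#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <fstream>
#include <sstream>
#include <string>

namespace http = boost::beast::http;

// Assemble one multipart/form-data body, with the boundary lines included.
std::string make_multipart_body(const std::string& boundary,
                                const std::string& field_name,
                                const std::string& filename,
                                const std::string& file_contents)
{
    std::ostringstream body;
    body << "--" << boundary << "\r\n"
         << "Content-Disposition: form-data; name=\"" << field_name
         << "\"; filename=\"" << filename << "\"\r\n"
         << "Content-Type: application/octet-stream\r\n\r\n"
         << file_contents << "\r\n"
         << "--" << boundary << "--\r\n"; // final (closing) boundary
    return body.str();
}

// Usage inside the modified http_client_sync.cpp flow (names taken from the snippet above):
// std::ifstream f(path, std::ios::binary);
// std::string contents((std::istreambuf_iterator<char>(f)), std::istreambuf_iterator<char>());
// http::request<http::string_body> req{http::verb::post, target, version};
// req.set(http::field::host, fullhost);
// req.set(http::field::user_agent, BOOST_BEAST_VERSION_STRING);
// req.set(http::field::content_type, "multipart/form-data; boundary=fd1c38d86c0a42ac933e7319e51882fd");
// req.body() = make_multipart_body("fd1c38d86c0a42ac933e7319e51882fd", "file", "a.bin", contents);
// req.prepare_payload(); // sets Content-Length from the body
// http::write(socket, req);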
CONCLUSION:
For some reason the flow wouldn't let me convert the incoming message to a BLOB by changing the Message Domain property of the Input Node, so I added a Reset Content Descriptor node before the Compute Node with the code from the accepted answer. On the line that parses the XML and creates the XMLNSC child for the message I was getting a 'CHARACTER:Invalid wire format received' error, so I took that line out and added another Reset Content Descriptor node after the Compute Node instead. Now it parses and replaces the Unicode characters with spaces, so it no longer crashes.
Here is the code for the added Compute Node:
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
DECLARE NonPrintable BLOB X'0001020304050607080B0C0E0F101112131415161718191A1B1C1D1E1F7F808182838485868788898A8B8C8D8E8F909192939495969798999A9B9C9D9E9FA0A1A2A3A4A5A6A7A8A9AAABACADAEAFB0B1B2B3B4B5B6B7B8B9BABBBCBDBEBFC0C1C2C3C4C5C6C7C8C9CACBCCCDCECFD0D1D2D3D4D5D6D7D8D9DADBDCDDDEDFE0E1E2E3E4E5E6E7E8E9EAEBECEDEEEFF1F2F3F4F5F6F7F8F9FAFBFCFDFEFF';
DECLARE Printable BLOB X'20202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020';
DECLARE Fixed BLOB TRANSLATE(InputRoot.BLOB.BLOB, NonPrintable, Printable);
SET OutputRoot = InputRoot;
SET OutputRoot.BLOB.BLOB = Fixed;
RETURN TRUE;
END;
UPDATE:
The message is being parsed as XML using XMLNSC. I thought that would cause a problem, but it does not appear to.
Now I'm using PHP. I've created a node to plug into the legacy flow. Here's the relevant code:
class fixIncompetence {
    function evaluate ($output_assembly, $input_assembly) {
        $output_assembly->MRM = $input_assembly->MRM;
        $output_assembly->MQMD = $input_assembly->MQMD;
        $tmp = htmlentities($input_assembly->MRM->VALUE_TO_FIX, ENT_HTML5|ENT_SUBSTITUTE, 'UTF-8');
        if (!empty($tmp)) {
            $output_assembly->MRM->VALUE_TO_FIX = $tmp;
        }
        // Ensure there are no null MRM fields. MessageBroker is strict.
        foreach ($output_assembly->MRM as $key => $val) {
            if (empty($val)) {
                $output_assembly->MRM->$key = '';
            }
        }
    }
}
Right now I'm getting a vague error about read-only messages, but before that it wasn't working either.
Original Question:
For some reason I am unable to impress upon the senders of our MQ messages that smart quotes, en dashes, em dashes, and such crash our XML parser.
I managed to make a working solution with SQL queries, but it wasted too many resources. Here's the last thing I tried, but it didn't work either:
CREATE FUNCTION CLEAN(IN STR CHAR) RETURNS CHAR BEGIN
SET STR = REPLACE('–',STR,'&ndash;');
SET STR = REPLACE('—',STR,'&mdash;');
SET STR = REPLACE('·',STR,'&middot;');
SET STR = REPLACE('“',STR,'&ldquo;');
SET STR = REPLACE('”',STR,'&rdquo;');
SET STR = REPLACE('‘',STR,'&lsqo;');
SET STR = REPLACE('’',STR,'&rsquo;');
SET STR = REPLACE('•',STR,'&bull;');
SET STR = REPLACE('°',STR,'&deg;');
RETURN STR;
END;
As you can see I'm not very good at this. I have tried reading about
various ESQL string functions without much success.
So in ESQL you can use the TRANSLATE function.
The following is a snippet I use to clean up a BLOB containing non-ASCII low hex values so that it can then be cast into a usable character string.
You should be able to modify it to change your undesired characters into something more benign. Basically each hex value in NonPrintable gets translated into its positional equivalent in Printable, in this case always a full stop, i.e. X'2E' in ASCII. You'll need to make your BLOBs long enough to cover the desired range of hex values.
DECLARE NonPrintable BLOB X'000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F202122232425262728292A2B2C2D2E2F303132333435363738393A3B3C3D3E3F';
DECLARE Printable BLOB X'2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E';
SET WorkBlob = TRANSLATE(WorkBlob, NonPrintable, Printable);
BTW, if messages with invalid characters only come in every now and then, I'd probably specify BLOB on the input node and then use something similar to the following to invoke the XMLNSC parser.
CREATE LASTCHILD OF OutputRoot DOMAIN 'XMLNSC'
PARSE(InputRoot.BLOB.BLOB CCSID InputRoot.Properties.CodedCharSetId ENCODING InputRoot.Properties.Encoding);
With the exception terminal wired up, you can then correct the BLOBs of any messages containing parser-breaking invalid characters before attempting to reparse.
Finally, my best wishes, as I've had a number of battles over the years with being forced to correct invalid message content in the "Integration Layer"; after all, that's what it's meant to do.
I have a set of RDL reports hosted on the report server instance. Some of the reports render more than 100,000 records in the ReportViewer, so they take quite a long time to render. So we decided to export the content directly from the server, based on the user's input parameters for the report as well as the export file format.
The main thing here is that I do not want the user to wait until the export file is available for download. Rather, the user can submit the action and proceed to other work. In the background, the program has to export the file to some physical location. When the download is available, the user will be informed with a notification about the exported file.
I found one way in this link. I need to know what ways there are to achieve the above-mentioned functionality, as well as how to pass the input parameters for the report. Please suggest an approach.
Note: I am using XML as the datasource for the RDL reports.
EDIT
I found something useful and wrote the code below:
string path = ServerURL +"?" + _reportFolder + "ReportName&rs:Command=Render&rs:Format=PDF";
WebRequest req = WebRequest.Create(path);
string reportParametersQT = String.Empty;
req.Credentials = CredentialCache.DefaultNetworkCredentials;
WebResponse response = req.GetResponse();
Stream stream = response.GetResponseStream();
//screen.Response.Clear();
string enCodeFileName = HttpUtility.UrlEncode("fileName.pdf", System.Text.Encoding.UTF8);
// The word "attachment" in AddHeader is used to directly show the save dialog box in the browser
Response.AddHeader("content-disposition", "attachment; filename=" + enCodeFileName);
Response.BufferOutput = false; // to prevent buffering
Response.ContentType = response.ContentType;
byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
}
Response.End();
I am able to download the exported file, but I need to save the file to a physical location instead of downloading it, and I don't know how to do that.
Both of these are very easy to do. You essentially just pass the parameters in the URL that you're calling; for example, for a parameter called "LearnerList" you add &LearnerList=12345 to the URL. For exporting, add an additional parameter, rs:Format=PDF (or whatever format you want the file in), to get the report to export as a PDF instead of rendering in Report Viewer.
Here's an example URL:
https://reporting.MySite.net/ReportServer/Pages/ReportViewer.aspx?/Users+Folders/User/My+Reports/Learner+Details&rs:Format=PDF&LearnerList=202307
Read these two pages, and you should be golden:
https://msdn.microsoft.com/en-us/library/ms155391.aspx
https://msdn.microsoft.com/en-us/library/ms154040.aspx
I'm working with a TIdHTTPServer to serve files to clients, using the ResponseInfo->ServeFile function. This works fine for files that are "static", i.e., not being written by some other process. As far as I can see from the code, ServeFile internally uses a TIdReadFileExclusiveStream, which prevents me from reading a file that is being written, but I need to be able to send files that are still being written by another process.
So I moved to creating a FileStream myself and using the ContentStream property to return it to the client, but I get a 0-byte file on the client (for any file, being written or not), and I can't see what I'm missing or doing wrong. Here is the code I'm using in the OnCommandGet event handler:
AResponseInfo->ContentStream = new TFileStream(path, fmOpenRead | fmShareDenyNone);
AResponseInfo->ContentStream->Position = 0;
AResponseInfo->ContentLength = AResponseInfo->ContentStream->Size;
AResponseInfo->ResponseNo = 200;
AResponseInfo->WriteHeader();
AResponseInfo->WriteContent();
The ContentLength property at this point has a valid value (i.e., the file size when calling ContentStream->Size), and that's what I would like to send to the client, even if the file changes in between.
I have tried removing the WriteContent() call, and also WriteHeader(), but the results are the same. I searched for some examples, but the few I found are more or less the same as this code, so I don't know what's wrong. Most examples don't include the WriteContent() call, which is why I tried removing it, but there doesn't seem to be any difference.
As a side note: the files being written take 24 hours to finish writing, but that's to be expected from the client side: I just need the bytes already written at the time of the request (even somewhat less is valid). The files will never get deleted: they will just keep getting bigger.
Any ideas?
Update
Using Fiddler, I get some warnings about protocol violations that may be related to this. For instance, I get:
Content-Length mismatch: Response Header indicated 111,628,288 bytes, but server sent 41 bytes
The content length is correct (it's the file size), but I don't know what I'm doing wrong that makes the app send just 41 bytes.
WriteHeader() and WriteContent() expect the ContentStream to be complete and unchanging at the time they are called. WriteHeader() creates a Content-Length header using the current ContentStream->Size value if the AResponseInfo->ContentLength property is -1 (you are actually setting the value yourself), and WriteContent() sends only as many bytes as the current ContentStream->Size value says. So your client is receiving 0 bytes because the file Size is still 0 at the time you are calling WriteHeader() and WriteContent().
Neither ServeFile() nor ContentStream are suitable for your needs. Since the file is being written live, you do not know the final file size when the HTTP headers are created and sent to the client. So you must use HTTP 1.1's chunked transfer coding to send the file data. That will allow you to send the file data in chunks as the file is being written, and then signal the client when the file is finished.
However, TIdHTTPServer does not natively support sending chunked responses, so you will have to implement it manually, eg:
TFileStream *fs = new TFileStream(path, fmOpenRead | fmShareDenyNone);
try
{
    AResponseInfo->ResponseNo = 200;
    AResponseInfo->TransferEncoding = "chunked";
    AResponseInfo->WriteHeader();
    TIdBytes buffer;
    buffer.Length = 1024;
    do
    {
        int NumRead = fs->Read(&buffer[0], 1024);
        if (NumRead == -1) RaiseLastOSError();
        if (NumRead == 0)
        {
            // check for EOF, unless you have another way to detect it...
            Sleep(1000);
            NumRead = fs->Read(&buffer[0], 1024);
            if (NumRead <= 0) break;
        }
        // send the current chunk: size in hex, then the data
        AContext->Connection->IOHandler->WriteLn(IntToHex(NumRead, 1));
        AContext->Connection->IOHandler->Write(buffer, NumRead);
        AContext->Connection->IOHandler->WriteLn();
    }
    while (true);
    // send the last chunk to signal EOF
    AContext->Connection->IOHandler->WriteLn("0");
    // send any trailer headers you need, if any...
    // finish the transfer encoding
    AContext->Connection->IOHandler->WriteLn();
}
__finally
{
    delete fs;
}
The final working code is:
std::unique_ptr<TFileStream> fs(new TFileStream(path, fmOpenRead | fmShareDenyNone));
fs->Position = 0;
__int64 size = fs->Size;
AResponseInfo->ContentLength = size;
AResponseInfo->ResponseNo = 200;
AResponseInfo->WriteHeader();
AContext->Connection->IOHandler->Write(fs.get(), size);
This allows the client to receive up to size bytes of the original file, even if the file is being written to at the same time.
For some reason passing the ContentStream did not return any content to the client, but doing the IOHandler->Write directly (which is what ServeFile ends up doing internally) works fine.
I have a domain class with a property that represents files uploaded on my GSP. I've defined that file as a byte array (byte[] file). When some specific action happens I'm sending mail with attachments from it. This is part of my SendMail service:
int i = 1;
[requestInstance.picture1, requestInstance.picture2, requestInstance.picture3].each() {
    if (it.length != 0) {
        DataSource image = new ByteArrayDataSource(it, "image/jpeg");
        helper.addAttachment("image" + i + ".jpg", image);
        i++;
    }
}
This works fine with image files. But now I want to be able to work with all file types and I'm wondering how to implement this. Also, I want to save the real file name in the database. All help is welcome.
You can see where the file name and MIME type are specified in your code. It should be straightforward to save and restore that information from your database along with the attachment data.
If you're trying to figure out from the byte array of data what the MIME type is and what a good filename would be, that's a harder problem. Try this.
I'm building an nsIProtocolHandler implementation in Delphi. (more here)
And it's working already. Data that the module builds gets streamed over an nsIInputStream. I've got all the nsIRequest, nsIChannel and nsIHttpChannel methods and properties working.
I've started testing and I run into something strange. I have a page "a.html" with this simple HTML:
<img src="a.png">
Both "xxm://test/a.html" and "xxm://test/a.png" work in Firefox, and give above HTML or the PNG image data.
The problem is with displaying the HTML page, the image doesn't get loaded. When I debug, I see:
NewChannel gets called for a.png, (when Firefox is processing an OnDataAvailable notice on a.html),
NotificationCallbacks is set (I only need to keep a reference, right?)
RequestHeader "Accept" is set to "image/png,image/*;q=0.8,*/*;q=0.5"
but then, the channel object is released (most probably due to a zero reference count)
Looking at other requests, I would expect some other properties to get set (such as LoadFlags or OriginalURI) and AsyncOpen to get called, from where I can start responding to the request.
Does anybody recognise this? Am I doing something wrong? Perhaps with LoadFlags or the LoadGroup? I'm not sure when to call AddRequest and RemoveRequest on the LoadGroup, and peeking at nsHttpChannel and nsBaseChannel I'm not sure whether it's better to call RemoveRequest early or late (before or after OnStartRequest or OnStopRequest).
Update: Checked on the brand-new Firefox 3.5; still the same.
Update: To try to further isolate the issue, I tried "file://test/a1.html" with <img src="xxm://test/a.png" /> and still only get the above sequence of events. If I'm supposed to add this secondary request to a load group to get AsyncOpen called on it, I have no idea where to get a reference to it.
There's more: I find only one instance of the "Accept" string that gets added to the request headers; it queries for nsIHttpChannelInternal right after creating a new channel, but I don't even get this QueryInterface call through... (I posted it here)
Me again.
I am going to quote the same stuff from nsIChannel::asyncOpen():
If asyncOpen returns successfully, the channel is responsible for keeping itself alive until it has called onStopRequest on aListener or called onChannelRedirect.
If you go back to nsViewSourceChannel.cpp, there's one place where loadGroup->AddRequest is called and two places where loadGroup->RemoveRequest is being called.
nsViewSourceChannel::AsyncOpen(nsIStreamListener *aListener, nsISupports *ctxt)
{
    NS_ENSURE_TRUE(mChannel, NS_ERROR_FAILURE);

    mListener = aListener;

    /*
     * We want to add ourselves to the loadgroup before opening
     * mChannel, since we want to make sure we're in the loadgroup
     * when mChannel finishes and fires OnStopRequest()
     */
    nsCOMPtr<nsILoadGroup> loadGroup;
    mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup)
        loadGroup->AddRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this), nsnull);

    nsresult rv = mChannel->AsyncOpen(this, ctxt);

    if (NS_FAILED(rv) && loadGroup)
        loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                 nsnull, rv);

    if (NS_SUCCEEDED(rv)) {
        mOpened = PR_TRUE;
    }

    return rv;
}
and
nsViewSourceChannel::OnStopRequest(nsIRequest *aRequest, nsISupports* aContext,
                                   nsresult aStatus)
{
    NS_ENSURE_TRUE(mListener, NS_ERROR_FAILURE);
    if (mChannel)
    {
        nsCOMPtr<nsILoadGroup> loadGroup;
        mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
        if (loadGroup)
        {
            loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                     nsnull, aStatus);
        }
    }
    return mListener->OnStopRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                    aContext, aStatus);
}
Edit:
As I have no clue about how Mozilla works, I have to guess from reading some code. From the channel's point of view, once the original file is loaded, its job is done. If you want to load secondary items linked in the file, like an image, you have to implement that in the listener. See TestPageLoad.cpp. It implements a crude parser and retrieves child items upon OnDataAvailable:
NS_IMETHODIMP
MyListener::OnDataAvailable(nsIRequest *req, nsISupports *ctxt,
                            nsIInputStream *stream,
                            PRUint32 offset, PRUint32 count)
{
    //printf(">>> OnDataAvailable [count=%u]\n", count);
    nsresult rv = NS_ERROR_FAILURE;
    PRUint32 bytesRead = 0;
    char buf[1024];

    if (ctxt == nsnull) {
        bytesRead = 0;
        rv = stream->ReadSegments(streamParse, &offset, count, &bytesRead);
    } else {
        while (count) {
            PRUint32 amount = PR_MIN(count, sizeof(buf));
            rv = stream->Read(buf, amount, &bytesRead);
            count -= bytesRead;
        }
    }
    if (NS_FAILED(rv)) {
        printf(">>> stream->Read failed with rv=%x\n", rv);
        return rv;
    }
    return NS_OK;
}
The important thing is that it calls streamParse(), which looks at the src attribute of img and script elements, and calls auxLoad(), which creates a new channel with a new listener and calls AsyncOpen():
uriList->AppendElement(uri);
rv = NS_NewChannel(getter_AddRefs(chan), uri, nsnull, nsnull, callbacks);
RETURN_IF_FAILED(rv, "NS_NewChannel");
gKeepRunning++;
rv = chan->AsyncOpen(listener, myBool);
RETURN_IF_FAILED(rv, "AsyncOpen");
Since it passes in another instance of the MyListener object there, that listener can in turn load more child items, ad infinitum, like a Russian doll situation.
I think I found it (myself); take a close look at this page. Why it doesn't highlight that the UUID has been changed over versions isn't clear to me, but it would explain why things fail when (or just prior to) calling QueryInterface on nsIHttpChannelInternal.
With the new(er) UUID, I'm getting better results. As I mentioned in an update to the question, I've posted this on bugzilla.mozilla.org; I'm curious if and what response I will get there.