Iterate and write to an ObjectOutputStream - objectoutputstream

When should I close the ObjectOutputStream in the following code? Thank you all...
try {
    ObjectOutputStream output = new ObjectOutputStream(client.getOutputStream());
    AnObject[] array = new AnObject[4];
    for (int i = 0; i < array.length; i++) {
        array[i] = new AnObject();
        output.writeObject(array[i]);
    }
    output.flush();
    output.close();
}
catch (IOException e) {
    processException();
}

Whenever you're done using it. You should flush() after sending, to push any data still sitting in the stream's buffer through to the underlying stream (Object*Streams are wrappers around your client's OutputStream), even if that buffer isn't full.
Also, there's no need to flush() right before closing, because output.close() already calls flush().
It looks like you close it after sending your data, so unless you have more to send through your client's stream, you're closing it properly.
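As a minimal sketch under the same assumptions as the question's code (client, AnObject and processException() already exist), try-with-resources (Java 7+) makes the close -- and therefore the final flush -- automatic, even if writeObject() throws:
try (ObjectOutputStream output = new ObjectOutputStream(client.getOutputStream())) {
    AnObject[] array = new AnObject[4];
    for (int i = 0; i < array.length; i++) {
        array[i] = new AnObject();
        output.writeObject(array[i]); // buffered until flush()/close()
    }
    // no explicit flush() or close() needed: close() runs when the try exits and flushes first
} catch (IOException e) {
    processException();
}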

Related

Xamarin - WCF Upload Large Files Report progress via UIProgressView

I have created a WCF service that allows uploading large files via BasicHttpBinding using streaming, and it is working great! I would like to extend this to show a progress bar (UIProgressView) so that when a large file is being uploaded in 65k chunks, the user can see that it is actively working.
The client code calling the WCF Service is:
BasicHttpBinding binding = CreateBasicHttp();
BTSMobileWcfClient _client = new BTSMobileWcfClient(binding, endPoint);
_client.UploadFileCompleted += ClientUploadFileCompleted;
byte[] b = File.ReadAllBytes(zipFileName);
using (new OperationContextScope(_client.InnerChannel)) {
    OperationContext.Current.OutgoingMessageHeaders.Add(System.ServiceModel.Channels.MessageHeader.CreateHeader("SalvageId", "", iBTSSalvageId.ToString()));
    OperationContext.Current.OutgoingMessageHeaders.Add(System.ServiceModel.Channels.MessageHeader.CreateHeader("FileName", "", Path.GetFileName(zipFileName)));
    OperationContext.Current.OutgoingMessageHeaders.Add(System.ServiceModel.Channels.MessageHeader.CreateHeader("Length", "", b.LongLength));
    _client.UploadFileAsync(b);
}
On the server side, I read the file stream in 65k chunks and report back to the calling routine "bytes read", etc. A snippet of that code is:
using (FileStream targetStream = new FileStream(filePath, FileMode.CreateNew, FileAccess.Write)) {
    // read from the input stream in 65000 byte chunks
    const int chunkSize = 65536;
    byte[] buffer = new byte[chunkSize];
    do {
        // read bytes from input stream
        int bytesRead = request.FileData.Read(buffer, 0, chunkSize);
        if (bytesRead == 0) break;
        // write bytes to output stream
        targetStream.Write(buffer, 0, bytesRead);
    } while (true);
    targetStream.Close();
}
But I don't know how to hook into the callback on the Xamarin side to receive the "bytes read" versus "total bytes to send" so I can update the UIProgressView.
Has anyone tried this or is this even possible?
Thanks In Advance,
Bo

How to properly stream big data from MVC3 without using too much RAM?

I'd like to use HttpResponse.OutputStream together with ContentResult so that I can Flush from time to time, to keep .NET from using too much RAM.
But all the examples with MVC FileStreamResult, EmptyResult, FileResult, ActionResult, and ContentResult show code that gets all the data into memory and passes it to one of those. Also, one post suggests that returning EmptyResult together with using HttpResponse.OutputStream is a bad idea. How else can I do that in MVC?
What is the right way to organize flushable output of big data (HTML or binary) from an MVC server?
Why is returning EmptyResult or ContentResult or FileStreamResult a bad idea?
You would want to use FileStreamResult if you already had a stream to work with. A lot of the time you may only have access to the file, so you need to build a stream and then output that to the client.
System.IO.Stream iStream = null;
// Buffer to read 10K bytes in chunk:
byte[] buffer = new Byte[10000];
// Length of the file:
int length;
// Total bytes to read:
long dataToRead;
// Identify the file to download including its path.
string filepath = "DownloadFileName";
// Identify the file name.
string filename = System.IO.Path.GetFileName(filepath);
try
{
    // Open the file.
    iStream = new System.IO.FileStream(filepath, System.IO.FileMode.Open,
        System.IO.FileAccess.Read, System.IO.FileShare.Read);
    // Total bytes to read:
    dataToRead = iStream.Length;
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);
    // Read the bytes.
    while (dataToRead > 0)
    {
        // Verify that the client is connected.
        if (Response.IsClientConnected)
        {
            // Read the data in buffer.
            length = iStream.Read(buffer, 0, 10000);
            // Write the data to the current output stream.
            Response.OutputStream.Write(buffer, 0, length);
            // Flush the data to the HTML output.
            Response.Flush();
            buffer = new Byte[10000];
            dataToRead = dataToRead - length;
        }
        else
        {
            // Prevent an infinite loop if the user disconnects.
            dataToRead = -1;
        }
    }
}
catch (Exception ex)
{
    // Trap the error, if any.
    Response.Write("Error : " + ex.Message);
}
finally
{
    if (iStream != null)
    {
        // Close the file.
        iStream.Close();
    }
    Response.Close();
}
Here is the Microsoft article explaining the above code.

Node.js is running out of memory on large bit-by-bit file read

I'm attempting to write a bit of JS that will read a file and write it out to a stream. The deal is that the file is extremely large, and so I have to read it bit by bit. It seems that I shouldn't be running out of memory, but I do. Here's the code:
var size = fs.statSync("tmpfile.tmp").size;
var fp = fs.openSync("tmpfile.tmp", "r");
for (var pos = 0; pos < size; pos += 50000) {
    var buf = new Buffer(50000),
        len = fs.readSync(fp, buf, 0, 50000, (function () {
            console.log(pos);
            return pos;
        })());
    data_output.write(buf.toString("utf8", 0, len));
    delete buf;
}
data_output.end();
For some reason it hits 264900000 and then throws FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory. I figured that the data_output.write() call would force it to write the data out to data_output and then discard it from memory, but I could be wrong. Something is causing the data to stay in memory, and I've no idea what it would be. Any help would be greatly appreciated.
I had a very similar problem. I was reading in a very large csv file with 10M lines, and writing out its json equivalent. I saw in the windows task manager that my process was using > 2GB of memory. Eventually I figured out that the output stream was probably slower than the input stream, and that the outstream was buffering a huge amount of data. I was able to fix this by pausing the instream every 100 writes to the outstream, and waiting for the outstream to empty. This gives time for the outstream to catch up with the instream. I don't think it matters for the sake of this discussion, but I was using 'readline' to process the csv file one line at a time.
I also figured out along the way that if, instead of writing every line to the outstream, I concatenate 100 or so lines together, then write them together, this also improved the memory situation and made for faster operation.
In the end, I found that I could do the file transfer (csv -> json) using just 70M of memory.
Here's a code snippet for my write function:
var write_counter = 0;
var out_string = "";
function myWrite(inStream, outStream, string, finalWrite) {
    out_string += string;
    write_counter++;
    if ((write_counter === 100) || (finalWrite)) {
        // pause the instream until the outstream clears
        inStream.pause();
        outStream.write(out_string, function () {
            inStream.resume();
        });
        write_counter = 0;
        out_string = "";
    }
}
You should be using pipes, such as:
var fp = fs.createReadStream("tmpfile.tmp");
fp.pipe(data_output);
For more information, check out: http://nodejs.org/docs/v0.5.10/api/streams.html#stream.pipe
EDIT: the problem in your implementation, btw, is that by doing it in chunks like that, the write buffer isn't going to get flushed, and you're going to read in the entire file before writing much of it back out.
According to the documentation, data_output.write(...) will return true if the string has been flushed, and false if it has not (due to the kernel buffer being full). What kind of stream is this?
Also, I'm (fairly) sure this isn't the problem, but: how come you allocate a new Buffer on each loop iteration? Wouldn't it make more sense to initialize buf before the loop?
I don't know how the synchronous file functions are implemented, but have you considered using the async ones? That would be more likely to allow garbage collection and I/O flushing to happen. So instead of a for loop, you would trigger the next read in the callback function of the previous read.
Something along these lines (note also that, per other comments, I'm reusing the Buffer):
var buf = new Buffer(50000);
var pos = 0;

function readNextChunk() {
    fs.read(fp, buf, 0, 50000, pos, function (err, bytesRead) {
        if (err) {
            // handle error
        }
        else {
            data_output.write(buf.toString("utf8", 0, bytesRead));
            pos += bytesRead;
            if (pos < size)
                readNextChunk();
        }
    });
}
readNextChunk();

Connection closed when trying to POST over 1.5K of URL-encoded data on an 8900, and maybe others

From the simulator, this all works.
I'm using Wi-Fi on the device, as I'm assuming it's the most stable.
The problem occurs when I try to post more than 1.5K of URL-encoded data.
If I send less, it's fine.
It seems to hang on the flush() call.
It works on a physical 9700, so I'm presuming that it's possibly device specific.
In the example below I'm using form variables, but I've also tried posting JSON content, and still had the same issue.
I've written a small test app using the main thread, so I know that it's not threads getting confused.
If anyone has any ideas, that would be great.
private String PostEventsTest()
{
    String returnValue = "Error";
    HttpConnection hc = null;
    DataInputStream dis = null;
    DataOutputStream dos = null;
    StringBuffer messagebuffer = new StringBuffer();
    URLEncodedPostData postValuePairs;
    try
    {
        postValuePairs = new URLEncodedPostData(null, false);
        postValuePairs.append("DATA", postData); // postData);
        hc = (HttpConnection) Connector.open(postURL, Connector.READ_WRITE);
        hc.setRequestMethod(HttpConnection.POST);
        hc.setRequestProperty("User-Agent", "BlackBerry");
        hc.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        hc.setRequestProperty("Content-Length", Integer.toString(postValuePairs.getBytes().length));
        //hc.setRequestProperty("Content-Length", Integer.toString(postData.length()));
        dos = hc.openDataOutputStream();
        dos.write(postValuePairs.getBytes());
        dos.flush();
        dos.close();

        // Retrieve the response back from the servlet
        dis = new DataInputStream(hc.openInputStream());
        int ch;

        // Check the Content-Length first
        long len = hc.getLength();
        if (len != -1)
        {
            for (int i = 0; i < len; i++)
                if ((ch = dis.read()) != -1)
                    messagebuffer.append((char) ch);
        }
        else
        {
            // if the content-length is not available
            while ((ch = dis.read()) != -1)
                messagebuffer.append((char) ch);
        }
        dis.close();
        returnValue = "Yahoo";
    }
    catch (Exception ex)
    {
        returnValue = ex.toString();
        ex.printStackTrace();
    }
    return returnValue;
}
Instead of data streams, you should just use the regular input and output streams. So instead of hc.openDataOutputStream(), use hc.openOutputStream(). Data streams are for writing Java primitives and strings in a portable binary format, but you just want to write raw bytes to the stream -- so a regular OutputStream is what you want. The same goes for reading the response: just use the InputStream returned by hc.openInputStream().
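As a minimal sketch, under the same assumptions as the question's code (hc, postValuePairs and messagebuffer already set up), the write/read part would look like this with plain streams:
OutputStream os = hc.openOutputStream();
os.write(postValuePairs.getBytes()); // raw bytes of the URL-encoded form data
os.flush();
os.close();

InputStream is = hc.openInputStream();
int ch;
while ((ch = is.read()) != -1) { // read the whole response
    messagebuffer.append((char) ch);
}
is.close();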

Reading HttpURLConnection InputStream - manual buffer or BufferedInputStream?

When reading the InputStream of an HttpURLConnection, is there any reason to use one of the following over the other? I've seen both used in examples.
Manual Buffer:
while ((length = inputStream.read(buffer)) > 0) {
    os.write(buffer, 0, length);
}
BufferedInputStream
is = http.getInputStream();
bis = new BufferedInputStream(is);
ByteArrayBuffer baf = new ByteArrayBuffer(50);
int current = 0;
while ((current = bis.read()) != -1) {
    baf.append(current);
}
EDIT: I'm still new to HTTP in general, but one consideration that comes to mind is that if I am using a persistent HTTP connection, I can't just read until the input stream is empty, right? In that case, wouldn't I need to read the message length and just read the input stream for that length?
And similarly, if NOT using a persistent connection, is the code I included 100% good to go in terms of reading the stream properly?
I talk about a good way to do it on my blog in a post about using JSON in Android: http://blog.andrewpearson.org/2010/07/android-why-to-use-json-and-how-to-use.html. I will post the relevant part of that post below (the code is pretty generalizable):
InputStream in = null;
String queryResult = "";
try {
    URL url = new URL(archiveQuery);
    HttpURLConnection urlConn = (HttpURLConnection) url.openConnection();
    HttpURLConnection httpConn = (HttpURLConnection) urlConn;
    httpConn.setAllowUserInteraction(false);
    httpConn.connect();
    in = httpConn.getInputStream();
    BufferedInputStream bis = new BufferedInputStream(in);
    ByteArrayBuffer baf = new ByteArrayBuffer(50);
    int read = 0;
    int bufSize = 512;
    byte[] buffer = new byte[bufSize];
    while (true) {
        read = bis.read(buffer);
        if (read == -1) {
            break;
        }
        baf.append(buffer, 0, read);
    }
    queryResult = new String(baf.toByteArray());
} catch (MalformedURLException e) {
    // DEBUG
    Log.e("DEBUG: ", e.toString());
} catch (IOException e) {
    // DEBUG
    Log.e("DEBUG: ", e.toString());
}
Regarding persistent HTTP connections, it is just the opposite: you should read everything from the input stream. Otherwise the Java HTTP client does not know that the HTTP request is complete and that the socket connection can be reused.
See http://java.sun.com/javase/6/docs/technotes/guides/net/http-keepalive.html:
What can you do to help with Keep-Alive? Do not abandon a connection by ignoring the response body. Doing so may result in idle TCP connections that need to be garbage collected when they are no longer referenced. If getInputStream() successfully returns, read the entire response body.
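As a minimal sketch of that advice (assuming a plain HttpURLConnection named conn whose body you don't otherwise need), draining the stream lets the underlying socket go back into the keep-alive connection pool:
InputStream in = conn.getInputStream();
byte[] scratch = new byte[4096];
while (in.read(scratch) != -1) {
    // discard the bytes; we only read so the response can complete
}
in.close();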
Use the former -- the latter has no real benefit over the first one, and is a bit slower; reading things byte by byte is inefficient even when buffered (and horribly slow when not buffered). That style of reading input went out of vogue with C, although it may still be useful in cases where you need to find an end marker of some sort.
Only if you're using the BufferedInputStream-specific methods.
