I am using the 0.5.0 WebJobs SDK with some very basic code:
public static void AjouterFiligramme2(
    [BlobTrigger("images-input/{name}")] Stream inputStream,
    [Blob("images-output/{name}")] Stream outputStream)
{
    WebImage image = new WebImage(inputStream);
    image.AddTextWatermark("copyright untel", fontSize: 20, fontColor: "red");
    var bytes = image.GetBytes();
    outputStream.Write(bytes, 0, bytes.Length);
}
But I get a 404 error on the outputStream parameter, while inputStream works fine.
I checked that the images-output container has been created by the SDK, so I don't even understand the message.
I also verified that the code works on premises with my test images.
Does anybody have any ideas?
By default (with no second parameter) the BlobAttribute makes the stream readable, which means the blob must already exist; otherwise you get back a 404.
Use the second parameter to make the stream writable and your code should work.
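For example (just a sketch, assuming the second parameter of the Blob attribute is a System.IO.FileAccess value, as the answer describes), the binding would become:

public static void AjouterFiligramme2(
    [BlobTrigger("images-input/{name}")] Stream inputStream,
    [Blob("images-output/{name}", FileAccess.Write)] Stream outputStream)
{
    // same body as in the question
}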
I am hitting the following URL from my Android device to get the list of YouTube videos.
URL: https://www.googleapis.com/youtube/v3/search?key=MY_KEY&channelId=UCoWgc1mqe-bcfb_lem7EyOg&part=snippet,id&order=date&maxResults=50
Response: a stream of unreadable binary characters instead of JSON.
It was working fine before, but sometimes the response is not valid, while hitting the same URL from a browser works well.
Please help me!
I found the solution: I was using an old version of the loopj library (for HTTP requests).
Just change
compile 'com.loopj.android:android-async-http:1.4.8'
to
compile 'com.loopj.android:android-async-http:1.4.9'
and it solves the problem.
Please see my related question here (but in C#) and its answers in order to understand what I mean. Then search for "decompress gzip android" examples and look at the existing resources to accomplish this.
Please keep in mind that the API does not always send the response in gzip format. So check whether the response really is gzip, for example by looking at the Content-Encoding response header or at the first two bytes of the body (0x1f, 0x8b).
For the plain decompression you can use the following method:
public static String decompress(byte[] compressed) throws IOException {
    final int BUFFER_SIZE = 32;
    ByteArrayInputStream bis = new ByteArrayInputStream(compressed);
    GZIPInputStream gis = new GZIPInputStream(bis, BUFFER_SIZE);
    StringBuilder s = new StringBuilder();
    byte[] data = new byte[BUFFER_SIZE];
    int bytesRead;
    while ((bytesRead = gis.read(data)) != -1) {
        s.append(new String(data, 0, bytesRead));
    }
    gis.close();
    bis.close();
    return s.toString();
}
(this code was taken from Vyshnavi's answer)
We noticed that file downloads get stuck after some time in the IE 11 client browser when downloading from an ASP.NET MVC application deployed to an Azure service. The download starts normally in IE 11, and for several minutes the downloaded file size keeps increasing. Then the downloaded size gets stuck and no longer increases. If we keep waiting for around another hour, IE shows a 'download was interrupted' error.
The Azure instances run IIS 8.5 with the app pool in the default integrated pipeline mode. The application is ASP.NET MVC 5, targeting .NET 4.5.2.
There are no problems downloading from Azure with the Chrome browser.
There are no problems downloading from local IIS Express with either IE 11 or Chrome.
There are no problems downloading from local IIS 7.5 with either IE 11 or Chrome.
So it looks like the only problematic pair is Azure-hosted IIS 8.5 + IE 11.
The application uses the following code to stream the file to the client:
private const int StreamBufferSize = 1024 * 128; // 128 KB

public static async Task StreamData(string fileName, Func<Stream, Task> streamFiller, HttpResponseBase response)
{
    response.AddHeader("Content-Disposition", string.Format("attachment;filename=\"{0}\"", fileName));
    response.ContentType = MimeMapping.GetMimeMapping(fileName);
    response.BufferOutput = false;

    using (Stream outputStream = new BufferedStream(response.OutputStream, StreamBufferSize))
    {
        await streamFiller(outputStream);
    }
}
Here streamFiller is an externally passed Func that writes data to the stream.
Please note that the file is not that large, around 20 MB, but the server does not send it all at once. Instead the server streams the file in buffered chunks (see the code above and the sketch below). The time between chunks being streamed (the buffer being flushed) may be as long as several minutes.
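To illustrate, a streamFiller could look roughly like this (a hypothetical sketch, not our production code; GetNextChunkAsync is a placeholder for our data source):

// Hypothetical streamFiller: writes the file in chunks with long pauses between flushes.
Func<Stream, Task> streamFiller = async outputStream =>
{
    byte[] chunk;
    while ((chunk = await GetNextChunkAsync()) != null) // placeholder data source
    {
        await outputStream.WriteAsync(chunk, 0, chunk.Length);
        await outputStream.FlushAsync(); // pushes the buffered chunk to the client
        // several minutes may pass before the next chunk becomes available
    }
};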
Looking into IIS on Azure, I found that the client request looks like the following:
during the normal download phase: State: ExecuteRequestHandler, Module Name: ManagedPipelineHandler
once the download gets stuck: State: SendResponse, Module Name: RemoveUnnecessaryHeadersModule
To clarify what RemoveUnnecessaryHeadersModule is: we remove the 'Server' header as follows:
public class RemoveUnnecessaryHeadersModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // This only works if running in IIS7+ Integrated Pipeline mode
        if (!HttpRuntime.UsingIntegratedPipeline) return;

        context.PreSendRequestHeaders += (sender, e) =>
        {
            var app = sender as HttpApplication;
            if (app != null && app.Context != null)
            {
                app.Context.Response.Headers.Remove("Server");
            }
        };
    }

    public void Dispose() { } // required by IHttpModule; nothing to clean up
}
I have spent hours converting an old .NET 2.0 project and porting it to WinRT in C#.
It's an RTSP client that sends commands to a DVB service (DVBViewer). At this point the WinForms program has been reworked and is working pretty well: connecting, sending and receiving commands to the server, and finally streaming to my localhost UDP port all work, and the TS stream is fully readable via VLC and the RTP protocol.
Now I'd like to build my Metro app on this code. I managed to do the work and it seems to be almost finished (at least the socket and stream parts).
But I'm stuck on a stupid problem: I CAN'T communicate with the RTSP server.
My stream reader/writer isn't working, and I've tried a lot of things.
The app is based on code from the Uniriotec.DV project, so for any further info you can find it via Google.
So here's the point where I'm getting stuck.
It's the main handler that brings the StreamSocket and the Stream (the messages) together:
//Set input and output stream filters in the main client app
RTSPBufferedReader = new BufferedReader<Stream> (RTSPsocket);
RTSPBufferedWriter = new BufferedWriter<Stream> (RTSPsocket);
namespace RTSP.Client
{
    public class BufferedReader<T> : StreamReader where T : Stream
    {
        private StreamSocket socket;
        private T unbufferedStream;
        private StreamSocket streamSocket;

        public T UnbufferedStream
        {
            get { return unbufferedStream; }
            set { unbufferedStream = value; }
        }

        public BufferedReader(T myStream)
            : base(myStream)
        {
            unbufferedStream = myStream;
        }

        public BufferedReader(StreamSocket mySocket)
            : base(new Stream(mySocket)) // <== here is the problem: "cannot create an instance of the abstract class or interface 'System.IO.Stream'"
        {
            this.streamSocket = mySocket;
        }
    }
}
Do you have an idea where my mistake is?
Thanks for answering,
Jo
PS: I need await writer.StoreAsync(); because the answer is sent approximately 10-15 seconds later, when the server is ready to process the request and sends back the SessionID and so on.
The Stream class is abstract and you cannot instantiate an abstract class directly, which is what you are doing with base(new Stream(mySocket)). See here for the System.IO.Stream definition:
http://msdn.microsoft.com/en-us/library/windows/apps/system.io.stream.aspx
You need to replace the Stream with something that can actually read from the socket, such as a DataReader over the socket's input stream. Here is a StreamSocket sample that may help:
http://code.msdn.microsoft.com/windowsapps/StreamSocket-Sample-8c573931
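For example (a rough sketch, not from the linked sample; the host, port, and request string are placeholders), you could talk to the socket with a DataWriter/DataReader pair instead of wrapping it in a Stream:

// Sketch using Windows.Networking.Sockets and Windows.Storage.Streams.
private async Task<string> SendRtspRequestAsync()
{
    var socket = new StreamSocket();
    await socket.ConnectAsync(new HostName("192.168.0.10"), "554"); // placeholder host/port

    // Send the RTSP request.
    var writer = new DataWriter(socket.OutputStream);
    writer.WriteString("OPTIONS rtsp://192.168.0.10/ RTSP/1.0\r\nCSeq: 1\r\n\r\n"); // placeholder request
    await writer.StoreAsync();

    // Read the (possibly delayed) reply.
    var reader = new DataReader(socket.InputStream);
    reader.InputStreamOptions = InputStreamOptions.Partial; // return as soon as some data arrives
    uint count = await reader.LoadAsync(8192);              // completes once the server answers
    return reader.ReadString(count);
}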
I am trying to integrate uploading of arbitrary files to Google Docs into an existing application. This used to work before using resumable upload became mandatory. I am using the Java client libraries.
The application does the upload in 2 steps:
- get the resourceId of the file
- upload the data
To get the resourceId I am uploading a 0-size file (i.e. Content-Length=0). I am passing ?convert=false in the resumable URL (i.e. https://docs.google.com/feeds/upload/create-session/default/private/full?convert=false).
I am passing "application/octet-stream" as content-type. This seems to work, though I do get different resourcesIds - "file:..." resourceIds for things like images, but "pdf:...." resourceIds for PDFs.
The second step constructs a URL based on the resourceId obtained previously and performs a search (getEntry). The URL is in the form of https://docs.google.com/feeds/default/private/full/file%3A.....
Once the entry is found, the ResumableGDataFileUploader is used to update the content (the 0-byte file) with the actual data from the file being uploaded. This operation fails with a 401 Unauthorized response when building the ResumableGDataFileUploader instance.
I've tried with ?convert=false as well as ?new-revision=true and both of these at the same time. The result is the same.
The relevant piece of code:
MediaFileSource mediaFile = new MediaFileSource(tempFile, "application/octet-stream");

final ResumableGDataFileUploader.Builder builder =
    new ResumableGDataFileUploader.Builder(client, mediaFile, documentListEntry);
builder.executor(MoreExecutors.sameThreadExecutor());
builder.requestType(ResumableGDataFileUploader.RequestType.UPDATE);

// This is where it fails
final ResumableGDataFileUploader resumableGDataFileUploader = builder.build();
resumableGDataFileUploader.start();

return tempFile.length();
The "client" is an instance of DocsService, configured to use OAuth. It is used to find "documentListEntry" immediately before the given piece of code.
I had to explicitly specify the request type, since it seems the client library code contains a bug causing a NullPointerException in the "update existing entry" case.
I have a suspicion that the issue is specifically in the sequence of actions (upload a 0-byte file to get the resourceId, then update it with the actual file), but I can't figure out why it doesn't work.
Can anyone help?
This code snippet works for me using OAuth 1.0 and OAuth 2.0:
static void uploadDocument(DocsService client) throws IOException, ServiceException,
    InterruptedException {
  ExecutorService executor = Executors.newFixedThreadPool(10);
  File file = new File("<PATH/TO/FILE>");
  String mimeType = DocumentListEntry.MediaType.fromFileName(file.getName()).getMimeType();
  DocumentListEntry documentEntry = new DocumentListEntry();
  documentEntry.setTitle(new PlainTextConstruct("<DOCUMENT TITLE>"));
  int DEFAULT_CHUNK_SIZE = 2 * 512 * 1024;

  ResumableGDataFileUploader.Builder builder =
      new ResumableGDataFileUploader.Builder(
          client,
          new URL(
              "https://docs.google.com/feeds/upload/create-session/default/private/full?convert=false"),
          new MediaFileSource(file, mimeType), documentEntry).title(file.getName())
          .requestType(RequestType.INSERT).chunkSize(DEFAULT_CHUNK_SIZE).executor(executor);

  ResumableGDataFileUploader uploader = builder.build();
  Future<ResponseMessage> msg = uploader.start();
  while (!uploader.isDone()) {
    try {
      Thread.sleep(100);
    } catch (InterruptedException ie) {
      throw ie; // rethrow
    }
  }

  DocumentListEntry uploadedEntry = uploader.getResponse(DocumentListEntry.class);
  // Print the document's ID.
  System.out.println(uploadedEntry.getId());
  System.out.println("Upload is done!");
}
I need to provide a file-download feature where the web server retrieves the file from another source (via HTTP) and simultaneously streams it to the browser. I am guessing that using MVC's Controller.File ActionResult will not work, but I wrote a prototype like this anyway:
public ActionResult Download()
{
    HttpWebRequest webRequest = (HttpWebRequest)HttpWebRequest.Create("http://somewhere/somefile.pdf");
    HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse();
    Stream stream = webResponse.GetResponseStream();

    var mimeType = "application/pdf";
    var fileName = "somefile.pdf";

    return File(stream, mimeType, fileName);
}
This works fine, but there is no way to call Close() on the HttpWebResponse and Stream after the return statement. The help on the HttpWebResponse.GetResponseStream method says, "You must call either the Stream.Close or the HttpWebResponse.Close method to close the stream and release the connection for reuse. It is not necessary to call both Stream.Close and HttpWebResponse.Close, but doing so does not cause an error. Failure to close the stream will cause your application to run out of connections."
Should I create an HttpHandler and manually read bytes from the source stream and write them out to the response, along the lines of this or this? Is there another approach I'm not aware of?
While I'm not directly familiar with trying something like this, my first thought was to do what you suggested: read in the stream, close the connection, then return the bytes as the response. Since it's a stream, I don't see how you can avoid leaving it open in order to return its contents the way your prototype does, while still being able to close it when you're done.
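Building on that, here is a rough sketch (RemoteFileResult is a made-up name, not an existing MVC type) of a custom ActionResult that copies the remote stream straight to the response and disposes both the stream and the web response when the copy finishes:

using System.IO;
using System.Net;
using System.Web.Mvc;

public class RemoteFileResult : ActionResult
{
    private readonly string url;
    private readonly string fileName;
    private readonly string mimeType;

    public RemoteFileResult(string url, string fileName, string mimeType)
    {
        this.url = url;
        this.fileName = fileName;
        this.mimeType = mimeType;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        HttpResponseBase response = context.HttpContext.Response;
        response.ContentType = mimeType;
        response.AddHeader("Content-Disposition",
            string.Format("attachment;filename=\"{0}\"", fileName));

        var request = (HttpWebRequest)WebRequest.Create(url);
        using (WebResponse webResponse = request.GetResponse())
        using (Stream source = webResponse.GetResponseStream())
        {
            source.CopyTo(response.OutputStream); // streams through without buffering the whole file
        } // both the response stream and the connection are released here
    }
}

In the controller you would then return new RemoteFileResult("http://somewhere/somefile.pdf", "somefile.pdf", "application/pdf"); instead of the File(...) call from the prototype.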