I need to show four charts on a Grails page in a grid layout, at positions 11, 12, 21 and 22. Each chart is built with code similar to:
<img src="${createLink(controller:'paretoChart', action:'buildParetoChart11')}"/>
The code for the chart building action is:
def buildParetoChart11 = {
    PlotService p11 = PlotService.getInstance()
    def poList = paretoChartService.getParetoidPO()
    def listCounter = 0
    String idPOvalue = poList[listCounter].toString()
    response.setContentType("image/jpeg")   // set the content type before writing to the stream
    def out = response.outputStream
    p11.paretoPlot(out, idPOvalue)
    session["idPOList11"] = poList
}
The Java method p11.paretoPlot(out, idPOvalue) writes a BufferedImage of the chart to the OutputStream, but it only works for one chart. The other three charts vary depending on the order in which all four actions are called.
Yes, PlotService was written by me. In this implementation I'm passing the OutputStream out I got from response.outputStream and the String idPOvalue to the Java method. paretoPlot's implementation is as follows:
public OutputStream paretoPlot(OutputStream out, String po) throws IOException {
    chart = buildParetoChart(po); // here the chart is actually built
    bufferedImage = chart.createBufferedImage(350, 275);
    ChartUtilities.writeBufferedImageAsJPEG(out, bufferedImage);
    return out;
}
So, is there a way to make sure one action is completed before firing up the next one?
Thanks in advance!
Each request for an image is handled asynchronously by the browser, and each request runs in its own thread on the server. With img tags the browser controls when the GET requests for the images go out, so you can't easily guarantee the order, and you shouldn't have to.
Are you seeing any errors?
I would look at the Firebug (or equivalent) output to see whether the browser is getting an error for any of the image requests.
I would also try attaching a debugger to your server.
Did you write PlotService? You need to make sure it is thread safe.
Also, I don't see you reading any params; is there a separate action for each image?
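Judging by the paretoPlot snippet in the question, chart and bufferedImage look like instance fields on a shared singleton, so two simultaneous image requests can overwrite each other's state before the JPEG is written. A minimal sketch of a thread-safe variant, assuming buildParetoChart(po) returns an org.jfree.chart.JFreeChart and keeps no shared state of its own:
public OutputStream paretoPlot(OutputStream out, String po) throws IOException {
    // Locals instead of the shared chart/bufferedImage fields, so two
    // concurrent image requests cannot clobber each other's chart.
    JFreeChart chart = buildParetoChart(po);
    BufferedImage image = chart.createBufferedImage(350, 275);
    ChartUtilities.writeBufferedImageAsJPEG(out, image);
    return out;
}
With nothing shared between calls, the order in which the browser fetches the four images stops mattering.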
We retrieve information from Elasticsearch 2.7.0 and we allow the user to go through the results. When the user requests a high page number we get the following error message:
Result window is too large, from + size must be less than or equal to:
[10000] but was [10020]. See the scroll api for a more efficient way
to request large data sets. This limit can be set by changing the
[index.max_result_window] index level parameter
The thing is we use pagination in our requests so I don't see why we get this error:
@Autowired
private ElasticsearchOperations elasticsearchTemplate;
...
elasticsearchTemplate.queryForPage(buildQuery(query, pageable), Document.class);
...
private NativeSearchQuery buildQuery(String query, Pageable pageable) {
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.should(QueryBuilders.boolQuery().must(QueryBuilders.termQuery(term, query.toUpperCase())));
    NativeSearchQueryBuilder nativeSearchQueryBuilder = new NativeSearchQueryBuilder().withIndices(DOC_INDICE_NAME)
            .withTypes(indexType)
            .withQuery(boolQueryBuilder)
            .withPageable(pageable);
    return nativeSearchQueryBuilder.build();
}
I don't understand the error, because we retrieve pageable.size (20 elements) every time... Do you have any idea why we get this?
Unfortunately, even when you page the results, Spring Data Elasticsearch translates the page into from + size (page 501 with a size of 20 gives from + size = 10,020), so deep pages blow past the default 10,000 result window. You have two options. The first is to raise the index.max_result_window setting on the index.
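A minimal sketch of that first option, sent straight to the index settings endpoint with plain JDK HTTP (the host, the index name doc_indice and the 50000 limit are placeholders for your own setup):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RaiseResultWindow {
    public static void main(String[] args) throws Exception {
        // PUT /<index>/_settings with a larger index.max_result_window.
        URL url = new URL("http://localhost:9200/doc_indice/_settings");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        byte[] body = "{\"index\":{\"max_result_window\":50000}}"
                .getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }

        // Reading the response code is what actually sends the request.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
Keep in mind that a larger window lets Elasticsearch hold more hits in memory per request, which is exactly why the error message points at scroll for really deep reads.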
The second is to use the scan/scroll API. As far as I understand, in that case the pagination is done manually, since scrolling is meant for sequential reading of a large result set (like scrolling with your mouse).
A sample:
List<Pessoa> allItens = new ArrayList<>();
// "build" is the NativeSearchQuery created earlier (e.g. by buildQuery(...)).
// scan() opens the scroll context and returns its id; scroll() then pulls
// one batch at a time until nothing is left.
String scrollId = elasticsearchTemplate.scan(build, 1000, false, Pessoa.class);
Page<Pessoa> page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
while (true) {
    if (!page.hasContent()) {
        break;
    }
    allItens.addAll(page.getContent());
    page = elasticsearchTemplate.scroll(scrollId, 5000L, Pessoa.class);
}
This code shows how to read ALL the data from your index; you then have to pick the requested page out of what you collect while scrolling, as in the sketch below.
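A rough sketch of that last step, assuming allItens comes from the loop above and pageable is the same Spring Data Pageable that was passed to buildQuery (wasteful for deep pages, since everything is read first):
// Cut the requested page out of the full result list.
int from = pageable.getPageNumber() * pageable.getPageSize();
int to = Math.min(from + pageable.getPageSize(), allItens.size());
List<Pessoa> requestedPage = from < allItens.size()
        ? allItens.subList(from, to)
        : java.util.Collections.<Pessoa>emptyList();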
I need help with opening the result of my mail merge operations directly in a new Writer document.
Object mailMergeService = mcf.createInstanceWithContext(mailMergePackage, context);
XPropertySet mmProperties = UnoRuntime.queryInterface(XPropertySet.class, mailMergeService);
mmProperties.setPropertyValue("DocumentURL", templatePath);
mmProperties.setPropertyValue("DataSourceName", dbName);
mmProperties.setPropertyValue("CommandType", mmCommandType);
mmProperties.setPropertyValue("Command", mmCommand);
mmProperties.setPropertyValue("OutputType", mmOutputType);
// mmProperties.setPropertyValue("OutputURL", templateDirectory);
// mmProperties.setPropertyValue("FileNamePrefix", mmFileNamePrefix);
// mmProperties.setPropertyValue("SaveAsSingleFile", mmSaveAsSingleFile);
The mmOutputType is set as MailMergeType.SHELL
The LibreOffice API documentation says
"The output is a document shell.
The successful mail merge returns a XTextDocument based component."
So I've tried something like this
XJob job = UnoRuntime.queryInterface(XJob.class, mailMergeService);
Object mergedTextObject = job.execute(new NamedValue[0]);
String url = "private:factory/swriter";
loader.loadComponentFromURL(url, "_blank", 0, new PropertyValue[0]);
XTextDocument mergedText = UnoRuntime.queryInterface(XTextDocument.class, mergedTextObject);
XTextCursor cursor = mergedText.getText().createTextCursor();
cursor.setString(mergedText.getText().getString());
I guess I have to pass the XTextDocument component to the URL argument of the loadComponentFromURL method, but I didn't find the right way to do that.
When I change the OutputType to MailMergeType.FILE the result is generated in a given directory and I can open the file and see that the mail merge succeeded. But this is not what my application should do.
Does someone know how I can open the result of the mail merge directly in an new writer document without saving the result to the hard drive?
Sincerely, Arthur
Hey guys I've found a simple way to open the result of my mail merge process directly.
The relevant snippets are these
XJob job = UnoRuntime.queryInterface(XJob.class, mailMergeService);
Object mergedTextObject = job.execute(new NamedValue[0]);
XTextDocument mergedText = UnoRuntime.queryInterface(XTextDocument.class, mergedTextObject);
mergedText.getCurrentController().getFrame().getContainerWindow().setVisible(true);
The last line of code made the window appear with the filled mail merge result.
I also don't need this line anymore
loader.loadComponentFromURL("private:factory/swriter", "_blank", 0, new PropertyValue[0]);
The document opens as a new instance of a swriter document. If you want to save the result as a file you can do this
mergedText.getCurrentController().getFrame().getContainerWindow().setVisible(true);
XStorable storeMM = UnoRuntime.queryInterface(XStorable.class, mergedText);
XModel modelMM = UnoRuntime.queryInterface(XModel.class, mergedText);
storeMM.storeAsURL(outputDirectory + outputFilename, modelMM.getArgs());
Sincerely,
Arthur
What version of LO are you using? The SHELL constant has only been around since LO 4.4, and it is not supported by Apache OpenOffice yet, so it could be that it isn't fully implemented. However this code seems to show a working test.
If it is returning an XTextDocument, then normally I would assume the component is already open. However it sounds like you are not seeing a Writer window appear. Did you start LO in headless mode? If not, then maybe the process needs a few seconds before it can display.
Object mergedTextObject = job.execute(new NamedValue[0]);
Thread.sleep(10000);
Anyway to me it looks like your code has a mistake in it. These two lines would simply insert the text onto itself:
XTextCursor cursor = mergedText.getText().createTextCursor();
cursor.setString(mergedText.getText().getString());
Probably you intended to write something like this instead:
XTextDocument mergedText = UnoRuntime.queryInterface(XTextDocument.class, mergedTextObject);
String url = "private:factory/swriter";
XComponent xComponent = loader.loadComponentFromURL(url, "_blank", 0, new PropertyValue[0]);
XTextDocument xTextDocument = (XTextDocument)UnoRuntime.queryInterface(XTextDocument.class, xComponent);
XText xText = (XText)xTextDocument.getText();
XTextRange xTextRange = xText.getEnd();
xTextRange.setString(mergedText.getText().getString());
One more thought: getString() might just return an empty string if the entire document is in a table. If that is the case then you could use the view cursor or enumerate text content.
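For reference, a rough sketch of the enumeration route (untested; exception handling and the usual com.sun.star imports are omitted, as in the snippets above):
// Walk the merged document's top-level text content and collect the plain
// text of ordinary paragraphs.
XEnumerationAccess enumAccess = UnoRuntime.queryInterface(
        XEnumerationAccess.class, mergedText.getText());
XEnumeration paragraphs = enumAccess.createEnumeration();
StringBuilder plainText = new StringBuilder();
while (paragraphs.hasMoreElements()) {
    Object element = paragraphs.nextElement();
    XServiceInfo info = UnoRuntime.queryInterface(XServiceInfo.class, element);
    if (info.supportsService("com.sun.star.text.Paragraph")) {
        XTextRange range = UnoRuntime.queryInterface(XTextRange.class, element);
        plainText.append(range.getString()).append("\n");
    }
}
Tables show up in the same enumeration as com.sun.star.text.TextTable elements and would need their own handling, which is why the copy/paste route below is often easier.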
EDIT:
To preserve formatting including tables, you can do something like this (adapted from https://blog.oio.de/2010/10/27/copy-and-paste-without-clipboard-using-openoffice-org-api/):
// Select all.
XController xMergedTextController = mergedText.getCurrentController();
XTextViewCursorSupplier supTextViewCursor =
(XTextViewCursorSupplier) UnoRuntime.queryInterface(
XTextViewCursorSupplier.class, xMergedTextController);
XTextViewCursor oVC = supTextViewCursor.getViewCursor();
oVC.gotoStart(false);  // This would not work if your document began with a table.
oVC.gotoEnd(true);
// Copy and paste.
XTransferableSupplier xTransferableSupplier = UnoRuntime.queryInterface(XTransferableSupplier.class, xMergedTextController);
XTransferable transferable = xTransferableSupplier.getTransferable();
// xComponent is the empty Writer document created earlier; its controller
// is reached through XModel, since XComponent itself has no getCurrentController().
XModel xModel = UnoRuntime.queryInterface(XModel.class, xComponent);
XController xController = xModel.getCurrentController();
XTransferableSupplier xTransferableSupplier_newDoc = UnoRuntime.queryInterface(XTransferableSupplier.class, xController);
xTransferableSupplier_newDoc.insertTransferable(transferable);
I have a set of RDL reports hosted on the report server instance. Some of the reports render more than 100,000 records in the ReportViewer, so they take quite a long time to render. We therefore decided to export the content directly from the server, based on the user's input parameters for the report as well as the export file format.
The main thing here is that I do not want the user to wait until the export file is available for download. Instead, the user should be able to submit the action and carry on with other work. In the background, the program has to export the file to some physical location, and when the download is available the user will be notified about the exported file.
I found an approach in this link. I need to know what ways there are to achieve the functionality described above, as well as how to pass the input parameters for the report. Please advise.
Note: I am using XML as the data source for the RDL reports.
EDIT
I found something useful and wrote the following code:
string path = ServerURL +"?" + _reportFolder + "ReportName&rs:Command=Render&rs:Format=PDF";
WebRequest req = WebRequest.Create(path);
string reportParametersQT = String.Empty;
req.Credentials = CredentialCache.DefaultNetworkCredentials;
WebResponse response = req.GetResponse();
Stream stream = response.GetResponseStream();
//screen.Response.Clear();
string enCodeFileName = HttpUtility.UrlEncode("fileName.pdf", System.Text.Encoding.UTF8);
// The word attachment in Addheader is used to directly show the save dialog box in browser
Response.AddHeader("content-disposition", "attachment; filename=" + enCodeFileName);
Response.BufferOutput = false; // to prevent buffering
Response.ContentType = response.ContentType;
byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
}
Response.End();
I am able to download the exported file, but I need to save it to a physical location instead of downloading it, and I don't know how to do that.
Both of these are very easy to do. You essentially just pass the parameters in the URL that you're calling; for example, for a parameter called "LearnerList" you add &LearnerList=12345 to the URL. For exporting, add an additional rs:Format=PDF parameter (or whatever format you want the file in) to get the report to export as a PDF instead of rendering in the Report Viewer.
Here's an example URL:
https://reporting.MySite.net/ReportServer/Pages/ReportViewer.aspx?/Users+Folders/User/My+Reports/Learner+Details&rs:Format=PDF&LearnerList=202307
Read these two pages, and you should be golden:
https://msdn.microsoft.com/en-us/library/ms155391.aspx
https://msdn.microsoft.com/en-us/library/ms154040.aspx
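The part still missing from the question, writing the rendered bytes to a physical location instead of to Response.OutputStream, is the same idea in any language. A rough sketch in Java (the real code would keep the question's C# WebRequest; every URL, credential detail and file path here is a placeholder):
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SaveReportToDisk {
    public static void main(String[] args) throws Exception {
        // Report path, rs:Format and the report's own parameters all go on
        // the query string, as described above.
        String reportUrl = "https://reporting.MySite.net/ReportServer"
                + "?/Users+Folders/User/My+Reports/Learner+Details"
                + "&rs:Format=PDF&LearnerList=202307";

        HttpURLConnection conn = (HttpURLConnection) new URL(reportUrl).openConnection();
        // Authentication is environment-specific (the question's C# used
        // CredentialCache.DefaultNetworkCredentials) and is omitted here.

        byte[] buffer = new byte[8192];
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream("/exports/LearnerDetails.pdf")) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
From there, kicking it off in a background job and notifying the user once the file exists is the remaining plumbing.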
We are trying to send an image as a stream to a method in Grails 2.4.3.
def imgStream
imgStream = servletContext.classLoader.getResourceAsStream("/assets/Logo.jpg")
We then call another service to render the PDF:
ByteArrayOutputStream output = licensePlanReportsService.renderLicensingPlanPDF(licensingPlanInstance, internal, imgStream)
We are using com.lowagie.text.Image from iText 2.1.7 and poi 3.9-20121203
if(imgStream) {
Image logo = Image.getInstance(imgStream.getBytes())
logo.scaleAbsolute(128.64, 88.32)
logo.setAbsolutePosition(25, 485)
document.add(logo)
}
The image is not rendering in our output PDF report. Does this seem like the correct way to render the image to the PDF?
This issue was resolved by changing the initial call to the following:
imgStream = grailsAttributes.getApplicationContext().getResource("Logo.jpg").getInputStream()
instead of the original call:
imgStream = servletContext.classLoader.getResourceAsStream("/assets/Logo.jpg")
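For anyone doing the same from a plain Java service, the rendering side stays the same as the Groovy above. A rough sketch (exception handling omitted; imgStream is the InputStream obtained above and document is the open com.lowagie.text.Document):
// Image.getInstance wants raw bytes, so read the stream fully first
// (Groovy's imgStream.getBytes() does the same thing).
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
byte[] chunk = new byte[8192];
int read;
while ((read = imgStream.read(chunk)) != -1) {
    bytes.write(chunk, 0, read);
}

Image logo = Image.getInstance(bytes.toByteArray());
logo.scaleAbsolute(128.64f, 88.32f);
logo.setAbsolutePosition(25, 485);
document.add(logo);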
Thanks,
Tom
I'm building an nsIProtocolHandler implementation in Delphi. (more here)
And it's working already. Data the module builds gets streamed over an nsIInputStream. I've got all the nsIRequest, nsIChannel and nsIHttpChannel methods and properties working.
I've started testing and I run into something strange. I have a page "a.html" with this simple HTML:
<img src="a.png">
Both "xxm://test/a.html" and "xxm://test/a.png" work in Firefox, and give above HTML or the PNG image data.
The problem is that when displaying the HTML page, the image doesn't get loaded. When I debug, I see:
NewChannel gets called for a.png (while Firefox is processing an OnDataAvailable notification for a.html),
NotificationCallbacks is set (I only need to keep a reference, right?),
the "Accept" request header is set to "image/png,image/*;q=0.8,*/*;q=0.5",
but then the channel object is released (most probably due to a zero reference count).
Looking at other requests, I would expect some other properties to be set (such as LoadFlags or OriginalURI) and AsyncOpen to be called, from where I can start responding to the request.
Does anybody recognise this? Am I doing something wrong, perhaps with LoadFlags or the LoadGroup? I'm not sure when to call AddRequest and RemoveRequest on the LoadGroup, and peeking at nsHttpChannel and nsBaseChannel I'm not sure whether it's better to call RemoveRequest early or late (before or after OnStartRequest or OnStopRequest).
Update: Checked with the freshly released Firefox 3.5; still the same.
Update: To isolate the issue further, I tried "file://test/a1.html" with <img src="xxm://test/a.png" /> and still only get the above sequence of events. If I'm supposed to add this secondary request to a load group to get AsyncOpen called on it, I have no idea where to get a reference to one.
There's more: I find only one instance of the "Accept" string that gets added to the request headers; it queries for nsIHttpChannelInternal right after creating a new channel, but I don't even get that QueryInterface call through... (I posted it here)
Me again.
I am going to quote the same stuff from nsIChannel::asyncOpen():
If asyncOpen returns successfully, the channel is responsible for keeping itself alive until it has called onStopRequest on aListener or called onChannelRedirect.
If you go back to nsViewSourceChannel.cpp, there's one place where loadGroup->AddRequest is called and two places where loadGroup->RemoveRequest is being called.
nsViewSourceChannel::AsyncOpen(nsIStreamListener *aListener, nsISupports *ctxt)
{
    NS_ENSURE_TRUE(mChannel, NS_ERROR_FAILURE);

    mListener = aListener;

    /*
     * We want to add ourselves to the loadgroup before opening
     * mChannel, since we want to make sure we're in the loadgroup
     * when mChannel finishes and fires OnStopRequest()
     */
    nsCOMPtr<nsILoadGroup> loadGroup;
    mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup)
        loadGroup->AddRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this), nsnull);

    nsresult rv = mChannel->AsyncOpen(this, ctxt);

    if (NS_FAILED(rv) && loadGroup)
        loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                 nsnull, rv);

    if (NS_SUCCEEDED(rv)) {
        mOpened = PR_TRUE;
    }

    return rv;
}
and
nsViewSourceChannel::OnStopRequest(nsIRequest *aRequest, nsISupports* aContext,
                                   nsresult aStatus)
{
    NS_ENSURE_TRUE(mListener, NS_ERROR_FAILURE);

    if (mChannel) {
        nsCOMPtr<nsILoadGroup> loadGroup;
        mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
        if (loadGroup) {
            loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                     nsnull, aStatus);
        }
    }

    return mListener->OnStopRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                    aContext, aStatus);
}
Edit:
I have no clue how Mozilla works internally, so I have to guess from reading some code. From the channel's point of view, once the original file is loaded, its job is done. If you want to load the secondary items linked from the file, like an image, you have to implement that in the listener. See TestPageLoad.cpp: it implements a crude parser and retrieves child items in OnDataAvailable:
NS_IMETHODIMP
MyListener::OnDataAvailable(nsIRequest *req, nsISupports *ctxt,
                            nsIInputStream *stream,
                            PRUint32 offset, PRUint32 count)
{
    //printf(">>> OnDataAvailable [count=%u]\n", count);
    nsresult rv = NS_ERROR_FAILURE;
    PRUint32 bytesRead = 0;
    char buf[1024];

    if (ctxt == nsnull) {
        bytesRead = 0;
        rv = stream->ReadSegments(streamParse, &offset, count, &bytesRead);
    } else {
        while (count) {
            PRUint32 amount = PR_MIN(count, sizeof(buf));
            rv = stream->Read(buf, amount, &bytesRead);
            count -= bytesRead;
        }
    }

    if (NS_FAILED(rv)) {
        printf(">>> stream->Read failed with rv=%x\n", rv);
        return rv;
    }

    return NS_OK;
}
The important thing is that it calls streamParse(), which looks at the src attribute of img and script elements and calls auxLoad(), which creates a new channel with a new listener and calls AsyncOpen():
uriList->AppendElement(uri);
rv = NS_NewChannel(getter_AddRefs(chan), uri, nsnull, nsnull, callbacks);
RETURN_IF_FAILED(rv, "NS_NewChannel");
gKeepRunning++;
rv = chan->AsyncOpen(listener, myBool);
RETURN_IF_FAILED(rv, "AsyncOpen");
Since it passes another instance of the MyListener object in there, that listener can in turn load more child items, ad infinitum, like Russian dolls.
I think I found it myself: take a close look at this page. Why it doesn't highlight that the UUID has changed between versions isn't clear to me, but it would explain why things fail when (or just before) QueryInterface is called for nsIHttpChannelInternal.
With the newer UUID I'm getting better results. As I mentioned in an update to the question, I've posted this on bugzilla.mozilla.org; I'm curious whether, and what, response I will get there.