Liferay is not generating previews for PDF files only; it generates previews for all other file types. I have a Linux server + Liferay 6.2 GA3 + ImageMagick + Ghostscript + OpenOffice, and OpenOffice is running. Conversion from one file type to another works fine; the problem is only with PDF previews. Previously, previews were generated. The surprising thing is that there are no log entries for PDFs on the machine, but there are for other files.
I am able to convert PDF to PNG with the help of the convert command.
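For reference, the Ghostscript binary that Liferay invokes (visible in the logs below) can also be tested directly. A minimal sanity check, assuming the path from the logs (sample.pdf is a placeholder):

/usr/local/bin/gs -dBATCH -dSAFER -dNOPAUSE -dNOPROMPT \
  -sDEVICE=png16m -dPDFFitPage -r150 \
  -sOutputFile=/tmp/preview-%d.png sample.pdf

If this produces PNG pages, Ghostscript itself is working and the problem is on the Liferay side.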
Below are the logs while uploading a PDF:
tail -f /opt/trianz-portal/tomcat-7.0.42/logs/catalina.out
14:03:15,404 ERROR [ajp-bio-8009-exec-38][PollerServlet:63] No channel exists with user id 80601
14:03:15,428 ERROR [ajp-bio-8009-exec-38][status_jsp:752] No channel exists with user id 80601
14:03:18,868 ERROR [ajp-bio-8009-exec-40][PollerServlet:63] No channel exists with user id 80601
14:03:18,890 ERROR [ajp-bio-8009-exec-40][status_jsp:752] No channel exists with user id 80601
14:03:21,115 ERROR [ajp-bio-8009-exec-38][PollerServlet:63] No channel exists with user id 80601
14:03:21,138 ERROR [ajp-bio-8009-exec-38][status_jsp:752] No channel exists with user id 80601
14:03:34,851 ERROR [ajp-bio-8009-exec-33][PollerServlet:63] No channel exists with user id 50659
14:03:34,925 ERROR [ajp-bio-8009-exec-33][status_jsp:752] No channel exists with user id 50659
14:03:58,000 WARN [liferay/scheduler_dispatch-3][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
14:04:36,238 WARN [ajp-bio-8009-exec-39][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
Below are the logs for other files, where previews are generated successfully:
14:07:19,844 WARN [ajp-bio-8009-exec-9][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
14:07:21,668 INFO [liferay/document_library_pdf_processor-1][GhostscriptImpl:71] Excecuting command '/usr/local/bin/gs -dBATCH -dSAFER -dNOPAUSE -dNOPROMPT -sFONTPATH/usr/local/bin:/usr/bin:/usr/local/share/ghostscript/fonts -sDEVICE=png16m -sOutputFile=/opt/trianz-portal/tomcat-7.0.42/temp/liferay/document_preview/2662941.1.0-%d.png -dPDFFitPage -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r300 -dDEVICEWIDTH1000 /opt/trianz-portal/tomcat-7.0.42/temp/liferay/document_conversion/2662941.1.0.pdf '
14:07:23,429 WARN [liferay/document_library_pdf_processor-1][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
14:07:24,199 WARN [liferay/document_library_pdf_processor-1][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
14:07:24,966 WARN [liferay/document_library_pdf_processor-1][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
14:07:25,028 WARN [liferay/document_library_pdf_processor-1][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
14:07:25,180 INFO [liferay/document_library_pdf_processor-1][PDFProcessorImpl:423] Ghostscript generated 4 preview pages for assign.doc in 3512 ms
14:07:25,194 INFO [liferay/document_library_pdf_processor-1][GhostscriptImpl:71] Excecuting command '/usr/local/bin/gs -dBATCH -dSAFER -dNOPAUSE -dNOPROMPT -sFONTPATH/usr/local/bin:/usr/bin:/usr/local/share/ghostscript/fonts -sDEVICE=png16m -sOutputFile=/opt/trianz-portal/tomcat-7.0.42/temp/liferay/document_thumbnail/2662941.1.0.png -dFirstPage=1 -dLastPage=1 -dPDFFitPage -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r300 -dDEVICEWIDTH1000 /opt/trianz-portal/tomcat-7.0.42/temp/liferay/document_conversion/2662941.1.0.pdf '
14:07:26,360 WARN [liferay/document_library_pdf_processor-1][RestStorageService:221] Content-Length of data stream not set, will automatically determine data length in memory
14:07:26,392 INFO [liferay/document_library_pdf_processor-1][PDFProcessorImpl:438] Ghostscript generated a thumbnail for assign.doc in 1198 ms
I suppose you are using Community Edition (this is not mentioned in the question).
As per the logs, the PDF file is not even read correctly, let alone uploaded, due to the channel errors. As per this thread https://web.liferay.com/community/forums/-/message_boards/message/52746434, this is caused by one of the known bugs, LPS-51390.
It should be fixed when you apply the patch for this issue.
CE does not have the patching tool available.
You would need to patch manually, either by looking up the changes in the commit made for LPS-51390 on GitHub, or by upgrading to CE GA4. See the Liferay Upgrade documentation for reference.
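If you go the manual route, a hypothetical way to locate the relevant changes is to search the commit history of the liferay-portal repository on GitHub for the ticket number:

git clone https://github.com/liferay/liferay-portal.git
cd liferay-portal
git log --all --oneline --grep='LPS-51390'

Each matching commit lists the files you would need to patch in your installation.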
In one of the legacy applications I am working on, we are trying to get the extracted zip file's name through the Open3 utility. This is divided into two commands: the first unzips the compressed file, and the second lists the archive and greps for a filename with a specific extension. But recently we started encountering an error while reading the IO buffer.
require "open3"

# unzip with 7z
Open3.capture3("7z", "x", my_zip.path, "-p#{my_pass}", "-o#{tmp_dir}")

# list the archive and grep for the file name with the .dbf extension
resp, _threads = Open3.pipeline_r(["7z", "-slt", "l", "-p#{my_pass}", my_zip.path], ["grep", "-oP", "(?<=Path = ).+dbf"])

# read the IO stream for the filename
res_filename = resp.readline.strip

# close stream
# resp.close
At resp.readline.strip we are encountering an EOFError. I tried using ::gets instead of readline to avoid the EOFError, but that leads to an empty/nil value.
What am I missing? Is there any way to convert the IO stream to a String object or something like that?
I observed that this works fine from the console but raises the exception when the job is invoked from the Sidekiq dashboard.
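One thing worth noting: IO#read does give you the stream as a String. Unlike IO#readline, it returns "" at end-of-file instead of raising EOFError, so an empty result can be handled explicitly. A minimal sketch along the lines of the code above (archive name and password are placeholders):

require "open3"

out, _threads = Open3.pipeline_r(
  ["7z", "-slt", "l", "-psecret", "archive.zip"],
  ["grep", "-oP", "(?<=Path = ).+dbf"]
)
# IO#read slurps whatever the pipeline produced into a String;
# it returns "" on an empty stream instead of raising EOFError.
filename = out.read.strip
out.close

raise "no .dbf entry found in the archive listing" if filename.empty?

An empty result here would suggest the pipeline itself fails under Sidekiq (for example, a different PATH so 7z or grep is not found) rather than a problem with the IO handling.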
I am dynamically creating a PS file (named based on a timestamp from the input file) in a COBOL program and using it as the output file to write transaction details. But if the input file is received twice, the job fails with a duplicate dataset. So I thought of rewriting the file if it is already allocated. To do this I have tried opening the output file with each of I-O/EXTEND/OUTPUT, but it fails with file status 98.
Any idea how to do this?
DISPLAY '5000-B-->'
PERFORM 5000-ALLOCATE-ACK-FILE *> allocate the file dynamically
DISPLAY '5000-A-->'
DISPLAY 'I-O--B-->'
OPEN OUTPUT OUT-ACK-FILE
DISPLAY 'WS-OUTACK-STATUS-->' WS-OUTACK-STATUS
DISPLAY 'I-O--A-->'
Error:
ALLOCATE OK 00
5000-A-->
I-O--B-->
IGZ0255W Dynamic allocation failed for ddname OUTACK while processing file OUT-A
return code from the dynamic allocation was X'4', error code X'FFFFFFFF',
information code X'0'.
WS-OUTACK-STATUS-->98
I-O--A-->
You don't show details of how the allocation is done, or how you check for existence of the data set.
Anyway, on z/OS the DISP parameter of the allocation of a PS data set determines whether the data written replaces the current content (if any), or whether it is appended to the current content (if any).
DISP=NEW and DISP=OLD replace the current content; DISP=MOD appends to it.
The OPEN parameters don't vary.
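For illustration, here is a minimal sketch of a dynamic allocation with disposition MOD via BPXWDYN (the DD name matches the error message above; the data set name and length literal are made up):

01  WS-ALLOC-CMD.
    05  WS-ALLOC-LEN   PIC S9(4) COMP VALUE 44.
    05  WS-ALLOC-TEXT  PIC X(44)  VALUE
        'ALLOC FI(OUTACK) DA(MY.ACK.FILE) MOD CATALOG'.
*> BPXWDYN expects the request text preceded by a halfword length.
*> MOD appends to an existing data set instead of failing on a
*> duplicate; on the first run it behaves like NEW.
CALL 'BPXWDYN' USING WS-ALLOC-CMD
IF RETURN-CODE NOT = 0
   DISPLAY 'DYNAMIC ALLOCATION FAILED RC=' RETURN-CODE
END-IF

With the data set allocated MOD, the same OPEN OUTPUT then appends to the existing content, as described above.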
I've been trying to cache based on response size in Varnish.
Other answers suggested using Content-Length to decide whether or not to cache, but I'm using InfluxDB (Varnish reverse proxies to it), and it responds with Transfer-Encoding: chunked, which omits the Content-Length header, so I am not able to figure out the size of the response.
Is there any way I could access the response body size and make the decision in vcl_backend_response?
Cache miss: chunked transfer encoding
When Varnish processes incoming chunks from the origin, it has no idea ahead of time how much data will be received. Varnish streams the data through to the client and stores it byte by byte.
Once the terminating 0\r\n\r\n is received to mark the end of the stream, Varnish finalizes the object storage and calculates the total number of bytes.
Cache hit: content length
The next time the object is requested, Varnish no longer needs to use Chunked Transfer Encoding, because it has the full object in cache and knows the size. At that point a Content-Length header is part of the response, but this header is not accessible in VCL because it seems to be generated after sub vcl_deliver {} is executed.
Remove objects after the fact
It is possible to remove objects after the fact by monitoring their size through VSL.
The following command looks at the backend request accounting field of the VSL output and checks the total size. If the size is greater than 5 MB, output is generated:
varnishlog -g request -i berequrl -q "BereqAcct[5] > 5242880"
Here's some potential output:
* << Request >> 98330
** << BeReq >> 98331
-- BereqURL /
At that point, you know that the / resource is bigger than 5 MB. You can then attempt to remove it from the cache using the following command:
varnishadm ban "obj.http.x-url == / && obj.http.x-host == domain.com"
Replace domain.com with the actual hostname of your service and set / to the URL of the actual endpoint you're trying to remove from the cache.
Don't forget to add the following code to your VCL file to ensure that the x-url and x-host headers are available:
sub vcl_backend_response {
    set beresp.http.x-url = bereq.url;
    set beresp.http.x-host = bereq.http.host;
}

sub vcl_deliver {
    unset resp.http.x-url;
    unset resp.http.x-host;
}
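If you want to automate the removal, here is a rough sketch that chains varnishlog and varnishadm (the 5 MB threshold and domain.com are carried over from above; a multi-host setup would also need to extract the Host header):

#!/bin/sh
# Watch for backend fetches larger than 5 MB and ban the matching URL.
# Assumes the x-url/x-host headers from the VCL snippet above are set.
varnishlog -g request -q "BereqAcct[5] > 5242880" -i BereqURL |
awk '$2 == "BereqURL" { print $3; fflush() }' |
while read -r url; do
  varnishadm ban "obj.http.x-url == $url && obj.http.x-host == domain.com"
done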
Conclusion
Although there's no turnkey solution for accessing the body size in VCL, the hacky after-the-fact removal I suggested is the only approach I can think of.
I have a problem, I think, with my Prosody configuration. When I send files (for example, photos) larger than ~2 or 3 megabytes (as I established experimentally) using Conversations 2.* (an Android IM app), they are transferred over a peer-to-peer connection instead of being uploaded to the server with a link sent to my interlocutor. Small files transfer fine using HTTP upload, and I couldn't find a reason for this behavior.
Here are some lines for the http_upload module from my config, taken from the official documentation (where I found no setting for turning off peer-to-peer file transfer):
http_upload_file_size_limit = 536870912 -- 512 MB in bytes
http_upload_expire_after = 604800 -- 60 * 60 * 24 * 7
http_upload_quota = 10737418240 -- 10 GB
http_upload_path = "/var/lib/prosody"
And this is my full config: https://pastebin.com/V6DNYrhe
> Small files are transferred well using http upload. And I couldn't find a reason for such behavior.
TL;DR: You put the options in the wrong place, so the default 1MB limit applies. This limit is advertised to clients so they know about it and can use more efficient p2p transfer methods for very large files.
> http_upload_path = "/var/lib/prosody"
This line makes Prosody's data directory public, allowing anyone easy access to all user data. You really don't want to do that. You are lucky you did not put it in the correct section.
> And this is my full config: https://pastebin.com/V6DNYrhe
"http_upload" is in the global modules_enabled list which will load
it onto all VirtualHost(s).
You have added options to the end of the config file, putting them under
a Component section. That makes those options only apply to that
Component.
Thus, the VirtualHost where mod_http_upload is loaded sees no options
set and will use the defaults.
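For comparison, here is a minimal sketch of a layout where the options would take effect, assuming http_upload stays in the global modules_enabled list (example.com is a placeholder):

-- Global section, i.e. above any VirtualHost or Component definition:
-- options set here are seen by every host that loads mod_http_upload.
http_upload_file_size_limit = 10485760 -- 10 MB; see the size warning below
http_upload_expire_after = 604800 -- 60 * 60 * 24 * 7

VirtualHost "example.com"
-- mod_http_upload is already loaded here via the global modules_enabled list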
> http_upload_file_size_limit = 536870912 -- 512 MB in bytes
Don't do this. Prosody's built-in HTTP server is not optimized for very large uploads. There is a safety limit on HTTP request size that will cap the HTTP upload size limit at 10M to prevent DoS attacks.
While that limit can be changed, I would strongly suggest you look at https://modules.prosody.im/mod_http_upload_external.html instead.
I'm working on a system to convert a bunch of .mov files to H.264 (using HandBrakeCLI) and WebM (using ffmpeg) as the .mov files are created. In general, things are going very well. I'm hung up on a bit of error detection: I want to know if one of the encodings failed so that we can investigate, try again, etc.
To test encoding failure, I copied a text file into a file with a .mov extension and set the programs to work encoding it. Naturally, they both fail to encode the file (I'm not sure what "success" would mean in this context...). However, while ffmpeg reports this failure by setting its exit code to 1, HandBrakeCLI sets its exit code to 0, because it exited cleanly. This is consistent with the HandBrakeCLI documentation, but it leaves me wondering how I can tell whether HandBrakeCLI knows it failed to encode anything. That same documentation page suggests "If you want to monitor HandBrake's process, you should monitor the standard pipes", so I'm now getting the effect I want by doing something like this:
HandBrakeCLI --preset 'Normal' --input bad.mov --output out.mv4 2>&1 | grep 'Encode done'
grep then sets its exit code to 0 if it found a match and 1 if it didn't. But this seems rather barbaric: for instance, the text "Encode done!" could change in a future release of HandBrake.
So, does anyone have a better way to tell whether HandBrake encoded something or not?
Some edited shell output is included below for reference...
$ ffmpeg -i 'develqueuedir/B_BH_120409.mov' 'develqueuedir/B_BH_120409.webm'
FFmpeg version 0.6.4-4:0.6.4-0ubuntu0.11.04.1, Copyright (c) 2000-2010 the Libav Developers
[snip]
develqueuedir/B_BH_120409.mov: Invalid data found when processing input
$ echo $?
1
$ HandBrakeCLI --preset 'Normal' --maxWidth 720 --optimize --input 'develqueuedir/B_BH_120409.mov' --output 'develqueuedir/B_BH_120409.mv4'
Output format couldn't be guessed from file name, using default.
[11:45:45] hb_init: starting libhb thread
HandBrake 0.9.6 (2012022900) - Linux x86_64 - http://handbrake.fr
Opening develqueuedir/B_BH_120409.mov...
[snip]
[11:45:45] libhb: scan thread found 0 valid title(s)
No title found.
HandBrake has exited.
$ echo $?
0
The short answer is no; you can find a detailed explanation on the HandBrake forum: https://forum.handbrake.fr/viewtopic.php?f=12&t=18559&p=85529&hilit=return+code#p85529
Addition: I think there is a patch from user fonkprop that was rejected by the developers; if you really need it, contact that guy.
Good news! It appears that this feature is about to be implemented in HandBrake-CLI 0.10. As you can read on the roadmap for the 0.10 milestone:
Basic support for return codes from the CLI. (0 = No Error, 1 = Cancelled, 2 = Invalid Input, 3 = Initialization error, 4 = Unknown Error)
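Once you are on a version with those return codes, the grep workaround can go away. A minimal sketch using the command from the question (exit-code meanings taken from the roadmap quote above):

#!/bin/sh
# HandBrakeCLI >= 0.10 is assumed: 0 = No Error, 1 = Cancelled,
# 2 = Invalid Input, 3 = Initialization error, 4 = Unknown Error.
HandBrakeCLI --preset 'Normal' --input bad.mov --output out.mv4
rc=$?
case $rc in
  0) echo "encode succeeded" ;;
  2) echo "invalid input; flag for investigation" >&2 ;;
  *) echo "encode failed with exit code $rc" >&2 ;;
esac
exit $rc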