ABCPDF Font Printing Layout - Machine Dependent

I am using ABCPDF to print a PDF file to a local printer via an EMF file. I've based this very closely on ABC PDF's sample "ABCPDFView" project. My application worked fine on my Windows 7 and Windows XP dev boxes, but when I moved to a Windows 2003 test box, simple embedded fonts (like Times New Roman 12) rendered completely wrong (in the wrong spot, and short and squat, almost as if the DPIs were wildly off).
Note that I've hardcoded the DPI to 240 here because I'm using a weird mainframe print driver that forces 240x240. I can rule out that driver as the culprit: if I save the EMF file locally during printing, it shows the same layout problems. If I render to PNG or TIFF files instead, the output looks just fine on all my servers using this same code (with .png in place of .emf). Finally, if I use the ABCPDFView project to manually add a random text box to my PDF, that text also renders wrong in the EMF file. (Side note: if I print the PDF using Acrobat, the text renders just fine.)
Update: I left out a useful point for anyone else having this problem. I can work around the problem by setting RenderTextAsText to "0" (see code below). This forces ABCPDF to render the text as polygons and makes the problem go away. This isn't a great solution though, as it greatly increases the size of my EMF files, and those polygons don't render nearly as cleanly in my final print document.
Anyone have any thoughts on the causes of this weird font problem?
private void DoPrintPage(object sender, PrintPageEventArgs e)
{
    using (Graphics g = e.Graphics)
    {
        //... omitted code to determine theRect, used straight from the ABC PDF sample
        mDoc.Rendering.DotsPerInch = 240;
        mDoc.Rendering.ColorSpace = "RGB";
        mDoc.Rendering.BitsPerChannel = 8;
        mDoc.SetInfo(0, "RenderTextAsText", "0"); // the magic is right here
        byte[] theData = mDoc.Rendering.GetData(".emf");
        using (MemoryStream theStream = new MemoryStream(theData))
        {
            using (Metafile theEMF = new Metafile(theStream))
            {
                g.DrawImage(theEMF, theRect);
            }
        }
        //... omitted code to move to the next page
    }
}

Try upgrading to the new version, ABCpdf 8. It has its own rendering engine based on Gecko, so you can bypass issues like this that occur when ABCpdf uses the server's built-in version of IE for rendering.
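For reference, the engine is selected per document in ABCpdf 8 (this governs HTML import); a minimal sketch, with property and enum names taken from the ABCpdf 8 documentation as I read it, so treat them as assumptions rather than tested code:
// Sketch, assuming the ABCpdf 8 API: switch HTML rendering from the built-in IE (MSHTML) engine to Gecko.
Doc theDoc = new Doc();
theDoc.HtmlOptions.Engine = EngineType.Gecko;
theDoc.AddImageUrl("http://www.example.com/");
theDoc.Save("gecko-rendered.pdf");
theDoc.Clear();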

I was originally RDPing in at 1920x1080 resolution; by switching the RDP session to 1024x768, the problem went away. My main program runs as a service, and starting that service from an RDP session at 1024x768 fixes it.
I have an email out to ABC PDF to see if they can explain this and offer a more elegant solution, but for now this works.
Please note that this is ABC PDF 7; I have no idea whether this issue applies to other versions.
Update: ABC PDF support confirmed that it's possible the service is caching the display resolution of the user who started the process. They confirmed that they've seen some other weird issues with Remote Desktop and encouraged me to use this 1024x768 workaround and/or start the service remotely.
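If you need to confirm what display mode a service has actually picked up, a small diagnostic helps; here is a sketch using plain Win32 calls (not ABCpdf-specific, and Console.WriteLine stands in for whatever logging you use):
// Sketch: log the screen size and DPI that this process (e.g. the service) actually sees.
// GetDC/GetDeviceCaps are standard Win32 calls; replace Console.WriteLine with your own logging.
using System;
using System.Runtime.InteropServices;

static class DisplayMetrics
{
    [DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
    [DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
    [DllImport("gdi32.dll")] static extern int GetDeviceCaps(IntPtr hdc, int index);

    const int HORZRES = 8, VERTRES = 10, LOGPIXELSX = 88, LOGPIXELSY = 90;

    public static void Log()
    {
        IntPtr hdc = GetDC(IntPtr.Zero); // screen DC of the session this process belongs to
        try
        {
            Console.WriteLine("Screen {0}x{1} at {2}x{3} DPI",
                GetDeviceCaps(hdc, HORZRES), GetDeviceCaps(hdc, VERTRES),
                GetDeviceCaps(hdc, LOGPIXELSX), GetDeviceCaps(hdc, LOGPIXELSY));
        }
        finally { ReleaseDC(IntPtr.Zero, hdc); }
    }
}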

Related

Ghostscript with a Zebra ZM600

I have created a program which takes a PDF and prints it using Ghostscript.NET.
The printer in question is a Zebra ZM600, and the printing works fine. However, I would like to make some adjustments to the labels, which can be done through the driver. When I print using Adobe Reader, the adjustments are applied, but when I print the file with my program, Ghostscript.NET seems to do something with the printer properties, because the adjustments have no effect (the file prints, but without the adjustments).
using (GhostscriptProcessor processor = new GhostscriptProcessor())
{
    List<string> switches = new List<string>();
    switches.Add("-empty");            // dummy first argument; Ghostscript ignores it like argv[0]
    switches.Add("-dPrinted");
    switches.Add("-dBATCH");
    switches.Add("-dNoCancel");
    switches.Add("-dNOPAUSE");
    switches.Add("-dNOSAFER");
    switches.Add("-dNumCopies=1");
    switches.Add("-sDEVICE=mswinpr2"); // print through the Windows printer driver
    switches.Add("-sOutputFile=%printer%" + printerName);
    switches.Add("-f");
    switches.Add(path);
    processor.StartProcessing(switches.ToArray(), null);
    _logger.Info("After starting process");
}
The question is: does anybody know of a way to force Ghostscript to use the adjustments in the printer driver? Or are there any better alternatives for printing PDFs on a label printer?

Zebra Printer - Cut on last page

I have a Zebra ZT610 and I want to print a label, in PDF format, containing multiple pages, and then have it cut on the last page. I've tried using delayed cut mode and sending the ~JK command, but I'm using a self-written Java application to invoke the printing. I've also tried adding the string "${^XB}$" into the PDF document before each page break except the last, and used the pass-through setting in the driver to inhibit the cut command, but that doesn't seem to work either, as the Java print job renders such text as an image.
I've tried the official Zebra driver as well as the NiceLabel Zebra driver, in the hope that they might have more "Custom Commands" options in the settings, but nothing has yet come to light.
After we had the same issues for several weeks, and neither the vendor nor Google nor Zebra's own support came up with a FULL working solution, we worked out the following EASY 5-step solution for this (apparently pretty common) Zebra cutter problem:
Step 1: Set Cutter-Mode to Tear-Off in the settings.
This will disable the auto-cutting after every single page.
Step 2: Go to Custom Commands in the settings dialog (allows ZPL coding).
Step 3: Set the first drop-down to "DOCUMENT".
Step 4: Set the Start-Section to "TEXT" and paste in
^XA^MMD^XZ^XA^JUS^XZ
^MMD enables pause mode. The ~JK command is only available in pause mode, and many Zebra printers do not support the much easier CN (Cut-Now) command.
^JUS saves the setting to the printer.
Step 5: Set the End-Section to "ANALYZED TEXT" and paste in
~JK~PS
~JK moves the cut command to the end of the document, and ~PS disables pause mode (and thus starts printing immediately). When everything looks as described above, hit "APPLY", and your Zebra printer will automatically cut after the end of each document you send to it. You just send your PDF using Sumatra or whatever you prefer; the cutter handling is now done automatically by the printer settings.
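For completeness, handing the PDF to SumatraPDF from code can be as simple as shelling out to its command line; here is a sketch (the executable path, printer name, and file path are placeholders, and -print-to / -silent are SumatraPDF's documented switches):
// Sketch: let SumatraPDF print the PDF; the cutter handling is done by the printer settings above.
// Executable path, printer name, and PDF path are placeholders for your environment.
var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = @"C:\Tools\SumatraPDF.exe",
    Arguments = "-print-to \"ZDesigner ZT610\" -silent \"C:\\labels\\job.pdf\"",
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var process = System.Diagnostics.Process.Start(psi))
{
    process.WaitForExit();
}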
Alternatively, if you want to do this programmatically, use the START and END codes at the corresponding positions in your ZPL code instead (see the sketch below). Note that ~commands cannot be sent in combination with ^commands; that's why there's no ^XA...^XZ block to reset any settings (which is not necessary in this scenario, as it only affects the print session and ~PS turns pause mode back off).
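A sketch of that programmatic variant, wrapping a label with the START and END codes from the steps above and sending the whole job as raw ZPL (the host name and label body are placeholders; port 9100, Zebra's usual raw-printing port, is assumed):
// Sketch: send the start code, the label ZPL, and the end code as one raw job.
string startCode = "^XA^MMD^XZ^XA^JUS^XZ";      // enable pause mode and save the setting
string labelZpl  = "^XA^FO50,50^FDHello^FS^XZ"; // your generated label(s) go here
string endCode   = "~JK~PS";                    // cut at end of document, then leave pause mode

using (var client = new System.Net.Sockets.TcpClient("printerHost", 9100))
using (var stream = client.GetStream())
{
    byte[] job = System.Text.Encoding.ASCII.GetBytes(startCode + labelZpl + endCode);
    stream.Write(job, 0, job.Length);
}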
I had a similar concern, but as the print server was CUPS, I wasn't able to use the Windows drivers and utilities (settings dialog). So basically, I did the following:
On the printer, set Cutter mode. This will cut after each printed label.
In my Java code, using the Apache PDFBox library, open the PDF and, for each page, render it as a monochrome BufferedImage, get the byte array from it, and get its hex representation.
Write a few ZPL commands to download the hex as graphic data, and add the ^XB command before the ^XZ one to suppress the cut, except on the last page, so that there is a cut only at the end of the document.
Send the generated ZPL code to the printer. In my case, I send it as a raw document through IPP, using application/vnd.cups-raw as the MIME type, thanks to the great ipp-client-kotlin library, but it is also possible to use the native Java printing API with raw bytes.
Below is a snippet of Java code, for demo purposes:
public void printPdfStream(InputStream pdfStream) throws IOException {
    try (PDDocument pdDocument = PDDocument.load(pdfStream)) {
        PDFRenderer pdfRenderer = new PDFRenderer(pdDocument);
        StringBuilder builder = new StringBuilder();
        for (int pageIndex = 0; pageIndex < pdDocument.getNumberOfPages(); pageIndex++) {
            boolean isLastPage = pageIndex == pdDocument.getNumberOfPages() - 1;
            // Render the page as a 1-bit-per-pixel image at 300 dpi
            BufferedImage bufferedImage = pdfRenderer.renderImageWithDPI(pageIndex, 300, ImageType.BINARY);
            byte[] data = ((DataBufferByte) bufferedImage.getData().getDataBuffer()).getData();
            int length = data.length;
            // Invert bytes so that black pixels become set bits
            for (int i = 0; i < length; i++) {
                data[i] ^= 0xFF;
            }
            // ~DG downloads the page as graphic data; ^XG recalls it at position 0,0
            builder.append("~DGR:label,").append(length).append(",").append(length / bufferedImage.getHeight())
                    .append(",").append(Hex.getString(data));
            builder.append("^XA");
            builder.append("^FO0,0");
            builder.append("^XGR:label,1,1");
            builder.append("^FS");
            if (!isLastPage) {
                builder.append("^XB"); // suppress the cut on every page except the last
            }
            builder.append("^XZ");
        }
        IppPrinter ippPrinter = new IppPrinter("ipp://printserver/printers/myprinter");
        ippPrinter.printJob(new ByteArrayInputStream(builder.toString().getBytes()),
                documentFormat("application/vnd.cups-raw"));
    }
}
Important: the hex data can (and should) be compressed, as described in the ZPL Programming Guide, section "Alternative Data Compression Scheme for ~DG and ~DB Commands". Depending on the PDF content, it may drastically reduce the data size (by a factor of 10 in my case!).
Note that Zebra's support provides a few more alternatives for controlling the cutter, but this one worked immediately.
Zebra Automatic Cut - Found another solution.
Create a file with the name: Delayed Cut Settings.txt
Insert the following code: ^XA^MMC,N^XZ
Send it to the printer
After you do the 3 steps above, all the documents you send to the printer will be cut automatically.
(To disable that function, send the 'Delayed Cut Settings.txt' file again with the following code: ^XA^MMD^XZ)
In the first document you send to the printer, you need to ADD (just once) the command ^MMC,N before the ^XZ.
My EXAMPLE TXT:
^XA
^FX Top section with logo, name and address.
^CF0,60
^FO50,50^GB100,100,100^FS
^FO75,75^FR^GB100,100,100^FS
^FO93,93^GB40,40,40^FS
^FO220,50^FDIntershipping, Inc.^FS
^CF0,30
^FO220,115^FD1000 Shipping Lane^FS
^FO220,155^FDShelbyville TN 38102^FS
^FO220,195^FDUnited States (USA)^FS
^FO50,250^GB700,3,3^FS
^FX Second section with recipient address and permit information.
^CFA,30
^FO50,300^FDJohn Doe^FS
^FO50,340^FD100 Main Street^FS
^FO50,380^FDSpringfield TN 39021^FS
^FO50,420^FDUnited States (USA)^FS
^CFA,15
^FO600,300^GB150,150,3^FS
^FO638,340^FDPermit^FS
^FO638,390^FD123456^FS
^FO50,500^GB700,3,3^FS
^FX Third section with bar code.
^BY5,2,270
^FO100,550^BC^FD12345678^FS
^FX Fourth section (the two boxes on the bottom).
^FO50,900^GB700,250,3^FS
^FO400,900^GB3,250,3^FS
^CF0,40
^FO100,960^FDCtr. X34B-1^FS
^FO100,1010^FDREF1 F00B47^FS
^FO100,1060^FDREF2 BL4H8^FS
^CF0,190
^FO470,955^FDCA^FS
^MMC,N
^XZ

OpenOffice crashes after some time, giving garbled fonts in the converted PDF

We are converting Word to PDF using OpenOffice (version 3.4.1) in Java with JODConverter.
Below is the code used.
OpenOfficeConnection connection = new SocketOpenOfficeConnection(2100);
try
{
    connection.connect();
    DocumentConverter converter = new OpenOfficeDocumentConverter(connection);
    converter.convert(inputFile, outputFile);
    connection.disconnect();
    return "Success " + DestinationPath + DestinationFileName;
}
catch (Exception localException1)
{
    // Note: the exception is swallowed here, so conversion failures go unreported
}
The problem is that after a random number of days the converted PDFs contain garbled fonts,
like # # ! $ $ " % &
The only solution we have so far is to restart the server. The system admins say the problem is with OpenOffice.
We are using OpenOffice to convert the documents since it converts .doc files faithfully, including all the formatting and table structure.
What could be the solution to this?
OpenOffice can be a little temperamental when running on a server, especially as it isn't multi-threaded, so you end up having to run a pool of OpenOffice processes - see How can I use OpenOffice in server mode as a multithreaded service?
Added to that, the rendering is often off when converting to PDF - see https://forum.openoffice.org/en/forum/viewtopic.php?f=7&t=68865 - which is why you may want to consider using a conversion service to automate the conversion tasks for you.
For complete transparency, I work for Zamzar (an online file conversion service). We have recently released a developer API - https://developers.zamzar.com/ - that allows you to convert between a multitude of file types; specifically applicable here, we support both doc and docx to pdf with little or no loss in how the PDF is rendered. It may be worth a look to see if this is a better alternative to running your own solution through OpenOffice on a server.

Converting PDF to PNG with transparent background

We have a Ruby on Rails application that needs to convert a PDF into a PNG with a transparent background. We're using rmagick 2.13.1. On our development machines the following code works exactly how we want it.
pages = Magick::Image.from_blob(book.to_pdf.render){ self.density = 300 }
page = pages[0]
image_file = Tempfile.new(['preview_image', '.png'])
image_file.binmode
image_file.write( page.to_blob { |opt| opt.format = "PNG" } )
We then save the image_file and all is peachy. When we deployed to a review server on Heroku, though, the generated image had a white background. It turns out that Heroku's Cedar stack is using ImageMagick 6.5.7-8 2010-12-02, whereas we're using ImageMagick 6.7.5-7 2012-05-08 on our development machines.
I've scoured the net for older posts that might apply to the older version to try to figure out how to generate the transparent PNGs. It's surely supported, but so far I haven't been able to figure out the right combination of settings.
To verify that it wasn't the PDF generation that was the problem, I downloaded a PDF generated on Heroku and successfully converted it using the above code (slightly modified to read the file in instead of generating it) to a transparent PNG.
Some of the things I've tried in various combinations are:
page.matte = true
page.format = "PNG32"
page.background_color = "none"
page.transparent_color = "white"
page.transparent("white")
So, the question is "is this possible?". If so, which settings do I need to set on the image before writing it out?
I'm also investigating including a compiled binary of a more up-to-date ImageMagick on Heroku.
Any help is appreciated.
This should no longer be an issue, as Heroku has ImageMagick versions 6.7-6.9 on their various stacks.

How to write a simple .txt content processor in XNA?

I don't really understand how the content importer/processor works in XNA.
I need to read a text file (Content/levels/level1.txt) of the form:
x x
x x
x x
where the x's are just integers, into an int[,] array.
Any tips on writing a SIMPLE .txt importer? Searching Google/MSDN, I only found .x/.fbx file importer examples, and they seem too complicated.
Do you actually need to process the text file? If not, then you can probably skip most of the content pipeline.
Something like:
string filename = "Content/TextFiles/sometext.txt";
string path = Path.Combine(StorageContainer.TitleLocation, filename);
string lineOfText;
using (StreamReader sr = new StreamReader(path))
{
    while ((lineOfText = sr.ReadLine()) != null)
    {
        // do something with lineOfText
    }
}
Also, be sure to set the "Build Action" to "None" and the "Copy to Output Directory" to "Copy if newer" on the text files you've added. This tells the content pipeline not to compile the text file but rather copy it to the output directory for use as is.
I got this (more or less) from the RacingGame sample provided by Microsoft. It foregoes much of the content pipeline and simply loads and processes text files (XML) for much of its level data.
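For the int[,] part of the question, a minimal parsing sketch along the same lines (it assumes whitespace-separated integers with the same count on every row, reuses the path building from the snippet above, and needs System.IO and System.Linq):
// Sketch: parse whitespace-separated integers from the level file into an int[,].
// Assumes every non-empty line has the same number of values.
string levelPath = Path.Combine(StorageContainer.TitleLocation, "Content/levels/level1.txt");
string[][] rows = File.ReadAllLines(levelPath)
    .Where(line => line.Trim().Length > 0)
    .Select(line => line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries))
    .ToArray();

int[,] level = new int[rows.Length, rows[0].Length];
for (int y = 0; y < rows.Length; y++)
    for (int x = 0; x < rows[0].Length; x++)
        level[y, x] = int.Parse(rows[y][x]);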
XNA 4.0 uses
System.IO.Stream stream = TitleContainer.OpenStream("tilename.txt");
See http://msdn.microsoft.com/en-us/library/bb199094.aspx and also http://blogs.msdn.com/b/shawnhar/archive/2010/12/09/reading-files-in-xna-game-studio-4-0.aspx
There doesn't seem to be a lot of info out there, but this blog post does indicate how you can load .txt files through code using XNA.
Hopefully this can help you get the file into memory; from there it should be straightforward to parse it in any way you like.
XNA 3.0 - Reading Text Files on the Xbox
http://www.ziggyware.com/readarticle.php?article_id=69 is probably a good place to start. It covers creating a basic content processor.
