How do I access printer specific fonts in .NET? - printing

This question is along the same lines as Retrieving Device Context from .NET print API...
I have a Datacard 295 embosser / mag stripe encoder. In order to write to the mag stripe or embosser wheel, you must write your text in a special "pseudo-font", which the printer driver recognizes and handles appropriately. There are multiple such fonts, depending on whether you want to write to track 1, track 2, or large or small embosser letters.
Unfortunately, .NET only directly supports OpenType and TrueType fonts.
Unlike the question I referenced, I have no tech guide to tell me what to transmit. The easiest way for me to handle the issue is to find a way to use the printer fonts from .NET, whatever that takes. How can I access and use printer fonts in .NET?

You can't do this directly from .NET, so you have to use Win32 calls on the device context to render using the "pseudo-font". The sample code available here shows how to do this:
' As we're using a device font, we need to write directly on the device context
' as the System.Drawing.Font class which is used to write on a graphics object
' does not support device fonts
Dim hdcLabel As IntPtr
hdcLabel = e.Graphics.GetHdc
' Create the new device font
Dim hfEPC As IntPtr
hfEPC = WinAPI.GDI32.CreateFont(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, "Track1")
' Select the font on the device context, getting a handle on the font that is being replaced
Dim hReplacedFont As IntPtr
hReplacedFont = WinAPI.GDI32.SelectObject(hdcLabel, hfEPC)
' Draw the text using the printer font
Dim intDrawTextReturn As Integer
intDrawTextReturn = WinAPI.User32.DrawText(hdcLabel, "Track 1 Data", ("Track 1 Data").Length, New Rectangle(20, 20, 300, 300), 0)
' Re-Select the original font on the device context
WinAPI.GDI32.SelectObject(hdcLabel, hReplacedFont)
' Dispose of the EPC font
WinAPI.GDI32.DeleteObject(hfEPC)
' Release the device context
e.Graphics.ReleaseHdc(hdcLabel)
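The WinAPI wrapper class used in the sample isn't shown above. As a rough guide only, here is a minimal sketch of what those declarations could look like (hypothetical signatures; the linked sample's real wrapper may differ, for example in how it converts the System.Drawing.Rectangle to a Win32 RECT before calling DrawText):
' Hypothetical sketch of the WinAPI wrapper declarations relied on by the sample above
Imports System.Runtime.InteropServices

Namespace WinAPI
    <StructLayout(LayoutKind.Sequential)>
    Public Structure RECT
        Public Left As Integer
        Public Top As Integer
        Public Right As Integer
        Public Bottom As Integer
    End Structure

    Public Class GDI32
        ' 13 numeric parameters followed by the face name, matching Win32 CreateFont;
        ' passing "Track1" selects the device font by name
        <DllImport("gdi32.dll", CharSet:=CharSet.Auto)>
        Public Shared Function CreateFont(nHeight As Integer, nWidth As Integer, nEscapement As Integer,
            nOrientation As Integer, fnWeight As Integer, fdwItalic As Integer, fdwUnderline As Integer,
            fdwStrikeOut As Integer, fdwCharSet As Integer, fdwOutputPrecision As Integer,
            fdwClipPrecision As Integer, fdwQuality As Integer, fdwPitchAndFamily As Integer,
            lpszFace As String) As IntPtr
        End Function

        <DllImport("gdi32.dll")>
        Public Shared Function SelectObject(hdc As IntPtr, hgdiobj As IntPtr) As IntPtr
        End Function

        <DllImport("gdi32.dll")>
        Public Shared Function DeleteObject(hObject As IntPtr) As Boolean
        End Function
    End Class

    Public Class User32
        ' Win32 DrawText takes a RECT; the sample presumably converts its Rectangle argument first
        <DllImport("user32.dll", CharSet:=CharSet.Auto)>
        Public Shared Function DrawText(hdc As IntPtr, lpString As String, nCount As Integer,
            ByRef lpRect As RECT, uFormat As Integer) As Integer
        End Function
    End Class
End Namespace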

Related

Direct2D: How to save content of ID2D1RenderTarget to an image file?

This question is very similar to that one, but it hasn't been answered yet. My question is: I have a D2D DXGI render target from d2dfactory->CreateDxgiSurfaceRenderTarget(), and I want to save its content to an image file using WIC. I was just reading this and this, so it looks to me like I cannot simply create an ID2D1Bitmap on a WIC render target and use ID2D1Bitmap::CopyFromRenderTarget() to copy from the input render target I want to save, because they use different resources. So here is what I came up with using ID2D1RenderTarget::CreateSharedBitmap():
HRESULT SaveRenderTargetToFile(
ID2D1RenderTarget* pRTSrc,
LPCWSTR uri
)
{
HRESULT hr = S_OK;
ComPtr<IWICBitmap> spWICBitmap;
ComPtr<ID2D1RenderTarget> spRT;
ComPtr<IWICBitmapEncoder> spEncoder;
ComPtr<IWICBitmapFrameEncode> spFrameEncode;
ComPtr<IWICStream> spStream;
//
// Create WIC bitmap to save and associated render target
//
UINT bitmapWidth = static_cast<UINT>(pRTSrc->GetSize().width + .5f);
UINT bitmapHeight = static_cast<UINT>(pRTSrc->GetSize().height + .5f);
HR(m_spWICFactory->CreateBitmap(
bitmapWidth,
bitmapHeight,
GUID_WICPixelFormat32bppPBGRA,
WICBitmapCacheOnLoad,
&spWICBitmap
));
D2D1_RENDER_TARGET_PROPERTIES prop = D2D1::RenderTargetProperties();
prop.pixelFormat = D2D1::PixelFormat(
DXGI_FORMAT_B8G8R8A8_UNORM,
D2D1_ALPHA_MODE_PREMULTIPLIED
);
prop.type = D2D1_RENDER_TARGET_TYPE_DEFAULT;
prop.usage = D2D1_RENDER_TARGET_USAGE_NONE;
HR(m_spD2D1Factory->CreateWicBitmapRenderTarget(
spWICBitmap,
prop,
&spRT
));
//
// Create a shared bitmap from this RenderTarget
//
ComPtr<ID2D1Bitmap> spBitmap;
D2D1_BITMAP_PROPERTIES bp = D2D1::BitmapProperties();
bp.pixelFormat = prop.pixelFormat;
HR(spRT->CreateSharedBitmap(
__uuidof(IWICBitmap),
static_cast<void*>(spWICBitmap.GetRawPointer()),
&bp,
&spBitmap
)); // <------------------------- This fails with E_INVALIDARG
//
// Copy the source RenderTarget to this bitmap
//
HR(spBitmap->CopyFromRenderTarget(nullptr, pRTSrc, nullptr));
//
// Draw this bitmap to the output render target
//
spRT->BeginDraw();
spRT->Clear(D2D1::ColorF(D2D1::ColorF::GreenYellow));
spRT->DrawBitmap(spBitmap);
HR(spRT->EndDraw());
//
// Save image to file
//
HR(m_spWICFactory->CreateStream(&spStream));
WICPixelFormatGUID format = GUID_WICPixelFormat32bppPBGRA;
HR(spStream->InitializeFromFilename(uri, GENERIC_WRITE));
HR(m_spWICFactory->CreateEncoder(GUID_ContainerFormatPng, nullptr, &spEncoder));
HR(spEncoder->Initialize(spStream, WICBitmapEncoderNoCache));
HR(spEncoder->CreateNewFrame(&spFrameEncode, nullptr));
HR(spFrameEncode->Initialize(nullptr));
HR(spFrameEncode->SetSize(bitmapWidth, bitmapHeight));
HR(spFrameEncode->SetPixelFormat(&format));
HR(spFrameEncode->WriteSource(spWICBitmap, nullptr));
HR(spFrameEncode->Commit());
HR(spEncoder->Commit());
HR(spStream->Commit(STGC_DEFAULT));
done:
return hr;
}
Anything wrong with this code? (I'm sure there's a lot :)) Somewhere on MSDN it says that a WIC render target only supports software rendering, while a DXGI surface render target only supports hardware rendering. Is this the reason why the above call to CreateSharedBitmap() fails? How should I save the content of a DXGI surface to an image file with D2D, then?
With some limitations, you can use D3DX11SaveTextureToFile. Use QI on your surface to get the ID3D11Resource.
On the same page, the DirectXTex library is recommended as a replacement: CaptureTexture, then SaveToXXXFile (where XXX is WIC, DDS, or TGA). So that's another option.
Also, if your surface was created as GDI-compatible, you can use IDXGISurface1::GetDC (use QI on your IDXGISurface to get the IDXGISurface1). Saving the DC to a file is left as an exercise for the reader.
Remember to use the Debug Layer for help with cryptic return codes like E_INVALIDARG.
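If you go the DirectXTex route, a minimal sketch might look like this (untested; it assumes you still have access to the ID3D11Device and immediate context behind the surface, and saves to PNG through WIC):
// Untested sketch: save a DXGI surface to PNG with DirectXTex (CaptureTexture + SaveToWICFile)
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <DirectXTex.h>

using Microsoft::WRL::ComPtr;

HRESULT SaveSurfaceToPng(ID3D11Device* device, ID3D11DeviceContext* context,
                         IDXGISurface* surface, const wchar_t* path)
{
    // The DXGI surface is also a D3D11 resource, so QI for that interface
    ComPtr<ID3D11Resource> resource;
    HRESULT hr = surface->QueryInterface(IID_PPV_ARGS(&resource));
    if (FAILED(hr)) return hr;

    // CaptureTexture copies the GPU texture into CPU memory (a ScratchImage)
    DirectX::ScratchImage image;
    hr = DirectX::CaptureTexture(device, context, resource.Get(), image);
    if (FAILED(hr)) return hr;

    // Encode the captured image as PNG through WIC
    return DirectX::SaveToWICFile(*image.GetImage(0, 0, 0), DirectX::WIC_FLAGS_NONE,
                                  DirectX::GetWICCodec(DirectX::WIC_CODEC_PNG), path);
}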
You could try this (I haven't); a rough sketch of these steps is shown after the list:
Make your old DXGISurface.
Make an auxiliary ID2D1DeviceContext render target.
Use ID2D1DeviceContext::CreateBitmapFromDxgiSurface to create an ID2D1Bitmap1 associated to the DXGI surface.
Draw on your DXGISurface. You should get the same on the ID2D1Bitmap1.
Use ID2D1Bitmap1::Map to get a memory pointer to the pixel data.
Copy the pixel data to a file, or to a WIC bitmap for encoding (JPEG, TIFF, etc.).
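Roughly, those steps might look like this (an untested sketch: d2dContext and surface stand in for your own ID2D1DeviceContext and IDXGISurface, and an extra CPU-readable staging bitmap is used because a GPU-backed bitmap cannot be mapped directly):
// Untested sketch of the CreateBitmapFromDxgiSurface + Map approach described above
#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT ReadBackDxgiSurface(ID2D1DeviceContext* d2dContext, IDXGISurface* surface)
{
    // Wrap the DXGI surface in a D2D bitmap (the CreateBitmapFromDxgiSurface step)
    D2D1_BITMAP_PROPERTIES1 surfaceProps = D2D1::BitmapProperties1(
        D2D1_BITMAP_OPTIONS_NONE,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
    ComPtr<ID2D1Bitmap1> surfaceBitmap;
    HRESULT hr = d2dContext->CreateBitmapFromDxgiSurface(surface, &surfaceProps, &surfaceBitmap);
    if (FAILED(hr)) return hr;

    // A GPU bitmap cannot be mapped, so copy it into a CPU-readable staging bitmap first
    D2D1_BITMAP_PROPERTIES1 stagingProps = D2D1::BitmapProperties1(
        D2D1_BITMAP_OPTIONS_CPU_READ | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
    ComPtr<ID2D1Bitmap1> staging;
    hr = d2dContext->CreateBitmap(surfaceBitmap->GetPixelSize(), nullptr, 0, &stagingProps, &staging);
    if (FAILED(hr)) return hr;
    hr = staging->CopyFromBitmap(nullptr, surfaceBitmap.Get(), nullptr);
    if (FAILED(hr)) return hr;

    // Map gives a CPU pointer to the pixels; hand mapped.bits / mapped.pitch to a WIC encoder
    D2D1_MAPPED_RECT mapped = {};
    hr = staging->Map(D2D1_MAP_OPTIONS_READ, &mapped);
    if (FAILED(hr)) return hr;
    // ... encode or write the pixels here ...
    return staging->Unmap();
}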
Perhaps this (it ran successfully for me):
D2DFactory->CreateHwndRenderTarget(D2D1::RenderTargetProperties(D2D1_RENDER_TARGET_TYPE_SOFTWARE, D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED)), ……)
Your render target should be created with the SOFTWARE type, the same as the WIC render target.

iTextSharp preserve html formatting on pdf

I am using some basic styles in CKEditor (bold, italic, etc.) to allow my users to style their text for report writing.
When this string is passed to iTextSharp I strip the HTML, otherwise the tags are printed literally on the PDF. I am removing it with
Regex.Replace(item.DevelopmentPractice.ToString(), @"<[^>]*>|&nbsp;", String.Empty)
Is there a way to format the text on the PDF so that the bold is preserved without displaying the literal
<strong></strong>
UPDATE
I have provided full code below as requested.
public FileStreamResult pdf(int id)
{
// Set up the document and the Memory Stream to write it to and create the PDF writer instance
MemoryStream workStream = new MemoryStream();
Document document = new Document(PageSize.A4, 30, 30, 30, 30);
PdfWriter.GetInstance(document, workStream).CloseStream = false;
// Open the pdf Document
document.Open();
// Set up fonts used in the document
Font font_body = FontFactory.GetFont(FontFactory.HELVETICA, 10);
Font font_body_bold = FontFactory.GetFont(FontFactory.HELVETICA, 10, Font.BOLD);
Chunk cAreasDevelopmentHeading = new Chunk("Areas identified for development of practice", font_body_bold);
Chunk cAreasDevelopmentComment = new Chunk(item.DevelopmentPractice != null ? Regex.Replace(item.DevelopmentPractice.ToString(), @"<[^>]*>|&nbsp;", String.Empty) : "", font_body);
Paragraph paraAreasDevelopmentHeading = new Paragraph();
paraAreasDevelopmentHeading.SpacingBefore = 5f;
paraAreasDevelopmentHeading.SpacingAfter = 5f;
paraAreasDevelopmentHeading.Add(cAreasDevelopmentHeading);
document.Add(paraAreasDevelopmentHeading);
Paragraph paraAreasDevelopmentComment = new Paragraph();
paraAreasDevelopmentComment.SpacingBefore = 5f;
paraAreasDevelopmentComment.SpacingAfter = 15f;
paraAreasDevelopmentComment.Add(cAreasDevelopmentComment);
document.Add(paraAreasDevelopmentComment);
document.Close();
byte[] byteInfo = workStream.ToArray();
workStream.Write(byteInfo, 0, byteInfo.Length);
workStream.Position = 0;
// Setup to Download
HttpContext.Response.AddHeader("content-disposition", "attachment; filename=supportform.pdf");
return File(workStream, "application/pdf");
}
This really is not the best way to do HTML to PDF, iText or no iText. Try to look for a different method: you are not actually converting HTML to PDF, you are inserting scraped text into the PDF using Chunks.
The most common way to do HTML-to-PDF with iText seems to be HTMLWorker (I think it might be XMLWorker in newer versions), but people complain about that too; see this. It looks like you are building the PDF out of plain iText elements and want to use HTML within those elements, and I'm guessing that will be very, very hard.
In the linked HTMLWorker example, have a look at the structure of the program. They do an HTML-to-PDF conversion, but if that fails, they create the PDF using the other iText methods, like Paragraph and Chunk. There they also give the Chunk some styling.
I guess you would have to parse the incoming HTML, divide it into pieces yourself, convert the <strong> parts to Chunks with styling, and only then vomit them onto the PDF. Now imagine doing that with a data source like CKEditor: even with a very strict ACF it would be a nightmare. If anyone knows of any other way than this, I want to know too (I do basically CKEditor-to-PDF for a living)!
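For reference, a minimal sketch of the XMLWorker route (untested; it targets iTextSharp 5.x with the itextsharp.xmlworker package and assumes you keep the PdfWriter instance returned by PdfWriter.GetInstance instead of discarding it):
// Untested sketch: feed the CKEditor fragment to XMLWorker instead of stripping the tags
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;
using iTextSharp.tool.xml;

public static class HtmlPdfHelper
{
    // Parses an HTML fragment and appends the resulting elements to the open document,
    // so <strong>...</strong> becomes bold text instead of literal tags.
    public static void AddHtmlFragment(Document document, PdfWriter writer, string html)
    {
        using (var reader = new StringReader(html ?? string.Empty))
        {
            XMLWorkerHelper.GetInstance().ParseXHtml(writer, document, reader);
        }
    }
}
You would call this in place of the Regex.Replace/Chunk pair, e.g. HtmlPdfHelper.AddHtmlFragment(document, writer, item.DevelopmentPractice); note that XMLWorker expects reasonably well-formed XHTML, which CKEditor normally produces.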
Do you have any options, such as creating your own editor or using some other PDF technique? I use wkhtmltopdf but my situation is very different. I would use PrinceXML but it's too expensive.

pixFRET - running the plugin for time lapse images // looping?

I just recently started to work with ImageJ (and thus do not have much experience with macro programming) to analyze my microscopy pictures.
In order to generate pixel-by-pixel FRET images that are corrected for spectral bleed-through, I am using the pixFRET plugin. This plugin requires a stack of 3 images to work: FRET, Donor, Acceptor. So far I have to open every picture myself, and this is REALLY inconvenient for large time stacks (> 1000 images). I am looking for a way to loop the plugin or create some kind of macro to do this.
A short description of my Data structure:
workfolder\filename_t001c1 (Channel 1 Image - Donor at time point 001),
filename_t001c2 (Channel 2 Image - FRET at time point 001),
...t001c3 (can be neglected)
...t001c4 (Channel 4 Image - Acceptor at time point 001).
I would have to create a stack of C2/C1/C4 at each time point that is automatically analyzed by pixFRET (with set parameters) and the result should be saved in an output folder.
I am grateful for every suggestion, as my biggest problem is looping the whole stack generation/pixFRET analysis (I can only do this manually right now).
Thanks
David
I did not find a way to directly include the parameters and commands from the pixFRET plugin. However, here is a workaround that uses IJ_Robot to add these commands. I also included some code to perform an alignment of the camera channels based on the first images of the time series.
// Macro for creating time-resolved pixFRET images with an alignment of the two cameras used
// a separate settings file is required for pixFRET -> put it into the same folder as the pixFRET plugin
// the background region has to be set manually in this macro
// IJ_Robot uses cursor movements - DO NOT move the cursor while executing the macro + adjust the IJ_Robot coordinates when changing the resolution/system.
dir = getDirectory("Select Directory");
list = getFileList(dir);
//single alignment
run("Image Sequence...", "open=[dir] number=2 starting=1 increment=1 scale=100 file=[] or=[] sort");
rename(File.getName(dir));
WindowTitle = getTitle();
rename(WindowTitle + toString(" Main"));
MainWindow = getTitle();
NSlices = getSliceNumber();
xValue = getWidth() / 2;
yValue = getHeight() / 2;
//setTool("rectangle");
makeRectangle(0, 0, xValue, yValue);
run("Align slices in stack...", "method=5 windowsizex="+toString(xValue*2-20)+" windowsizey="+toString(yValue*2-20)+" x0=10 y0=10 swindow=0 ref.slice=1 show=true");
selectWindow("Results");
XShift=getResult("dX", 0);
YShift=getResult("dY", 0);
File.makeDirectory(toString(File.getParent(dir))+toString("\\")+"test"+" FRET");
for(i=0;i<list.length;i+=4){
open(dir+list[i+1]);
run("Translate...", "x=XShift y=YShift interpolation=None stack");
open(dir+list[i]);
open(dir+list[i+3]);
run("Translate...", "x=XShift y=YShift interpolation=None stack");
wait(1000);
run("Images to Stack", "name=Stack title=[] use");
selectWindow("Stack");
makeRectangle(15, 147, 82, 75); //background region
run("PixFRET...");
run("IJ Robot", "order=Left_Click x_point=886 y_point=321 delay=500 keypress=[]");
run("IJ Robot", "order=Left_Click x_point=874 y_point=557 delay=500 keypress=[]");
selectWindow("NFRET (x100) of Stack");
save(toString(File.getParent(dir))+toString("\\")+"test"+" FRET"+toString(i) +".tif");
selectWindow("Stack");
close();
selectWindow("FRET of Stack");
close();
selectWindow("NFRET (x100) of Stack");
close();
run("IJ Robot", "order=Left_Click x_point=941 y_point=57 delay=300 keypress=[]");
}
Thanks for your help, Jan. If you can think of a way to call these pixFRET commands directly rather than using IJ_Robot, please let me know.
Take this tutorial from Fiji (Fiji Is Just ImageJ) as a starting point, and use the macro recorder (Plugins > Macros > Record...) to get the necessary commands.
Your macro code could then look something like this:
function pixfret(path, commonfilename) {
open(path + commonfilename + "c2");
open(path + commonfilename + "c1");
open(path + commonfilename + "c4");
run("Images to Stack", "name=Stack title=[] use");
run("PixFRET"); // please adjust this to your needs
}
setBatchMode(true);
n_timepoints = 999;
dir = "/path/to/your/images/";
for (i = 1; i <= n_timepoints; i++)
    pixfret(dir, "filename_t" + IJ.pad(i, 3));
setBatchMode(false);
Hope that helps.

How to set the error correction level for a QR code when using the new createBitmap method

This question is in reference to the API documentation link, http://www.blackberry.com/developers/docs/7.0.0api/net/rim/device/api/barcodelib/BarcodeBitmap.html
They specify that the old method
public static Bitmap createBitmap(ByteMatrix byteMatrix,
int maxBitmapSizeInPixels)
is deprecated.
But by using the new method,
public static Bitmap createBitmap(ByteMatrix byteMatrix)
they haven't provided a way to specify the error correction level for the QR code in MultiFormatWriter. I haven't been able to find one either, looking through the various member functions.
Has anyone tried this?
Thanks for your help.
Here is my code; I have checked with my phone, and the error correction level is set correctly.
Hashtable hints = new Hashtable();
switch (comboBox1.Text)
{
case "L":
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.L);
break;
case "Q":
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.Q);
break;
case "H":
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.H);
break;
default:
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);
break;
}
MultiFormatWriter mw = new MultiFormatWriter();
ByteMatrix bm = mw.encode(data, BarcodeFormat.QR_CODE, size, size, hints);
Bitmap img = bm.ToBitmap();
pictureBox1.Image = img;
When encoding, you can pass in hints
Map<EncodeHintType, Object> hints = new Hashtable<EncodeHintType, Object>();
Add the error correction setting to the hints (for example to level M)
hints.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);
ZXing uses error correction level L by default (the lowest, meaning the QR Code will still be readable even after a max of 7% damage)
I just looked at the documentation.
It says to use createBitmap(ByteMatrix byteMatrix) in conjunction with MultiFormatWriter. That class has the method encode(String contents, BarcodeFormat format, int width, int height, Hashtable hints), where you can specify the width, height and error correction level.
To specify the error correction level, put the key EncodeHintType.ERROR_CORRECTION with the value new Integer(level) into the hints hashtable.
Unfortunately I didn't find any constants for these values as described here, but you could probably find them in the ZXing sources.
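Putting these answers together, an end-to-end sketch for the BlackBerry case might look like this (untested; the contents and size are placeholders, and the ZXing classes bundled with the BlackBerry 7 API may differ slightly, e.g. accepting new Integer(level) instead of the ErrorCorrectionLevel constant):
// Untested sketch; types follow the ZXing build bundled with the BlackBerry 7 API
Hashtable hints = new Hashtable();
hints.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);

MultiFormatWriter writer = new MultiFormatWriter();
// Width, height and error correction are all controlled here, in encode()...
ByteMatrix matrix = writer.encode("some data", BarcodeFormat.QR_CODE, 200, 200, hints);

// ...because the non-deprecated factory method only takes the matrix.
Bitmap qrBitmap = BarcodeBitmap.createBitmap(matrix);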

get handle to current active window in OpenCV

Are there OpenCV equivalents of the GLUT glutGetWindow()/glutSetWindow() functions, which allow the currently active window to be identified and switched from your own code?
Basically, I'd like to be able to identify the currently active window from within a mouse callback function registered with all windows, and have it call another processing function with different parameters for each window.
Any help would be appreciated.
There's no function to do that in OpenCV; however, the signature of cvSetMouseCallback() allows you to register one callback per window.
You will have to register individual callbacks to achieve what you need to do.
Here is the complete list of features supported by the HIGHGUI module.
Another (hardcore) alternative is to dive into the native API of the OS you are working with and search for methods that accomplish this. The problem is that this solution is not cross-platform.
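To illustrate the per-window callback approach, here is a small sketch using the C++ API (cv::setMouseCallback); the window names and the handler body are just placeholders:
// Sketch: one shared mouse handler, registered per window, told apart via userdata
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <string>

static void onMouse(int event, int x, int y, int /*flags*/, void* userdata)
{
    const std::string* name = static_cast<const std::string*>(userdata);
    if (event == cv::EVENT_LBUTTONDOWN)
        std::cout << "click in " << *name << " at " << x << "," << y << std::endl;
}

int main()
{
    static std::string win1 = "figure 1", win2 = "figure 2";
    cv::namedWindow(win1);
    cv::namedWindow(win2);
    cv::setMouseCallback(win1, onMouse, &win1); // same callback, different userdata
    cv::setMouseCallback(win2, onMouse, &win2);
    cv::imshow(win1, cv::Mat::zeros(200, 200, CV_8UC3));
    cv::imshow(win2, cv::Mat::zeros(200, 200, CV_8UC3));
    cv::waitKey(0);
    return 0;
}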
Actually, cvGetWindowHandle(const char* windowname) is available in opencv2/highgui/highgui_c.h. This was still available as of OpenCV 4, when this answer was written.
I suggest that you add
#include <opencv2/highgui/highgui_c.h>
and use
cvGetWindowHandle(window_name_.c_str())
Including <opencv2/highgui/highgui_c.h> could be a solution, but it really won't let you move on to OpenCV 4+.
For those of you who are still using OpenCV in an MFC dialog box, there is a different solution.
FindWindow returns the parent window handle, and MFC works with the child window, so you'll need both FindWindow and FindWindowEx.
New source code for MFC and OpenCV 4+:
namedWindow(windowname, WINDOW_AUTOSIZE);
////// This will work on opencv 4.X //////
HWND hParent = (HWND)FindWindow(NULL, windowname.c_str());
HWND hWnd = (HWND)FindWindowEx(hParent, NULL, L"HighGUI class", NULL);
::SetParent(hWnd, GetDlgItem(IDC_PICTURE)->m_hWnd);
::ShowWindow(hParent, SW_HIDE);
CWnd* pWnd = new CWnd();
pWnd->CWnd::Attach(hParent);
Maybe you're still in trouble because the string to LPCWSTR conversion fails and hParent returns NULL. There are many ways to convert a string to LPCWSTR, but since you are using MFC, try:
namedWindow(windowname, WINDOW_AUTOSIZE);
////// This will work on opencv 4.X //////
CString CstrWindowname = windowname.data();
HWND hParent = (HWND)FindWindow(NULL, CstrWindowname);
HWND hWnd = (HWND)FindWindowEx(hParent, NULL, L"HighGUI class", NULL);
::SetParent(hWnd, GetDlgItem(IDC_PICTURE)->m_hWnd);
::ShowWindow(hParent, SW_HIDE);
CWnd* pWnd = new CWnd();
pWnd->CWnd::Attach(hParent);
The new code should replace this old code
namedWindow(windowname, WINDOW_AUTOSIZE);
///// OLD version. Used on opencv 3.X on MFC Dialog Box /////
HWND hWnd = (HWND) cvGetWindowHandle(windowname.c_str());
HWND hParent = ::GetParent(hWnd);
::SetParent(hWnd, GetDlgItem(IDC_PICTURE)->m_hWnd);
::ShowWindow(hParent, SW_HIDE);
CWnd* pWnd = new CWnd();
pWnd->CWnd::Attach(hParent);
Well, there is no OpenCV API for retrieving the focused window, but the OS GUI shell usually provides one. Using this approach is better because mouse callbacks can't detect ALT-TAB or programmatic focusing.
Here's some example Python code for Windows that gets the job done:
import ctypes
import cv2

user32 = ctypes.windll.user32

def exists_cv_window(title):
    # seems to work on python-opencv version 4.6.0
    return cv2.getWindowProperty(title, cv2.WND_PROP_VISIBLE) != 0.0

def get_active_cv_window():
    focused_window_handle = user32.GetForegroundWindow()
    length = user32.GetWindowTextLengthW(focused_window_handle)
    # reserve room for the terminating null and read the title as UTF-16
    buffer = ctypes.create_unicode_buffer(length + 1)
    user32.GetWindowTextW(focused_window_handle, buffer, length + 1)
    active_window_title = buffer.value
    if exists_cv_window(active_window_title):
        return active_window_title

# example use case for the function
def main():
    im1 = cv2.imread('cookie.png')
    im2 = cv2.imread('cat.png')
    cv2.imshow('figure 1', im1)
    cv2.imshow('figure 2', im2)
    while True:
        key = cv2.waitKey(10)
        if key == 23:  # CTRL + W
            title = get_active_cv_window()
            if title is not None:
                cv2.destroyWindow(title)
# in the example above the ability to target the active window allows applying
# the CTRL + W shortcut to a specific figure
It's a shame this is not part of OpenCV
