Inscriber Technology Via Builder and Delphi - alpha channel support

I'm working with Via Builder, from Inscriber Technology. This app merges a TGA sequence animation into a single .via file, which makes loading large sequences much faster, since the file is optimized. There are plugins to use this with some Adobe products.
I'm working in Delphi, and my problem is that I can't get back the original alpha channel from the frames. Using their VIACODECLib_TLB library, I have the following function:
function GetFrameBitmap(Frame: Integer): Integer;
from the IViaFile interface. This function is supposed to return a handle to a frame bitmap from the original sequence, so I expected the following code to work:
var
  viaObject: IViaFile;
  bmp: TBitmap;
  index: Integer;
...
  bmp := TBitmap.Create;  // the bitmap must exist before assigning a handle
  bmp.Handle := viaObject.GetFrameBitmap(index);
But the resulting bitmap is the original frame with no alpha channel. Actually, its alpha channel is zero for the entire image.
Assuming I was doing something wrong, I tried using the GetDIBits function to be sure there was an alpha channel somewhere. I allocated memory large enough to store the bitmap assuming it had 4 channels and called GetDIBits. I got the same result as before: the normal frame, with the alpha channel zero for the entire image.
Just to note, Inscriber (whose forums are dead) claims that Via Builder has full alpha support. I know someone who managed to load the frames correctly in C++ using the GetDIBits function, but "translating" that code to Delphi didn't work.
Any help would be much appreciated.
Thank you.

I suggest you take a closer look at your colleague's C++ code that supposedly works. You probably missed some detail. How much of the code was Windows API, and how much of it was some vendor-specific graphics code? The API stuff should be a cinch to translate to Delphi.
You might find that Delphi's TBitmap class doesn't support transparency, so you would need to use some other graphic-support library instead of plain old GDI. But if you're fetching the raw bitmap data as with GetDIBits, you should at least be able to see that the alpha-channel data is there. (You'd still need to find a way of displaying the bitmap properly, but at least you'd know you had the right data to start with.)
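If it helps, here's a minimal Delphi sketch of that sanity check, assuming the integer returned by GetFrameBitmap really is a plain HBITMAP: it asks GDI for a 32-bit BGRA DIB via GetDIBits and scans the alpha bytes.

uses
  Windows;

// Minimal sketch: force a 32-bit BGRA DIB out of the frame handle and
// report whether any pixel has a non-zero alpha byte.
function FrameHasAlpha(FrameHandle: HBITMAP): Boolean;
var
  dc: HDC;
  bm: Windows.TBitmap;   // the GDI BITMAP record, not Graphics.TBitmap
  bi: TBitmapInfo;
  pixels: array of DWORD;
  i: Integer;
begin
  Result := False;
  if GetObject(FrameHandle, SizeOf(bm), @bm) = 0 then
    Exit;

  FillChar(bi, SizeOf(bi), 0);
  bi.bmiHeader.biSize := SizeOf(TBitmapInfoHeader);
  bi.bmiHeader.biWidth := bm.bmWidth;
  bi.bmiHeader.biHeight := -bm.bmHeight;  // negative height = top-down rows
  bi.bmiHeader.biPlanes := 1;
  bi.bmiHeader.biBitCount := 32;          // request BGRA regardless of the source format
  bi.bmiHeader.biCompression := BI_RGB;

  SetLength(pixels, bm.bmWidth * bm.bmHeight);
  dc := GetDC(0);
  try
    if GetDIBits(dc, FrameHandle, 0, bm.bmHeight, @pixels[0], bi, DIB_RGB_COLORS) = 0 then
      Exit;
  finally
    ReleaseDC(0, dc);
  end;

  // In a 32-bit DIB the alpha value is the high byte of each BGRA pixel.
  for i := 0 to High(pixels) do
    if (pixels[i] shr 24) <> 0 then
    begin
      Result := True;
      Exit;
    end;
end;

If this reports no alpha either, the frames coming out of the codec really are opaque, and the difference from the working C++ code probably lies in how the frame is requested rather than in the GDI side.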

Related

(Roblox scripting) help needed for "GetMaterialColor" or "SetMaterialColor"

I am new to Roblox scripting, and I am working on a game. I am trying to make an exploration game with multiple planets. I want the colors on the surfaces of the planets to vary, but I also want to use smooth terrain, as it is easier to use and looks nice. From reading a bit online, I have figured out that I need to use "GetMaterialColor" or "SetMaterialColor". However, "SetMaterialColor", the one I need specifically, requires two bits of information: the material and the color.
The issue comes from the "Material" part of this, as I have no idea how to make the script recognize which material I want to change. I tried multiple things, including but not limited to:
(grass, #,#,#)
(grass) (#,#,#)
("Grass"), (#,#,#)
("Grass", #,#,#)
or even just (#,#,#), without trying to get a specific material at all
So yeah, I need some help.
Here is the code:
local function onTouch(hit)
game.Workspace.Terrain:SetMaterialColor
end
script.Parent.Touched:connect(onTouch)
(there should be stuff after SetMaterialColor, that is what I need help with)
If you read the documentation on Terrain:SetMaterialColor(), you'll see that the first argument is a Material, which is an Enum. So the method expects an Enum value (or a number, to be more accurate), not a string denoting the material.
At the same time, the second argument is a Color3, so a bare (#,#,#) won't do; wrapping the values in the constructor Color3.fromRGB(#,#,#) will. If you are ever confused about what a method returns or expects, try referring to its documentation on https://developer.roblox.com/.
Here's an example of correct usage:
workspace.Terrain:SetMaterialColor(Enum.Material.Grass, Color3.fromRGB(123,123,123))
And of course, use Event:Connect() instead of the deprecated Event:connect().
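Putting that together with the Touched handler from the question, a minimal sketch (the RGB values are placeholders):

local function onTouch(hit)
    -- Tint the Grass material; swap in whatever color your planet needs
    workspace.Terrain:SetMaterialColor(Enum.Material.Grass, Color3.fromRGB(85, 170, 85))
end

script.Parent.Touched:Connect(onTouch)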

Measuring the height of text according to CSS rules – without a browser rendering – for use with a virtualized list, to specify heights in advance

I've been implementing a chat client in Electron (Chrome) and React. Our top priority is speed. It behooves us, then, to use a virtualized list component (also known as "buffered render" or "window render"). We've explored react-virtualized, react-window, and react-infinite, among others.
One issue all these components have in common is that if supporting list elements of variable heights, the heights need to be known in advance. Now, some chats are very long, and others are very short, so that presents a challenge for us. (Images and video are easy thanks to EXIF data and ffprobe).
So, we're faced with the challenge of measuring heights while also straining to be extremely performant. One obvious technique is to put the elements in a browser container off-viewport, perform the measurements, and then render the list. But that hurts us on the performance front. Libraries like react-virtualized/CellMeasurer (which is no longer maintained by the original author) and react-window make use of this technique, built into the library, but it is somewhat slow as well as unreliable. A similar idea that might be more performant would be to use a background Electron BrowserWindow to do the rendering and measuring, but my intuition is that it wouldn't be much faster.
I submit that there must be some solved way to figure out string height in advance, according to word wrap, max width, and font rules.
My current idea is to use a library like string-pixel-width in order to calculate row heights as soon as we get the text data through our API. Basically, the library uses this piece of code to generate a map of character widths [*]. Then, once we know how wide each piece of text is, we split it into lines whenever a line reaches the computed maximum row width, and finally infer the list element height from the row count. It's going to require a little algorithmic fiddling because of break-word, but there are libraries to help with that – css-line-break seems promising.
[*] We would have to modify it a bit to account for all Unicode character ranges, but that is trivial.
Some options I haven't fully explored yet include the python weasyprint project and the facebook-yoga project. I'm open to your ideas!
Using the canvas capabilities to measure text could solve this problem in a performant way.
Electron's canvas text is calculated the same way as regular text; there are some differences in rendering, especially with regard to anti-aliasing, but that does not affect the measurement.
You can get the TextMetrics for any text with:
const canvas = document.getElementById('canvas')
const ctx = canvas.getContext('2d')
// Set your font parameters
// Docs: https://developer.mozilla.org/en-US/docs/Web/CSS/font
ctx.font = "30px Arial";
// returns a TextMetrics object
// Docs: https://developer.mozilla.org/en-US/docs/Web/API/TextMetrics
const text = ctx.measureText('Hello world')
This does not handle line breaks and word wrapping. For that, I would recommend the text package from PixiJS, which already uses this method. In addition, you could fork the source (MIT licence) and modify it for additional performance by enabling the experimental Chromium TextMetrics features in Electron and making use of them.
This can be done when creating a window
new BrowserWindow({
  // ... rest of your window config ...
  webPreferences: {
    experimentalFeatures: true
  }
})
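As for the line-break/word-wrap part, here's a rough sketch (not the PixiJS implementation) of turning measureText into a height estimate by greedily wrapping words at a maximum width; maxWidth and lineHeight are whatever your CSS dictates:

// Rough sketch: estimate rendered text height by greedy word wrapping.
// Assumes ctx.font has already been set to match your CSS font rules.
function estimateTextHeight(ctx, text, maxWidth, lineHeight) {
  const words = text.split(/\s+/)
  let lines = 1
  let current = ''
  for (const word of words) {
    const candidate = current ? current + ' ' + word : word
    if (current && ctx.measureText(candidate).width > maxWidth) {
      lines += 1      // start a new line when the candidate would overflow
      current = word
    } else {
      current = candidate
    }
  }
  return lines * lineHeight
}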
Now to the part I mentioned in the comments. Since I don't know your codebase: your calculations and everything else should be happening in the renderer process. If that is not the case, you should definitely move your code from the main process over to the renderer process. If you do file access or anything else Node-specific, you should still do this, but in a so-called preload script.
It's an additional parameter in webPreferences:
webPreferences: {
  preload: path.join(__dirname, 'preload.js'),
  experimentalFeatures: true
}
In this script you have full access to Node, including native Node modules, without the use of IPC calls. The reason I discourage IPC calls for any function that gets called many times is that IPC is slow by nature: you need to serialize/deserialize to make it work. Electron's default behaviour is even worse, since it uses JSON, unless you use ArrayBuffers.
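A minimal sketch of such a preload script, assuming contextIsolation is disabled (the default in Electron versions of that era); readChatLog is a hypothetical helper, not part of any API:

// preload.js – runs in the renderer, but with full Node access.
const fs = require('fs')
const path = require('path')

// Hypothetical helper exposed to the page so it can read files directly,
// avoiding IPC round trips to the main process.
window.readChatLog = function (name) {
  return fs.readFileSync(path.join(__dirname, 'logs', name), 'utf8')
}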

Override the stringValue property of AVMetadataMachineReadableCodeObject or create an alternative that outputs binary data

Okay, I get it: this is a possible duplicate of Read binary QR Code with AVFoundation, but I'll try to tackle the issue from a different angle.
I'm trying to scan a barcode (in this case, Aztec) in my Swift app. It works for barcodes that have regular string data encoded. For my app, though, I need to be able to scan a certain type of barcode (read more about this on SO) that stores its data in binary format.
Sadly, stringValue of AVMetadataMachineReadableCodeObject is (per Apple's docs)
The value of this property is an NSString created by decoding the binary payload according to the format of the machine-readable code
so the output gets garbled, truncated, and unusable (it's a zlib-compressed data stream).
My question is: is there a way to get to this binary payload other than stringValue? Can I override part of AVMetadataMachineReadableCodeObject and add my own binaryValue or something like it?
I'm willing to try anything, but I'd love this to work natively without resorting to ZXing or some other library, as this is a pretty compact project. If you know this to be working with a library, feel free to add a comment though.
Disclaimer: I'm coding this in Swift, but I think I could manage to abstract this from Obj-C code as well, if that is what you know.
With ZXing I found a solution (still a work in progress, but I managed to successfully inflate the deflated content using libz).
Go to ZXAztecDecoder.m: in the method + (NSString *)encodedData:(ZXBoolArray *)correctedBits you can use the code from + (int)readCode:(ZXBoolArray *)rawbits startIndex:(int)startIndex length:(int)length to produce unsigned chars.
So wherever I found a line of code adding characters to the NSString, I also filled a buffer of unsigned char (cast from the int returned by the readCode:rawBits:startIndex:length method).
I used the Aztec barcode from the question about parsing Deutsche Bahn tickets with Python that you posted. I was able to inflate the content and everything. The only problem is that inflate does not work for my railway company... so I'm left with figuring out what kind of algorithm should inflate a stream that starts with 55 4E or 55 8E...
Good luck! The other questions of yours that I found helped a lot, so thanks.
EDIT:
Some code that might help: https://gist.github.com/johanhar/a421a14bef2f06ee2340
Inspired by answers to this question and other sites, I have created a gist that lets you extract the binary content of a QR code or Aztec code without using private APIs or any other library. It is an AVMetadataMachineReadableCodeObject extension exposing a binaryValue property.
However, it only runs on iOS 11 and later, because it relies on CIQRCodeDescriptor.
It is available here: https://gist.github.com/PetrusM/267e2ee8c1d8b5dca17eac085afa7d7c
For QR codes, it works only with 100% binary ones, but if they contain further parts, you can easily adapt it.
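The core of that approach is the descriptor property added in iOS 11; a rough sketch (the mode/length header stripping needed for mixed QR payloads is left out):

import AVFoundation
import CoreImage

// Rough sketch (iOS 11+): read the error-corrected payload from the
// barcode descriptor instead of going through stringValue.
func binaryValue(of object: AVMetadataMachineReadableCodeObject) -> Data? {
    switch object.descriptor {
    case let qr as CIQRCodeDescriptor:
        // Still contains the QR mode/length header bytes, which you need
        // to strip for a purely binary payload.
        return qr.errorCorrectedPayload
    case let aztec as CIAztecCodeDescriptor:
        return aztec.errorCorrectedPayload
    default:
        return nil
    }
}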
As it turns out, the raw binary data does exist in the AVMetadataMachineReadableCodeObject and can be read into an NSData object, but the method comes with a catch.
In my case I was trying to read an Aztec barcode, but it seems it works with anything iOS can detect. This SO answer has the solution:
Read binary QR Code with AVFoundation

Get Direct3D device from Direct2D render target

I'm using Direct2D to render my user interface.
What I would like is to be able to profile my UI rendering more easily (since I'm using several panels, using the Graphics Debugger is a bit cumbersome).
Since I know that Direct2D uses a Direct3D device under the hood (specifically a D3D11 device with feature level 10_0), I'd like to know if it is possible to retrieve either an ID3D10Device or an ID3D11Device instance from an ID2D1RenderTarget or ID2D1Factory object.
In that case I would easily be able to attach a timestamp query on the BeginDraw/EndDraw calls.
I tried several QueryInterface calls, but none of them have been successful so far.
An interesting undocumented secret is that any ID2D1RenderTarget you get from ID2D1Factory will also be an ID2D1DeviceContext (it seems to be intentional from what I've gathered, just accidentally undocumented?). Just call IUnknown::QueryInterface() to snag it. From there you can toy around with methods like GetDevice() and GetTarget(). If you can get the target then you may be able to weasel your way to obtaining the IDXGISurface which supports IDXGIDeviceSubObject::GetDevice() https://msdn.microsoft.com/en-us/library/windows/desktop/bb174529(v=vs.85).aspx (I haven't verified this part)
And in Win10 it looks like ID2D1Device2 gives you precisely what you want: GetDxgiDevice() https://msdn.microsoft.com/en-us/library/windows/desktop/dn917489(v=vs.85).aspx . So in that case, your ID2D1RenderTarget is cast to an ID2D1DeviceContext via IUnknown::QueryInterface(), and then you get an ID2D1Device via ID2D1DeviceContext::GetDevice() and then cast it to an ID2D1Device2 via another call to IUnknown::QueryInterface().
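A sketch of that chain (untested; it assumes the Windows 10 SDK headers and a render target that really is backed by D3D11):

#include <d2d1_3.h>
#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Walk from an ID2D1RenderTarget to the underlying D3D11 device.
// Needs Windows 10 for ID2D1Device2::GetDxgiDevice; error handling is minimal.
HRESULT D3D11DeviceFromRenderTarget(ID2D1RenderTarget* rt, ID3D11Device** d3dDevice)
{
    ComPtr<ID2D1DeviceContext> ctx;
    HRESULT hr = rt->QueryInterface(IID_PPV_ARGS(&ctx));   // the undocumented cast
    if (FAILED(hr)) return hr;

    ComPtr<ID2D1Device> d2dDevice;
    ctx->GetDevice(&d2dDevice);

    ComPtr<ID2D1Device2> d2dDevice2;
    hr = d2dDevice.As(&d2dDevice2);
    if (FAILED(hr)) return hr;

    ComPtr<IDXGIDevice> dxgiDevice;
    hr = d2dDevice2->GetDxgiDevice(&dxgiDevice);
    if (FAILED(hr)) return hr;

    // The DXGI device and the D3D11 device are the same underlying object,
    // so a plain QueryInterface gets us the rest of the way.
    return dxgiDevice->QueryInterface(IID_PPV_ARGS(d3dDevice));
}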

cvRetrieveFrame intricacies - openCV

The OpenCV documentation mentions that "the returned image (by cvRetrieveFrame) should not be released or modified by the user"...
Link: http://opencv.willowgarage.com/documentation/c/highgui_reading_and_writing_images_and_video.html#retrieveframe
I am trying to debug my code, which involves the following steps:
Retrieve frame from video using cvRetrieveFrame()
Do some processing on the frame
output results
My instinct says that something is wrong with cvRetrieveFrame(), because if I manually load frames using cvLoadImage, the program works fine. But I am not getting the same results when using cvRetrieveFrame().
Since the documentation mentions such a restriction, is there a reason for it? And are there any alternatives?
Have a great day
Before you call this function, you must have called cvGrabFrame() to grab the frame. cvRetrieveFrame() then does whatever processing is still needed (such as the decompression stage of the codec) and returns an IplImage* pointer that points to an internal buffer, so do not rely on that image: it will be overwritten the next time you call cvGrabFrame(). If you need to keep or modify the frame, make your own copy of it first.
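As a concrete alternative, just copy the frame before touching it. A minimal sketch using the old C API ("video.avi" is a placeholder; the second argument to cvRetrieveFrame is the stream index from the 2.x C API and can be dropped on older versions):

#include <opencv/cv.h>
#include <opencv/highgui.h>

/* Minimal sketch: clone the frame returned by cvRetrieveFrame() before
   processing it, since that buffer belongs to the capture object and is
   overwritten by the next grab. */
int main(void)
{
    CvCapture* capture = cvCaptureFromFile("video.avi");
    if (!capture)
        return 1;

    while (cvGrabFrame(capture)) {
        IplImage* frame = cvRetrieveFrame(capture, 0); /* never release or modify this */
        IplImage* copy  = cvCloneImage(frame);         /* private, writable copy */

        /* ... process 'copy' exactly as you would an image from cvLoadImage() ... */

        cvReleaseImage(&copy);                         /* release the copy, not 'frame' */
    }

    cvReleaseCapture(&capture);
    return 0;
}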
