Change ID3D11Texture2D pixel format - directx

I have a texture loaded from a .DDS file using the D3DX11CreateTextureFromFile() method. The DDS was created with Block Compression 1 (BC1), so when I query an IDXGISurface1 from the ID3D11Texture2D, the pixel format of the surface is DXGI_FORMAT_BC1_UNORM.
So my question is: can I change (convert) the format of the surface to DXGI_FORMAT_B8G8R8A8_UNORM? I tried the ID3D11DeviceContext::CopyResource method, but it seems it is unable to convert from BC1 to 32bpp BGRA.
Any suggestions are appreciated.

If this is a one-time or build-time process, use the texconv tool included with DirectXTex. If you need to do this at run time, you can either render the image to a B8G8R8A8 render target or use the CPU conversion code included in the DirectXTex library.
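For the CPU path, here is a minimal sketch using DirectXTex's LoadFromDDSFile and Decompress; the file name is a placeholder and error handling is omitted:

#include <DirectXTex.h>
using namespace DirectX;

// Sketch only: decompress a BC1 .dds to 32bpp BGRA on the CPU with DirectXTex.
ScratchImage bc1;
HRESULT hr = LoadFromDDSFile(L"texture.dds", DDS_FLAGS_NONE, nullptr, bc1);

ScratchImage bgra;
hr = Decompress(bc1.GetImages(), bc1.GetImageCount(), bc1.GetMetadata(),
                DXGI_FORMAT_B8G8R8A8_UNORM, bgra);

// bgra now holds the uncompressed pixels; DirectXTex's CreateTexture() can turn
// it back into an ID3D11Texture2D if you need a GPU resource again.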

Related

Brachiograph and Turtle

I am currently in the process of building a Brachiograph plotter. I am 75 years old and have a minor disability with my hands. I would like to find out if anybody can tell me how I can output Turtle Graphics to a file that can be read by the Brachiograph plotter. I believe that the linedraw.py converts a .svg to a .json file that is read by the Brachiograph. I would like to create some fractal image files and print them with the Brachiograph.
Thank you for any help that you can offer with this project.
Dick Burkhartzmeyer
Welcome to Stack Overflow. In your OP you state "I believe that the linedraw.py converts a .svg to a .json file[...]". Looking at the documentation, it looks like linedraw.py will convert a bitmap to an SVG file and a JSON file.
From the documentation:
The main use you will have for linedraw is to take a bitmap image file
as input, and save the vectorised version as:
an SVG file, to check it
a JSON file, to draw with the BrachioGraph.
The way I would approach this is to create an SVG with Turtle and then, in your Python script, convert that to a PNG. Then linedraw.py can be used to convert that to your JSON BrachioGraph file. I found a solution for the SVG-to-PNG step in another SO thread; the answer there uses CairoSVG to convert the SVG to PNG format.

Edit Photos via Photoshop on a server

I want to create a web app where a user enters certain data via a form and then receives a custom rendered image. The image is from a smart object in a PSD. It's kind of like a mock-up, which definitely needs some Photoshop filters to be rendered properly.
This should all happen in real time, and from my understanding it should be doable, since rendering a single image doesn't need much computing power.
I've done some research and haven't really found a solution that matches my problem. Is it necessary to run Photoshop on a server, remotely run a Photoshop script, and then upload the generated image somewhere else?
I've used The After Effects Plugin Template by DataClay in the past which offers similar functionality but for video.
Looking forward to hearing your ideas.
Thanks
You can use the Dataclay plugin to handle still image exports out of After Effects. Make a single-frame duration composition in After Effects and rig the layers with the Templater plugin. Then use the PNG Sequence output module to render out a single frame.
From Dataclay's forums:
Exporting
A few extra steps are required to correctly render a project file as a PNG sequence using Templater. By default, a file rendered as a PNG sequence will have the frame number appended to the end of the file name, i.e.:
filename.png00000, filename.png00001, filename.png00002, etc.
In order to designate where in the filename the frame number should be added, we’ll need to use the output column. First, add a column named output to your data source. Next, add a filename with a set of brackets with five # signs to designate where the frame numbering should be added. For example:
filename[#####] would result in filename00001.png
or
[#####]filename would result in 00001filename.png

Store bitmap in text file

I want to store small bitmaps in a text file, similar to the way Delphi does it with its dfm files.
Is there a function in the RTL or VCL that I could use to do this?
I suggest that you do the following:
Save to an in-memory stream: use TMemoryStream and call SaveToStream on the bitmap.
Compress the stream, perhaps using the zlib unit. This step is optional.
Encode the stream using base64. For example you can use the functionality provided by Soap.EncdDecd.
And in the opposite direction, well you just reverse the steps.
Textual DFMs use the BinToHex() function to format binary data.
You can simply use the Win32 WriteFile function to write your bitmap buffer into a file.

How to inject exif metadata into an image, without copying the image?

I have previously asked this question: How to write exif metadata to an image.
I now have found a way to inject metadata. However, it results in a second copy of the image in memory. With large images, and the need to already have a copy in memory, this is going to hurt performance and possibly cause a memory crash.
Is there a correct way to inject metadata without having to make a copy of the image? Perhaps it could be tacked on to a file, after it is written to disk?
I would prefer native implementations, without having to resort to a third party library just for this, if at all possible.
This could require a small or a large amount of code depending on what you need. EXIF data is stored in a JPEG APP1 marker (FFE1). It looks very much like a TIFF file, with a TIFF header, an IFD, and individual tags holding the data. If you can build your own APP1 marker segment, then inserting it into or replacing it in a JPEG file is trivial. If you are looking to read the metadata from an existing file, add some new tags and then write it back, that is more involved.

The tricky part of EXIF data is the tags which require more than 4 bytes. Each TIFF tag is 12 bytes: 2-byte tag, 2-byte data type, 4-byte count, 4-byte data (a sketch of this layout appears after the spec links below). If the data doesn't fit completely in the 4 bytes of the tag, then the tag instead stores an absolute offset into the file where the data can be found. If the existing metadata has any tags like this (e.g. make, model, capture date, capture time, etc.), you will need to repack that data by fixing up the offsets before adding your own. In a nutshell:
1) If you are adding a pre-made APP1 marker to a JPEG file, this is simple and requires little code.
2) If you need to read the existing meta-data from a JPEG file, add your own and write it back, the code is a bit more involved. It's not "difficult", but it involves a lot more than just reading and writing blocks of data.
Start by reading the TIFF 6.0 spec to understand the tag and directory structure:
TIFF 6.0 spec
Next, take a look at the JPEG EXIF spec:
EXIF 2.2 Spec
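To make that 12-byte tag layout concrete, here is an illustrative C-style struct (the field names are mine, not from either spec):

#include <stdint.h>

// Illustrative only: the 12-byte IFD entry layout described above.
#pragma pack(push, 1)
struct IfdEntry {
    uint16_t tag;             // e.g. 0x010F = Make, 0x0110 = Model
    uint16_t type;            // e.g. 2 = ASCII, 3 = SHORT, 4 = LONG, 5 = RATIONAL
    uint32_t count;           // number of values of that type
    uint32_t value_or_offset; // the data itself if it fits in 4 bytes,
                              // otherwise an absolute offset to the data
};
#pragma pack(pop)

// An IFD is a 2-byte entry count, that many entries, then a 4-byte offset to the
// next IFD (0 if none). Byte order follows the TIFF header ("II" or "MM").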
I would expect that existing EXIF manipulation software can do it, but I haven't tested it.
Links:
http://www.exiv2.org/
http://libexif.sourceforge.net/
http://www.kraxel.org/blog/linux/fbida/
CGImageSourceRef can be used to get image properties, including the thumbnail, without loading all of the image data into memory. This way memory is not wasted on a UIImage or NSData copy.
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path], NULL);
Then create a CGImageDestinationRef and add the source image together with the EXIF properties:
// destPath: where the copy with the injected metadata should be written
CGImageDestinationRef destRef = CGImageDestinationCreateWithURL((CFURLRef)[NSURL fileURLWithPath:destPath],
                                                                CGImageSourceGetType(imageSource), 1, NULL);
CGImageDestinationAddImageFromSource(destRef,
                                     imageSource,
                                     0,                            // index of the image in the source
                                     (CFDictionaryRef)properties); // metadata dictionary containing the EXIF to inject
BOOL success = CGImageDestinationFinalize(destRef);
CFRelease(destRef);

How to make a change in Qualcomm's Vuforia Sample App

I have been looking through the threads on the Qualcomm Forums, but had no luck since I don't know exactly what to search for.
I'm working with the ImageTargets sample for iOS and I want to change the teapot to another image (a text, rather) that I have.
I already have the render and I got the .h using an OpenGL library, but I can't figure out what I need to change to make this work. Since this is the very basics and I haven't been able to make it work, I haven't ventured to try anything else.
Could anyone please help me out?
I would paste code here, but it's a whole project, so I don't know exactly what to include. If anything is needed, please let me know.
If the case is still valid, here's what you have to do:
get header file for 3D object
get texture image for this object
in EAGLView.mm make these changes:
#import "yourobject3d.h"
add your texture to the textureFilenames array (this should be near the beginning of EAGLView)
if necessary, adjust kObjectScale (by default it was about 3.0f; for one object I had to change it to as much as 120.0f)
in the setup3dObjects method, assign the proper arrays of vertices/normals/texture coords (check the "yourobject3d.h" file for the proper arrays and naming) to the Object3D *object
make this change in renderFrameQCAR:
// the new model has no index buffer, so replace the indexed draw call:
//glDrawElements(GL_TRIANGLES, obj3D.numIndices, GL_UNSIGNED_SHORT, (const GLvoid*)obj3D.indices);
glDrawArrays(GL_TRIANGLES, 0, obj3D.numVertices);
I believe that is all... if something doesn't work, take a look at Vuforia's forum, e.g. here: https://developer.vuforia.com/node/2047669
NOTE: the default teapot.h does (!) have indices, which are not present in banana.h (from the comment below), so take care of that too.
Have a look at the EAGLView.mm file. There you'll have to load the textures (images) and 3d objects (you'll need to import your .h instead of teapot.h and modify setup3dObjects accordingly).
They are finally rendered by calling the renderFrameQCAR function.
Actually, the teapot is not an image. It's a 3D model stored in .h format, which includes vertices, normals, and texture coordinates. You need a good knowledge of OpenGL ES to understand that code in the sample app.
An easier way to change the 3D model to whatever you want is to use a rendering engine, which facilitates the drawing and rendering so you don't need to bother with the OpenGL APIs directly. I've done it with jPCT-AE for the Android platform; for iOS there is a counterpart called the OpenFrameworks engine. It has some plugins to load 3DS or MD2 files, and since it's written in C++ you can easily integrate it with QCAR.
This is a short video of my result with jPCT and QCAR:
Qualcomm Vuforia + jPCT-AE test video
