My BlackBerry application should fetch an image from a web service and display the image as a thumbnail. Can anyone give me an idea on how to achieve this?
petteri is right about using EncodedImage and scaleImage32(). Specifically, you'll want to use createEncodedImage(byte[] data, int offset, int length) with the bytes returned by the webservice.
Be aware that scaleImage32() takes int arguments, but they are interpreted as fixed-point numbers rather than the more familiar floating-point values. To get the fixed-point values you want, use the utility methods in the Fixed32 class.
Finally, if you don't need the original image in the BlackBerry application, you will have a better overall experience if the webservice does the scaling. This will reduce the number of bytes transferred to the device, and it will reduce the computation done on device to scale the image. Scaling on the server will likely result in a higher quality scaled image as well, as scaleImage32() uses a fairly basic algorithm.
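To make that concrete, here is a minimal sketch (imageBytes, thumbWidth and thumbHeight are placeholder names for the downloaded bytes and the thumbnail size you want; adapt them to your own code):

import net.rim.device.api.math.Fixed32;
import net.rim.device.api.system.EncodedImage;
import net.rim.device.api.ui.component.BitmapField;

// imageBytes is assumed to be the byte[] returned by the web service;
// thumbWidth/thumbHeight are the thumbnail dimensions you want.
EncodedImage full = EncodedImage.createEncodedImage(imageBytes, 0, imageBytes.length);

// scaleImage32() expects Fixed32 scale factors of the form original size / desired size.
int scaleX = Fixed32.div(Fixed32.toFP(full.getWidth()), Fixed32.toFP(thumbWidth));
int scaleY = Fixed32.div(Fixed32.toFP(full.getHeight()), Fixed32.toFP(thumbHeight));
EncodedImage thumbnail = full.scaleImage32(scaleX, scaleY);

// Display the result, e.g. in a BitmapField added to your screen.
BitmapField thumbField = new BitmapField(thumbnail.getBitmap());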
I'm not totally familiar with BlackBerry either, but since nobody else is answering your question: check out the EncodedImage class; its scaleImage32() method should return the scaled version.
This code can help you:

HttpConnection connection = (HttpConnection) Connector.open(fullUrl.toString(),
        Connector.READ_WRITE, true);
InputStream is = connection.openInputStream();
DataInputStream dis = new DataInputStream(is);
ByteArrayOutputStream bStrm = new ByteArrayOutputStream();
int ch;
// read the response one byte at a time into the buffer
while ((ch = dis.read()) != -1) {
    bStrm.write(ch);
}
byte[] bb = bStrm.toByteArray();
This will read the response from your web service URL into a byte array; here bb is that byte array.
There are two classes that handle images in BlackBerry: EncodedImage and Bitmap. Both can create an image from a byte array. I recommend Bitmap; it has easy image-resize capability.
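For example, a rough sketch of the Bitmap route (assuming bb is the byte array from above, a device OS of 5.0 or later for scaleInto(), a 90x90 thumbnail, and that this runs inside a Screen, hence the add() call):

import net.rim.device.api.system.Bitmap;
import net.rim.device.api.ui.component.BitmapField;

// Decode the downloaded bytes into a full-size Bitmap (scale factor 1 = no downscaling).
Bitmap full = Bitmap.createBitmapFromBytes(bb, 0, bb.length, 1);

// Scale it into a thumbnail-sized Bitmap (scaleInto is available from OS 5.0).
Bitmap thumb = new Bitmap(90, 90);
full.scaleInto(thumb, Bitmap.FILTER_LANCZOS);

// Show it on the screen.
add(new BitmapField(thumb));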
So I'm getting frames in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange pixel format. That is a bi-planar 4:2:0 thingy.
I need to convert it to one of the formats consumed by a third party library, for example this one.
At first sight it seems that it's the same format, but unfortunately the picture is mangled. Any ideas where the problem is? And what are some libraries that allow for conversion between the formats?
video_frame.p_data = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
doesn't work. Neither does getting the base address of the structure.
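For what it's worth, a "mangled" picture with these formats is very often a row-stride problem: each plane has its own bytes-per-row padding, and both planes (Y and the interleaved CbCr) have to be copied, not just plane 1. Below is a hypothetical sketch that repacks the bi-planar buffer into tightly packed I420 planes, assuming that is what the library expects and that dstY/dstU/dstV are buffers you allocated (libyuv is one library that does these conversions for you):

#include <string.h>
#include <CoreVideo/CoreVideo.h>

// Repack a 420YpCbCr8BiPlanar pixel buffer (plane 0 = Y, plane 1 = interleaved CbCr)
// into tightly packed I420 planes, honoring each plane's bytes-per-row.
static void CopyBiPlanarToI420(CVPixelBufferRef pixelBuffer,
                               uint8_t *dstY, uint8_t *dstU, uint8_t *dstV)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width  = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);

    const uint8_t *srcY = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t strideY = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    for (size_t row = 0; row < height; row++)
        memcpy(dstY + row * width, srcY + row * strideY, width);

    const uint8_t *srcCbCr = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t strideCbCr = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    for (size_t row = 0; row < height / 2; row++) {
        const uint8_t *src = srcCbCr + row * strideCbCr;
        for (size_t col = 0; col < width / 2; col++) {
            dstU[row * (width / 2) + col] = src[2 * col];     // Cb
            dstV[row * (width / 2) + col] = src[2 * col + 1]; // Cr
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}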
I am not familiar with DirectX, but I ran into a problem in a small project, part of which involves capturing DirectX data. I hope I make some sense below.
General question:
I would like to know what factors determine the DXGI_FORMAT of a texture in the backbuffer (hardware? OS? application? DirectX version?). More importantly, when capturing a texture from the backbuffer, is it possible to receive a texture in a desired format by supplying that format as a parameter, with the data converted automatically if necessary?
Specifics about my problem:
I capture screens from games using Open Broadcaster Software (OBS) and process them using a specific library (OpenCV) prior to streaming. I noticed that, following updates to both Windows and OBS, I get DXGI_FORMAT_R10G10B10A2_UNORM as the DXGI_FORMAT. This is a problem for me because, as far as I know, OpenCV does not provide a convenient way to build an OpenCV object when colors are 10 bits. Below are a few relevant lines from the modified OBS source file.
d3d11_copy_texture(data.texture, backbuffer);
...
hlog(toStr(data.format)); // prints 24 = DXGI_FORMAT_R10G10B10A2_UNORM
...
ID3D11Texture2D* tex;
bool success = create_d3d11_stage_surface(&tex);
if (success) {
...
HRESULT hr = data.context->Map(tex, subresource, D3D11_MAP_READ, 0, &mappedTex);
...
Mat frame(data.cy, data.cx, CV_8UC4, mappedTex.pData, (int)mappedTex.RowPitch); // This creates an OpenCV Mat object.
// No support for 10-bit colors. Expects 8-bit colors (CV_8UC4 argument).
// When the resulting Mat is viewed, colours are jumbled (probably because 10 bits did not fit into 8 bits).
Before the updates (when I was working on this a year ago), I was probably receiving DXGI_FORMAT = DXGI_FORMAT_B8G8R8A8_UNORM, because the code above used to work.
Now I wonder what changed, and whether I can modify the source code of OBS to receive data with the desired DXGI_FORMAT.
The 'create_d3d11_stage_surface' method called above sets the DXGI_FORMAT, but I am not sure whether it means "give me data with this DXGI_FORMAT" or "I know you work with this format, give me what you have".
static bool create_d3d11_stage_surface(ID3D11Texture2D **tex)
{
HRESULT hr;
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = data.cx;
desc.Height = data.cy;
desc.Format = data.format;
...
I hoped that overriding desc.Format with DXGI_FORMAT_B8G8R8A8_UNORM would result in that format being used for the staging texture passed to the ID3D11DeviceContext::Map call above, so I would get data in the specified format. But that did not work.
The choice of render target format is up to the application, but it needs to pick one based on the Direct3D hardware feature level. Formats for render targets in swapchains are usually display scanout formats:
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM (rare)
See the DXGI documentation for the full list of supported formats and usages by feature level.
Direct3D 11 does not do format conversions when you copy resources, such as when copying the render target to a staging texture, so if you want a format conversion you'll need to handle it yourself. Note that CPU-side conversion code for all the DXGI formats can be found in DirectXTex.
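If pulling DirectXTex into OBS is overkill, a minimal CPU-side sketch of the specific conversion needed above, DXGI_FORMAT_R10G10B10A2_UNORM to 8-bit BGRA (the layout the CV_8UC4 Mat in the question expects), could look like this; the src pointer and row pitch would come from the mapped staging texture (mappedTex.pData, mappedTex.RowPitch):

#include <cstddef>
#include <cstdint>
#include <vector>

// Unpack DXGI_FORMAT_R10G10B10A2_UNORM pixels (R in bits 0-9, G in 10-19, B in 20-29,
// A in 30-31) into 8-bit BGRA by dropping the low 2 bits of each color channel.
std::vector<uint8_t> ConvertR10G10B10A2ToBgra8(const uint8_t *src, size_t rowPitch,
                                               int width, int height)
{
    std::vector<uint8_t> dst(static_cast<size_t>(width) * height * 4);
    for (int y = 0; y < height; ++y) {
        const uint32_t *row = reinterpret_cast<const uint32_t *>(src + y * rowPitch);
        uint8_t *out = dst.data() + static_cast<size_t>(y) * width * 4;
        for (int x = 0; x < width; ++x) {
            uint32_t p = row[x];
            uint32_t r = p & 0x3FF;
            uint32_t g = (p >> 10) & 0x3FF;
            uint32_t b = (p >> 20) & 0x3FF;
            out[x * 4 + 0] = static_cast<uint8_t>(b >> 2); // B
            out[x * 4 + 1] = static_cast<uint8_t>(g >> 2); // G
            out[x * 4 + 2] = static_cast<uint8_t>(r >> 2); // R
            out[x * 4 + 3] = 255;                          // ignore the 2-bit alpha
        }
    }
    return dst;
}

The returned buffer can then be wrapped in a Mat(data.cy, data.cx, CV_8UC4, buffer.data(), data.cx * 4); for anything beyond this one format, DirectXTex's conversion routines are the safer bet.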
It is the application that decides the format. The simplest one would be R8G8B8A8, which simply stores RGB and alpha values. But if the developer decides to use HDR, the backbuffer would probably be R11G11B10_FLOAT, because it can store a much wider range of values there, without alpha channel information. If the game is, for example, black and white, there's no need to keep all RGB channels in the back buffer; a simpler format could be used. I hope this helps.
When I try to read an image from a file, the Mat.Data array is always null after loading. But when I look at the Mat object during debugging, there is a byte array in which all the data from the image are present.
Mat image1 = CvInvoke.Imread("minion.bmp", Emgu.CV.CvEnum.LoadImageType.AnyDepth);
Do you have any idea why?
I recognize this question is super old, but I hit the same issue and I suspect the answer lies in the Emgu wiki. Specifically:
Accessing the pixels from Mat
Unlike the Image<,> class, where memory is pre-allocated and fixed, the memory of Mat can be automatically re-allocated by OpenCV function calls. We cannot pre-allocate managed memory and assume the same memory is used through the lifetime of the Mat object. As a result, the Mat class does not contain a Data property like the Image<,> class, where the pixels can be accessed through a managed array. To access the data of the Mat, there are a few possible choices.
The easy and safe way, which costs an additional memory copy
The first option is to copy the Mat to an Image<,> object using the Mat.ToImage function. e.g.
Image<Bgr, Byte> img = mat.ToImage<Bgr, Byte>();
The pixel data can then be accessed using the Image<,>.Data property.
You can also convert the Mat to a Matrix<> object. Assuming the Mat contains 8-bit data:
Matrix<Byte> matrix = new Matrix<Byte>(mat.Rows, mat.Cols, mat.NumberOfChannels);
mat.CopyTo(matrix);
Note that you should create the Matrix<> with a type matching the Mat object. If the Mat contains 32-bit floating point values, you should replace Matrix<Byte> in the above code with Matrix<float>. The pixel data can then be accessed using the Matrix<>.Data property.
The fastest way, with no memory copy required. Be cautious!!!
The second option is a little bit tricky, but will provide the best performance. It usually requires you to know the size of the Mat object before it is created, so that you can allocate a managed data array and create the Mat object by forcing it to use the pinned managed memory. e.g.
//load your 3 channel bgr image here
Mat m1 = ...;
//3 channel bgr image data, if it is single channel, the size should be m1.Width * m1.Height
byte[] data = new byte[m1.Width * m1.Height * 3];
GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
using (Mat m2 = new Mat(m1.Size, DepthType.Cv8U, 3, handle.AddrOfPinnedObject(), m1.Width * 3))
    CvInvoke.BitwiseNot(m1, m2);
handle.Free();
At this point the data array contains the pixel data of the inverted image. Note that if the Mat m2 was allocated with the wrong size, the data[] array will contain all 0s, and no exception will be thrown. So be really careful when performing the above operations.
TL;DR: You can't use Mat.Data the way you're hoping to (as of version 3.2 at least). You must copy the Mat to another object that exposes a Data property.
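Applied to the code in the question, a small sketch of the first option (ToImage and Image<,>.Data are the standard Emgu accessors; the [row, column, channel] indexing below assumes a regular 3-channel BGR image):

using Emgu.CV;
using Emgu.CV.Structure;

// Load the file into a Mat, then copy it to an Image<,> to get a managed Data array.
Mat image1 = CvInvoke.Imread("minion.bmp", Emgu.CV.CvEnum.LoadImageType.AnyDepth);
Image<Bgr, byte> img = image1.ToImage<Bgr, byte>();

// Data is indexed as [row, column, channel], e.g. the blue value of the top-left pixel:
byte blue = img.Data[0, 0, 0];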
I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
colorData[i] = (i%3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from the MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and have good performance on older hardware. UNorm means, as already answered in the comments, "in the range of 0.0 .. 1.0" and is, again, a common way to access color data in textures.
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware. You will not run into compatibility problems with either of them.
Performance depends on how your device is initialized (RGBA or BGRA?) and on what hardware (i.e. supported DX feature level) and OS you are running your software. You will have to run your own tests to find out (though in the case of these common and similar formats the difference should be a single-digit percentage at most).
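If you would rather verify this at runtime than rely on the tables, ID3D11Device::CheckFormatSupport can be queried through SharpDX. A hedged sketch (using a standalone SharpDX.Direct3D11.Device here, since how you reach the underlying device from the toolkit's GraphicsDevice can vary between toolkit versions):

using SharpDX.Direct3D;
using SharpDX.Direct3D11;
using SharpDX.DXGI;

// Ask the device which usages each candidate format supports on this hardware.
using (var device = new SharpDX.Direct3D11.Device(DriverType.Hardware))
{
    FormatSupport rgba = device.CheckFormatSupport(Format.R8G8B8A8_UNorm);
    FormatSupport bgra = device.CheckFormatSupport(Format.B8G8R8A8_UNorm);

    bool rgbaUsable = rgba.HasFlag(FormatSupport.Texture2D | FormatSupport.RenderTarget);
    bool bgraUsable = bgra.HasFlag(FormatSupport.Texture2D | FormatSupport.RenderTarget);
}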
I am writing an application in XNA that relies on render targets that are rendered to just once and then used indefinitely afterwards. The problem I've encountered is that there are certain situations where the render targets' contents are lost or disposed, such as when the computer goes to sleep or the application enters full-screen mode.
Re-rendering to each target when content is lost is an option, but likely not the best option, as it could be fairly costly when there are many targets.
I could probably save each result as a PNG image and then load that PNG back up as a texture, but that adds a lot of I/O overhead.
Suggestions?
The most likely option I've found so far is to use GetData() and SetData() to copy from the RenderTarget2D to a new Texture2D.
Since I want a mip-mapped texture, I found that I had to copy each mip level individually to the new texture, like so. If you don't do this, the object will turn black as you move away from it, since there's no data in the mip maps. Note that the render target must also be mip-mapped.
Texture2D copy = new Texture2D(graphicsDevice,
    renderTarget.Width, renderTarget.Height, true,
    renderTarget.Format);

// Set data for each mip map level
for (int i = 0; i < renderTarget.LevelCount; i++)
{
    // calculate the dimensions of the mip level
    // (Math.Max because dimensions are always non-zero)
    int width = (int)Math.Max((renderTarget.Width / Math.Pow(2, i)), 1);
    int height = (int)Math.Max((renderTarget.Height / Math.Pow(2, i)), 1);
    Color[] data = new Color[width * height];
    renderTarget.GetData<Color>(i, null, data, 0, data.Length);
    copy.SetData<Color>(i, null, data, 0, data.Length);
}
I believe
PresentationParameters pp = graphics.PresentationParameters;
pp.RenderTargetUsage = RenderTargetUsage.PreserveContents;
Should do the trick. It has to do with how shader models work on PC and Xbox, and how shader model 2+ made them equal. (Something about the Xbox overwriting its output buffer by default, hence old render targets being cleared, whilst the PC just uses some other memory.)
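For completeness, and only as a sketch with example argument values: in XNA 4.0 the usage can also be set per render target when you create it, which keeps its contents when it is unset from the device (though not across a genuine device loss):

// Create the render target itself with PreserveContents so its contents survive
// being replaced as the current target (but not a full device loss).
RenderTarget2D target = new RenderTarget2D(
    GraphicsDevice,
    512, 512,            // width, height (example values)
    true,                // mipMap
    SurfaceFormat.Color,
    DepthFormat.None,
    0,                   // preferredMultiSampleCount
    RenderTargetUsage.PreserveContents);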