Initialising a texture of MTLPixelFormatR32Float in Metal (iOS)

I have a buffer initialised with a single-channel floating point image, which I need to get into a floating point format texture (MTLPixelFormatR32Float). I've tried creating the texture with that format and doing the following:
float *rawData = (float *)malloc(sizeof(float) * img.cols * img.rows);
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        rawData[i * img.cols + j] = img.at<float>(i, j);
    }
}

MTLTextureDescriptor *textureDescriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                                       width:img.cols
                                                      height:img.rows
                                                   mipmapped:NO];

[texture replaceRegion:region mipmapLevel:0 withBytes:&rawData bytesPerRow:bytesPerRow];
where rawData is my buffer with the necessary floating point data. This doesn't work: I get an EXC_BAD_ACCESS error on the [texture replaceRegion...] line. I've also tried MTKTextureLoader, which also returns nil instead of a texture.
Help would be appreciated. I would be most grateful if anyone has a working method of initialising an MTLPixelFormatR32Float texture with custom floating point data for data-parallel computation purposes.

The bytes that you pass to replaceRegion should point to your data. You are incorrectly passing a pointer to a pointer.
To fix it, replace withBytes:&rawData with withBytes:rawData
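Putting it together, a minimal sketch (assuming device is a valid id<MTLDevice> and rawData was filled as in the question; the texture, region and bytesPerRow shown explicitly):

MTLTextureDescriptor *textureDescriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                                       width:img.cols
                                                      height:img.rows
                                                   mipmapped:NO];
id<MTLTexture> texture = [device newTextureWithDescriptor:textureDescriptor];

MTLRegion region = MTLRegionMake2D(0, 0, img.cols, img.rows);
NSUInteger bytesPerRow = img.cols * sizeof(float);

// Pass the buffer itself (rawData), not its address (&rawData).
[texture replaceRegion:region
           mipmapLevel:0
             withBytes:rawData
           bytesPerRow:bytesPerRow];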

Related

opencv cv::mat not returning the same result

int sizeOfChannel = (_width / 2) * (_height / 2);
double* channel_gr = new double[sizeOfChannel];
// filling the data into channel_gr....
cv::Mat my(_width/2, _height/2, CV_32F, channel_gr);
cv::Mat src(_width/2, _height/2, CV_32F);
for (int i = 0; i < (_width/2) * (_height/2); ++i)
{
    src.at<float>(i) = channel_gr[i];
}
cv::imshow("src", src);
cv::imshow("my", my);
cv::waitKey(0);
I'm wondering why I'm not getting the same image in the my and src imshow windows.
Update:
I have changed my array to double*; still the same result.
I think it has something to do with steps?
(screenshots omitted: "my" image output and "src" image output)
this one works for me:

int halfWidth = _width/2;
int halfHeight = _height/2;
int sizeOfChannel = halfHeight * halfWidth;

// ******************************* //
// you use CV_32FC1 later, so it must be single-precision float
float* channel_gr = new float[sizeOfChannel];
// filling the data into channel_gr....
for (int i = 0; i < sizeOfChannel; ++i) channel_gr[i] = i / (float)sizeOfChannel;

// ******************************* //
// changed row/col ordering, but this shouldn't be important
cv::Mat my(halfHeight, halfWidth, CV_32FC1, channel_gr);
cv::Mat src(halfHeight, halfWidth, CV_32FC1);

// ******************************* //
// changed from 1D indexing to 2D indexing
for (int y = 0; y < src.rows; ++y)
    for (int x = 0; x < src.cols; ++x)
    {
        int arrayPos = y*halfWidth + x;
        // you have a 2D mat, so access it in 2D
        src.at<float>(y, x) = channel_gr[arrayPos];
    }

cv::imshow("src", src);
cv::imshow("my", my);

// check for differences
cv::imshow("diff1 > 0", src - my > 0);
cv::imshow("diff2 > 0", my - src > 0);

cv::waitKey(0);
'my' is an array of floats, but you give it a pointer to an array of doubles. There is no way it can read the data from that array properly.
It seems that the constructor version that you are using is
Mat::Mat(int rows, int cols, int type, void* data, size_t step = AUTO_STEP)
This is from the OpenCV docs. It wraps the external buffer without copying or converting it, so the element type must match the data: you pass channel_gr (declared as double) but tell the Mat its elements are float. Isn't that the mismatch, and some form of precision loss?
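If the data really does arrive as doubles, a minimal sketch of one safe route (assuming channel_gr points at (_height/2)*(_width/2) doubles, as in the question) is to wrap the buffer with the matching CV_64F type and convert explicitly:

#include <opencv2/opencv.hpp>

// Wrap the buffer with the type it actually contains (no copy is made),
// then convert to single precision explicitly.
cv::Mat wrapped(_height/2, _width/2, CV_64FC1, channel_gr);
cv::Mat my32;
wrapped.convertTo(my32, CV_32FC1); // element-wise double -> float
cv::imshow("my", my32);
cv::waitKey(0);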

Xcode CVPixelBuffer shows negative values

I am using Xcode and am currently trying to extract pixel values from the pixel buffer using the following code. However, when I print out the pixel values, they include negative values. Has anyone encountered such a problem before?
Part of the code is below:
- (void)captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:
    (CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    CVImageBufferRef Buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(Buffer, 0);
    uint8_t* BaseAddress = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(Buffer, 0);
    size_t Width = CVPixelBufferGetWidth(Buffer);
    size_t Height = CVPixelBufferGetHeight(Buffer);
    if (BaseAddress)
    {
        IplImage* Temporary = cvCreateImage(cvSize(Width, Height), IPL_DEPTH_8U, 4);
        Temporary->imageData = (char*)BaseAddress;
        for (int i = 0; i < Temporary->width * Temporary->height; ++i) {
            NSLog(@"Pixel value: %d", Temporary->imageData[i]);
            // where I try to print the pixels
        }
    }
The issue is that imageData of IplImage is a signed char. Thus, anything greater than 127 will appear as a negative number.
You can simply assign it to an unsigned char and then print that, and you'll see values in the range 0 to 255, as you probably anticipated:
for (int i = 0; i < Temporary->width * Temporary->height; ++i) {
    unsigned char c = Temporary->imageData[i];
    NSLog(@"Pixel value: %u", c);
}
Or you can print it in hex:
NSLog(@"Pixel value: %02x", c);

Using loaded .raw image data as an IDirect3DTexture9 texture in DirectX9?

I'm trying to make use of a simple .raw loader as an easy way to load images into a program, to be used as textures by DirectX9.
I have a problem in that the D3DX functions are not available to me at all, nor can I find them anywhere. I have constructed my own matrix routines fine, but can't use the D3DX texture file functions without some pointers.
I've done my homework, so I'm thinking what I need is the CreateTexture function and some code to marry my unsigned char image with IDirect3DTexture9 *DXTexture.
IDirect3DTexture9 *DXTexture;
unsigned char texture;
loadRawImage(&texture, "tex", 128, 128);
g_pD3DDevice->CreateTexture(128, 128, 0, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8,
                            D3DPOOL_DEFAULT, &DXTexture, NULL);
// code required here to marry my unsigned char image with DXTexture
g_pD3DDevice->SetTexture(0, texture);
I've seen this page, which looks sort of like what I need:
http://www.gamedev.net/topic/567044-problem-loading-image-data-into-idirect3dtexture9/
IDirect3DTexture9* tempTexture = 0;
HRESULT hr = device->CreateTexture(this->width, this->height, 0, D3DUSAGE_DYNAMIC,
                                   D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tempTexture, 0);

// assignment pointer
D3DCOLOR *Ptr;
unsigned char *tempPtr = 0; // increment pointer
int count = 0;              // index into color data

// lock texture and get ptr
D3DLOCKED_RECT rect;
hr = tempTexture->LockRect(0, &rect, 0, D3DLOCK_DISCARD);
tempPtr = (unsigned char*)rect.pBits; // assign to unsigned char pointer
                                      // to make pointer arithmetic smooth
for (unsigned int i = 0; i < this->height; i++)
{
    Ptr = (D3DCOLOR*)tempPtr;
    for (unsigned int j = 0; j < this->width; j++)
    {
        // read the components in a defined order: several count++ in one
        // argument list is unsequenced in C++ and gives garbled channels
        unsigned char r = this->imageData[count++];
        unsigned char g = this->imageData[count++];
        unsigned char b = this->imageData[count++];
        Ptr[j] = D3DCOLOR_XRGB(r, g, b);
    }
    tempPtr += rect.Pitch; // advance to the next line after writing this one,
                           // not before, or the first row is skipped
}
tempTexture->UnlockRect(0);
Any pointers would be appreciated. This is for a small demo, so code is being kept to a minimum.
EDIT to respond to drop:
Basically my question is: how can I use the loaded .raw image data as a DirectX9 texture? I know there must be some internal byte format in which IDirect3DTexture9 textures are arranged; I just need some pointers on how to convert my data to this format. This is without using D3DX functions.
Try the approach below:
D3DLOCKED_RECT rect;
ppTexture->LockRect(0, &rect, 0, D3DLOCK_DISCARD);
unsigned char* dest = static_cast<unsigned char*>(rect.pBits);
memcpy(dest, &pBitmapData[0], sizeof(unsigned char) * biWidth * biHeight * 4);
ppTexture->UnlockRect(0);
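One caveat: that single memcpy assumes rect.Pitch == biWidth * 4, which the driver does not guarantee. A pitch-aware sketch, using the same hypothetical ppTexture, pBitmapData, biWidth and biHeight names:

D3DLOCKED_RECT rect;
ppTexture->LockRect(0, &rect, 0, D3DLOCK_DISCARD);
unsigned char* dest = static_cast<unsigned char*>(rect.pBits);
const unsigned char* src = &pBitmapData[0];
for (int y = 0; y < biHeight; ++y)
{
    // copy one row of 4-byte pixels, then step by the driver-reported pitch
    memcpy(dest + y * rect.Pitch, src + y * biWidth * 4, biWidth * 4);
}
ppTexture->UnlockRect(0);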

OpenCV C++: how to access CV_32F pixel values through the uchar data pointer

Briefly, I would like to know if it is possible to directly access the pixel values of a CV_32F Mat through the Mat member uchar* data.
I can do it with no problem if the Mat is CV_8U, for example:

// a matrix with 5 columns and 6 rows, values in [0,255], all elements initialised at 12
cv::Mat A;
A.create(5, 6, CV_8UC1);
A = cv::Scalar(12);

// here I successfully access pixel [4,5]
uchar *p = A.data;
int value = (uchar) p[4*A.step + 5];
The problem is when I try to do the same operation with the following matrix:

// a matrix 5 columns, 6 rows, values in [0.0, 1.0], all elements initialised at 1.2
cv::Mat B;
B.create(5, 6, CV_32FC1);
B = cv::Scalar(1.2);

// this clearly does not work: no syntax error, but an erroneous value is reported!
uchar *p = B.data;
float value = (float) p[4*B.step + 5];

// this works, but it is not what I want to do!
float value = B.at<float>(4,5);
Thanks a lot, Valerio
You can use the ptr method, which returns a pointer to a matrix row:

for (int y = 0; y < mat.rows; ++y)
{
    float* row_ptr = mat.ptr<float>(y);
    for (int x = 0; x < mat.cols; ++x)
    {
        float val = row_ptr[x];
    }
}
You can also cast the data pointer to float and use an element step instead of the byte step, provided the matrix is continuous:

float* ptr = (float*) mat.data;
size_t elem_step = mat.step / sizeof(float);
float val = ptr[i * elem_step + j];
Note that CV_32F means the elements are float instead of uchar. The "F" here means "float", and the "U" in CV_8U stands for unsigned integer. That is why your code doesn't give the right value: with p declared as uchar*, p[4*B.step + 5] moves to the fifth row but then advances only sizeof(uchar)*5 bytes, landing in the middle of a float, and the cast converts that single byte's value rather than reinterpreting four bytes as a float. The byte offset of element [4,5] is 4*B.step + 5*B.elemSize(), and the bytes at that address have to be reinterpreted, not cast:
float value = *(float*)(p + 4*B.step + 5*B.elemSize());
Here are some ways to read element [i, j] into value:

value = B.at<float>(i, j)
value = B.ptr<float>(i)[j]
value = ((float*)B.data)[i*(B.step/sizeof(float)) + j]

The 3rd way is not recommended, though: step is in bytes, so it has to be divided by sizeof(float), and it only works for continuous matrices. Besides, a 6x5 matrix should be created by B.create(6, 5, CV_32FC1), I think?
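To illustrate, a small self-contained sketch that reads the same element all three ways (they should agree for a continuous Mat):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat B(6, 5, CV_32FC1, cv::Scalar(1.2f));
    int i = 4, j = 3;
    float a = B.at<float>(i, j);
    float b = B.ptr<float>(i)[j];
    float c = ((float*)B.data)[i * (B.step / sizeof(float)) + j]; // continuous Mats only
    std::printf("%f %f %f\n", a, b, c); // all three print 1.200000
    return 0;
}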

Converting 16-bit short to 32-bit float

In the tone generator example for iOS: http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
I am trying to convert a short array to Float32 in iOS.
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
short* outputShortBuffer = static_cast<short*>(outputBuffer);
for (UInt32 frame = 0, j = 0; frame < inNumberFrames; frame++, j = j + 2)
{
    buffer[frame] = outputShortBuffer[frame];
}
For some reason, I am hearing added noise when it is played back from the speaker. I think there is a problem with the conversion from short to Float32?
Yes, there is.
Consider that the value range for floating point samples is -1.0 <= x <= 1.0, while a signed short spans -32768 <= x <= 32767. Merely casting will result in clipping on virtually all samples.
So taking this into account:
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
short* outputShortBuffer = static_cast<short*>(outputBuffer);
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    buffer[frame] = ((float) outputShortBuffer[frame]) / 32767.0f;
}
[Note: this is not the optimal way of doing this.]
However, are you sure your frames are mono? If not, this might also be a cause of audio corruption, as you'll only be copying one channel.
As an aside: if your output buffer is floats, why are you not using floats throughout?
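On the "not optimal" note: the Accelerate framework can vectorise the convert-and-scale in two calls. A sketch under the same assumptions as above (mono input, with buffer and outputShortBuffer as in the question):

#include <Accelerate/Accelerate.h>

// Widen the shorts to floats, then scale into [-1.0, 1.0] in place.
float scale = 1.0f / 32767.0f;
vDSP_vflt16(outputShortBuffer, 1, buffer, 1, inNumberFrames); // short -> float
vDSP_vsmul(buffer, 1, &scale, buffer, 1, inNumberFrames);     // multiply by 1/32767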
