I am trying to implement Variable Rate Shading in a DirectX 11-based app.
I am doing it this way:
UINT dwRtWidth = 2560;
UINT dwRtHeight = 1440;
D3D11_TEXTURE2D_DESC srcDesc;
ZeroMemory(&srcDesc, sizeof(srcDesc));
int sri_w = dwRtWidth / NV_VARIABLE_PIXEL_SHADING_TILE_WIDTH;
int sri_h = dwRtHeight / NV_VARIABLE_PIXEL_SHADING_TILE_HEIGHT;
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;
srcDesc.SampleDesc.Count = 1;
srcDesc.SampleDesc.Quality = 0;
srcDesc.Usage = D3D11_USAGE_DEFAULT; //Optional
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE; //Optional
srcDesc.CPUAccessFlags = 0;
srcDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initialData;
UINT* data = (UINT*)malloc(sri_w * sri_h * sizeof(UINT));
for (int i = 0; i < sri_w * sri_h; i++)
data[i] = (UINT)0;
initialData.pSysMem = data;
initialData.SysMemPitch = sri_w;
//initialData.SysMemSlicePitch = 0;
HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);
if (FAILED(hr))
{
LOG("Texture not created");
LOG(std::system_category().message(hr));
}
else
LOG("Texture created");
When I try to create the texture with initial data, creation fails and the HRESULT message is 'The parameter is incorrect'. It doesn't say which one.
When I create the texture without initial data, it is created successfully.
What's wrong with the initial data? I also tried unsigned char instead of UINT, since the format is 8 bits per texel, but the result was the same: the texture was not created.
Please help.
After some time I found a solution to the problem. I needed to add the line:
srcDesc.MipLevels = 1;
With this change the texture was finally created with initial data. (Leaving MipLevels at 0 asks D3D11 to create a full mip chain, and pInitialData must then supply one D3D11_SUBRESOURCE_DATA entry per subresource, so passing a single entry fails with E_INVALIDARG.)
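For reference, here is a minimal corrected sketch under the same assumptions (the NVAPI tile-size constants and the 2560x1440 target above). DXGI_FORMAT_R8_UINT is one byte per texel, so byte-sized initial data also makes SysMemPitch = sri_w correct as a byte pitch:
// Minimal corrected sketch, same names as above.
D3D11_TEXTURE2D_DESC srcDesc = {};
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.MipLevels = 1; // the missing field: 0 requests a full mip chain
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;
srcDesc.SampleDesc.Count = 1;
srcDesc.Usage = D3D11_USAGE_DEFAULT;
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
UINT8* rates = (UINT8*)calloc(sri_w * sri_h, 1); // one shading-rate byte per tile, zeroed
D3D11_SUBRESOURCE_DATA initialData = {};
initialData.pSysMem = rates;
initialData.SysMemPitch = sri_w; // row pitch in bytes
HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);
free(rates); // the texture owns its own copy after creation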
I'm trying to convert a Java class to a C# one using EmguCV. It's for a class in Unsupervised Learning. The teacher made a program using OpenCV and Java. I have to convert it to C#.
The goal is to implement a simple Face Recognition algorithm.
The method I'm stuck at:
Mat sample = train.get(0).getData();
mean = Mat.zeros(/*6400*/sample.rows(), /*1*/sample.cols(), /*CvType.CV_64FC1*/sample.type());
// Calculating it by hand
train.forEach(person -> {
    Mat data = person.getData();
    for (int i = 0; i < mean.rows(); i++) {
        double mv = mean.get(i, 0)[0]; // Gets the value of the cell in the first channel
        double pv = data.get(i, 0)[0]; // Gets the value of the cell in the first channel
        mv += pv;
        mean.put(i, 0, mv); // *********** I'm stuck here ***********
    }
});
So far, my C# equivalent is:
var sample = trainSet[0].Data;
mean = Mat.Zeros(sample.Rows, sample.Cols, sample.Depth, sample.NumberOfChannels);
foreach (var person in trainSet)
{
    var data = person.Data;
    for (int i = 0; i < mean.Rows; i++)
    {
        var meanValue = (double)mean.GetData().GetValue(i, 0);
        var personValue = (double)data.GetData().GetValue(i, 0);
        meanValue += personValue;
    }
}
I can't find the equivalent of put in C#. And, to be honest, I'm not even sure the two lines above it in my C# version are correct.
Can someone help me figure this one out?
You can convert it like this (note that Marshal.Copy needs using System.Runtime.InteropServices;):
Mat sample = trainSet[0].Data;
Mat mean = Mat.Zeros(sample.Rows, sample.Cols, sample.Depth, sample.NumberOfChannels);
foreach (var person in trainSet)
{
    Mat data = person.Data;
    for (int i = 0; i < mean.Rows; i++)
    {
        // Read the current values at row i, column 0 (first channel)
        double meanValue = (double)mean.GetData().GetValue(i, 0);
        double personValue = (double)data.GetData().GetValue(i, 0);
        meanValue += personValue;
        // Equivalent of Java's mean.put(i, 0, mv): write the sum back into
        // the Mat's native buffer at row i, column 0
        double[] mva = new double[] { meanValue };
        Marshal.Copy(mva, 0, mean.DataPointer + i * mean.Cols * mean.ElementSize, 1);
    }
}
I'm developing an application that decodes an H264 stream through DirectX 11's ID3D11VideoDecoder interface ( https://msdn.microsoft.com/en-us/library/windows/desktop/hh447766(v=vs.85).aspx ), and I'm stuck at the ID3D11VideoDevice::CreateVideoDecoderOutputView method: it just fails, returning E_INVALIDARG. Yes, I know there can be a million reasons,
but are some of them exceptionally common? And are there any samples available that illustrate decoding through ID3D11VideoDecoder? (I haven't found any.)
The part of my code that I think is most likely to fail looks as follows:
// texture
D3D11_TEXTURE2D_DESC descT = { 0 };
descT.Width = 1024;
descT.Height = 768;
descT.MipLevels = 1;
descT.ArraySize = 1;
descT.Format = DXGI_FORMAT_NV12;
descT.SampleDesc.Count = 1;
descT.Usage = D3D11_USAGE_DEFAULT;
descT.BindFlags = D3D11_BIND_DECODER;
ID3D11Texture2D *pTex = nullptr;
pDX11VideoDevice->CreateTexture2D(&descT, 0, &pTex);
// decoder
D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC desc;
desc.DecodeProfile = D3D11_DECODER_PROFILE_H264_VLD_NOFGT; // interestingly, it fails whichever decode profile I choose
desc.Texture2D.ArraySlice = 1;
desc.ViewDimension = D3D11_VDOV_DIMENSION_TEXTURE2D;
HRESULT hr = pDX11VideoDevice->CreateVideoDecoderOutputView(pTex, &desc, &pVideoDecoderOutputView); // and here the fail occurs
Thank you
OK, solved the problem: the snippet in the post above should have
desc.Texture2D.ArraySlice = 0;
Still lots of work ahead.
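For completeness, a minimal sketch of the corrected view creation; zero-initializing the desc also keeps the unused union fields clean (names as in the snippet above):
// ArraySlice is a zero-based index into the texture array, so with
// ArraySize = 1 the only valid slice is 0 (1 is out of range).
D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC vdovDesc = {};
vdovDesc.DecodeProfile = D3D11_DECODER_PROFILE_H264_VLD_NOFGT;
vdovDesc.ViewDimension = D3D11_VDOV_DIMENSION_TEXTURE2D;
vdovDesc.Texture2D.ArraySlice = 0;
HRESULT hr = pDX11VideoDevice->CreateVideoDecoderOutputView(pTex, &vdovDesc, &pVideoDecoderOutputView);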
I am converting from the following format:
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
_stereoGraphStreamFormat.mFormatID = kAudioFormatLinearPCM;
_stereoGraphStreamFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
_stereoGraphStreamFormat.mBytesPerPacket = four_bytes_per_float;
_stereoGraphStreamFormat.mFramesPerPacket = 1;
_stereoGraphStreamFormat.mBytesPerFrame = four_bytes_per_float;
_stereoGraphStreamFormat.mChannelsPerFrame = 2;
_stereoGraphStreamFormat.mBitsPerChannel = eight_bits_per_byte * four_bytes_per_float;
_stereoGraphStreamFormat.mSampleRate = 44100;
to the following format:
interleavedAudioDescription.mFormatID = kAudioFormatLinearPCM;
interleavedAudioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger;
interleavedAudioDescription.mChannelsPerFrame = 2;
interleavedAudioDescription.mBytesPerPacket = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mFramesPerPacket = 1;
interleavedAudioDescription.mBytesPerFrame = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
interleavedAudioDescription.mSampleRate = 44100;
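For context, the converter used below (interleavedAudioConverter) would be created from these two descriptions with AudioConverterNew; a minimal sketch:
AudioConverterRef interleavedAudioConverter = NULL;
// Source: non-interleaved stereo float; destination: interleaved stereo SInt16.
OSStatus status = AudioConverterNew(&_stereoGraphStreamFormat,
                                    &interleavedAudioDescription,
                                    &interleavedAudioConverter);
// status must be noErr before the conversion code below is run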
Using the following code:
int32_t availableBytes = 0;
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
// If we have no data in the buffer, we simply return
if (availableBytes <= 0)
{
return;
}
// ========== Non-Interleaved to Interleaved (Plus Samplerate Conversion) =========
// Get the number of frames available
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
pcmOutputBuffer->mBuffers[0].mDataByteSize = frames * interleavedAudioDescription.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) { .self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = availableBytes };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
// ========== Buffering Of Interleaved Samples =========
// If we got converted frames back from the converter, we want to add it to a separate buffer
if (frames > 0)
{
    // Make sure we have enough space in the buffer to store the new data
    TPCircularBufferHead(&pcmCircularBuffer, &availableBytes);
    if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
    {
        // Add the newly converted data to the buffer
        TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, frames * interleavedAudioDescription.mBytesPerFrame);
    }
    else
    {
        printf("No Space in Buffer\n");
    }
}
However, the output is wrong: it should be a perfect sine wave, but the captured waveform shows periodic dropouts.
I have been working on this for days now and just can’t seem to find where it is going wrong.
Can anyone see something that I might be missing?
Edit:
Buffer initialisation:
TPCircularBuffer pcmCircularBuffer;
static SInt16 pcmOutputBuf[BUFFER_SIZE];
pcmOutputBuffer = (AudioBufferList*)malloc(sizeof(AudioBufferList));
pcmOutputBuffer->mNumberBuffers = 1;
pcmOutputBuffer->mBuffers[0].mNumberChannels = 2;
pcmOutputBuffer->mBuffers[0].mData = pcmOutputBuf;
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
Complex input data proc:
static OSStatus complexInputDataProc(AudioConverterRef inAudioConverter,
                                     UInt32 *ioNumberDataPackets,
                                     AudioBufferList *ioData,
                                     AudioStreamPacketDescription **outDataPacketDescription,
                                     void *inUserData) {
    struct complexInputDataProc_t *arg = (struct complexInputDataProc_t*)inUserData;
    BroadcastingServices::MP3Encoder *self = (BroadcastingServices::MP3Encoder*)arg->self;
    if ( arg->byteLength <= 0 )
    {
        *ioNumberDataPackets = 0;
        return 100; // kNoMoreDataErr
    }
    UInt32 framesAvailable = arg->byteLength / self->interleavedAudioDescription.mBytesPerFrame;
    if (*ioNumberDataPackets > framesAvailable)
    {
        *ioNumberDataPackets = framesAvailable;
    }
    ioData->mBuffers[0].mData = arg->sourceL;
    ioData->mBuffers[0].mDataByteSize = arg->byteLength;
    ioData->mBuffers[1].mData = arg->sourceR;
    ioData->mBuffers[1].mDataByteSize = arg->byteLength;
    arg->byteLength = 0;
    return noErr;
}
I see a few things that raise a red flag.
1) As mentioned in a comment above, you are overwriting availableBytes for the left input with that of the right:
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
If the two input streams change asynchronously to this code, then you almost certainly have a race condition.
2) Truncation errors: availableBytes is not necessarily a multiple of the bytes per frame. If it isn't, the following bit of code can cause you to consume more bytes than you actually converted:
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
...
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
...
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
3) If the output buffer is not ready to consume all of the input, you just throw the input buffer away. That happens in this code:
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
...
}
else
{
printf("No Space in Buffer\n");
}
I'd be really curious whether you're seeing that print output.
Here's how I would suggest doing it. It's going to be pseudo-code-ish, since I don't have what I'd need to compile and test it.
int32_t availableBytesInL = 0;
int32_t availableBytesInR = 0;
int32_t availableBytesOut = 0;
// figure out how many bytes are available in each buffer.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytesInL);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytesInR);
TPCircularBufferHead(&pcmCircularBuffer, &availableBytesOut);
// figure out how many full frames are available
UInt32 framesInL = availableBytesInL / mInputFormat.mBytesPerFrame;
UInt32 framesInR = availableBytesInR / mInputFormat.mBytesPerFrame;
UInt32 framesOut = availableBytesOut / interleavedAudioDescription.mBytesPerFrame;
// figure out how many frames to process this time.
UInt32 frames = min(min(framesInL, framesInR), framesOut);
if (frames == 0)
return;
int32_t bytesConsumed = frames * mInputFormat.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) {
.self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = bytesConsumed };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
int32_t bytesProduced = frames * interleavedAudioDescription.mBytesPerFrame;
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), bytesConsumed);
TPCircularBufferConsume(inputBufferR(), bytesConsumed);
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, bytesProduced);
Basically what I've done here is to figure out up front how many frames should be processed, making sure I'm only processing as many frames as the output buffer can handle. If it were me, I'd also add some checking for buffer underruns on the output and buffer overruns on the input. Finally, I'm not exactly sure of the semantics of AudioConverterFillComplexBuffer with respect to the frame parameter that is passed in and out: it's conceivable that the number of frames out is less than or more than the number of frames in, although since you're not doing sample rate conversion that probably won't happen here. I've accounted for that possibility by computing bytesProduced from frames after the conversion.
Hope this helps. If not, you have two other clues: the dropouts are periodic, and they all look to be about the same size. If you can figure out how many samples each one is, you can look for those numbers in your code.
I don't think your output buffer, pcmCircularBuffer, is big enough.
Try replacing
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
with
TPCircularBufferInit(&pcmCircularBuffer, sizeof(pcmOutputBuf));
Even if that is the solution, I think there are some problems with your code. I don't know exactly what you're doing; I guess encoding MP3 (which by itself is an uphill battle on iOS; why not use hardware AAC?), but unless you have realtime demands on both input and output, why use ring buffers at all? Also, I recommend putting units in names so frame/byte size mismatches stand out visually, e.g. BUFFER_SIZE_IN_FRAMES; see the sketch below.
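Something like this (the names here are hypothetical):
enum { BUFFER_SIZE_IN_FRAMES = 4096 }; // hypothetical capacity, in frames
static SInt16 pcmOutputBuf[BUFFER_SIZE_IN_FRAMES * 2 /* channels */];
// The ring buffer is sized in bytes; spelling out the conversion makes a
// frames-vs-bytes mix-up hard to write by accident.
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE_IN_FRAMES * 2 /* channels */ * sizeof(SInt16));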
If it's not the solution, then I want to see the sine generator.
I want to inject touch events on the iPhone. I get the coordinates of the touch event via a network socket, and GSSendEvent seems to be a good choice. However, it needs a GSEventRecord as one of its inputs.
Does anyone know how to prepare the GSEventRecord? I prepared it based on some examples, but the app crashes after the GSSendEvent call.
I appreciate any help.
-(void) handleMouseEventAtPoint:(CGPoint) point
{
    static mach_port_t port_;
    // structure of touch GSEvent
    struct GSTouchEvent {
        GSEventRecord record;
        GSHandInfo handInfo;
    };
    struct GSTouchEvent *touchEvent = (struct GSTouchEvent *) malloc(sizeof(struct GSTouchEvent));
    bzero(touchEvent, sizeof(touchEvent));
    // set up GSEvent
    touchEvent->record.type = kGSEventHand;
    touchEvent->record.windowLocation = point;
    touchEvent->record.timestamp = GSCurrentEventTimestamp();
    touchEvent->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
    touchEvent->handInfo.type = getHandInfoType(0, 1);
    touchEvent->handInfo.pathInfosCount = 1;
    bzero(&touchEvent->handInfo.pathInfos[0], sizeof(GSPathInfo));
    touchEvent->handInfo.pathInfos[0].pathIndex = 1;
    touchEvent->handInfo.pathInfos[0].pathIdentity = 2;
    touchEvent->handInfo.pathInfos[0].pathProximity = 1 ? 0x03 : 0x00;
    touchEvent->handInfo.pathInfos[0].pathLocation = point;
    port_ = GSGetPurpleSystemEventPort();
    GSSendEvent((GSEventRecord*)touchEvent, port_);
}

static GSHandInfoType getHandInfoType(int touch_before, int touch_now){
    if (!touch_before) {
        return (GSHandInfoType) kGSHandInfoType2TouchDown;
    }
    if (touch_now) {
        return (GSHandInfoType) kGSHandInfoType2TouchChange;
    }
    return (GSHandInfoType) kGSHandInfoType2TouchFinal;
}
Only tested on iOS 6.
You are actually on the right track; the problem is figuring out what values to assign to these fields.
First of all, you need to import GraphicsServices.h. Then you can try the following code with the port, which you can get from How to find the purple port for the front most application in IOS 5 and above?.
I am not an iOS expert and Apple doesn't provide any documentation, so I can't explain much of what's going on here. (It happens to work fine for me.)
Anyway, you can step through it in the Xcode debugger to see what happens under the hood.
// gsTouchEvent is a static byte buffer with room for the trailing GSPathInfo
// (see the Edit at the bottom of this answer):
static uint8_t gsTouchEvent[sizeof(GSEventRecord) + sizeof(GSHandInfo) + sizeof(GSPathInfo)];
struct GSTouchEvent * touchEvent = (struct GSTouchEvent*) &gsTouchEvent;
bzero(touchEvent, sizeof(gsTouchEvent));
touchEvent->record.type = kGSEventHand;
touchEvent->record.subtype = kGSEventSubTypeUnknown;
touchEvent->record.location = point;
touchEvent->record.windowLocation = point;
touchEvent->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
touchEvent->record.timestamp = GSCurrentEventTimestamp();
touchEvent->record.window = winRef;
touchEvent->record.senderPID = 919;
bzero(&touchEvent->handInfo, sizeof(GSHandInfo));
bzero(&touchEvent->handInfo.pathInfos[0], sizeof(GSPathInfo));
GSHandInfo touchEventHandInfo;
touchEventHandInfo._0x5C = 0;
touchEventHandInfo.deltaX = 0;
touchEventHandInfo.deltaY = 0;
touchEventHandInfo.height = 0;
touchEventHandInfo.width = 0;
touchEvent->handInfo = touchEventHandInfo;
touchEvent->handInfo.type = handInfoType;
touchEvent->handInfo.deltaX = 1;
touchEvent->handInfo.deltaY = 1;
touchEvent->handInfo.pathInfosCount = 0;
touchEvent->handInfo.pathInfos[0].pathIndex = 1;
touchEvent->handInfo.pathInfos[0].pathIdentity = 2;
touchEvent->handInfo.pathInfos[0].pathProximity = (handInfoType == kGSHandInfoTypeTouchDown || handInfoType == kGSHandInfoTypeTouchDragged || handInfoType == kGSHandInfoTypeTouchMoved) ? 0x03: 0x00;
touchEvent->handInfo.x52 = 1;
touchEvent->handInfo.pathInfos[0].pathLocation = point;
touchEvent->handInfo.pathInfos[0].pathWindow = winRef;
GSEventRecord* record = (GSEventRecord*) touchEvent;
record->timestamp = GSCurrentEventTimestamp();
GSSendEvent(record, port);
To use this code, you have to call it multiple times: for one tap there is a touch-down, a touch-drag, and then a touch-up, as in the sketch below.
Also note that pathProximity is 0 when the touch is up.
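For example, a single tap could be driven like this (sendTouch is a hypothetical wrapper around the snippet above, taking the point and the GSHandInfoType computed by getHandInfoType from the question):
sendTouch(point, getHandInfoType(0, 1)); // finger goes down
sendTouch(point, getHandInfoType(1, 1)); // finger held / moved
sendTouch(point, getHandInfoType(1, 0)); // finger comes up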
As far as I remember, the winRef doesn't matter.
Hope this helps.
Edit: From Bugivore's comment, the problem is:
The way I allocated touchEvent via malloc was wrong. It should be done as EntryLevelDev showed - "static uint8_t handJob[sizeof(GSEventRecord) + sizeof(GSHandInfo) + sizeof(GSPathInfo)];"
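In other words, GSHandInfo apparently ends in a variable-length pathInfos array, so malloc(sizeof(struct GSTouchEvent)) reserves no room for the GSPathInfo the code then writes into. A sketch of the corrected allocation:
// Reserve contiguous space for the record, the hand info, and one path info.
static uint8_t eventBuffer[sizeof(GSEventRecord) + sizeof(GSHandInfo) + sizeof(GSPathInfo)];
struct GSTouchEvent *touchEvent = (struct GSTouchEvent *)eventBuffer;
bzero(touchEvent, sizeof(eventBuffer)); // sizeof the buffer, not the pointer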
EntryLevelDev's answer is correct, but some of the values are not that important. I got the code below from somewhere else and refined it through trial and error; here is my code (it worked up to the latest iOS 6).
Is anyone working on this for iOS 7 now? I could not get it to work; see my post here: With GSCopyPurpleNamedPort(appId) in GraphicsServices deprecated in IOS7, what is the alternative approach?
static int prev_click = 0;
if (!click && !prev_click)
{
    // this should never happen
    NSLog(@"***error, postHandEvent cancel");
    return;
}
CGPoint location = CGPointMake(x, y);
// static byte buffer with room for the trailing GSPathInfo (see the accepted answer)
static uint8_t touchEvent[sizeof(GSEventRecord) + sizeof(GSHandInfo) + sizeof(GSPathInfo)];
struct GSTouchEvent {
    GSEventRecord record;
    GSHandInfo handInfo;
} * event = (struct GSTouchEvent*) &touchEvent;
bzero(touchEvent, sizeof(touchEvent));
event->record.type = kGSEventHand;
event->record.windowLocation = location;
event->record.timestamp = GSCurrentEventTimestamp();
//NSLog(@"Timestamp GSCurrentEventTimestamp: %llu", GSCurrentEventTimestamp());
event->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
event->handInfo.type = getHandInfoType(prev_click, click);
//must have the following line
event->handInfo.x52 = 1;
//below line is for ios4
//event->handInfo.pathInfosCount = 1;
bzero(&event->handInfo.pathInfos[0], sizeof(GSPathInfo));
event->handInfo.pathInfos[0].pathIndex = 2;
//the following 2 lines are the defaults
event->handInfo.pathInfos[0].pathMajorRadius = 1.0;
event->handInfo.pathInfos[0].pathPressure = 1.0;
//event->handInfo.pathInfos[0].pathIdentity = 2;
event->handInfo.pathInfos[0].pathProximity = click ? 0x03 : 0x00;
//event->handInfo.pathInfos[0].pathProximity = action;
event->handInfo.pathInfos[0].pathLocation = location;
// send GSEvent
GSEventRecord *event1 = (GSEventRecord*) event;
sendGSEvent(event1);
I have an application that applies various filters to an image. It works great on iOS 5 but crashes on 6. Below is a sample of where it's crashing:
CGImageRef inImage = self.CGImage;
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
for (int i = 0; i < length; i += 4)
{
    if (filter == filterCurve) {
        int r = i;
        int g = i + 1;
        int b = i + 2;
        int red = m_PixelBuf[r];
        int green = m_PixelBuf[g];
        int blue = m_PixelBuf[b];
        m_PixelBuf[r] = SAFECOLOR(red); // <==== EXC_BAD_ACCESS (code = 2)
        m_PixelBuf[g] = SAFECOLOR(green);
        m_PixelBuf[b] = SAFECOLOR(blue);
    }
}
Notice the bad access point when I try to assign a value back to m_PixelBuf. Does anybody have any idea why this is occurring? What in iOS 6 would cause this?
This solves the problem: http://www.iphonedevsdk.com/forum/iphone-sdk-development/108072-exc_bad_access-in-ios-6-but-not-in-ios-5.html
In iOS 6 you need to use CFDataCreateMutableCopy() (instead of CGDataProviderCopyData()), followed by CFDataGetMutableBytePtr() (instead of CFDataGetBytePtr()) if you're going to be manipulating the data's bytes directly.
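A minimal sketch of that change, reusing the names from the question (SAFECOLOR and filterCurve are assumed from the surrounding code):
CGImageRef inImage = self.CGImage;
// Copy the provider data into a mutable CFData so the bytes may be written.
CFDataRef providerData = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
CFMutableDataRef m_DataRef = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, providerData);
CFRelease(providerData);
UInt8 *m_PixelBuf = CFDataGetMutableBytePtr(m_DataRef);
CFIndex length = CFDataGetLength(m_DataRef);
for (CFIndex i = 0; i < length; i += 4)
{
    if (filter == filterCurve) {
        m_PixelBuf[i]     = SAFECOLOR(m_PixelBuf[i]);     // red
        m_PixelBuf[i + 1] = SAFECOLOR(m_PixelBuf[i + 1]); // green
        m_PixelBuf[i + 2] = SAFECOLOR(m_PixelBuf[i + 2]); // blue
    }
}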
Here is the URL of an updated class that works with iOS 6: https://github.com/kypselia/ios-image-filters/blob/6ef9a937a931f32dd0b7b5e5bbdca6cce2f690dc/Classes/ImageFilter.m