I am resizing bitmaps in an application using the code from the BlackBerry Support Forum thread "Resizing bitmap without creating encoded image". With that code I can only shrink the bitmap right-to-left, from width 360 down to 0; how can I resize it left-to-right?
Use this:
int oldWidth;
int oldHeight;
int displayWidth;
int displayHeight;
EncodedImage eih1 = EncodedImage.getEncodedImageResource("add2.png");
oldWidth = eih1.getWidth();
oldHeight = eih1.getHeight();
displayWidth = Display.getWidth() - 40;
displayHeight = 80;
// scaleImage32 takes Fixed32 scale factors of (original size / target size)
int numerator = net.rim.device.api.math.Fixed32.toFP(oldWidth);
int denominator = net.rim.device.api.math.Fixed32.toFP(displayWidth);
int widthScale = net.rim.device.api.math.Fixed32.div(numerator, denominator);
numerator = net.rim.device.api.math.Fixed32.toFP(oldHeight);
denominator = net.rim.device.api.math.Fixed32.toFP(displayHeight);
int heightScale = net.rim.device.api.math.Fixed32.div(numerator, denominator);
EncodedImage newEih1 = eih1.scaleImage32(widthScale, heightScale);
final Bitmap header1 = newEih1.getBitmap();
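If you need this in more than one place, the same Fixed32 math wraps nicely in a helper. A minimal sketch (the wrapper method is my own; the Fixed32 and scaleImage32 calls are the documented RIM APIs used above):
// Scales an EncodedImage to the given target size.
// scaleImage32 expects Fixed32 factors of (original size / target size).
public static EncodedImage sizeImage(EncodedImage image, int targetWidth, int targetHeight) {
    int widthScale = net.rim.device.api.math.Fixed32.div(
            net.rim.device.api.math.Fixed32.toFP(image.getWidth()),
            net.rim.device.api.math.Fixed32.toFP(targetWidth));
    int heightScale = net.rim.device.api.math.Fixed32.div(
            net.rim.device.api.math.Fixed32.toFP(image.getHeight()),
            net.rim.device.api.math.Fixed32.toFP(targetHeight));
    return image.scaleImage32(widthScale, heightScale);
}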
I am trying to implement Variable Rate Shading in an application based on DirectX 11.
I am doing it this way:
UINT dwRtWidth = 2560;
UINT dwRtHeight = 1440;
D3D11_TEXTURE2D_DESC srcDesc;
ZeroMemory(&srcDesc, sizeof(srcDesc));
int sri_w = dwRtWidth / NV_VARIABLE_PIXEL_SHADING_TILE_WIDTH;
int sri_h = dwRtHeight / NV_VARIABLE_PIXEL_SHADING_TILE_HEIGHT;
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;
srcDesc.SampleDesc.Count = 1;
srcDesc.SampleDesc.Quality = 0;
srcDesc.Usage = D3D11_USAGE_DEFAULT; //Optional
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE; //Optional
srcDesc.CPUAccessFlags = 0;
srcDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initialData;
UINT* data = (UINT*)malloc(sri_w * sri_h * sizeof(UINT));
for (int i = 0; i < sri_w * sri_h; i++)
data[i] = (UINT)0;
initialData.pSysMem = data;
initialData.SysMemPitch = sri_w;
//initialData.SysMemSlicePitch = 0;
HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);
if (FAILED(hr))
{
LOG("Texture not created");
LOG(std::system_category().message(hr));
}
else
LOG("Texture created");
When I try to create the texture with initial data, it is not created, and the HRESULT carries the message 'The parameter is incorrect', without saying which one.
When I create the texture without initial data, it is created successfully.
What's wrong with the initial data? I also tried using unsigned char instead of UINT, since the format is 8 bits per texel, but the result was the same: the texture was not created.
Please help.
After some time I found a solution to the problem. I needed to add one line:
srcDesc.MipLevels = 1;
With this change, the texture was finally created with initial data.
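For reference, here is a consolidated sketch of the working setup (my own assembly of the pieces above, not tested here). Leaving MipLevels at 0 requests a full mip chain, which then expects initial data for every mip level; also note that DXGI_FORMAT_R8_UINT is one byte per texel, so the system-memory data and pitch should be in bytes:
// Assumes <vector> is included, and s_device / pShadingRateSurface as above.
const UINT sri_w = dwRtWidth / NV_VARIABLE_PIXEL_SHADING_TILE_WIDTH;
const UINT sri_h = dwRtHeight / NV_VARIABLE_PIXEL_SHADING_TILE_HEIGHT;
D3D11_TEXTURE2D_DESC srcDesc = {};
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.MipLevels = 1;                 // the missing field
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;  // one byte per texel
srcDesc.SampleDesc.Count = 1;
srcDesc.Usage = D3D11_USAGE_DEFAULT;
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
std::vector<UINT8> texels(sri_w * sri_h, 0); // zero-initialised shading rates
D3D11_SUBRESOURCE_DATA initialData = {};
initialData.pSysMem = texels.data();
initialData.SysMemPitch = sri_w;             // pitch in bytes: sri_w texels * 1 byte each
HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);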
I'm learning Emgu.CV and have successfully run the example, but I would prefer to read the image from a byte array rather than directly from a file, and likewise to save the result to a byte array rather than to a file.
Can anybody help?
Many thanks
var _img = CvInvoke.Imread(_jpg); // HOW TO READ FROM BYTE ARRAY
var _fd = new Emgu.CV.CascadeClassifier(_face_classifier);
var _img_gray = new UMat();
CvInvoke.CvtColor(_img, _img_gray, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
foreach (Rectangle _face in _fd.DetectMultiScale(_img_gray, 1.1, 10, new System.Drawing.Size(20,20)))
{
CvInvoke.Rectangle(_img, _face, new Emgu.CV.Structure.MCvScalar(255, 255, 255));
}
_img.Save("result.jpg"); // SAVE TO BYTE ARRAY
I've understood your question to be twofold: converting an image to a byte[], and saving a byte[] to a file. I'll address the latter, easier part first:
Saving a byte[] to File
using (FileStream fs = new FileStream("OutputFile.dat", FileMode.OpenOrCreate))
using (BinaryWriter bw = new BinaryWriter(fs))
{
    bw.Write(byteBuffer);
}
Loading a byte[] from File
byte[] byteBuffer;
using (FileStream fs = new FileStream("InputFile.dat", FileMode.Open))
using (BinaryReader br = new BinaryReader(fs))
{
    // Where 0xF0000 is calculated as Image Width x Height x Bytes per Pixel
    byteBuffer = br.ReadBytes(0xF0000);
}
Now for the second part. I'm assuming you are familiar with pixel formats and how a multi-dimensional image structure such as byte[,,] is serialized into a single-dimension byte[]; if not, please do ask.
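For a 24bpp RGB image, for example, the mapping from (x, y, channel) to the flat index is simple; a hypothetical helper, purely for illustration:
// stride = bytes per row; 3 bytes per pixel for 24bpp RGB
// index = y * stride + x * 3 + channel
static int PixelOffset(int x, int y, int channel, int stride)
{
    return y * stride + x * 3 + channel;
}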
Create an Image from byte[]
In the following example the byte[] is created rather than loaded as above; the source of the byte[] is irrelevant, so long as it is a correct serialization of known image dimensions and depth.
int width = 640; // Image Width
int height = 512; // Image Height
int stride = 640 * 3; // Image Stride - Bytes per Row (3 bytes per pixel)
// Create data for a 640x512 RGB Image - 983,040 Bytes
byte[] sourceImgData = new byte[0xF0000];
// Pin the imgData in memory and create an IntPtr to its location
GCHandle pinnedArray = GCHandle.Alloc(sourceImgData, GCHandleType.Pinned);
IntPtr pointer = pinnedArray.AddrOfPinnedObject();
// Create an image from the imgData
Image<Rgb, byte> img = new Image<Rgb, byte>(width, height, stride, pointer);
// Free the memory
pinnedArray.Free();
Convert an Image to byte[]
// Convert the source Image (here, the Image<Rgb, byte> from above) to a System.Drawing.Bitmap
Bitmap bitmap = sourceImage.ToBitmap();
// Create a BitmapData object from the resulting Bitmap, locking the backing data in memory
BitmapData bitmapData = bitmap.LockBits(
new Rectangle(0, 0, width, height),
ImageLockMode.ReadOnly,
PixelFormat.Format24bppRgb);
// Create an output byte array
byte[] destImgData = new byte[bitmapData.Stride * bitmap.Height];
// Copy the locked pixel data into the destination byte array
Marshal.Copy(bitmapData.Scan0,
destImgData,
0,
destImgData.Length);
// Unlock the bitmap's backing memory
bitmap.UnlockBits(bitmapData);
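As a side note: if your byte[] holds an encoded image (e.g. the raw bytes of a JPEG file) rather than serialized pixels, Emgu.CV also wraps OpenCV's imdecode/imencode. A sketch, assuming a recent Emgu.CV version where CvInvoke.Imdecode and CvInvoke.Imencode are available:
// Decode JPEG/PNG bytes straight into a Mat, no temporary file needed.
byte[] jpegBytes = System.IO.File.ReadAllBytes("input.jpg"); // or bytes from elsewhere
var img = new Emgu.CV.Mat();
CvInvoke.Imdecode(jpegBytes, Emgu.CV.CvEnum.ImreadModes.Color, img);
// ... process img (face detection etc.) ...
// Encode the result back into a byte[] instead of Save("result.jpg").
using (var buf = new Emgu.CV.Util.VectorOfByte())
{
    CvInvoke.Imencode(".jpg", img, buf);
    byte[] resultBytes = buf.ToArray();
}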
I need to determine in Objective-C whether an image that needs to be downloaded and shown is a progressive JPEG. I did some searching but didn't find any obvious way. Can you help?
The JPEG format contains several markers. The start-of-frame marker for progressive (DCT) encoding is
\xFF\xC2
whereas baseline JPEGs use \xFF\xC0 (SOF0). A progressive file also contains multiple scans, i.e. more than one start-of-scan marker
\xFF\xDA
If you find the \xFF\xC2 marker among the file's header segments, you're dealing with a progressive JPEG.
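If you want to check this directly on downloaded data, here is a minimal Objective-C sketch (my own illustration, assuming the complete file is already in an NSData; it walks the header segments and stops at the first start-of-scan):
// Returns YES if the JPEG in 'data' uses progressive (SOF2) encoding.
static BOOL IsProgressiveJPEG(NSData *data) {
    const uint8_t *bytes = (const uint8_t *)data.bytes;
    NSUInteger len = data.length;
    if (len < 4 || bytes[0] != 0xFF || bytes[1] != 0xD8) return NO; // no SOI: not a JPEG
    NSUInteger i = 2;
    while (i + 3 < len && bytes[i] == 0xFF) {
        while (i + 1 < len && bytes[i + 1] == 0xFF) i++; // skip fill bytes
        uint8_t marker = bytes[i + 1];
        if (marker == 0xC2) return YES; // SOF2: progressive DCT
        if (marker == 0xDA) break;      // SOS: entropy-coded data follows
        NSUInteger segLen = ((NSUInteger)bytes[i + 2] << 8) | bytes[i + 3];
        i += 2 + segLen;                // 2 marker bytes + segment payload
    }
    return NO;
}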
After doing some more research I found more information on how it can be done in iOS.
Following what I read in Technical Q&A QA1654 - Accessing image properties with ImageIO, I did it like this:
//Example of progressive JPEG
NSString *imageUrlStr = @"http://cetus.sakura.ne.jp/softlab/software/spibench/pic_22p.jpg";
CFURLRef url = (__bridge CFURLRef)[NSURL URLWithString:imageUrlStr];
CGImageSourceRef myImageSource = CGImageSourceCreateWithURL(url, NULL);
CFDictionaryRef imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(myImageSource,0, NULL);
The imagePropertiesDictionary dictionary looks like this for a progressive JPEG:
Printing description of imagePropertiesDictionary:
{
ColorModel = RGB;
Depth = 8;
PixelHeight = 1280;
PixelWidth = 1024;
"{JFIF}" = {
DensityUnit = 0;
IsProgressive = 1;
JFIFVersion = (
1,
0,
1
);
XDensity = 1;
YDensity = 1;
};
}
IsProgressive = 1 means it's a progressive/interlaced JPEG. You can check the value like this:
CFDictionaryRef JFIFDictionary = (CFDictionaryRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyJFIFDictionary);
CFBooleanRef isProgressive = (CFBooleanRef)CFDictionaryGetValue(JFIFDictionary, kCGImagePropertyJFIFIsProgressive);
if(isProgressive == kCFBooleanTrue)
NSLog(@"It's a progressive JPEG");
I am converting from the following format:
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
_stereoGraphStreamFormat.mFormatID = kAudioFormatLinearPCM;
_stereoGraphStreamFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
_stereoGraphStreamFormat.mBytesPerPacket = four_bytes_per_float;
_stereoGraphStreamFormat.mFramesPerPacket = 1;
_stereoGraphStreamFormat.mBytesPerFrame = four_bytes_per_float;
_stereoGraphStreamFormat.mChannelsPerFrame = 2;
_stereoGraphStreamFormat.mBitsPerChannel = eight_bits_per_byte * four_bytes_per_float;
_stereoGraphStreamFormat.mSampleRate = 44100;
to the following format:
interleavedAudioDescription.mFormatID = kAudioFormatLinearPCM;
interleavedAudioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger;
interleavedAudioDescription.mChannelsPerFrame = 2;
interleavedAudioDescription.mBytesPerPacket = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mFramesPerPacket = 1;
interleavedAudioDescription.mBytesPerFrame = sizeof(SInt16)*interleavedAudioDescription.mChannelsPerFrame;
interleavedAudioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
interleavedAudioDescription.mSampleRate = 44100;
Using the following code:
int32_t availableBytes = 0;
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
// If we have no data in the buffer, we simply return
if (availableBytes <= 0)
{
return;
}
// ========== Non-Interleaved to Interleaved (Plus Samplerate Conversion) =========
// Get the number of frames available
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
pcmOutputBuffer->mBuffers[0].mDataByteSize = frames * interleavedAudioDescription.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) { .self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = availableBytes };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
// ========== Buffering Of Interleaved Samples =========
// If we got converted frames back from the converter, we want to add it to a separate buffer
if (frames > 0)
{
// Make sure we have enough space in the buffer to store the new data
TPCircularBufferHead(&pcmCircularBuffer, &availableBytes);
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
// Add the newly converted data to the buffer
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, frames * interleavedAudioDescription.mBytesPerFrame);
}
else
{
printf("No Space in Buffer\n");
}
}
However I am getting the following output:
It should be a perfect sine wave; however, as you can see, it is not.
I have been working on this for days now and just can’t seem to find where it is going wrong.
Can anyone see something that I might be missing?
Edit:
Buffer initialisation:
TPCircularBuffer pcmCircularBuffer;
static SInt16 pcmOutputBuf[BUFFER_SIZE];
pcmOutputBuffer = (AudioBufferList*)malloc(sizeof(AudioBufferList));
pcmOutputBuffer->mNumberBuffers = 1;
pcmOutputBuffer->mBuffers[0].mNumberChannels = 2;
pcmOutputBuffer->mBuffers[0].mData = pcmOutputBuf;
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
Complex input data proc:
static OSStatus complexInputDataProc(AudioConverterRef inAudioConverter,
UInt32 *ioNumberDataPackets,
AudioBufferList *ioData,
AudioStreamPacketDescription **outDataPacketDescription,
void *inUserData) {
struct complexInputDataProc_t *arg = (struct complexInputDataProc_t*)inUserData;
BroadcastingServices::MP3Encoder *self = (BroadcastingServices::MP3Encoder*)arg->self;
if ( arg->byteLength <= 0 )
{
*ioNumberDataPackets = 0;
return 100; //kNoMoreDataErr;
}
UInt32 framesAvailable = arg->byteLength / self->interleavedAudioDescription.mBytesPerFrame;
if (*ioNumberDataPackets > framesAvailable)
{
*ioNumberDataPackets = framesAvailable;
}
ioData->mBuffers[0].mData = arg->sourceL;
ioData->mBuffers[0].mDataByteSize = arg->byteLength;
ioData->mBuffers[1].mData = arg->sourceR;
ioData->mBuffers[1].mDataByteSize = arg->byteLength;
arg->byteLength = 0;
return noErr;
}
I see a few things that raise a red flag.
1) As mentioned in a comment above, you are overwriting availableBytes for the left input with that of the right:
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
If the two input streams are changing asynchronously to this code, then you almost certainly have a race condition.
2) Truncation errors: availableBytes is not necessarily a multiple of the bytes per frame. If it isn't, the following bit of code could cause you to consume more bytes than you actually converted.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytes);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytes);
...
UInt32 frames = availableBytes / this->mInputFormat.mBytesPerFrame;
...
TPCircularBufferConsume(inputBufferL(), availableBytes);
TPCircularBufferConsume(inputBufferR(), availableBytes);
3) If the output buffer is not ready to consume all of the input, you simply throw the input away. That happens in this code:
if (availableBytes > pcmOutputBuffer->mBuffers[0].mDataByteSize)
{
...
}
else
{
printf("No Space in Buffer\n");
}
I'd be really curious whether you're seeing that print output.
Here's how I would suggest doing it. It's going to be pseudo-code-ish, since I don't have what's needed to compile and test it.
int32_t availableBytesInL = 0;
int32_t availableBytesInR = 0;
int32_t availableBytesOut = 0;
// figure out how many bytes are available in each buffer.
void* tailL = TPCircularBufferTail(inputBufferL(), &availableBytesInL);
void* tailR = TPCircularBufferTail(inputBufferR(), &availableBytesInR);
TPCircularBufferHead(&pcmCircularBuffer, &availableBytesOut);
// figure out how many full frames are available
UInt32 framesInL = availableBytesInL / mInputFormat.mBytesPerFrame;
UInt32 framesInR = availableBytesInR / mInputFormat.mBytesPerFrame;
UInt32 framesOut = availableBytesOut / interleavedAudioDescription.mBytesPerFrame;
// figure out how many frames to process this time.
UInt32 frames = min(min(framesInL, framesInR), framesOut);
if (frames == 0)
return;
int32_t bytesConsumed = frames * mInputFormat.mBytesPerFrame;
struct complexInputDataProc_t data = (struct complexInputDataProc_t) {
.self = this, .sourceL = tailL, .sourceR = tailR, .byteLength = bytesConsumed };
// Do the conversion
OSStatus result = AudioConverterFillComplexBuffer(interleavedAudioConverter,
complexInputDataProc,
&data,
&frames,
pcmOutputBuffer,
NULL);
int32_t bytesProduced = frames * interleavedAudioDescription.mBytesPerFrame;
// Tell the buffers how much data we consumed during the conversion so that it can be removed
TPCircularBufferConsume(inputBufferL(), bytesConsumed);
TPCircularBufferConsume(inputBufferR(), bytesConsumed);
TPCircularBufferProduceBytes(&pcmCircularBuffer, pcmOutputBuffer->mBuffers[0].mData, bytesProduced);
Basically, what I've done here is to figure out up front how many frames should be processed, making sure I'm only processing as many frames as the output buffer can handle. If it were me, I'd also add some checking for buffer underruns on the output and buffer overruns on the input. Finally, I'm not exactly sure of the semantics of AudioConverterFillComplexBuffer with respect to the frames parameter that is passed in and out: it's conceivable that the number of frames out could be less or more than the number of frames in, although since you're not doing sample rate conversion that probably won't happen. I've attempted to account for that condition by assigning bytesProduced after the conversion.
Hope this helps. If not, you have two other clues: the dropouts are periodic, and they look to be about the same size. If you can figure out how many samples each one is, you can look for those numbers in your code.
I don't think your output buffer, pcmCircularBuffer, is big enough.
Try replacing
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE);
with
TPCircularBufferInit(&pcmCircularBuffer, sizeof(pcmOutputBuf));
Even if that is the solution, I think there are some problems with your code. I don't know exactly what you're doing; I'm guessing MP3 encoding (which by itself is an uphill battle on iOS; why not use hardware AAC?), but unless you have realtime demands on both input and output, why use ring buffers at all? Also, I recommend putting units in names to catch frame/byte size mismatches visually, e.g. BUFFER_SIZE_IN_FRAMES, as in the sketch below.
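For example (the names and the 4096-frame size are illustrative, not from the original code):
// Unit-suffixed constants make frame/byte mismatches visible at a glance.
#define CHANNELS_PER_FRAME     2
#define BYTES_PER_FRAME        (sizeof(SInt16) * CHANNELS_PER_FRAME)
#define BUFFER_SIZE_IN_FRAMES  4096
#define BUFFER_SIZE_IN_BYTES   (BUFFER_SIZE_IN_FRAMES * BYTES_PER_FRAME)
static SInt16 pcmOutputBuf[BUFFER_SIZE_IN_FRAMES * CHANNELS_PER_FRAME];
TPCircularBufferInit(&pcmCircularBuffer, BUFFER_SIZE_IN_BYTES);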
If it's not the solution, then I want to see the sine generator.
I have an application that applies various filters to an image. It works great on iOS 5 but crashes on iOS 6. Below is a sample of where it's crashing:
CGImageRef inImage = self.CGImage;
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
for (int i=0; i<length; i+=4)
{
if(filter == filterCurve){
int r = i;
int g = i+1;
int b = i+2;
int red = m_PixelBuf[r];
int green = m_PixelBuf[g];
int blue = m_PixelBuf[b];
m_PixelBuf[r] = SAFECOLOR(red); // <==== EXC_BAD_ACCESS (code = 2)
m_PixelBuf[g] = SAFECOLOR(green);
m_PixelBuf[b] = SAFECOLOR(blue);
}
}
Notice the bad-access point when I try to assign a value back to m_PixelBuf. Does anybody have any idea why this is occurring? What changed in iOS 6 to cause this?
This solves the problem: http://www.iphonedevsdk.com/forum/iphone-sdk-development/108072-exc_bad_access-in-ios-6-but-not-in-ios-5.html
In iOS 6 you need to use CFDataCreateMutableCopy() (instead of CGDataProviderCopyData()), followed by CFDataGetMutableBytePtr() (instead of CFDataGetBytePtr()) if you're going to be manipulating the data's bytes directly.
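Applied to the code above, the fix looks roughly like this (a sketch based on the linked thread, not tested here):
// iOS 6 can hand back the provider's actual (read-only) backing store,
// so take a mutable copy before writing to the pixels.
CGImageRef inImage = self.CGImage;
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
CFMutableDataRef m_DataRef = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataRef);
CFRelease(dataRef); // the immutable copy is no longer needed
UInt8 *m_PixelBuf = CFDataGetMutableBytePtr(m_DataRef);
long length = CFDataGetLength(m_DataRef);
// ... run the filter loop on m_PixelBuf as before ...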
Here is the URL where you can find an updated class that works with iOS 6: https://github.com/kypselia/ios-image-filters/blob/6ef9a937a931f32dd0b7b5e5bbdca6cce2f690dc/Classes/ImageFilter.m