I have the following code:
MagickWand *wand = NewMagickWand();
char* cmdargs[] = {
    "compare",
    "receipt-expected.png",
    "-metric",
    "psnr",
    "difference.png",
    "difference2.png",
    NULL
};
int argcount = 6;
// Allocate memory for MagickCommand
ImageInfo * info = AcquireImageInfo();
ExceptionInfo* e = AcquireExceptionInfo();
// Execute command
char *metadata = NULL;
MagickBooleanType status = MagickCommandGenesis(info, CompareImageCommand, argcount, cmdargs, &metadata, e);
status is 0, which I assume means it worked, because there is no error and the command works correctly in the CLI.
How do I get the metric it has produced? metadata is NULL.
$ compare receipt-expected.png -metric psnr difference.png difference2.png
15.4169
Ideally you would access the API directly, rather than attempting to call a new ImageMagick process as a subprocess.
MagickWand *alpha, *beta, *result;
double metric;

// ... allocate & initialize `alpha' and `beta' ...

result = MagickCompareImages(alpha,
                             beta,
                             PeakSignalToNoiseRatioMetric,
                             &metric);
printf("psnr = %f\n", metric);
How do I get the metric it has produced?
You cannot, as metadata is intended to hold additional I/O information on the heap. In this instance, any information written to char ** metadata is destroyed immediately after the internal command writes it to standard output. See here for reference.
Related
So I created a GIF file from 5 PNG files using the MagickCoalesceImages call and stored it on disk.
How can I read these files back from the GIF file?
MagickReadImage does not help.
Hard to help without seeing the code, but I can assume you created 5 images with something like...
MagickWand *gif2png;

gif2png = NewMagickWand();
MagickReadImage(gif2png, "input.gif");
MagickWriteImages(gif2png, "output_%02d.png", MagickFalse);
gif2png = DestroyMagickWand(gif2png);
How can I read these files back from the GIF file?
You would use MagickReadImage to decode the image from the file, and MagickAddImage to append the decoded image onto an image stack.
MagickWand *png2gif, *temp;

// Create a blank image stack.
png2gif = NewMagickWand();

char filename[PATH_MAX]; // PATH_MAX provided by limits.h

// Iterate over images to append.
for (int i = 0; i < 5; ++i) {
    sprintf(filename, "output_%02d.png", i);
    // Read image from disk.
    temp = NewMagickWand();
    MagickReadImage(temp, filename);
    // Add "frame" to stack.
    MagickAddImage(png2gif, temp);
    temp = DestroyMagickWand(temp);
}

MagickWriteImages(png2gif, "output.gif", MagickTrue);
png2gif = DestroyMagickWand(png2gif);
Warning: The above example omits basic error handling, and assumes the filenames form a sequential series.
Update
From the comments, if you wish to extract individual frames as PNG files, there are a few ways.
The fastest way is to use MagickWriteImages:
MagickWriteImages(img, "output_%02d.png", MagickFalse);
Or use the image stack iterators.
// MagickResetIterator positions before the first frame, so
// MagickNextImage visits every frame, including the last one.
for (MagickResetIterator(img); MagickNextImage(img) != MagickFalse; ) {
    MagickWriteImage(img, "output_%02d.png");
}
Or, if the PNG filenames are predefined and you need to map frames to them:
const char *filenames[5] = {
    "first.png",
    "second.png",
    "third.png",
    "fourth.png",
    "fifth.png"
};

for (int i = 0; i < 5; ++i) {
    MagickSetIteratorIndex(img, i);
    MagickWriteImage(img, filenames[i]);
}
Without seeing the code, we can't offer much help, and can only guess what an acceptable solution would be.
Scenario:
I am using OpenH264 with my App to encode into a video_file.mp4.
Environment:
Platform : MacOs Sierra
Compiler : Clang++
The code:
Following is the crux of the code I have:
void EncodeVideoFile() {
    ISVCEncoder * encoder_;
    std::string video_file_name = "/Path/to/some/folder/video_file.mp4";
    EncodeFileParam * pEncFileParam;
    SEncParamExt * pEnxParamExt;
    float frameRate = 1000;
    EUsageType usageType = EUsageType::CAMERA_VIDEO_REAL_TIME;
    bool denoise = false;
    bool lossless = true;
    bool enable_ltr = false;
    int layers = 1;
    bool cabac = false;
    int sliceMode = 1;

    pEncFileParam = new EncodeFileParam;
    pEncFileParam->eUsageType = EUsageType::CAMERA_VIDEO_REAL_TIME;
    pEncFileParam->pkcFileName = video_file_name.c_str();
    pEncFileParam->iWidth = frame_width;
    pEncFileParam->iHeight = frame_height;
    pEncFileParam->fFrameRate = frameRate;
    pEncFileParam->iLayerNum = layers;
    pEncFileParam->bDenoise = denoise;
    pEncFileParam->bLossless = lossless;
    pEncFileParam->bEnableLtr = enable_ltr;
    pEncFileParam->bCabac = cabac;

    int rv = WelsCreateSVCEncoder (&encoder_);

    pEnxParamExt = new SEncParamExt;
    pEnxParamExt->iUsageType = pEncFileParam->eUsageType;
    pEnxParamExt->iPicWidth = pEncFileParam->iWidth;
    pEnxParamExt->iPicHeight = pEncFileParam->iHeight;
    pEnxParamExt->fMaxFrameRate = pEncFileParam->fFrameRate;
    pEnxParamExt->iSpatialLayerNum = pEncFileParam->iLayerNum;
    pEnxParamExt->bEnableDenoise = pEncFileParam->bDenoise;
    pEnxParamExt->bIsLosslessLink = pEncFileParam->bLossless;
    pEnxParamExt->bEnableLongTermReference = pEncFileParam->bEnableLtr;
    pEnxParamExt->iEntropyCodingModeFlag = pEncFileParam->bCabac ? 1 : 0;

    for (int i = 0; i < pEnxParamExt->iSpatialLayerNum; i++) {
        pEnxParamExt->sSpatialLayers[i].sSliceArgument.uiSliceMode = pEncFileParam->eSliceMode;
    }

    encoder_->InitializeExt(pEnxParamExt);

    int videoFormat = videoFormatI420;
    encoder_->SetOption (ENCODER_OPTION_DATAFORMAT, &videoFormat);

    int frameSize = frame_width * frame_height * 3 / 2;
    int total_num = 500;

    BufferedData buf;
    buf.SetLength (frameSize);

    // check the buffer before proceeding
    if (buf.Length() != (size_t)frameSize) {
        CloseEncoder();
        return;
    }

    SFrameBSInfo info;
    memset (&info, 0, sizeof (SFrameBSInfo));

    SSourcePicture pic;
    memset (&pic, 0, sizeof (SSourcePicture));
    pic.iPicWidth = frame_width;
    pic.iPicHeight = frame_height;
    pic.iColorFormat = videoFormatI420;
    pic.iStride[0] = pic.iPicWidth;
    pic.iStride[1] = pic.iStride[2] = pic.iPicWidth >> 1;
    pic.pData[0] = buf.data();
    pic.pData[1] = pic.pData[0] + frame_width * frame_height;
    pic.pData[2] = pic.pData[1] + (frame_width * frame_height >> 2);

    for (int num = 0; num < total_num; num++) {
        // try to encode the frame
        rv = encoder_->EncodeFrame (&pic, &info);
    }

    if (encoder_) {
        encoder_->Uninitialize();
        WelsDestroySVCEncoder (encoder_);
    }
}
The above code is something I pulled from the official usage examples of OpenH264; BufferedData.h is a class I reused from the OpenH264 utils.
Issue:
But I am getting the following error:
[OpenH264] this = 0x0x1038bc8c0, Error:ParamValidationExt(), width > 0, height > 0, width * height <= 9437184, invalid 0 x 0 in dependency layer settings!
[OpenH264] this = 0x0x1038bc8c0, Error:WelsInitEncoderExt(), ParamValidationExt failed return 2.
[OpenH264] this = 0x0x1038bc8c0, Error:CWelsH264SVCEncoder::Initialize(), WelsInitEncoderExt failed.
The above does not crash the application, but it runs through without creating video_file.mp4 from the dummy data I am trying to write into it.
Question:
There seems to be something wrong with the setup config I am applying to pEnxParamExt, which goes into encoder_->InitializeExt.
What am I doing wrong with the setup of the encoder?
Note:
I am not trying to hook up to any camera device. I am just trying to create a .mp4 video out of some dummy image data.
If you want a complete and working OpenH264 encoder initialization procedure, you can click... here.
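The validation error itself points at the likely cause: the per-layer dimensions in sSpatialLayers are never set, so they stay 0 x 0. A minimal sketch of the missing assignments (field names as in OpenH264's codec_app_def.h; adapt to your version):

for (int i = 0; i < pEnxParamExt->iSpatialLayerNum; i++) {
    // Each spatial/dependency layer needs its own dimensions and rate;
    // leaving them zeroed is what makes ParamValidationExt() fail.
    pEnxParamExt->sSpatialLayers[i].iVideoWidth = pEncFileParam->iWidth;
    pEnxParamExt->sSpatialLayers[i].iVideoHeight = pEncFileParam->iHeight;
    pEnxParamExt->sSpatialLayers[i].fFrameRate = pEncFileParam->fFrameRate;
    pEnxParamExt->sSpatialLayers[i].sSliceArgument.uiSliceMode = pEncFileParam->eSliceMode;
}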
According to your problem scenario, you are trying to create a video file (.mp4/.avi) from some dummy images. This task can be accomplished using two different libraries: i) a library for the codec, ii) a library for the container.
i) Library for the codec: It is quite easy to use OpenH264 to compress data. One thing I must mention is that OpenH264 always works with raw frames, e.g. yuv420 data. So, if you want to compress your image data, you first have to convert it into the yuv420 color format. To get OpenH264, click... here
ii) Library for the container: After getting the encoded data, you have to use another library to create the container with the extension .mp4, .avi, .flv, etc. There are plenty of libraries on GitHub that do this, such as FFmpeg, OpenCV, Bento4, MP4Maker, mp4parser, etc. Before using any of them, check their licenses in detail. If you use FFmpeg, you will not need OpenH264, because FFmpeg itself ships with several codecs. You will also find many more working examples, since so many developers out there work with video data.
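As a side note, the loop in the question calls EncodeFrame but never writes the output anywhere. A hedged sketch of draining the encoder output into a raw Annex-B .h264 stream, which a container library from the list above can then mux into .mp4 (outFile is a hypothetical FILE*):

rv = encoder_->EncodeFrame(&pic, &info);
if (rv == cmResultSuccess && info.eFrameType != videoFrameTypeSkip) {
    // Each layer stores its NAL units back to back in pBsBuf.
    for (int layer = 0; layer < info.iLayerNum; ++layer) {
        const SLayerBSInfo& layerInfo = info.sLayerInfo[layer];
        int layerSize = 0;
        for (int nal = 0; nal < layerInfo.iNalCount; ++nal)
            layerSize += layerInfo.pNalLengthInByte[nal];
        fwrite(layerInfo.pBsBuf, 1, layerSize, outFile);
    }
}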
Hope it helps. :)
I am using the library <flash.h> to erase/write/read from memory, but unfortunately the data I am trying to save does not seem to be written to flash memory. I am using a PIC18F87J11 with the MPLAB XC8 compiler. Also, when I read the program memory from the PIC after attempting to write to it, there is no data at address 0x1C0CA. What am I doing wrong?
char read[1];
/* set FOSC clock to 8MHZ */
OSCCON = 0b01110000;
/* turn off 4x PLL */
OSCTUNE = 0x00;
TRISDbits.TRISD6 = 0; // set as output
TRISDbits.TRISD7 = 0; // set as output
LATDbits.LATD6 = 0; // LED 1 OFF
LATDbits.LATD7 = 1; // LED 2 ON
EraseFlash(0x1C0CA, 0x1C0CA);
WriteBytesFlash(0x1C0CA, 1, 0x01);
ReadFlash(0x1C0CA, 1, read[0]);
if (read[0] == 0x01)
LATDbits.LATD6 = 1; // LED 1 ON
while (1) {
}
I don't know what WriteBytesFlash does, but the page size for your device is 64 bytes, and after writing you need to write an unlock sequence to the EECON2 and EECON1 registers to start programming the flash memory.
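For reference, the standard PIC18 timed unlock sequence looks like the sketch below (the two EECON2 writes and the WR bit must execute back to back with interrupts disabled; check the PIC18F87J11 datasheet for the exact erase/write flow):

EECON1bits.WREN = 1; // enable writes to program flash
INTCONbits.GIE = 0;  // an interrupt would break the timed sequence
EECON2 = 0x55;       // unlock byte 1
EECON2 = 0xAA;       // unlock byte 2
EECON1bits.WR = 1;   // start programming; the CPU stalls until done
INTCONbits.GIE = 1;  // re-enable interrupts
EECON1bits.WREN = 0; // disable further writes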
I am developing an app that listens for frequencies/pitches. It works fine on the iPhone 4s, the simulator and other devices, but not on the iPhone 5s. This is the message I am getting:
malloc: *** error for object 0x178203a00: Heap corruption detected, free list canary is damaged
Any suggestions on where I should start digging?
Thanks!
The iPhone 5s has an arm64/64-bit CPU. Check all the analyzer and compiler warnings for attempts to store 64-bit pointers (and other values) into 32-bit C data types.
Also make sure all your audio code's parameter passing, object messaging, and manual memory management is thread safe and meets all real-time requirements.
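A hypothetical illustration (not from the poster's code) of the kind of 32-bit truncation the analyzer flags:

#include <stdint.h>
#include <stdlib.h>

void truncation_bug(void)
{
    void *p = malloc(16);
    int small = (int)(intptr_t)p;      // high 32 bits are lost on arm64
    void *q = (void *)(intptr_t)small; // q may no longer equal p
    free(q);                           // freeing a bogus pointer corrupts the heap
}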
In case it helps anyone, I had exactly the same problem as described above.
The cause in my particular case was that pthread_create(pthread_t* thread, ...) on arm64 was storing the value into *thread at some point AFTER the thread was started. On OS X, ARM32 and the simulator, it consistently filled in this value before start_routine was called.
If I performed a pthread_detach in the running thread before that value was written (even by using pthread_self() to get the current pthread_t), I would end up with the heap corruption message.
I added a small loop to my thread dispatcher that waits until that value is filled in, after which the heap errors went away. Don't forget 'volatile'!
Restructuring the code might be a better way to fix this; it depends on your situation. (I noticed this in a unit test I'd written; I never tripped over the issue in any 'real' code.)
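A minimal sketch of that workaround, with hypothetical names (handle_ready is the flag the dispatcher spins on):

#include <pthread.h>
#include <sched.h>

typedef struct {
    pthread_t handle;          // written by the creating thread
    volatile int handle_ready; // set once pthread_create has returned
} thread_ctx;

static void *start_routine(void *arg)
{
    thread_ctx *ctx = (thread_ctx *)arg;
    // On arm64, pthread_create may still be writing the handle when this
    // routine starts, so wait until the creator signals it is valid.
    while (!ctx->handle_ready)
        sched_yield();
    pthread_detach(ctx->handle);
    /* ... real work ... */
    return NULL;
}

int spawn_detached(thread_ctx *ctx)
{
    ctx->handle_ready = 0;
    int rc = pthread_create(&ctx->handle, NULL, start_routine, ctx);
    if (rc == 0)
        ctx->handle_ready = 1; // handle is now safe to use in the thread
    return rc;
}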
Same problem, but in my case I malloc'd 10 bytes of memory and then tried to use 20 bytes, which corrupted the heap. The diff below was my fix:
@@ -64,7 +64,7 @@ char* bytesToHex(char* buf, int size) {
* be converted to two hex characters plus a space, also add an extra
* char for the terminating null byte.
* [size] is the size of the buf array */
- int len = (size * 2) + 1;
+ int len = (size * 3) + 1;
char* output = (char*)malloc(len * sizeof(char));
memset(output, 0, len);
/* pointer to the first item (0 index) of the output array */
char *ptr = &output[0];
int i;
for (i = 0; i < size; i++) {
/* "sprintf" converts each byte in the "buf" array into a 2 hex string
* characters appended with a null byte, for example 10 => "0A\0".
*
* This string would then be added to the output array starting from the
* position pointed at by "ptr". For example if "ptr" is pointing at the 0
* index then "0A\0" would be written as output[0] = '0', output[1] = 'A' and
* output[2] = '\0'.
*
* "sprintf" returns the number of chars in its output excluding the null
* byte, in our case this would be 2. So we move the "ptr" location two
* steps ahead so that the next hex string would be written at the new
* location, overriding the null byte from the previous hex string.
*
* We don't need to add a terminating null byte because it's been already
* added for us from the last hex string. */
ptr += sprintf(ptr, "%02X ", buf[i] & 0xFF);
}
return output;
}
I am looking for a Win32 program to copy parts of a large 1920x1080 4:2:0 .yuv file (approx. 43 GB) into smaller .yuv files. All of the programs I have used, i.e. YUV players, can only copy/save one frame at a time. What is the easiest/most appropriate method to cut the YUV raw data into smaller YUV videos (images)? Something similar to the ffmpeg command:
ffmpeg -ss [start_seconds] -t [duration_seconds] -i [input_file] [outputfile]
Here is a minimum working example of the code, written in C++, in case anyone is searching for a simple solution:
// include libraries
#include <cstdio>  // printf
#include <fstream>

using namespace std;

#define P420 1.5

const int IMAGE_SIZE = 1920*1080; // full HD image size in pixels
const double IMAGE_CONVERSION = P420;

int n_frames = 300;    // set number of frames to copy
int skip_frames = 500; // set number of frames to skip from the beginning of the input file
char in_string[] = "F:\\BigBucksBunny\\yuv\\BigBuckBunny_1920_1080_24fps.yuv";
char out_string[] = "out.yuv";
//////////////////////
// main
//////////////////////
int main(int argc, char** argv)
{
    double image_size = IMAGE_SIZE * IMAGE_CONVERSION;
    long file_size = 0;

    // IO files
    ofstream out_file(out_string, ios::out | ios::binary);
    ifstream in_file(in_string, ios::in | ios::binary);

    // error checking, like check n_frames+skip_frames overflow
    //
    // TODO

    // image buffer
    char* image = new char[(int)image_size];

    // skip frames
    in_file.seekg(skip_frames*image_size);

    // read/write image buffer one by one
    for(int i = 0; i < n_frames; i++)
    {
        in_file.read(image, image_size);
        out_file.write(image, image_size);
    }

    // close the files
    out_file.close();
    in_file.close();

    printf("Copy finished ...");

    return 0;
}
If you have Python available, you can use this approach to store each frame as a separate file:
import sys

src_yuv = open(FILENAME, 'rb')
for i in xrange(NUMBER_OF_FRAMES):
    data = src_yuv.read(NUMBER_OF_BYTES)
    fname = "frame" + "%d" % i + ".yuv"
    dst_yuv = open(fname, 'wb')
    dst_yuv.write(data)
    sys.stdout.write('.')
    sys.stdout.flush()
    dst_yuv.close()
src_yuv.close()
Just change the capitalized variables into valid values, e.g.
NUMBER_OF_BYTES for one 1080p frame should be 1920*1080*3/2 = 3110400.
Or, if you install Cygwin, you can use the dd tool, e.g. to get the first frame of a 1080p clip:
dd bs=3110400 count=1 if=sample.yuv of=frame1.yuv
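Since skip is measured in blocks of bs, dd can also mimic the ffmpeg -ss/-t pattern from the question; e.g. to copy frames 500-799 of the same clip (filenames are illustrative):
dd bs=3110400 skip=500 count=300 if=sample.yuv of=clip.yuv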
Method 1:
If you are using GStreamer and you just want the first X yuv frames from a large yuv file, you can use the method below:
gst-launch-1.0 filesrc num-buffers=X location="Your_large.yuv" ! videoparse width=x height=y format="xy" ! filesink location="FirstXframes.yuv"
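For instance, for the 1080p clip discussed above (this assumes the videoparse element from gst-plugins-bad; format takes a GstVideoFormat name such as i420):
gst-launch-1.0 filesrc num-buffers=300 location="BigBuckBunny_1920_1080_24fps.yuv" ! videoparse width=1920 height=1080 format=i420 ! filesink location="First300frames.yuv"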
Method 2:
Calculate the size of one frame, then use the split utility to divide the large file into smaller files.
Use
split -b size_in_bytes Large_file prefix
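For example, to split the 1080p clip into one file per frame (3110400 bytes each, as computed above; the prefix frame_ is arbitrary):
split -b 3110400 BigBuckBunny_1920_1080_24fps.yuv frame_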