HDF5 row-major or column-major

Is it possible to know whether a matrix stored in HDF5 format is in row-major or column-major order? For example, when I save matrices from Octave, which stores them internally in column-major order, I need to transpose them when I read them into my C code, where matrices are stored in row-major order, and vice versa.

HDF5 stores data in row major order:
HDF5 uses C storage conventions, assuming that the last listed dimension is the fastest-changing dimension and the first-listed dimension is the slowest changing.
from the HDF5 User's Guide.
However, if you're using Octave's built-in HDF5 interface, it will automatically transpose the arrays for you. In general, how the data is actually written in the HDF5 file should be completely opaque to the end-user, and the interface should deal with differences in array ordering, etc.

As @Yossarian pointed out, HDF5 always stores data as row-major (C convention). Octave, like Fortran, internally stores data as column-major.
When writing a matrix from Octave, the HDF5 layer does the transpose for you, so the data is always written row-major no matter what language you use. This is what makes the file portable.
There is a very good example in the HDF5 User's Guide, section 7.3.2.5, as mentioned by @Yossarian. Here's the example (almost) reproduced using Octave:
octave:1> A = [ 1:3; 4:6 ]
A =
1 2 3
4 5 6
octave:2> save("-hdf5", "test.h5", "A")
octave:3> quit
~$ h5dump test.h5
HDF5 "test.h5" {
GROUP "/" {
COMMENT "# Created by Octave 3.6.4, Fri Jun 13 08:36:16 2014 MDT <user@localhost>"
GROUP "A" {
ATTRIBUTE "OCTAVE_NEW_FORMAT" {
DATATYPE H5T_STD_U8LE
DATASPACE SCALAR
DATA {
(0): 1
}
}
DATASET "type" {
DATATYPE H5T_STRING {
STRSIZE 7;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
DATA {
(0): "matrix"
}
}
DATASET "value" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 3, 2 ) / ( 3, 2 ) }
DATA {
(0,0): 1, 4,
(1,0): 2, 5,
(2,0): 3, 6
}
}
}
}
}
Notice how the HDF5 layer has transposed the matrix to make sure it is stored in row-major format.
Then an example of reading it in C:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <hdf5.h>
#define FILE "test.h5"
#define DS "A/value"
int
main(int argc, char **argv)
{
int i = 0;
int j = 0;
int n = 0;
int x = 0;
int rank = 0;
hid_t file_id;
hid_t space_id;
hid_t dset_id;
herr_t stat;
hsize_t *dims = NULL;
int *data = NULL;
file_id = H5Fopen(FILE, H5F_ACC_RDONLY, H5P_DEFAULT);
dset_id = H5Dopen(file_id, DS, H5P_DEFAULT);
space_id = H5Dget_space(dset_id);
n = H5Sget_simple_extent_npoints(space_id);
rank = H5Sget_simple_extent_ndims(space_id);
dims = malloc(rank*sizeof(hsize_t));
stat = H5Sget_simple_extent_dims(space_id, dims, NULL);
printf("rank: %d\t dimensions: ", rank);
for (i = 0; i < rank; ++i) {
if (i == 0) {
printf("(");
}
printf("%llu", dims[i]);
if (i == (rank -1)) {
printf(")\n");
} else {
printf(" x ");
}
}
data = malloc(n*sizeof(int));
memset(data, 0, n*sizeof(int));
stat = H5Dread(dset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT,
data);
printf("%s:\n", DS);
for (i = 0; i < dims[0]; ++i) {
printf(" [ ");
for (j = 0; j < dims[1]; ++j) {
x = i * dims[1] + j;
printf("%d ", data[x]);
}
printf("]\n");
}
stat = H5Sclose(space_id);
stat = H5Dclose(dset_id);
stat = H5Fclose(file_id);
return(EXIT_SUCCESS);
}
When compiled and run, it gives:
~$ h5cc -o rmat rmat.c
~$ ./rmat
rank: 2 dimensions: (3 x 2)
A/value:
[ 1 4 ]
[ 2 5 ]
[ 3 6 ]
This is great as it means the data is always stored in a well-defined order in the file. What it does mean, though, is that you may have to change how you do your calculations: since the dataset holds the transpose of your Octave matrix, you can either transpose it after reading, or leave it as it is and swap the order of your multiplications (pre-multiply where you would have post-multiplied, using the identity (A*B)' = B'*A'). A small sketch of the second option follows.
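Here is a minimal sketch of that idea (plain C++, not part of the original answer; the array literal is just the 3x2 buffer from the dump above, and the vector x is made up for illustration):
#include <cstdio>
// The buffer exactly as it sits in test.h5: 3 rows x 2 columns, row-major,
// i.e. the transpose of the original 2x3 Octave matrix A = [1 2 3; 4 5 6].
static const double At[3][2] = { {1, 4}, {2, 5}, {3, 6} };
int main()
{
    const double x[3] = { 1, 1, 1 };   // some vector to multiply with
    double y[2] = { 0, 0 };            // y = A*x, computed without an explicit transpose
    // y[i] = sum_k A(i,k)*x[k] = sum_k At(k,i)*x[k]
    for (int i = 0; i < 2; ++i)
        for (int k = 0; k < 3; ++k)
            y[i] += At[k][i] * x[k];
    printf("y = [ %g %g ]\n", y[0], y[1]);   // prints: y = [ 6 15 ]
    return 0;
}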
Does this help?

Related

Eigen FFT library

I am trying to use Eigen's unsupported FFT module with the FFTW backend. Specifically, I want to do a 2D FFT. Here's my code:
void fft2(Eigen::MatrixXf * matIn,Eigen::MatrixXcf * matOut)
{
const int nRows = matIn->rows();
const int nCols = matIn->cols();
Eigen::FFT< float > fft;
for (int k = 0; k < nRows; ++k) {
Eigen::VectorXcf tmpOut(nRows);
fft.fwd(tmpOut, matIn->row(k));
matOut->row(k) = tmpOut;
}
for (int k = 0; k < nCols; ++k) {
Eigen::VectorXcf tmpOut(nCols);
fft.fwd(tmpOut, matOut->col(k));
matOut->col(k) = tmpOut;
}
}
I have 2 problems :
First, I get a segmentation fault when using this code on some matrices; the error doesn't happen for all of them. I guess it's related to an alignment error. I use the functions in the following way:
Eigen::MatrixXcf matFFT(mat.rows(),mat.cols());
fft2(&matFloat,&matFFT);
where mat can be any matrix. Oddly, the code only crashes when I compute the FFT over the 2nd dimension, never on the first one. This doesn't happen with the kissFFT backend.
Second, when the function does work, I don't get the same result as Matlab (which uses FFTW). E.g.:
Input Matrix :
[2, 1, 2]
[3, 2, 1]
[1, 2, 3]
Eigen gives :
[ (0,5), (0.5,0.86603), (0,0.5)]
[ (-4.3301,-2.5), (-1,-1.7321), (0.31699,-1.549)]
[ (-1.5,-0.86603), (2,3.4641), (2,3.4641)]
Matlab gives :
17 + 0i 0.5 + 0.86603i 0.5 - 0.86603i
-1 + 0i -1 - 1.7321i 2 - 3.4641i
-1 + 0i 2 + 3.4641i -1 + 1.7321i
Only the central part is the same.
Any help would be welcome.
I had failed to activate EIGEN_FFTW_DEFAULT in my first solution; activating it reveals an error in Eigen's FFTW-support implementation. The following works:
#define EIGEN_FFTW_DEFAULT
#include <iostream>
#include <unsupported/Eigen/FFT>
int main(int argc, char *argv[])
{
Eigen::MatrixXf A(3,3);
A << 2,1,2, 3,2,1, 1,2,3;
const int nRows = A.rows();
const int nCols = A.cols();
std::cout << A << "\n\n";
Eigen::MatrixXcf B(3,3);
Eigen::FFT< float > fft;
for (int k = 0; k < nRows; ++k) {
Eigen::VectorXcf tmpOut(nRows);
fft.fwd(tmpOut, A.row(k));
B.row(k) = tmpOut;
}
std::cout << B << "\n\n";
Eigen::FFT< float > fft2; // Workaround: Using the same FFT object for a real and a complex FFT seems not to work with FFTW
for (int k = 0; k < nCols; ++k) {
Eigen::VectorXcf tmpOut(nCols);
fft2.fwd(tmpOut, B.col(k));
B.col(k) = tmpOut;
}
std::cout << B << '\n';
}
I get this output:
2 1 2
3 2 1
1 2 3
(17,0) (0.5,0.866025) (0.5,-0.866025)
(-1,0) (-1,-1.73205) (2,-3.4641)
(-1,0) (2,3.4641) (-1,1.73205)
Which is the same as your Matlab result.
N.B.: FFTW seems to support 2D real->complex FFT natively (without using individual FFTs). This is likely more efficient.
fftwf_plan fftwf_plan_dft_r2c_2d(int n0, int n1,
float *in, fftwf_complex *out, unsigned flags);
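For reference, here is a hedged sketch of what calling that native 2D r2c transform might look like (not taken from the answer above; the input reuses the 3x3 matrix from the question, and only the non-redundant half of the spectrum is stored). Link with -lfftw3f:
#include <cstdio>
#include <fftw3.h>
int main()
{
    const int n0 = 3, n1 = 3;                      // rows, columns (row-major input)
    float in[9] = { 2, 1, 2,
                    3, 2, 1,
                    1, 2, 3 };
    // r2c output only holds the non-redundant half: n0 x (n1/2 + 1) complex values
    fftwf_complex *out = (fftwf_complex *) fftwf_malloc(sizeof(fftwf_complex) * n0 * (n1 / 2 + 1));
    fftwf_plan plan = fftwf_plan_dft_r2c_2d(n0, n1, in, out, FFTW_ESTIMATE);
    fftwf_execute(plan);
    for (int i = 0; i < n0 * (n1 / 2 + 1); ++i)
        printf("(%g, %g)\n", out[i][0], out[i][1]); // out[0] is the DC term: (17, 0)
    fftwf_destroy_plan(plan);
    fftwf_free(out);
    return 0;
}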

A mod B, A and B are very large numbers

I want to know if A and B are relatively prime using the Euclidean algorithm. A and B are large numbers that cannot be stored in any built-in data type (in C), so they are stored in linked lists. The algorithm uses the % operator. My question is: is there a way to compute A mod B without directly using the % operator? I found out that % distributes over addition, i.e. with A = a1 + a2:
A % B = ((a1 % B) + (a2 % B)) % B.
But the problem still persists, because I would still be doing % B operations.
You need to calculate a % b without the % operator. OK? By definition, the modulo operation finds the remainder after division of one number by another.
In python:
# mod = a % b
def mod(a, b):
    return a - b * int(a / b)
>>> x = [mod(i,j) for j in range(1,100) for i in range(1,100)]
>>> y = [i % j for j in range(1,100) for i in range(1,100)]
>>> x == y
True
In C++:
#include <iostream>
#include <math.h>
using namespace std;
unsigned int mod(unsigned int a, unsigned int b) {
    return (unsigned int)(a - b * floor(a / b));
}
int main() {
    // Note: this only checks values 1..sizeof(unsigned int) in each loop,
    // so it is a spot check rather than an exhaustive proof.
    for (unsigned int i = 1; i <= sizeof(unsigned int); ++i)
        for (unsigned int j = 1; j <= sizeof(unsigned int); ++j)
            if (mod(i, j) != i % j)
                cout << "Something wrong!!";
    cout << "Proved for all unsigned int!";
    return 0;
}
Proved for all unsigned int!
Now, just extend the result to your big numbers...!!!
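As a hedged sketch of one way to extend it (this simplifies the original problem: it assumes B fits in a 64-bit unsigned integer while A is an arbitrarily long decimal string; mod_bigstring is a made-up helper, not from the answer above), you can apply the same distributive property digit by digit, keeping a running remainder:
#include <cstdint>
#include <iostream>
#include <string>
// Computes A mod b where A is a decimal string of any length and b is a
// built-in unsigned type. Uses (10*r + d) % b == ((10*r) % b + d % b) % b.
// Assumes b is small enough that 10*b does not overflow 64 bits.
uint64_t mod_bigstring(const std::string &A, uint64_t b)
{
    uint64_t r = 0;
    for (char c : A)
        r = (r * 10 + (c - '0')) % b;   // running remainder, one digit at a time
    return r;
}
int main()
{
    // Sanity check against the built-in operator on a value that still fits in 64 bits:
    std::cout << mod_bigstring("123456789", 97) << " == " << (123456789ull % 97) << "\n";
    // A value that does not fit in any built-in type (2^100); its digit sum is 115, so mod 9 prints 7:
    std::cout << mod_bigstring("1267650600228229401496703205376", 9) << "\n";
    return 0;
}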

How to do classification manually parsing the support vectors from LibSVM model?

As far as I understand, I can parse the support vectors from the model produced by training LibSVM on a set of data.
What would be the formula to produce the classifier?
Do I need the data in the header of the file, like the following (kernel, etc., before the listed support vectors)?
svm_type c_svc
kernel_type rbf
gamma 0.125
nr_class 4
total_sv 1038
rho -0.859244 -0.876628 -0.958343 0.543365 -1.10722 -1.79433
label 2 1 3 0
nr_sv 364 276 242 156
SV
My case is
I want to do classification from Node.js, but there aren't any LibSVM bindings for it yet.
Since my models are not going to change, I would like to do the classification in Node.JS, holding the model in-memory.
If this proves to be slow, I'd rather write the same classification from scratch in C++ and create a wrapper module, if it's only a matter of a simple computation (as I suspect it is).
Thanks.
You should be able to translate the C function to JavaScript.
Here is the relevant code:
double svm_predict_values(const svm_model *model, const svm_node *x, double* dec_values)
{
int i;
int nr_class = model->nr_class;
int l = model->l;
double *kvalue = Malloc(double,l);
for(i=0;i<l;i++)
kvalue[i] = Kernel::k_function(x,model->SV[i],model->param);
int *start = Malloc(int,nr_class);
start[0] = 0;
for(i=1;i<nr_class;i++)
start[i] = start[i-1]+model->nSV[i-1];
int *vote = Malloc(int,nr_class);
for(i=0;i<nr_class;i++)
vote[i] = 0;
int p=0;
for(i=0;i<nr_class;i++)
for(int j=i+1;j<nr_class;j++)
{
double sum = 0;
int si = start[i];
int sj = start[j];
int ci = model->nSV[i];
int cj = model->nSV[j];
int k;
double *coef1 = model->sv_coef[j-1];
double *coef2 = model->sv_coef[i];
for(k=0;k<ci;k++)
sum += coef1[si+k] * kvalue[si+k];
for(k=0;k<cj;k++)
sum += coef2[sj+k] * kvalue[sj+k];
sum -= model->rho[p];
dec_values[p] = sum;
if(dec_values[p] > 0)
++vote[i];
else
++vote[j];
p++;
}
int vote_max_idx = 0;
for(i=1;i<nr_class;i++)
if(vote[i] > vote[vote_max_idx])
vote_max_idx = i;
free(kvalue);
free(start);
free(vote);
return model->label[vote_max_idx];
}
Notice that you have to recreate this equation (the image originally shown here is missing; it was LibSVM's decision function): f(x) = sign( sum_i coef_i * K(SV_i, x) - rho ), where the coef_i are the per-support-vector coefficients (alpha_i * y_i) listed in the model and rho is the bias term.
The only difference is that, since your model has 4 classes, you also need to implement the one-vs-one voting scheme, which is essentially what the code above does. A minimal sketch of the two-class decision value follows.
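As a hedged illustration (plain C++, not taken from LibSVM itself: dense vectors instead of svm_node, only the RBF kernel, and a single class pair), the decision value looks like this:
#include <cmath>
#include <cstdio>
#include <vector>
// RBF kernel, matching "kernel_type rbf" and the model's gamma parameter.
double rbf_kernel(const std::vector<double> &a, const std::vector<double> &b, double gamma)
{
    double d2 = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        d2 += d * d;
    }
    return std::exp(-gamma * d2);
}
// Two-class decision value: sum_i coef[i] * K(sv[i], x) - rho.
// 'coef' holds the per-SV coefficients (alpha_i * y_i) parsed from the model,
// 'rho' is the corresponding entry of the model's rho line.
// The sample gets the first label if the value is > 0, the second otherwise.
double decision_value(const std::vector<std::vector<double>> &sv,
                      const std::vector<double> &coef,
                      const std::vector<double> &x,
                      double gamma, double rho)
{
    double sum = 0.0;
    for (size_t i = 0; i < sv.size(); ++i)
        sum += coef[i] * rbf_kernel(sv[i], x, gamma);
    return sum - rho;
}
int main()
{
    // Toy numbers for illustration only.
    std::vector<std::vector<double>> sv = { {0.0, 1.0}, {1.0, 0.0} };
    std::vector<double> coef = { 1.0, -1.0 };
    std::vector<double> x = { 0.1, 0.9 };
    std::printf("%f\n", decision_value(sv, coef, x, 0.125, 0.0));
    return 0;
}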
Hope it helps.

Working with matrices, accelerate framework, iOS

I have two matrices: A and B.
How can I store them?
How can I calculate the inverse matrix of matrix A using the Accelerate framework?
How can find the product of A*B?
How can I transpose matrix A using the Accelerate framework?
Thank you for answering my questions!
Helper file
#import <Foundation/Foundation.h>
#include <Accelerate/Accelerate.h>
@interface Working_with_matrices : NSObject
-(int)invert_matrix:(int) N andWithMatrix:(double*) matrix;
@end
Implementation file
#import "Working_with_matrices.h"
#include <Accelerate/Accelerate.h>
@implementation Working_with_matrices
-(int) invert_matrix:(int) N andWithMatrix:(double*)matrix
{
int error=0;
int *pivot = malloc(N*N*sizeof(int));
double *workspace = malloc(N*sizeof(double));
dgetrf_(&N, &N, matrix, &N, pivot, &error);
if (error != 0) {
NSLog(#"Error 1");
return error;
}
dgetri_(&N, matrix, &N, pivot, workspace, &N, &error);
if (error != 0) {
NSLog(#"Error 2");
return error;
}
free(pivot);
free(workspace);
return error;
}
@end
Call my code from main function
#import <Foundation/Foundation.h>
#import "Working_with_matrices.h"
int main(int argc, const char * argv[])
{
int N = 3;
double A[9];
Working_with_matrices* wm=[[Working_with_matrices alloc]init];
A[0] = 1; A[1] = 1; A[2] = 7;
A[3] = 1; A[4] = 2; A[5] = 1;
A[6] = 1; A[7] = 1; A[8] = 3;
[wm invert_matrix:N andWithMatrix:A];
// [ -1.25 -1.0 3.25 ]
// A^-1 = [ 0.5 1.0 -1.5 ]
// [ 0.25 0.0 -0.25 ]
for (int i=0; i<9; i++)
{
NSLog(#"%f", A[i]);
}
return 0;
}
I'm still kinda new to using the Accelerate framework, but I'll answer what I can.
The Accelerate framework expects matrices to be passed in as a 1D array. So if you have a 4x4 matrix, the first row would be placed in indexes 0-3 of your array, the second row in indexes 4-7, and so on.
I've never done it but this answer looks like a good starting point. https://stackoverflow.com/a/11321499/385017
The method you'll want to use is vDSP_mmul for single precision or vDSP_mmulD for double precision. You might want to look at the documentation to get a better understanding of how to use it, but here's an example to get you started.
float *matrixA; //set by you
float *matrixB; //set by you
float *matrixAB; //the matrix that the answer will be stored in
vDSP_mmul( matrixA, 1, matrixB, 1, matrixAB, 1, 4, 4, 4 );
// the 1s should be left alone in most situations
// The 4s in order are:
// the number of rows in matrix A
// the number of columns in matrix B
// the number of columns in matrix A and the number of rows in matrix B.
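For what it's worth, here is a hedged, self-contained sketch of the double-precision variant (not from the answer above; the matrices are made up). On macOS, compile with -framework Accelerate:
#include <Accelerate/Accelerate.h>
#include <cstdio>
int main()
{
    // Row-major 1D storage, as the framework expects.
    double A[6]  = { 1, 2, 3,
                     4, 5, 6 };       // 2 x 3
    double B[6]  = { 7,  8,
                     9, 10,
                    11, 12 };         // 3 x 2
    double AB[4] = { 0 };             // 2 x 2 result
    // M = 2 (rows of A), N = 2 (cols of B), P = 3 (cols of A = rows of B)
    vDSP_mmulD(A, 1, B, 1, AB, 1, 2, 2, 3);
    printf("[ %g %g ]\n[ %g %g ]\n", AB[0], AB[1], AB[2], AB[3]);
    // Expected: [ 58 64 ] and [ 139 154 ]
    return 0;
}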

Extracting DCT coefficients from encoded images and video

Is there a way to easily extract the DCT coefficients (and quantization parameters) from encoded images and video? Any decoder software must be using them to decode block-DCT encoded images and video. So I'm pretty sure the decoder knows what they are. Is there a way to expose them to whomever is using the decoder?
I'm implementing some video quality assessment algorithms that work directly in the DCT domain. Currently, the majority of my code uses OpenCV, so it would be great if anyone knows of a solution using that framework. I don't mind using other libraries (perhaps libjpeg, but that seems to be for still images only), but my primary concern is to do as little format-specific work as possible (I don't want to reinvent the wheel and write my own decoders). I want to be able to open any video/image (H.264, MPEG, JPEG, etc) that OpenCV can open, and if it's block DCT-encoded, to get the DCT coefficients.
In the worst case, I know that I can write up my own block DCT code, run the decompressed frames/images through it and then I'd be back in the DCT domain. That's hardly an elegant solution, and I hope I can do better.
Presently, I use the fairly common OpenCV boilerplate to open images:
IplImage *image = cvLoadImage(filename);
// Run quality assessment metric
The code I'm using for video is equally trivial:
CvCapture *capture = cvCaptureFromAVI(filename);
while (cvGrabFrame(capture))
{
IplImage *frame = cvRetrieveFrame(capture);
// Run quality assessment metric on frame
}
cvReleaseCapture(&capture);
In both cases, I get a 3-channel IplImage in BGR format. Is there any way I can get the DCT coefficients as well?
Well, I did a bit of reading and my original question seems to be an instance of wishful thinking.
Basically, it's not possible to get the DCT coefficients from H.264 video frames, for the simple reason that H.264 doesn't use a DCT. It uses a different transform (an integer transform). Next, the coefficients for that transform don't necessarily change on a frame-by-frame basis; H.264 is smarter, because it splits frames up into slices. It should be possible to get those coefficients through a special decoder, but I doubt OpenCV exposes that to the user.
For JPEG, things are a bit more positive. As I suspected, libjpeg exposes the DCT coefficients for you. I wrote a small app to show that it works (source at the end). It makes a new image using the DC term from each block. Because the DC term is equal to the block average (after proper scaling: the dequantized DC coefficient divided by DCTSIZE, plus 128 to undo the level shift, gives the block mean), the DC images are downsampled versions of the input JPEG image.
EDIT: fixed scaling in source
Original image (512 x 512):
DC images (64x64): luma Cr Cb RGB
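As a hedged sanity check of that scaling (plain C++, independent of libjpeg and of the source below): the JPEG forward DCT gives F(0,0) equal to one eighth of the sum of the level-shifted samples, so dividing the dequantized DC term by DCTSIZE (8) and adding 128 back recovers the block average:
#include <cstdio>
int main()
{
    // An arbitrary 8x8 block of pixel values in [0, 255].
    double block[8][8];
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
            block[y][x] = (x * 13 + y * 31) % 256;
    // JPEG-style DC term of the level-shifted block:
    // F(0,0) = 1/4 * C(0) * C(0) * sum(f(x,y) - 128), with C(0) = 1/sqrt(2),
    // which simplifies to sum(f(x,y) - 128) / 8.
    double sum = 0.0, dc = 0.0;
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x) {
            sum += block[y][x];
            dc  += block[y][x] - 128.0;
        }
    dc *= 0.25 * 0.5;                                  // 1/4 * C(0)^2 = 1/8
    printf("block mean   = %f\n", sum / 64.0);
    printf("dc/8 + 128   = %f\n", dc / 8.0 + 128.0);   // matches the block mean
    return 0;
}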
Source (C++):
#include <stdio.h>
#include <assert.h>
#include <cv.h>
#include <highgui.h>
extern "C"
{
#include "jpeglib.h"
#include <setjmp.h>
}
#define DEBUG 0
#define OUTPUT_IMAGES 1
/*
* Extract the DC terms from the specified component.
*/
IplImage *
extract_dc(j_decompress_ptr cinfo, jvirt_barray_ptr *coeffs, int ci)
{
jpeg_component_info *ci_ptr = &cinfo->comp_info[ci];
CvSize size = cvSize(ci_ptr->width_in_blocks, ci_ptr->height_in_blocks);
IplImage *dc = cvCreateImage(size, IPL_DEPTH_8U, 1);
assert(dc != NULL);
JQUANT_TBL *tbl = ci_ptr->quant_table;
UINT16 dc_quant = tbl->quantval[0];
#if DEBUG
printf("DCT method: %x\n", cinfo->dct_method);
printf
(
"component: %d (%d x %d blocks) sampling: (%d x %d)\n",
ci,
ci_ptr->width_in_blocks,
ci_ptr->height_in_blocks,
ci_ptr->h_samp_factor,
ci_ptr->v_samp_factor
);
printf("quantization table: %d\n", ci);
for (int i = 0; i < DCTSIZE2; ++i)
{
printf("% 4d ", (int)(tbl->quantval[i]));
if ((i + 1) % 8 == 0)
printf("\n");
}
printf("raw DC coefficients:\n");
#endif
JBLOCKARRAY buf =
(cinfo->mem->access_virt_barray)
(
(j_common_ptr)cinfo,
coeffs[ci],
0,
ci_ptr->v_samp_factor,
FALSE
);
for (int sf = 0; (JDIMENSION)sf < ci_ptr->height_in_blocks; ++sf)
{
for (JDIMENSION b = 0; b < ci_ptr->width_in_blocks; ++b)
{
int intensity = 0;
intensity = buf[sf][b][0]*dc_quant/DCTSIZE + 128;
intensity = MAX(0, intensity);
intensity = MIN(255, intensity);
cvSet2D(dc, sf, (int)b, cvScalar(intensity));
#if DEBUG
printf("% 2d ", buf[sf][b][0]);
#endif
}
#if DEBUG
printf("\n");
#endif
}
return dc;
}
IplImage *upscale_chroma(IplImage *quarter, CvSize full_size)
{
IplImage *full = cvCreateImage(full_size, IPL_DEPTH_8U, 1);
cvResize(quarter, full, CV_INTER_NN);
return full;
}
GLOBAL(int)
read_JPEG_file (char * filename, IplImage **dc)
{
/* This struct contains the JPEG decompression parameters and pointers to
* working space (which is allocated as needed by the JPEG library).
*/
struct jpeg_decompress_struct cinfo;
struct jpeg_error_mgr jerr;
/* More stuff */
FILE * infile; /* source file */
/* In this example we want to open the input file before doing anything else,
* so that the setjmp() error recovery below can assume the file is open.
* VERY IMPORTANT: use "b" option to fopen() if you are on a machine that
* requires it in order to read binary files.
*/
if ((infile = fopen(filename, "rb")) == NULL) {
fprintf(stderr, "can't open %s\n", filename);
return 0;
}
/* Step 1: allocate and initialize JPEG decompression object */
cinfo.err = jpeg_std_error(&jerr);
/* Now we can initialize the JPEG decompression object. */
jpeg_create_decompress(&cinfo);
/* Step 2: specify data source (eg, a file) */
jpeg_stdio_src(&cinfo, infile);
/* Step 3: read file parameters with jpeg_read_header() */
(void) jpeg_read_header(&cinfo, TRUE);
/* We can ignore the return value from jpeg_read_header since
* (a) suspension is not possible with the stdio data source, and
* (b) we passed TRUE to reject a tables-only JPEG file as an error.
* See libjpeg.txt for more info.
*/
/* Step 4: set parameters for decompression */
/* In this example, we don't need to change any of the defaults set by
* jpeg_read_header(), so we do nothing here.
*/
jvirt_barray_ptr *coeffs = jpeg_read_coefficients(&cinfo);
IplImage *y = extract_dc(&cinfo, coeffs, 0);
IplImage *cb_q = extract_dc(&cinfo, coeffs, 1);
IplImage *cr_q = extract_dc(&cinfo, coeffs, 2);
IplImage *cb = upscale_chroma(cb_q, cvGetSize(y));
IplImage *cr = upscale_chroma(cr_q, cvGetSize(y));
cvReleaseImage(&cb_q);
cvReleaseImage(&cr_q);
#if OUTPUT_IMAGES
cvSaveImage("y.png", y);
cvSaveImage("cb.png", cb);
cvSaveImage("cr.png", cr);
#endif
*dc = cvCreateImage(cvGetSize(y), IPL_DEPTH_8U, 3);
assert(dc != NULL);
cvMerge(y, cr, cb, NULL, *dc);
cvReleaseImage(&y);
cvReleaseImage(&cb);
cvReleaseImage(&cr);
/* Step 7: Finish decompression */
(void) jpeg_finish_decompress(&cinfo);
/* We can ignore the return value since suspension is not possible
* with the stdio data source.
*/
/* Step 8: Release JPEG decompression object */
/* This is an important step since it will release a good deal of memory. */
jpeg_destroy_decompress(&cinfo);
fclose(infile);
return 1;
}
int
main(int argc, char **argv)
{
int ret = 0;
if (argc != 2)
{
fprintf(stderr, "usage: %s filename.jpg\n", argv[0]);
return 1;
}
IplImage *dc = NULL;
ret = read_JPEG_file(argv[1], &dc);
assert(dc != NULL);
IplImage *rgb = cvCreateImage(cvGetSize(dc), IPL_DEPTH_8U, 3);
cvCvtColor(dc, rgb, CV_YCrCb2RGB);
#if OUTPUT_IMAGES
cvSaveImage("rgb.png", rgb);
#else
cvNamedWindow("DC", CV_WINDOW_AUTOSIZE);
cvShowImage("DC", rgb);
cvWaitKey(0);
#endif
cvReleaseImage(&dc);
cvReleaseImage(&rgb);
return 0;
}
You can use libjpeg to extract the DCT data of your JPEG file, but for an H.264 video file I can't find any open-source code that gives you the DCT data (actually the integer-transform data). You can, however, use open-source H.264 software such as JM, JSVM or x264; in those source trees you have to find the specific functions that use the transform and change them to output the data in the form you want.
For Image:
use the following code; after read_jpeg_file( infilename, v, quant_tbl ) returns, v and quant_tbl will hold the DCT data and the quantization table of your JPEG image, respectively.
I used QVector to store my output data; change it to your preferred C++ container.
#include <iostream>
#include <stdio.h>
#include <jpeglib.h>
#include <stdlib.h>
#include <setjmp.h>
#include <fstream>
#include <QVector>
int read_jpeg_file( char *filename, QVector<QVector<int> > &dct_coeff, QVector<unsigned short> &quant_tbl)
{
struct jpeg_decompress_struct cinfo;
struct jpeg_error_mgr jerr;
FILE * infile;
if ((infile = fopen(filename, "rb")) == NULL) {
fprintf(stderr, "can't open %s\n", filename);
return 0;
}
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_decompress(&cinfo);
jpeg_stdio_src(&cinfo, infile);
(void) jpeg_read_header(&cinfo, TRUE);
jvirt_barray_ptr *coeffs_array = jpeg_read_coefficients(&cinfo);
for (int ci = 0; ci < 1; ci++) // only the first (luma) component; loop up to cinfo.num_components for all of them
{
JBLOCKARRAY buffer_one;
JCOEFPTR blockptr_one;
jpeg_component_info* compptr_one;
compptr_one = cinfo.comp_info + ci;
for (int by = 0; by < compptr_one->height_in_blocks; by++)
{
buffer_one = (cinfo.mem->access_virt_barray)((j_common_ptr)&cinfo, coeffs_array[ci], by, (JDIMENSION)1, FALSE);
for (int bx = 0; bx < compptr_one->width_in_blocks; bx++)
{
blockptr_one = buffer_one[0][bx];
QVector<int> tmp;
for (int bi = 0; bi < 64; bi++)
{
tmp.append(blockptr_one[bi]);
}
dct_coeff.push_back(tmp);
}
}
}
// quantization table
j_decompress_ptr dec_cinfo = (j_decompress_ptr) &cinfo;
jpeg_component_info *ci_ptr = &dec_cinfo->comp_info[0];
JQUANT_TBL *tbl = ci_ptr->quant_table;
for(int ci =0 ; ci < 64; ci++){
quant_tbl.append(tbl->quantval[ci]);
}
(void) jpeg_finish_decompress(&cinfo);
jpeg_destroy_decompress(&cinfo);
fclose(infile);
return 1;
}
int main()
{
QVector<QVector<int> > v;
QVector<unsigned short> quant_tbl;
char *infilename = "your_image.jpg";
std::ofstream out;
out.open("out_dct.txt");
if( read_jpeg_file( infilename, v, quant_tbl ) > 0 ){
for(int j = 0; j < v.size(); j++ ){
for (int i = 0; i < v[0].size(); ++i){
out << v[j][i] << "\t";
}
out << "---------------" << std::endl;
}
out << "\n\n\n" << std::string(10,'-') << std::endl;
out << "\nQauntization Table:" << std::endl;
for(int i = 0; i < quant_tbl.size(); i++ ){
out << quant_tbl[i] << "\t";
}
}
else{
std::cout << "Can not read, Returned With Error";
return -1;
}
out.close();
return 0;
}
