Provide overloaded versions of plus to work with the string type, and test them: const char* s1 = "aaa"; const char* s2 = "bbb"; char* s3 = plus(s1, s2); and return the result.

I don't know why this is wrong
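The question as posted is incomplete (the actual code only appears in the missing screenshot), so here is only a minimal sketch of what such an overload set could look like; the name plus and the const char* test call come from the question, everything else is an assumption.

#include <cstring>
#include <string>

// Overload for std::string: concatenation is built in.
std::string plus(const std::string& a, const std::string& b)
{
    return a + b;
}

// Overload for C strings: the result needs its own storage;
// the caller is responsible for delete[]-ing it later.
char* plus(const char* a, const char* b)
{
    char* result = new char[std::strlen(a) + std::strlen(b) + 1];
    std::strcpy(result, a);
    std::strcat(result, b);
    return result;
}

With that in place, char* s3 = plus(s1, s2); selects the const char* overload and returns a newly allocated "aaabbb".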

Related

clang-format: macro in a function

Some C code.
Before format:
#define MS_DEF(type) extern type
MS_DEF(int) my_func(int a, int b, int c, const char *x, const char *y, const char *z)
{
    // do something
    return 0;
}
After format (clang-format --style=LLVM test.c):
#define MS_DEF(type) extern type
MS_DEF(int)
my_func(int a, int b, int c, const char *x, const char *y, const char *z) {
  // do something
  return 0;
}
I want to keep MS_DEF(int) and my_func on the same line:
MS_DEF(int) my_func(...)
How can I do that? Thanks.
The TypenameMacros style option is intended exactly for such cases.
Try adding the following to your .clang-format configuration file:
TypenameMacros: ['MS_DEF']
With just that option, the formatted result is as follows:
#define MS_DEF(type) extern type
MS_DEF(int) my_func(int a, int b, int c, const char *x, const char *y,
                    const char *z) {
  // do something
  return 0;
}
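For reference, a complete .clang-format file for this case can be as small as the following (the BasedOnStyle line is an assumption that mirrors the --style=LLVM invocation above):

BasedOnStyle: LLVM
TypenameMacros: ['MS_DEF']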

C++ template code generation Error: use of 'some_variable' before deduction of 'auto'

I ran into some issues with this specific code. The problem most likely has something to do with the pointers to members of Harry being stored in a tuple, combined with a vector of Harry, since all simpler variants do work.
The error that I get with g++:
main.cpp: In instantiation of 'abra(const std::vector<A>&, const std::tuple<_Elements ...>&) [with A = Harry; B = {int Harry::*, int* Harry::*}]::<lambda(const auto:1&)> [with auto:1 = int Harry::*]':
main.cpp:10:13: required from 'void tuple_foreach_constexpr(const std::tuple<T ...>&, F) [with long unsigned int i = 0; long unsigned int size = 2; F = abra(const std::vector<A>&, const std::tuple<_Elements ...>&) [with A = Harry; B = {int Harry::*, int* Harry::*}]::<lambda(const auto:1&)>; T = {int Harry::*, int* Harry::*}]'
main.cpp:17:82: required from 'void tuple_foreach_constexpr(const std::tuple<_Elements ...>&, F) [with F = abra(const std::vector<A>&, const std::tuple<_Elements ...>&) [with A = Harry; B = {int Harry::*, int* Harry::*}]::<lambda(const auto:1&)>; T = {int Harry::*, int* Harry::*}]'
main.cpp:29:32: required from 'void abra(const std::vector<A>&, const std::tuple<_Elements ...>&) [with A = Harry; B = {int Harry::*, int* Harry::*}]'
main.cpp:56:27: required from here
main.cpp:31:82: error: use of 'a' before deduction of 'auto'
if constexpr(std::is_pointer<typename std::remove_reference<decltype(a.*x)>::type>::value)
^
main.cpp:33:30: error: invalid type argument of unary '*' (have 'int')
std::cout << *(a.*x) << std::endl;
^~~~~~~
main.cpp:6:6: error: 'void tuple_foreach_constexpr(const std::tuple<T ...>&, F) [with long unsigned int i = 1; long unsigned int size = 2; F = abra(const std::vector<A>&, const std::tuple<_Elements ...>&) [with A = Harry; B = {int Harry::*, int* Harry::*}]::<lambda(const auto:1&)>; T = {int Harry::*, int* Harry::*}]', declared using local type 'abra(const std::vector<A>&, const std::tuple<_Elements ...>&) [with A = Harry; B = {int Harry::*, int* Harry::*}]::<lambda(const auto:1&)>', is used but never defined [-fpermissive]
void tuple_foreach_constexpr(const std::tuple<T...>& tuple, F func)
^~~~~~~~~~~~~~~~~~~~~~~
code:
#include <iostream>
#include <tuple>
#include <vector>
template<size_t i, size_t size, typename F, typename... T>
void tuple_foreach_constexpr(const std::tuple<T...>& tuple, F func)
{
    if constexpr(i < size)
    {
        func(std::get<i>(tuple));
        tuple_foreach_constexpr<i+1, size, F, T...>(tuple, func);
    }
}

template<typename F, typename... T>
void tuple_foreach_constexpr(const std::tuple<T...>& tuple, F func)
{
    tuple_foreach_constexpr<0, std::tuple_size<std::tuple<T...>>::value, F, T...>(tuple, func);
}

template<typename A, typename... B>
void abra
(
    const std::vector<A>& a_vector,
    const std::tuple<B...>& b_tuple
)
{
    for(const auto& a : a_vector)
    {
        tuple_foreach_constexpr(b_tuple, [&a](const auto &x)
        {
            if constexpr(std::is_pointer<typename std::remove_reference<decltype(a.*x)>::type>::value)
            {
                std::cout << *(a.*x) << std::endl;
            }
            else
            {
                std::cout << a.*x << std::endl;
            } // this does NOT work
            //std::cout << a.*x << std::endl; // this does work
        });
    }
}

struct Harry
{
    int a;
    int* b;
};

int main()
{
    int m = 20;
    std::vector<Harry> h_vector = {Harry{10, &m}};
    std::tuple t_tuple = std::make_tuple(&Harry::a, &Harry::b);
    abra(h_vector, t_tuple);
}
It would be very nice if someone had some tips on how to solve this.
(I know this all looks like it makes no sense why anyone would need to do this. However, my priority is not to write good, usable code but to learn stuff and also I really want to get this architecture I had in mind to work.)
It would be very nice if someone had some tips on how to solve this.
First of all: I can reproduce your error with g++, but my clang++ (7.0.1) compiles your code without problems.
Who's right? g++ or clang++?
I'm not a language lawyer and I'm not sure, but I suspect it's a g++ bug.
What is g++ saying?
It's saying that in this loop
for(const auto& a : a_vector)
{
    tuple_foreach_constexpr(b_tuple, [&a](const auto &x)
    {
        if constexpr(std::is_pointer<typename std::remove_reference<decltype(a.*x)>::type>::value)
        {
            std::cout << *(a.*x) << std::endl;
        }
        else
        {
            std::cout << a.*x << std::endl;
        } // this does NOT work
        //std::cout << a.*x << std::endl; // this does work
    });
}
the variable a, whose type must be deduced by the compiler because it is an auto variable (const auto& a : a_vector), and which is captured inside the lambda, is used (decltype(a.*x)) before its type has been deduced.
Anyway, the solution to the problem is simple: to make g++ happy, make the type explicit.
You know that a is an element of a_vector, which is declared as a std::vector<A> const &, so you know that a is an A const &.
So, if you write the loop
for ( A const & a : a_vector )
{
    // ....
}
there is no longer any need to deduce the type of a, and your code compiles with g++ as well.
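For completeness, a minimal sketch of abra with that fix applied; it is simply the answer's explicit element type dropped into the function from the question (only the #include <type_traits> line is an addition, for std::is_pointer and std::remove_reference, which the original pulls in transitively):

#include <type_traits>

template<typename A, typename... B>
void abra(const std::vector<A>& a_vector, const std::tuple<B...>& b_tuple)
{
    // Spell the element type out instead of 'const auto&', so that
    // decltype(a.*x) inside the lambda no longer uses 'a' before its
    // type has been deduced (the construct g++ rejects above).
    for (const A& a : a_vector)
    {
        tuple_foreach_constexpr(b_tuple, [&a](const auto& x)
        {
            if constexpr (std::is_pointer<typename std::remove_reference<decltype(a.*x)>::type>::value)
            {
                std::cout << *(a.*x) << std::endl;
            }
            else
            {
                std::cout << a.*x << std::endl;
            }
        });
    }
}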

histogram kernel memory issue

I am trying to implement an algorithm to process images with more than 256 bins.
The main issue with processing a histogram in such a case is that it is impossible to allocate more than 32 KB as a local array on the GPU.
All the algorithms I found for 8-bit-per-pixel images use a fixed-size local array.
The histogram is first built in that array, then a barrier is raised, and finally the result is added to the output vector.
I am working with IR images whose dynamic range spans more than 32K bins.
So I cannot allocate a fixed-size array on the GPU.
My algorithm uses an atomic_add in order to build the output histogram directly.
I am interfacing with OpenCV, so in order to handle possible saturation my bins use floating point, in single or double precision depending on what the GPU supports.
OpenCV does not support unsigned int, long, or unsigned long as matrix element types.
I get an error... I think this error is a kind of segmentation fault.
After several days I still have no idea what can be wrong.
Here is my code:
histogram.cl :
#pragma OPENCL EXTENSION cl_khr_fp64: enable
#pragma OPENCL EXTENSION cl_khr_int64_base_atomics: enable
static void Atomic_Add_f64(__global double *val, double delta)
{
    union {
        double f;
        ulong i;
    } old;
    union {
        double f;
        ulong i;
    } new;
    do {
        old.f = *val;
        new.f = old.f + delta;
    }
    while (atom_cmpxchg ( (volatile __global ulong *)val, old.i, new.i) != old.i);
}

static void Atomic_Add_f32(__global float *val, double delta)
{
    union
    {
        float f;
        uint i;
    } old;
    union
    {
        float f;
        uint i;
    } new;
    do
    {
        old.f = *val;
        new.f = old.f + delta;
    }
    while (atom_cmpxchg ( (volatile __global ulong *)val, old.i, new.i) != old.i);
}

__kernel void khist(
    __global const uchar* _src,
    const int src_steps,
    const int src_offset,
    const int rows,
    const int cols,
    __global uchar* _dst,
    const int dst_steps,
    const int dst_offset)
{
    const int gid = get_global_id(0);
    // printf("This message has been printed from the OpenCL kernel %d \n",gid);
    if(gid < rows)
    {
        __global const _Sty* src = (__global const _Sty*)_src;
        __global _Dty* dst = (__global _Dty*) _dst;

        const int src_step1 = src_steps/sizeof(_Sty);
        const int dst_step1 = dst_steps/sizeof(_Dty);

        src += mad24(gid,src_step1,src_offset);
        dst += mad24(gid,dst_step1,dst_offset);

        _Dty one = (_Dty)1;

        for(int c=0;c<cols;c++)
        {
            const _Rty idx = (_Rty)(*(src+c+src_offset));
            ATOMIC_FUN(dst+idx+dst_offset,one);
        }
    }
}
The function Atomic_Add_f64 comes directly from here and there.
main.cpp
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>

#include <algorithm>
#include <chrono>
#include <ctime>
#include <fstream>
#include <iostream>
#include <iterator>
#include <sstream>

int main()
{
    cv::Mat_<unsigned short> a(480,640);

    cv::RNG rng(std::time(nullptr));
    std::for_each(a.begin(),a.end(),[&](unsigned short& v){ v = rng.uniform(0,100);});

    bool ret = false;

    cv::String file_content;
    {
        std::ifstream file_stream("../test/histogram.cl");
        std::ostringstream file_buf;
        file_buf<<file_stream.rdbuf();
        file_content = file_buf.str();
    }

    int output_flag = cv::ocl::Device::getDefault().doubleFPConfig() == 0 ? CV_32F : CV_64F;
    cv::String atomic_fun = output_flag == CV_32F ? "Atomic_Add_f32" : "Atomic_Add_f64";

    cv::ocl::ProgramSource source(file_content);
    // std::cout<<source.source()<<std::endl;
    cv::ocl::Kernel k;
    cv::UMat src;
    cv::UMat dst = cv::UMat::zeros(1,65536,output_flag);

    a.copyTo(src);

    atomic_fun = cv::format("-D _Sty=%s -D _Rty=%s -D _Dty=%s -D ATOMIC_FUN=%s",
                            cv::ocl::typeToStr(src.depth()),
                            cv::ocl::typeToStr(src.depth()), // this to manage cases like a matrix of unsigned short stored as a matrix of float.
                            cv::ocl::typeToStr(output_flag),
                            atomic_fun.c_str());

    ret = k.create("khist",source,atomic_fun);
    std::cout<<"check create : "<<ret<<std::endl;

    k.args(cv::ocl::KernelArg::ReadOnly(src),cv::ocl::KernelArg::WriteOnlyNoSize(dst));

    std::size_t sz = a.rows;
    ret = k.run(1,&sz,nullptr,false);
    std::cout<<"check "<<ret<<std::endl;

    cv::Mat b;
    dst.copyTo(b);

    std::copy_n(b.ptr<double>(0),101,std::ostream_iterator<double>(std::cout," "));
    std::cout<<std::endl;

    return EXIT_SUCCESS;
}
Hello, I managed to fix it.
I don't really know where the issue comes from.
But if I treat the output as a pointer rather than a matrix, it works.
The changes I made are these:
histogram.cl :
__kernel void khist(
    __global const uchar* _src,
    const int src_steps,
    const int src_offset,
    const int rows,
    const int cols,
    __global _Dty* _dst)
{
    const int gid = get_global_id(0);

    if(gid < rows)
    {
        __global const _Sty* src = (__global const _Sty*)_src;
        __global _Dty* dst = _dst;

        const int src_step1 = src_steps/sizeof(_Sty);

        src += mad24(gid,src_step1,src_offset);

        ulong one = 1;

        for(int c=0;c<cols;c++)
        {
            const _Rty idx = (_Rty)(*(src+c+src_offset));
            ATOMIC_FUN(dst+idx,one);
        }
    }
}
main.cpp
k.args(cv::ocl::KernelArg::ReadOnly(src),cv::ocl::KernelArg::PtrWriteOnly(dst));
The rest of the code is the same in the two files.
For me it works fine.
If someone knows why it works when the output is declared as a pointer rather than a vector (a matrix of one row), I am interested.
Nevertheless, my issue is fixed :).
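Not part of the original post, but a quick way to sanity-check either version: the bins of the resulting histogram should sum to the number of pixels that were binned. A minimal sketch, assuming the dst UMat and the source matrix a from the main.cpp above:

// Hypothetical check: the sum over all bins must equal rows * cols
// of the source matrix, whatever precision the bins use.
cv::Mat b;
dst.copyTo(b);
double total = cv::sum(b)[0];
std::cout << "sum of bins: " << total
          << " (expected " << a.rows * a.cols << ")" << std::endl;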

Is there any way to convert an Eigen::Matrix back to itk::image?

I used the Eigen library to convert several itk::Image images into matrices and do some dense linear algebra computations on them. Finally, I have the output as a matrix, but I need it in itk::Image form. Is there any way to do this?
const unsigned int numberOfPixels = importSize[0] * importSize[1];
float* array1 = inverseU.data();
float* localBuffer = new float[numberOfPixels];
std::memcpy(localBuffer, array1, numberOfPixels);
const bool importImageFilterWillOwnTheBuffer = true;
importFilter->SetImportPointer(localBuffer,numberOfPixels,importImageFilterWillOwnTheBuffer);
importFilter->Update();
inverseU is the Eigen matrix (float) and importSize is the size of this matrix. When I take importFilter->GetOutput() and write the result to a file, the image I get is like this, which is not correct.
This is the matrix inverseU:
https://drive.google.com/file/d/0B3L9EtRhN11QME16SGtfSDJzSWs/view?usp=sharing . It is supposed to be a retinal fundus image; I obtained the matrix after doing deblurring.
Take a look at the ImportImageFilter of itk. In particular, it may be used to build an itk::Image starting from a C-style array (example).
Someone recently asked how to convert a CImg image to an ITK image. My answer might be a starting point...
A way to get the array out of a matrix A from Eigen may be found here:
double* array=A.data();
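One detail worth keeping in mind (not from the original answer): Eigen matrices are column-major by default, while ITK stores its pixel buffer row by row (x varying fastest), so a raw memcpy of .data() can produce a scrambled-looking image. A small sketch of one way around it, assuming a float matrix mat as in the EDIT below:

// Make a row-major copy so that a plain memcpy hands ITK the pixels
// in the order it expects (x varying fastest, then y).
typedef Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> RowMajorMatrixXf;
RowMajorMatrixXf rowMajor = mat;
const float* array = rowMajor.data();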
EDIT: here is a piece of code to turn a matrix of float into a png image saved with ITK. First, the matrix is converted to an itk::Image of float. Then, this image is rescaled and cast to an image of unsigned char, using the RescaleIntensityImageFilter as explained here. Finally, the image is saved in png format.
#include <iostream>
#include <cstring>

#include <itkImage.h>
using namespace itk;
using namespace std;

#include <Eigen/Dense>
using Eigen::MatrixXf;

#include <itkImportImageFilter.h>
#include <itkImageFileWriter.h>
#include "itkRescaleIntensityImageFilter.h"

void eigen_To_ITK (MatrixXf mat)
{
    const unsigned int Dimension = 2;
    typedef itk::Image<unsigned char, Dimension> UCharImageType;
    typedef itk::Image< float, Dimension > FloatImageType;
    typedef itk::ImportImageFilter< float, Dimension > ImportFilterType;
    ImportFilterType::Pointer importFilter = ImportFilterType::New();
    typedef itk::RescaleIntensityImageFilter< FloatImageType, UCharImageType > RescaleFilterType;
    RescaleFilterType::Pointer rescaleFilter = RescaleFilterType::New();
    typedef itk::ImageFileWriter< UCharImageType > WriterType;
    WriterType::Pointer writer = WriterType::New();

    FloatImageType::SizeType imsize;
    imsize[0] = mat.rows();
    imsize[1] = mat.cols();

    ImportFilterType::IndexType start;
    start.Fill( 0 );
    ImportFilterType::RegionType region;
    region.SetIndex( start );
    region.SetSize( imsize );
    importFilter->SetRegion( region );

    const itk::SpacePrecisionType origin[ Dimension ] = { 0.0, 0.0 };
    importFilter->SetOrigin( origin );
    const itk::SpacePrecisionType spacing[ Dimension ] = { 1.0, 1.0 };
    importFilter->SetSpacing( spacing );

    const unsigned int numberOfPixels = imsize[0] * imsize[1];
    const bool importImageFilterWillOwnTheBuffer = true;
    float * localBuffer = new float[ numberOfPixels ];
    float * it = localBuffer;
    memcpy(it, mat.data(), numberOfPixels*sizeof(float));
    importFilter->SetImportPointer( localBuffer, numberOfPixels, importImageFilterWillOwnTheBuffer );

    rescaleFilter->SetInput(importFilter->GetOutput());
    rescaleFilter->SetOutputMinimum(0);
    rescaleFilter->SetOutputMaximum(255);

    writer->SetFileName( "output.png" );
    writer->SetInput(rescaleFilter->GetOutput() );
    writer->Update();
}

int main()
{
    const int rows = 42;
    const int cols = 90;

    MatrixXf mat1(rows, cols);
    mat1.topLeftCorner(rows/2, cols/2) = MatrixXf::Zero(rows/2, cols/2);
    mat1.topRightCorner(rows/2, cols/2) = MatrixXf::Identity(rows/2, cols/2);
    mat1.bottomLeftCorner(rows/2, cols/2) = -MatrixXf::Identity(rows/2, cols/2);
    mat1.bottomRightCorner(rows/2, cols/2) = MatrixXf::Zero(rows/2, cols/2);
    mat1 += 0.1*MatrixXf::Random(rows,cols);

    eigen_To_ITK (mat1);

    cout<<"running fine"<<endl;
    return 0;
}
The program is built using CMake. Here is the CMakeLists.txt:
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
project(ItkTest)
find_package(ITK REQUIRED)
include(${ITK_USE_FILE})
# to include eigen. This path may need to be changed
include_directories(/usr/local/include/eigen3)
add_executable(MyTest main.cpp)
target_link_libraries(MyTest ${ITK_LIBRARIES})

Add string from arguments to shared memory

I need to add a string from the arguments (e.g. ./a.out abcxyz) to shared memory. I wrote the code below, but it either doesn't add the string or doesn't show it. What is the reason?
int main(int argc, char **argv){
    int shmid;
    char *buf;

    shmid = shmget(KEY, 5, IPC_CREAT | 0600);
    buf = (char *)shmat(shmid, NULL, 0);

    *buf = argv[1];

    printf("\n%c\n", buf);
    return 0;
}
You're copying a string, so you can't just use assignment - you need strcpy:
#include <string.h>
...
strcpy(buf, argv[1]);
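Putting it together, a minimal sketch of the corrected program; the KEY value is an assumption (the original code defines it elsewhere), and note the %s format specifier instead of %c and a segment sized for the whole string plus its terminating '\0':

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define KEY 0x1234   /* assumed value; the original gets KEY from elsewhere */

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    /* size the segment for the whole argument, not a fixed 5 bytes */
    int shmid = shmget(KEY, strlen(argv[1]) + 1, IPC_CREAT | 0600);
    char *buf = (char *)shmat(shmid, NULL, 0);

    strcpy(buf, argv[1]);      /* copy the characters, not the pointer */
    printf("\n%s\n", buf);     /* %s prints the string; %c would print a single char */

    shmdt(buf);
    return 0;
}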

Resources