Show vector elements in a listbox

I would like to display the elements of a vector in a listbox, but I keep getting this error:
error C2664: 'System::Windows::Forms::ListBox::ObjectCollection::Add' : cannot convert parameter 1 from 'std::basic_string<_Elem,_Traits,_Alloc>' to 'System::Object ^'
I am using Windows Forms in C++/CLI.
This is the code:
for (size_t z = 0; z < container.size(); z++) {
    listBox_name->Items->Add(container[z]);
}

Based on the error message, your vector is a vector of std::string. Use marshal_as to convert each std::string to a managed String^ that the managed list box can accept:
#include <msclr/marshal_cppstd.h>
using namespace msclr::interop;

for (size_t z = 0; z < container.size(); z++) {
    listBox_name->Items->Add(marshal_as<String^>(container[z]));
}
If you find you're doing this a lot, consider changing your vector of std::string to be a fully-managed type, such as List<String^>^.
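If it helps, here is a minimal sketch of that fully-managed approach (listBox_name is carried over from the question; the sample strings are placeholders):
using namespace System;
using namespace System::Collections::Generic;

// Keep the data in a managed container from the start,
// so no native-to-managed conversion is needed.
List<String^>^ names = gcnew List<String^>();
names->Add("first");
names->Add("second");
for each (String^ s in names) {
    listBox_name->Items->Add(s); // String^ is already a System::Object^
}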

Related

Autodiff for Jacobian derivative with respect to individual joint angles

I am trying to compute $\frac{\partial J}{\partial q_i}$ in Drake C++ for a manipulator, and as per my search the best approach seems to be the autodiff functions. I was not able to fully understand the autodiff approach from the resources that I found, so I apologize if my approach is not clear enough. I have used my understanding from some already-asked questions on the forum regarding autodiff, as well as https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_multibody_plant.html, as reference.
As I want to calculate $\frac{\partial J}{\partial q_i}$, the return type will be a tensor, i.e. 3 x 7 x 7 (or 6 x 7 x 7 for the spatial Jacobian). I can think of using std::vector<Eigen::MatrixXd> to allocate the output, or alternatively doing one $q_i$ at a time and computing the respective Jacobian for the autodiff. In either case, I was struggling to pass it when initializing the Jacobian function.
I did the following to initialize the autodiff:
std::unique_ptr<multibody::MultibodyPlant<AutoDiffXd>> mplant_autodiff =
    systems::System<double>::ToAutoDiffXd(mplant);
std::unique_ptr<systems::Context<AutoDiffXd>> mContext_autodiff =
    mplant_autodiff->CreateDefaultContext();
mContext_autodiff->SetTimeStateAndParametersFrom(*mContext);
const multibody::Frame<AutoDiffXd>* mFrame_EE_autodiff =
    &mplant_autodiff->GetBodyByName(mEE_link).body_frame();
const multibody::Frame<AutoDiffXd>* mWorld_Frame_autodiff =
    &(mplant_autodiff->world_frame());
// Initialize q as an autodiff vector.
drake::AutoDiffVecXd q_autodiff = drake::math::InitializeAutoDiff(mq_robot);
MatrixX<AutoDiffXd> mJacobian_autodiff;  // Linear Jacobian matrix.
mplant_autodiff->SetPositions(mContext_autodiff.get(), q_autodiff);
mplant_autodiff->CalcJacobianTranslationalVelocity(*mContext_autodiff,
                                                   multibody::JacobianWrtVariable::kQDot,
                                                   *mFrame_EE_autodiff,
                                                   Eigen::Vector3d::Zero(),
                                                   *mWorld_Frame_autodiff,
                                                   *mWorld_Frame_autodiff,
                                                   &mJacobian_autodiff);
However, as far as I understand, InitializeAutoDiff initializes to the identity matrix, whereas I want $\frac{\partial J}{\partial q_i}$, so is there a better way to do it? In addition, I get error messages when I try to call the Jacobian function. Is there a way to address this problem, both for $\frac{\partial J}{\partial q_i}$ for each q_i (changing q_i in a for loop) and for directly getting the result in a tensor? My apologies if I am doing something totally tangent to the correct approach. I thank you in anticipation.
However, as far as I understand, InitializeAutoDiff initializes to the identity matrix, whereas I want $\frac{\partial J}{\partial q_i}$, so is there a better way to do it?
That is correct. When you call InitializeAutoDiff and compute mJacobian_autodiff, you get a matrix of AutoDiffXd. Each AutoDiffXd has a value() function that stores the double value, and a derivatives() function storing the gradient as an Eigen::VectorXd. We have
mJacobian_autodiff(i, j).value() = J(i, j)
mJacobian_autodiff(i, j).derivatives()(k) = ∂J(i, j)/∂q(k)
So if you want to create a std::vector<Eigen::MatrixXd> such that the k'th entry of this vector stores the matrix ∂J/∂q(k), here is some code:
std::vector<Eigen::MatrixXd> dJdq(q_autodiff.rows());
for (int i = 0; i < q_autodiff.rows(); ++i) {
    dJdq[i].resize(mJacobian_autodiff.rows(), mJacobian_autodiff.cols());
}
for (int i = 0; i < q_autodiff.rows(); ++i) {
    // dJidq stores the gradient of J.col(i), namely dJidq(j, k) = ∂J(j, i)/∂q(k).
    auto dJidq = drake::math::ExtractGradient(mJacobian_autodiff.col(i));
    for (int j = 0; j < static_cast<int>(dJdq.size()); ++j) {
        dJdq[j].col(i) = dJidq.col(j);
    }
}
Compute ∂J/∂q(i) for a single i
If you do not want to compute ∂J/∂q(i) for all i, but only for one specific i, you can change the initialization of q_autodiff from InitializeAutoDiff to this:
AutoDiffVecXd q_autodiff(q.rows());
for (int k = 0; k < q_autodiff.rows(); ++k) {
    q_autodiff(k).value() = q(k);
    q_autodiff(k).derivatives() = Vector1d::Zero();
    if (k == i) {
        q_autodiff(k).derivatives()(0) = 1;
    }
}
namely q_autodiff stores the gradient ∂q/∂q(i), which is 0 for all k != i and 1 when k == i. Then you can compute mJacobian_autodiff using your current code. Now mJacobian_autodiff(m, n).derivatives() stores the gradient ∂J(m, n)/∂q(i) for that specific i. You can extract this gradient as
Eigen::MatrixXd dJdqi(mJacobian_autodiff.rows(), mJacobian_autodiff.cols());
for (int m = 0; m < dJdqi.rows(); ++m) {
    for (int n = 0; n < dJdqi.cols(); ++n) {
        dJdqi(m, n) = mJacobian_autodiff(m, n).derivatives()(0);
    }
}

Why does VoxelGrid give me only 1 point in the cloud after filtering?

I am receiving a ROS message of type sensor_msgs::PointCloud2ConstPtr in my callback function, then I convert it to a pointer of type pcl::PointCloud<pcl::PointXYZ>::Ptr using pcl::fromROSMsg.
After that I estimate normals using this code from the PCL tutorials:
#include <cmath>

void OrganizedCloudToNormals(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &_inputCloud,
    pcl::PointCloud<pcl::PointNormal>::Ptr &cloud_normals)
{
    pcl::console::print_highlight("Estimating scene normals...\n");
    pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::PointNormal> nest;
    nest.setRadiusSearch(0.001);
    nest.setInputCloud(_inputCloud);
    nest.compute(*cloud_normals);
    // Write 0 wherever the value is NaN.
    for (size_t i = 0; i < cloud_normals->points.size(); i++)
    {
        pcl::PointNormal &p = cloud_normals->points.at(i);
        p.normal_x  = std::isnan(p.normal_x)  ? 0 : p.normal_x;
        p.normal_y  = std::isnan(p.normal_y)  ? 0 : p.normal_y;
        p.normal_z  = std::isnan(p.normal_z)  ? 0 : p.normal_z;
        p.curvature = std::isnan(p.curvature) ? 0 : p.curvature;
    }
}
After that I have a point cloud of type pcl::PointNormal and try to downsample it:
const float leaf = 0.001f; //0.005f;
pcl::VoxelGrid<pcl::PointNormal> gridScene;
gridScene.setLeafSize(leaf, leaf, leaf);
gridScene.setInputCloud(_scene);
gridScene.filter(*_scene);
where _scene is of the type
pcl::PointCloud<pcl::PointNormal>::Ptr _scene (new pcl::PointCloud<pcl::PointNormal>);
Then after filtering I end up with my point cloud _scene having only 1 point inside. I have tried changing the leaf size, but that doesn't change the outcome.
Does anyone know what I am doing wrong?
Thanks in advance.
I have found where the problem was. The type pcl::PointNormal has fields x, y, z, normal_x, normal_y and normal_z, but in my function OrganizedCloudToNormals I filled only normal_x, normal_y and normal_z; the fields x, y and z had value 0 for each point. When I filled x, y and z from the input point cloud, the filtering (downsampling) problem disappeared and I got a filtered cloud with more than 1 point inside. The missing x, y and z values most likely caused the problem in the voxel grid's filter method: with all coordinates at 0, every point falls into the same voxel and gets merged into a single output point.
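For completeness, a minimal sketch of that fix inside OrganizedCloudToNormals, copying x, y and z from the input cloud after the normals are computed (this assumes both clouds have the same number of points, which the normal estimation above ensures):
// nest.compute() only fills normal_x/normal_y/normal_z/curvature,
// so copy the coordinates over from the input cloud.
for (size_t i = 0; i < cloud_normals->points.size(); ++i)
{
    cloud_normals->points[i].x = _inputCloud->points[i].x;
    cloud_normals->points[i].y = _inputCloud->points[i].y;
    cloud_normals->points[i].z = _inputCloud->points[i].z;
}
pcl::copyPointCloud can achieve a similar field-wise copy between clouds of different point types.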

Got different EM::predict() results after EM::read() saved model in OpenCV

I'm new to OpenCV and C++, and I'm trying to build a classifier using a Gaussian Mixture Model within OpenCV. I figured out how it works and got it working... maybe. I have something like this now:
If I classify the training samples right after the model was trained and saved, I get the result I want. But when I reclassify my training data after loading the model with read(), one of the clusters is missing, meaning I get a different cluster result from the same GMM model. I don't get it: the cluster I want is gone, and I can't reproduce the classification until I retrain the model with the same data. I checked the code at runtime, and the result value in the Vec2d returned by predict() was never assigned to 1 (I set 3 clusters).
Maybe there's a bug, or I did something wrong?
P.S. I'm using OpenCV 2.4.8 in VS2013.
My program is like this:
train part
void GaussianMixtureModel::buildGMM(InputArray _src){
    // Use the source to train the GMM and save the model.
    Mat samples, input = _src.getMat();
    createSamples(input, samples);
    bool status = em_model.train(samples);
    saveModel();
}
save/load the model
FileStorage fs(filename, FileStorage::READ);
if (fs.isOpened()) // if we have a file with parameters, read them
{
    const FileNode& fn = fs["StatModel.EM"];
    em_model.read(fn);
    fs.release();
}
FileStorage fs_save(filename, FileStorage::WRITE);
if (fs_save.isOpened()) // write the trained parameters
{
    em_model.write(fs_save);
    fs_save.release();
}
predict part
vector<Mat> GaussianMixtureModel::classify(Mat input){
    // samples is a matrix of channels x N elements; each row is a set of features.
    Mat samples;
    createSamples(input, samples);
    for (int k = 0; k < clusterN; k++){
        masks[k] = Mat::zeros(input.size(), CV_8UC1);
    }
    int idx = 0;
    for (int i = 0; i < input.rows; i++){
        for (int j = 0; j < input.cols; j++){
            // Process the predicted probability.
            Mat probs(1, clusterN, CV_64FC1);
            Vec2d response = em_model.predict(samples.row(idx++), probs);
            int result = cvRound(response[1]);
            for (int k = 0; k < clusterN; k++){
                if (result == k){
                    // change to the k-th class's picture
                    masks[k].at<uchar>(i, j) = 255;
                }
                ...
                // something else
            }
        }
    }
}
I suppose my answer will be too late, but as I have encountered the same problem, the solution I found may be useful for others.
By analysing the source code, I noticed that in the case of EM::COV_MAT_DIAGONAL, the eigenvalues of the covariance matrices (covsEigenValues in the source code) are obtained via SVD after loading the saved data.
However, SVD computes the singular (eigen, in our case) values and stores them in ASCENDING order, so the loaded diagonal entries no longer match the order in which they were saved, which changes the predictions.
To prevent this, I simply extract the diagonal elements of each loaded covariance matrix directly into covsEigenValues to keep the good order.
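For illustration, a hypothetical sketch of that workaround, patched into the loading path (covs stands for the loaded per-cluster covariance matrices and covsEigenValues for the internal diagonal storage; both names follow the source code mentioned above):
// Instead of running SVD on each loaded diagonal covariance matrix
// (which sorts the values), take the diagonal directly so the original
// element order is preserved.
for (size_t k = 0; k < covs.size(); ++k) {
    covsEigenValues[k] = covs[k].diag().clone();
}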

Checking the value of a CvMat element during iteration

I'm using CV_MAT_ELEM to access the values of a CvMat without any problem, but when I use it in a for loop it gives me an error (assertion failed).
for (int i=0;i<=direction->cols;i++){
    for(int j=0;j<=direction->rows;j++){
        if (CV_MAT_ELEM(*direction,float,i,j)<22.0) {
            CV_MAT_ELEM(*direction,float,i,j)=0;
        }
    }
}
You are trying to access some pixels that are not within the image's range.
Try to change
for (int i=0;i<=direction->cols;i++){
              ^^
for(int j=0;j<=direction->rows;j++){
             ^^
to
for (int i=0;i<direction->cols;i++){
for(int j=0;j<direction->rows;j++){
P.S.: As #berak commented, you are still using the old OpenCV API, i.e. IplImage and CV_MAT_ELEM. Try to use the new API, i.e. Mat and at(), correspondingly.
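For reference, the same loop with the C++ API would look roughly like this (a sketch, assuming direction becomes a single-channel float cv::Mat; note that at<>() takes the row index first):
cv::Mat direction; // assumed CV_32FC1
for (int i = 0; i < direction.rows; i++){
    for (int j = 0; j < direction.cols; j++){
        if (direction.at<float>(i, j) < 22.0f) {
            direction.at<float>(i, j) = 0.0f;
        }
    }
}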

Compile error in CV_MAT_ELEM

As a result of a call to estimateRigidTransform() I get a cv::Mat object named "trans". To retrieve its contained matrix I try to access its elements this way:
for (i=0; i<2; i++)
    for (j=0; j<3; j++)
    {
        mtx[j][i] = CV_MAT_ELEM(trans, double, i, j);
    }
Unfortunately with VS2010 I get a compiler error
error C2228: left of '.ptr' must have class/struct/union
for the line with CV_MAT_ELEM. When I unwrap this macro I find something like
(mat).data.ptr + (size_t)(mat).step*(row) + (pix_size)*(col))
When I remove the ".ptr" behind (mat).data it compiles. But I can't imagine that's the solution (or that it's a bug and I'm the only one who noticed it). So what could really be wrong here?
Thanks!
You don't access the mat elements like this. For a traversal see my other answer here with sample code:
color matrix traversal
or see the OpenCV refman for a grayscale Mat:
Mat M; // should be grayscale
int cols = M.cols, rows = M.rows;
for (int i = 0; i < rows; i++)
{
    const double* Mi = M.ptr<double>(i);
    for (int j = 0; j < cols; j++)
    {
        Mi[j]; // is the matrix element
    }
}
Just an addendum from my side: meanwhile I found that CV_MAT_ELEM expects a CvMat structure (the OpenCV C interface), not a cv::Mat (the C++ interface). That's why I get this funny error. Conversion from cv::Mat to CvMat can be done simply by casting to CvMat. Funny confusion between the C and C++ interfaces in OpenCV...
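A minimal sketch of that conversion (OpenCV 2.x API; src and dst are assumed point sets, mtx follows the question):
cv::Mat trans = cv::estimateRigidTransform(src, dst, false); // 2x3, CV_64F
CvMat ctrans = trans; // implicit conversion to the C struct; shares data
for (int i = 0; i < 2; i++)
    for (int j = 0; j < 3; j++)
        mtx[j][i] = CV_MAT_ELEM(ctrans, double, i, j); // now compiles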
