In GeoDMS, I am trying to buffer a polygon but I get an error

In GeoDMS I want to buffer a polygon set by 5 meters, but I get an error:
polygon_i4D Error: Cannot find operator for these arguments:
arg1 of type DataItem<FPolygon>
arg2 of type DataItem<Float64>
Can someone help me with this issue? My configuration is:
unit<uint32> shapes
:  StorageName     = "%SourceDataDir%/CBS/bevolkingskern_2011.shp"
,  StorageType     = "gdal.vect"
,  StorageReadOnly = "True"
,  FreeData        = "False"
,  SyncMode        = "None"
{
   attribute<geometries/rdc> geometry (poly);
   attribute<geometries/rdc> buffer   (poly) := polygon_i4D(geometry, 5d);
}

polygon_i4D operates on integer polygons (ipolygon), which is why no operator is found for the FPolygon argument in your error message. Can you try converting the geometry to ipolygon first and the result back to fpolygon:
attribute<geometries/rdc> buffer := fpolygon(polygon_i4D(ipolygon(geometry), 5d));
Note that this expression results in the inflated polygon. If you want only the buffer ring (the inflated area but not the original area), subtract the original geometry with the - operator, for example:
attribute<geometries/rdc> buffer :=
   value(polygon_i4D(ipolygon(geometry), 5d) - ipolygon(geometry), geometries/rdc);


Writing to flash memory on a dsPIC33E

I have some questions regarding the flash memory on a dsPIC33EP512MU810.
I'm aware of how it should be done: set all the registers for address, latches, etc., then perform the sequence to start the write procedure, or call the built-in functions. But I'm finding small differences between what I'm experiencing and what is in the documentation when writing the flash in WORD mode. In the documentation it is pretty straightforward; the following is the example code from the documentation:
int varWord1L = 0xXXXX;
int varWord1H = 0x00XX;
int varWord2L = 0xXXXX;
int varWord2H = 0x00XX;
int TargetWriteAddressL; // bits<15:0>
int TargetWriteAddressH; // bits<22:16>
NVMCON = 0x4001; // Set WREN and word program mode
TBLPAG = 0xFA; // write latch upper address
NVMADR = TargetWriteAddressL; // set target write address
NVMADRU = TargetWriteAddressH;
__builtin_tblwtl(0,varWord1L); // load write latches
__builtin_tblwth(0,varWord1H);
__builtin_tblwtl(0x2,varWord2L);
__builtin_tblwth(0x2,varWord2H);
__builtin_disi(5); // Disable interrupts for NVM unlock sequence
__builtin_write_NVM(); // initiate write
while(NVMCONbits.WR == 1);
But that code doesn't work for every address I want to write to. I found a fix for writing one WORD, but I can't write two WORDs wherever I want. I store everything in the auxiliary memory, so the upper address (NVMADRU) is always 0x7F for me; NVMADR is the address I change. What I'm seeing is that if the address I want to write to modulo 4 is not 0, I have to put my value in the last two latches; otherwise I have to put it in the first two latches.
In other words, if the address modulo 4 is not zero, it doesn't work like the doc code above: the value that ends up at the address is whatever is in the second pair of latches.
I fixed it for writing only one word at a time like this:
if (Address % 4)
{
    __builtin_tblwtl(0, 0xFFFF);
    __builtin_tblwth(0, 0x00FF);
    __builtin_tblwtl(2, ValueL);
    __builtin_tblwth(2, ValueH);
}
else
{
    __builtin_tblwtl(0, ValueL);
    __builtin_tblwth(0, ValueH);
    __builtin_tblwtl(2, 0xFFFF);
    __builtin_tblwth(2, 0x00FF);
}
1) I want to know why I'm seeing this behavior.
2) I also want to write a full row.
That also doesn't seem to work for me, and I don't know why, because I'm doing what is in the documentation.
I tried a simple row-write and at the end I read back the first 3 or 4 elements that I wrote to see if it worked:
NVMCON = 0x4002; //set for row programming
TBLPAG = 0x00FA; //set address for the write latches
NVMADRU = 0x007F; //upper address of the aux memory
NVMADR = 0xE7FA;
int latchoffset;
latchoffset = 0;
__builtin_tblwtl(latchoffset, 0);
__builtin_tblwth(latchoffset, 0); //current = 0, available = 1
latchoffset+=2;
__builtin_tblwtl(latchoffset, 1);
__builtin_tblwth(latchoffset, 1); //current = 0, available = 1
latchoffset+=2;
.
. all the way to 127(I know I could have done it in a loop)
.
__builtin_tblwtl(latchoffset, 127);
__builtin_tblwth(latchoffset, 127);
INTCON2bits.GIE = 0; //stop interrupt
__builtin_write_NVM();
while(NVMCONbits.WR == 1);
INTCON2bits.GIE = 1; //start interrupt
int testaddress;
testaddress = 0xE7FA;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
What I see is that the value stored at address 0xE7FA is 125, at 0xE7FC it's 126, and at 0xE7FE it's 127; the rest are all 0xFFFF.
Why is it taking only the last 3 latches and writing them to the first 3 addresses?
Thanks in advance for your help.
The dsPIC33 program memory space is treated as 24 bits wide; it is more appropriate to think of each address of the program memory as a lower and upper word, with the upper byte of the upper word being unimplemented (dsPIC33EPXXX datasheet).
There is a phantom byte every two program words.
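To picture the layout that quote describes (a sketch; N is a 16-bit word address as used by NVMADR):
/* Two consecutive 24-bit program words on a dsPIC33E:
 *
 *   address    contents
 *   -------    --------------------------------------------------
 *   N          program word 0, bits <15:0>  (lower word)
 *   N+1        program word 0, bits <23:16> (upper word; its upper
 *              byte is the unimplemented phantom byte)
 *   N+2        program word 1, bits <15:0>
 *   N+3        program word 1, bits <23:16> + phantom byte
 */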
Your code:
if (Address % 4)
{
    __builtin_tblwtl(0, 0xFFFF);
    __builtin_tblwth(0, 0x00FF);
    __builtin_tblwtl(2, ValueL);
    __builtin_tblwth(2, ValueH);
}
else
{
    __builtin_tblwtl(0, ValueL);
    __builtin_tblwth(0, ValueH);
    __builtin_tblwtl(2, 0xFFFF);
    __builtin_tblwth(2, 0x00FF);
}
...will be fine for writing a bootloader if you generate the values from a valid Intel HEX file, but it doesn't make storing data structures simple, because the phantom byte is not taken into account.
If you create a uint32_t variable and look at the compiled HEX file, you'll notice that it in fact uses the least significant words of two 24-bit program words. That is, the 32-bit value is placed into a 64-bit range, but only 48 of those 64 bits are programmable; the others are phantom bytes (or zeros). That leaves three programmable bytes per program word (every two addresses).
What I tend to do when writing data is to keep everything 32-bit aligned and do the same as the compiler does.
Writing:
UINT32 value = ....;
:
__builtin_tblwtl(0, value.word.word_L); // least significant word of 32-bit value placed here
__builtin_tblwth(0, 0x00);              // phantom byte + unused byte
__builtin_tblwtl(2, value.word.word_H); // most significant word of 32-bit value placed here
__builtin_tblwth(2, 0x00);              // phantom byte + unused byte
Reading:
UINT32 *value;
:
value->word.word_L = __builtin_tblrdl(offset);     // low word from the first program word
value->word.word_H = __builtin_tblrdl(offset + 2); // high word from the second program word
UINT32 structure:
typedef union _UINT32 {
    uint32_t val32;
    struct {
        uint16_t word_L;
        uint16_t word_H;
    } word;
    uint8_t bytes[4];
} UINT32;
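Putting the pieces together, here is a minimal sketch of a 32-bit aligned write to and read from auxiliary flash, reusing the UINT32 union just defined and assuming XC16 and the register values from the question. WriteAuxWord32 and ReadAuxWord32 are made-up helper names, and erasing the target page beforehand is left out:
#include <xc.h>
#include <stdint.h>

/* Hypothetical helper: write one 32-bit value to auxiliary flash.
   'address' must be 32-bit aligned (address % 4 == 0) so the value
   spans exactly two 24-bit program words. */
void WriteAuxWord32(uint16_t address, uint32_t data)
{
    UINT32 value;
    value.val32 = data;

    NVMCON  = 0x4001;   // WREN + word program mode
    TBLPAG  = 0xFA;     // write latch upper address
    NVMADRU = 0x7F;     // upper address of the aux memory
    NVMADR  = address;  // 32-bit aligned target address

    __builtin_tblwtl(0, value.word.word_L); // LSW into first program word
    __builtin_tblwth(0, 0x00);              // phantom byte + unused byte
    __builtin_tblwtl(2, value.word.word_H); // MSW into second program word
    __builtin_tblwth(2, 0x00);              // phantom byte + unused byte

    __builtin_disi(5);          // disable interrupts for the NVM unlock sequence
    __builtin_write_NVM();      // initiate the write
    while (NVMCONbits.WR == 1); // wait for programming to finish
}

/* Hypothetical companion: read the value back via the table-read builtins. */
uint32_t ReadAuxWord32(uint16_t address)
{
    UINT32 value;
    uint16_t savedTBLPAG = TBLPAG;

    TBLPAG = 0x7F; // aux memory upper address for table reads
    value.word.word_L = __builtin_tblrdl(address);
    value.word.word_H = __builtin_tblrdl(address + 2);
    TBLPAG = savedTBLPAG;

    return value.val32;
}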

Why does VoxelGrid give me only 1 point in the cloud after filtering?

I am receiving a ROS message of type sensor_msgs::PointCloud2ConstPtr in my callback function, then I convert it to a pointer of type pcl::PointCloud<pcl::PointXYZ>::Ptr using the function pcl::fromROSMsg.
After that I use this code from the PCL tutorials for normal estimation:
void OrganizedCloudToNormals(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &_inputCloud,
    pcl::PointCloud<pcl::PointNormal>::Ptr &cloud_normals)
{
    pcl::console::print_highlight("Estimating scene normals...\n");
    pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::PointNormal> nest;
    nest.setRadiusSearch(0.001);
    nest.setInputCloud(_inputCloud);
    nest.compute(*cloud_normals);

    // write 0 wherever the value is NaN
    for (size_t i = 0; i < cloud_normals->points.size(); i++)
    {
        pcl::PointNormal &p = cloud_normals->points.at(i);
        p.normal_x  = std::isnan(p.normal_x)  ? 0 : p.normal_x;
        p.normal_y  = std::isnan(p.normal_y)  ? 0 : p.normal_y;
        p.normal_z  = std::isnan(p.normal_z)  ? 0 : p.normal_z;
        p.curvature = std::isnan(p.curvature) ? 0 : p.curvature;
    }
}
After that I have a point cloud of type pcl::PointNormal and I try to downsample it:
const float leaf = 0.001f; //0.005f;
pcl::VoxelGrid<pcl::PointNormal> gridScene;
gridScene.setLeafSize(leaf, leaf, leaf);
gridScene.setInputCloud(_scene);
gridScene.filter(*_scene);
where _scene is of the type
pcl::PointCloud<pcl::PointNormal>::Ptr _scene (new pcl::PointCloud<pcl::PointNormal>);
Then after filtering I end up with my point cloud _scene having only 1 point inside. I have tried changing the leaf size, but that doesn't change the outcome.
Does anyone know what I am doing wrong?
Thanks in advance
I have found where the problem was. The type pcl::PointNormal has the fields x, y, z, normal_x, normal_y and normal_z, but in my function OrganizedCloudToNormals I filled only normal_x, normal_y and normal_z; the fields x, y and z had the value 0 for each point. When I filled x, y and z from the input point cloud, the problem with filtering (downsampling) disappeared and I got a filtered cloud with more than 1 point inside. The missing x, y and z values probably broke the voxel grid's filter method: with every point sitting at the origin, all points land in a single voxel, which is reduced to a single centroid point.
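A minimal sketch of that fix, assuming the same function signature as above (the NaN handling from the original is omitted for brevity, and an explicit copy loop is used to keep the sketch independent of any particular PCL helper):
#include <pcl/point_types.h>
#include <pcl/features/normal_3d_omp.h>

void OrganizedCloudToNormals(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &_inputCloud,
    pcl::PointCloud<pcl::PointNormal>::Ptr &cloud_normals)
{
    pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::PointNormal> nest;
    nest.setRadiusSearch(0.001);
    nest.setInputCloud(_inputCloud);
    nest.compute(*cloud_normals); // fills only the normal_* and curvature fields

    // Copy the coordinates over as well, so x, y, z are no longer 0
    // and VoxelGrid can bin the points into proper voxels.
    for (size_t i = 0; i < cloud_normals->points.size(); i++)
    {
        cloud_normals->points[i].x = _inputCloud->points[i].x;
        cloud_normals->points[i].y = _inputCloud->points[i].y;
        cloud_normals->points[i].z = _inputCloud->points[i].z;
    }
}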

Write Int16 into AVAudioPCMBuffer in Swift

I have a Data object in Swift that is an array of Int16 objects. For some reason using ".pcmFormatInt16" did not work for the format of my AVAudioPCMBuffer and yielded no sound, or a memory error. Eventually I was able to get white noise/static to play from the speakers by converting the Int16 values to Float and putting them onto both channels of my AVAudioPCMBuffer. I have a feeling that I am getting close to the answer, because whenever I speak into the microphone I hear a different frequency of static. I think the issue is in how I convert the Int16 values before writing them into the buffer's floatChannelData.
Here is my code:
for ch in 0..<2 {
    for i in 0..<audio.count {
        var val = Float(Int16(audio[i])) / Float(Int16.max)
        if val > 1 {
            val = 1
        }
        if val < -1 {
            val = -1
        }
        self.buffer.floatChannelData![ch][i + self.bufferCount] = val
        self.bufferCount += 1
    }
}
self.audioFilePlayer.scheduleBuffer(self.buffer, at: nil, options: .interruptsAtLoop, completionHandler: {
    print("played sum")
    self.bufferCount = 0
})
A typical multi-channel PCM buffer has the channels interleaved on a per-sample basis, although, not being familiar with Swift audio, I find it refreshing to see channels given their own dimension in the buffer data structure.
A flag goes up when I see your guard checks clamping val > 1 to val = 1, etc. Elsewhere that is not needed, as those boundary checks are moot: the data falls nicely into place as-is.
My guess is your input audio[] is signed int 16, because of your val > 1 and val < -1 checks. If true, then dividing by the max int as a float is wrong, as you would be losing half your dynamic range.
I suggest you look closely at your
var val = Float( Int16(audio[i]) ) / Float(Int16.max)
Let's examine the range of your ints in audio[]:
2^16 == 65536 // if unsigned, values range from 0 to (2^16 - 1), which is 0 to 65535
2^15 == 32768 // if signed, values range from -32768 to (2^15 - 1), which is -32768 to 32767
Please tell us whether the input buffer audio[] is signed or not. It is sometimes helpful to identify the max_seen and min_seen values of your input data, so do this and tell us the max and min of your input audio[].
Now let's focus on your desired output buffer self.buffer.floatChannelData. Since you are saying it's 16-bit float, what is the valid range here? -1 < valid_value < 1?
We can continue once you tell us the answers to these basic questions.
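To help answer those questions, here is a minimal sketch for inspecting the input range and normalizing the samples, assuming audio is an [Int16] and the buffer was created as a two-channel .pcmFormatFloat32 buffer; the checkRange and fill(buffer:with:) helper names are made up:
import AVFoundation

// Report the observed sample range, as suggested above.
func checkRange(_ audio: [Int16]) {
    print("min_seen = \(audio.min() ?? 0), max_seen = \(audio.max() ?? 0)")
}

// Normalize signed 16-bit samples into the -1.0...1.0 range expected by a
// .pcmFormatFloat32 buffer, writing the same mono signal to both channels.
func fill(buffer: AVAudioPCMBuffer, with audio: [Int16]) {
    guard let channels = buffer.floatChannelData else { return }
    let count = min(audio.count, Int(buffer.frameCapacity))
    for i in 0..<count {
        let val = Float(audio[i]) / Float(Int16.max) // assumes signed input
        channels[0][i] = val
        channels[1][i] = val
    }
    buffer.frameLength = AVAudioFrameCount(count) // remember to set the length
}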

Bit Manipulation in Delphi - Bitwise Options in XML

I am a high-school student currently learning Delphi XE3. We are learning about bit manipulation. We have an assignment, and while I have read a lot on the subject and understand the process of storing information in bits and using shl/shr, I am having difficulty understanding how to do this in Delphi.
The assignment is as follows:
Decimal   Hexadecimal   Binary
1         0x0001        0000000000000001
2         0x0002        0000000000000010
4         0x0004        0000000000000100
An integer value is passed in an XML file to identify the options that are set. For example, if I wanted to send option 1 and option 2, I would add 1+2=3 and send 3 as the number to specify that options 1 and 2 are true.
On the client the binary value would be 0000000000000011 = 3.
From what I have read I need to use a mask, but I do not understand how to do this. How do I use masks in Delphi to obtain the individual values, which would be True or False?
I tried doing this with a regular Integer variable, but it always gets treated as an integer and the result is very strange. If I convert the integer to a binary string representation and iterate through the characters, the result is correct, but I am assuming I should not be doing this with strings. Any help or an example would be greatly appreciated. Thank you.
You usually check whether a particular bit is set in an Integer variable using the and binary operator, and you set individual bits using the or operator, like this:
const
  OPTION_X = $01;
  OPTION_Y = $02;
  OPTION_Z = $04;
var
  Options: Byte;
begin
  Options := OPTION_X or OPTION_Y; // actually 3, like in your example
  // check if OPTION_X is set
  if (Options and OPTION_X) = OPTION_X then
    ShowMessage('Option X is set'); // this message is shown, because the bit is set
  // check if OPTION_Z is set
  if (Options and OPTION_Z) = OPTION_Z then
    ShowMessage('Option Z is set'); // this message is NOT shown
end;
The different OPTION_ constants are actually masks, in the sense that they are used to mask bits to zero (to check whether a particular bit is set) or to mask bits to 1 (to set a particular bit).
Consider this fragment:
begin
  ..
  if cbOptionX.Checked then
    Options := Options or OPTION_X;
  ..
The or will mask the first bit to 1. If we start with an Options value (in binary) of 01010000, the resulting Options would be 01010001:
   01010000
OR 00000001 // OPTION_X
 = 01010001
The same value is used to mask all the other bits to 0 when checking whether a particular bit is set. The if condition, for example (Options and OPTION_X) = OPTION_X, does this:
First it masks all the non-interesting bits of the Options variable to 0. If we consider the last value of 01010001, the operation clears all the bits but the first:
    01010001
AND 00000001
  = 00000001
Considering a starting value of 01010000, it returns zero:
    01010000
AND 00000001
  = 00000000
Next, it compares that value with the mask itself. If they are equal, the bit was set in the original Options variable; otherwise it was not. If your mask contains only one bit, it's a matter of taste: you can simply check whether the resulting value is different from 0. But if your mask contains multiple bits and you want to check that all of them were set, you have to check for equality.
Delphi has a predefined type TIntegerSet which allows you to use set operators. Assuming that options is an Integer, you can check whether any bit (0-based) is set like this:
option1 := 0 in TIntegerSet(options); { Bit 0 is set? }
option3 := 2 in TIntegerSet(options); { Bit 2 is set? }
Changing the options is done via Include or Exclude:
Include(TIntegerSet(options), 0); { set bit 0 }
Exclude(TIntegerSet(options), 2); { reset bit 2 }
Of course you can use any other set operator that may be helpful.
Delphi has bitwise operators for manipulating individual bits of integer types. Look at the shl, shr, and, or, and xor operators. To combine bits, use the or operator. To test for bits, use the and operator. For example, assuming these constants (note that Delphi writes hexadecimal literals with a $ prefix, not 0x):
const
  Option1 = $0001;
  Option2 = $0002;
  Option3 = $0004;
The or operator looks at the bits of both input values and produces an output value that has a 1 bit in places where either input value has a 1 bit. So combining bits would look like this:
var
  Value: Integer;
begin
  Value := Option1 or Option2;
  {
    00000000000000000000000000000001  Option1
    00000000000000000000000000000010  Option2
    -------------------------------- OR
    00000000000000000000000000000011  Result
  }
  ...
end;
The and operator looks at the bits of both input values and produces an output value that has a 1 bit only in places where both input values have a 1 bit; otherwise it produces a 0 bit. So testing for bits would look like this:
var
  Value: Integer;
  Option1Set: Boolean;
  Option2Set: Boolean;
  Option3Set: Boolean;
begin
  Value := 7; // Option1 or Option2 or Option3
  Option1Set := (Value and Option1) = Option1;
  {
    00000000000000000000000000000111  Value
    00000000000000000000000000000001  Option1
    -------------------------------- AND
    00000000000000000000000000000001  Result
  }
  Option2Set := (Value and Option2) = Option2;
  {
    00000000000000000000000000000111  Value
    00000000000000000000000000000010  Option2
    -------------------------------- AND
    00000000000000000000000000000010  Result
  }
  Option3Set := (Value and Option3) = Option3;
  {
    00000000000000000000000000000111  Value
    00000000000000000000000000000100  Option3
    -------------------------------- AND
    00000000000000000000000000000100  Result
  }
  ...
end;
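Putting it together for the assignment, here is a minimal sketch that decodes the transmitted integer back into Booleans. The name OptionsFromXML is made up, and reading the value out of the XML file itself is left out:
const
  OPTION_1 = $0001;
  OPTION_2 = $0002;
  OPTION_3 = $0004;

procedure DecodeOptions(OptionsFromXML: Integer);
var
  Option1Set, Option2Set, Option3Set: Boolean;
begin
  // 3 = OPTION_1 or OPTION_2, as in the 1+2=3 example above
  Option1Set := (OptionsFromXML and OPTION_1) = OPTION_1; // True for 3
  Option2Set := (OptionsFromXML and OPTION_2) = OPTION_2; // True for 3
  Option3Set := (OptionsFromXML and OPTION_3) = OPTION_3; // False for 3
end;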

cvExtractSURF doesn't work when useProvidedKeypoints = true

So, I'm trying to extract some SURF keypoints, but I want to impose my own key points, so I set the last parameter, useProvidedKeypoints, to true.
Also, when I create my KeyPoint I use the default constructor (so some default values there); I only change the point pt and the octave, which I set to 3.
I'm using the C++ interface to SURF, but I know that the problem is right at cvExtractSURF, because I copied that part of the code into mine to help me debug.
When I call that function with the last parameter set to true, I get this error:
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /home/widgg/opencv/trunk/modules/core/src/matrix.cpp, line 651
terminate called after throwing an instance of 'cv::Exception'
what(): /home/widgg/opencv/trunk/modules/core/src/matrix.cpp:651: error: (-5) Unknown array type in function cvarrToMat
I really don't know what I'm doing wrong!
EDIT:
Here's some code. First, how I create the keypoints (I left out some information, like the layer_id stuff, but you get the main idea):
for (json_pt_info_vector::iterator b_beg = beg->points.begin(); b_beg != b_end; ++b_beg)
{
    int layer_id = b_beg->layer_id;
    json_point_info_coord &jpic = b_beg->coord;
    jpic.feature_id = features[layer_id].keypoints.size();

    KeyPoint kp;
    kp.octave = 3;
    kp.pt.x = jpic.x;
    kp.pt.y = jpic.y;
    features[layer_id].keypoints.push_back(kp);
}
Here's the call to SURF:
SURF surf(300, 3, 4);
for (int i = 0; i < nb_img; ++i)
{
    debug_msg("extract_features #4.1");
    cv::detail::ImageFeatures &cdif = features[i];
    Mat gray_image = imread(param.layer_images[i], 0); // 0 = force to gray scale!
    debug_msg("extract_features #4.2");
    vector<float> descriptors;
    debug_msg("extract_features #4.3");
    surf(gray_image, Mat(), cdif.keypoints, descriptors, true); // MUST BE TRUE TO FORCE THE PROVIDED KEYPOINTS
    debug_msg("extract_features #4.4");
    cdif.descriptors = Mat(descriptors, true).reshape(1, (int)cdif.keypoints.size());
    debug_msg("extract_features #4.5");
    gray_image.release();
    debug_msg("extract_features #4.6");
    images[i] = imread(param.layer_images[i]); // keep the image open
}
It crashes right after the #4.3 debug message!
Hope that helps!
EDIT 2:
I replaced part of the code with cv::SurfDescriptorExtractor. I replaced everything from 4.3 to 4.5 with the following line:
extractor.compute(gray_image, cdif.keypoints, cdif.descriptors);
So now there's still a bug, but it's located somewhere else, not necessarily related to this question!
I'm surprised that the call to surf(gray_image, Mat(), cdif.keypoints, descriptors, true) even compiles; the descriptors argument should be a cv::Mat, not a vector.
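For reference, a minimal sketch of the descriptor-extractor route from EDIT 2, assuming the OpenCV 2.x nonfree SURF API that the question uses. The file name and coordinates are made up, and note the explicit kp.size: imposed keypoints generally need a non-zero size, which the KeyPoint default constructor does not provide:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp> // SURF lives in nonfree as of OpenCV 2.4

#include <vector>

int main()
{
    cv::Mat gray_image = cv::imread("layer.png", 0); // hypothetical image, forced to grayscale

    // Impose our own keypoints instead of detecting them.
    std::vector<cv::KeyPoint> keypoints;
    cv::KeyPoint kp;
    kp.pt = cv::Point2f(120.f, 80.f); // made-up coordinates
    kp.size = 20.f;                   // give SURF a patch size to describe
    kp.octave = 3;
    keypoints.push_back(kp);

    // Compute descriptors only, for the provided keypoints.
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors;
    extractor.compute(gray_image, keypoints, descriptors);

    return 0;
}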
