Outputting string depending on color detected in video feed - opencv

What I'm trying to do is output a certain string depending on the color I see in the video feed. Right now I've thresholded the feed so that everything above a certain brightness shows up as red. Next, I want to check whether there is any red in the feed: if there is, I output a "1" to a text box on the user interface that shows the feed; if there is no red, I output a "0". I'm using Emgu CV with Managed C++ in VS2010. Can anyone help me? Thank you.
This is the code I have so far; it isn't working correctly and gives me a compiler error.
cvConvertScaleAbs(frameFromCamera->Ptr.ToPointer(),frameDisplay->Ptr.ToPointer(),double(1)/16,0);
cvCvtColor(frameDisplay->Ptr.ToPointer(),frameColorDisplay->Ptr.ToPointer(),CV_GRAY2BGR);
cvThreshold(frameDisplay->Ptr.ToPointer(),maskSaturated->Ptr.ToPointer(),200,255,CV_THRESH_BINARY);
cvNot(maskSaturated->Ptr.ToPointer(),mask1->Ptr.ToPointer());
cv::Scalar red(0,0,255);
cvSet(frameColorDisplay->Ptr.ToPointer(),red,maskSaturated->Ptr.ToPointer());
highColor = gcnew Emgu::CV::Image<Bgr,UInt16>(0, 0, 255);
lowColor = gcnew Emgu::CV::Image<Bgr,UInt16>(0, 0, 200);
if(maskSaturated->InRange(lowColor, highColor) == 255){
    tbMorse->Text = "1";
}
else{
    tbMorse->Text = "0";
}
imageMain->Image=frameColorDisplay;
and I have highColor and lowColor declared in my header as follows:
Emgu::CV::Image<Bgr,UInt16> ^lowColor;
Emgu::CV::Image<Bgr,UInt16> ^highColor;
and the error it's giving me is
BAOTFISInterface.cpp(1010): error C2664: 'Emgu::CV::Image<TColor,TDepth>::Image(int,int,Emgu::CV::Structure::Bgr)' : cannot convert parameter 3 from 'int' to 'Emgu::CV::Structure::Bgr'
with
[
TColor=Emgu::CV::Structure::Bgr,
TDepth=unsigned short
]
No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
BAOTFISInterface.cpp(1011): error C2664: 'Emgu::CV::Image<TColor,TDepth>::Image(int,int,Emgu::CV::Structure::Bgr)' : cannot convert parameter 3 from 'int' to 'Emgu::CV::Structure::Bgr'
with
[
TColor=Emgu::CV::Structure::Bgr,
TDepth=unsigned short
]
No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
BAOTFISInterface.cpp(1013): error C2664: 'Emgu::CV::Image<TColor,TDepth> ^Emgu::CV::Image<TColor,TDepth>::InRange(Emgu::CV::Image<TColor,TDepth> ^,Emgu::CV::Image<TColor,TDepth> ^)' : cannot convert parameter 1 from 'Emgu::CV::Image<TColor,TDepth> ^' to 'Emgu::CV::Image<TColor,TDepth> ^'
with
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned char
]
and
[
TColor=Emgu::CV::Structure::Bgr,
TDepth=unsigned short
]
and
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned char
]
No user-defined-conversion operator available, or
Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
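Setting the compiler error aside for a moment, the decision logic being asked for (if any pixel survives the brightness threshold, print "1", otherwise "0") can be sketched in a few lines. The sketch below is only an illustration in Python/OpenCV, not the asker's Emgu CV / Managed C++ environment, and the function name and parameters are made up for the example:
import cv2

def red_flag(gray_frame, thresh=200):
    # Pixels brighter than `thresh` are the ones the question paints red.
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    # Report "1" if any pixel survived the threshold, otherwise "0".
    return "1" if cv2.countNonZero(mask) > 0 else "0"
The same idea, counting the non-zero pixels of the threshold mask rather than comparing InRange against 255, should translate to Emgu CV as well.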

Related

Question about dask output when using dask.array.map_overlap

I would like to use dask.array.map_overlap with a scipy interpolation function. However, I keep running into errors that I cannot understand, and I'm hoping someone can explain them to me.
Here is the error message I receive when I run .compute():
ValueError: could not broadcast input array from shape (1070,0) into shape (1045,0)
To investigate the issue, I used .to_delayed() to check the output of each partition, and this is what I found.
The following is my Python code.
Step 1. Load the netCDF file through xarray, then convert it to a dask.array with chunk size (400, 400).
df = xr.open_dataset('./Brazil Sentinal2 Tile/' + data_file +'.nc')
lon, lat = df['lon'].data, df['lat'].data
slon = da.from_array(df['lon'], chunks=(400,400))
slat = da.from_array(df['lat'], chunks=(400,400))
data = da.from_array(df.isel(band=0).__xarray_dataarray_variable__.data, chunks=(400,400))
Step 2. Declare a function for use with da.map_overlap.
def sumsum2(lon, lat, data, hex_res=10):
    hex_col = 'hex' + str(hex_res)
    lon_max, lon_min = lon.max(), lon.min()
    lat_max, lat_min = lat.max(), lat.min()
    b = box(lon_min, lat_min, lon_max, lat_max, ccw=True)
    b = transform(lambda x, y: (y, x), b)
    b = mapping(b)
    target_df = pd.DataFrame(h3.polyfill(b, hex_res), columns=[hex_col])
    target_df['lat'] = target_df[hex_col].apply(lambda x: h3.h3_to_geo(x)[0])
    target_df['lon'] = target_df[hex_col].apply(lambda x: h3.h3_to_geo(x)[1])
    tlon, tlat = target_df[['lon', 'lat']].values.T
    abc = lNDI(points=(lon.ravel(), lat.ravel()),
               values=data.ravel())(tlon, tlat)
    target_df['out'] = abc
    print(np.stack([tlon, tlat, abc], axis=1).shape)
    return np.stack([tlon, tlat, abc], axis=1)
Step 3. Apply da.map_overlap.
b = da.map_overlap(sumsum2,
                   slon[:1200,:1200],
                   slat[:1200,:1200],
                   data[:1200,:1200],
                   depth=10,
                   trim=True,
                   boundary=None,
                   align_arrays=False,
                   dtype='float64',
                   )
Step 4. Use to_delayed() to test the output shape.
print(b.to_delayed().flatten()[0].compute().shape, )
print(b.to_delayed().flatten()[1].compute().shape)
(1065, 3)
(1045, 0)
(1090, 3)
(1070, 0)
This shows that the chunks coming back from da.map_overlap have zero columns (shapes (1045, 0) and (1070, 0)), while the output I prepare inside the function is two-dimensional with three columns (shapes (1065, 3) and (1090, 3)).
In addition, if I turn off the trim argument:
c = da.map_overlap(sumsum2,
slon[:1200,:1200],
slat[:1200,:1200],
data[:1200,:1200],
depth=10,
trim=False,
boundary=None,
align_arrays=False,
dtype='float64',
)
print(c.to_delayed().flatten()[0].compute().shape, )
print(c.to_delayed().flatten()[1].compute().shape)
The output becomes
(1065, 3)
(1065, 3)
(1090, 3)
(1090, 3)
Does this mean that when trim=True, everything gets cut out?
Because...
#-- print out the values
b.to_delayed().flatten()[0].compute()[:10,:]
(1065, 3)
array([], shape=(1045, 0), dtype=float64)
while...
#-- print out the values
c.to_delayed().flatten()[0].compute()[:10,:]
array([[ -47.83683837, -18.98359832, 1395.01848583],
[ -47.8482856 , -18.99038681, 2663.68391094],
[ -47.82800624, -18.99207069, 1465.56517187],
[ -47.81897323, -18.97919009, 2769.91556363],
[ -47.82066663, -19.00712956, 1607.85927095],
[ -47.82696896, -18.97167714, 2110.7516765 ],
[ -47.81562653, -18.98302933, 2662.72112163],
[ -47.82176881, -18.98594465, 2201.83205114],
[ -47.84567 , -18.97512514, 1283.20631652],
[ -47.84343568, -18.97270783, 1282.92117225]])
Any thoughts for this?
Thank You.
I think I have found the answer. Please let me know if I am wrong.
The reason I cannot use trim=True is that I change the shape of the output array (after searching online, I noticed that the output array is expected to have the same shape as the input array). Since I change the shape, dask has no idea how to trim it, so it returns an empty array (which is odd).
With trim=False instead, since I am not asking dask to cut out the buffer zone, the return values come through fine (although I still don't know why dask cannot concatenate the chunked arrays; I believe that is also related to the shape).
The solution is to wrap da.concatenate in a delayed function:
delayed(da.concatenate)([e.to_delayed().flatten()[idx] for idx in range(len(e.to_delayed().flatten()))])
In this case we are not relying on the concatenation done inside map_overlap, but use our own concatenation to combine the outputs we want.
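Spelled out a little more, the workaround might look like the sketch below (a sketch only: sumsum2, slon, slat and data are the objects defined earlier in the question, and np.concatenate is used instead of da.concatenate because each delayed chunk computes to a plain NumPy array):
import numpy as np
import dask.array as da
from dask import delayed

# Run map_overlap without trimming, since the blocks change shape.
c = da.map_overlap(sumsum2,
                   slon[:1200, :1200],
                   slat[:1200, :1200],
                   data[:1200, :1200],
                   depth=10,
                   trim=False,
                   boundary=None,
                   align_arrays=False,
                   dtype='float64')

# Combine the per-chunk (n, 3) results ourselves instead of letting
# map_overlap try to reassemble them.
parts = list(c.to_delayed().flatten())
stacked = delayed(np.concatenate)(parts, axis=0)
result = stacked.compute()   # one (N, 3) array of lon, lat, interpolated value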

Dart beginner help needed

var a = [
  {
    'answers': [
      {'text': 'Cloud', 'score': 10},
    ],
  },
];

main() {
  print(a[0]['answers']);
}
I want to print the number 10 stored under 'score'.
Can anyone help me fix the code?
Thanks in advance!
Your problem is probably null safety, which complains about using the value from ['answers']. The reason is that the [] operator on Map returns a nullable type, because the result can be null if the element does not exist in the Map.
In the following I have used ! to promise the compiler that you are sure the element does exist in the Map, so it stops complaining. But this inserts a runtime check and will crash your application if the returned value is null:
var a = [
{
'answers': [
{'text': 'Cloud', 'score': 10},
],
},
];
void main() {
print(a[0]['answers']![0]['score']); // 10
}

What are the glTF animation sampler input/output values?

I am reading the specification, but I cannot understand the properties of the sampler.
This is the animation that I have:
"animations" : [
{
"channels" : [
{
"sampler" : 0,
"target" : {
"node" : 0,
"path" : "translation"
}
}
],
"name" : "00001_2780.datAction",
"samplers" : [
{
"input" : 9,
"interpolation" : "CUBICSPLINE",
"output" : 10
}
]
},
{
"channels" : [
{
"sampler" : 0,
"target" : {
"node" : 1,
"path" : "translation"
}
}
],
"name" : "00002_2780.datAction",
"samplers" : [
{
"input" : 9,
"interpolation" : "CUBICSPLINE",
"output" : 11
}
]
}
],
What I cannot understand is what the values 9 and 10 are for the first sampler, and 9 and 11 for the second.
All we have in the specification is
https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#animations
Each of the animation's samplers defines the input/output pair: a set of floating point scalar values representing linear time in seconds; and a set of vectors or scalars representing animated property.
And this makes it even less clear to me.
Is there a more detailed explanation of what the input/output values are and what they represent? What will happen, for example, if I change the input from 9 to 99, or to 9.9, or to 0.9, or to 0? How will this change the animation?
Thanks
The numbers 9 and 10 here are glTF accessor index ID values. If you decode accessor index 9, you'll find the list of times for each of the keyframes of the animation. If you decode accessor 10, normally you would expect to find the list of values for the keyframes. But since this is CUBICSPLINE, accessor 10 will contain the in-tangent, value, and out-tangent for each keyframe.
One way to investigate glTF files like this is to use the glTF Tools extension for VSCode. You can right-click the input or output value and choose Go To Definition to get to the accessor in question, and choose Go To Definition again to decode it. (Disclaimer, I'm a contributor to glTF Tools).
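For anyone who wants to do the same lookup by hand, here is a minimal sketch of what it amounts to. It assumes a .gltf file with a single external .bin buffer, float components, and tightly packed data (no byteStride); "model.gltf" is a placeholder name and not from the question:
import json
import struct

with open("model.gltf") as f:
    gltf = json.load(f)

with open(gltf["buffers"][0]["uri"], "rb") as f:
    buf = f.read()

def read_accessor(index):
    # The sampler's "input"/"output" numbers (9, 10, 11) are indices into
    # the top-level "accessors" array, which points into a bufferView/buffer.
    acc = gltf["accessors"][index]
    view = gltf["bufferViews"][acc["bufferView"]]
    offset = view.get("byteOffset", 0) + acc.get("byteOffset", 0)
    per_elem = {"SCALAR": 1, "VEC3": 3, "VEC4": 4}[acc["type"]]
    count = acc["count"] * per_elem          # componentType 5126 == float
    return struct.unpack_from("<%df" % count, buf, offset)

sampler = gltf["animations"][0]["samplers"][0]
times  = read_accessor(sampler["input"])    # keyframe times, in seconds
values = read_accessor(sampler["output"])   # translations; for CUBICSPLINE,
                                            # in-tangent/value/out-tangent per key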

In GeoDMS, I am trying to buffer a polygon but I get an error

In GeoDMS I want to buffer a polygon set by 5 meters, but I get an error:
polygon_i4D Error: Cannot find operator for these arguments:
arg1 of type DataItem<FPolygon>
arg2 of type DataItem<Float64>
Can someone help me with this issue?
unit<uint32> shapes
    : StorageName     = "%SourceDataDir%/CBS/bevolkingskern_2011.shp"
    , StorageType     = "gdal.vect"
    , StorageReadOnly = "True"
    , FreeData        = "False"
    , SyncMode        = "None"
{
    attribute<geometries/rdc> geometry (poly);
    attribute<geometries/rdc> buffer   (poly) := polygon_i4D(geometry, 5d);
}
The configured expression for the buffer attribute results in an inflated polygon.
Use the - operator to find the buffer (the inflated area but not the original area),
for example:
attribute<geometries/rdc> buffer :=
value(polygon_i4D(ipolygon(geometry), 5d) - ipolygon(geometry), geometries/rdc);
Can you try:
attribute<geometries/rdc> buffer := fpolygon(polygon_i4D(ipolygon(geometry), 5d));

Assembly explanation about stereoCalibrate error with OutputArray::Create assertion error

I came across an error while executing stereoCalibrate in OpenCV 2.4.11, which says:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == Size(cols, rows)) in cv::_OutputArray::create,
I think this must be some size mismatch among these parameters, so I went through them one by one, but the error is still there. I hope someone could find the error from the assembly code below. Here is the method call in my code:
double error = cv::stereoCalibrate(
objPoints, cali0.imgPoints, cali1.imgPoints,
camera0.intr.cameraMatrix, camera0.intr.distCoeffs,
camera1.intr.cameraMatrix, camera1.intr.distCoeffs,
cv::Size(1920,1080), m.rvec, m.tvec, m.evec, m.fvec,
cv::TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5)
,CV_CALIB_FIX_INTRINSIC + CV_CALIB_USE_INTRINSIC_GUESS
);
In my code, m.rvec is (3,3,CV_64F), m.tvec is (3,1,CV_64F), and m.evec and m.fvec are not preallocated, which is the same as in the stereoCalibrate example. intr.cameraMatrix is (3,3,CV_64F) and intr.distCoeffs is (8,1,CV_64F); objPoints is computed from the checkerboard and stores the 3D positions of the corners, with all z values equal to zero.
After reading the advice from @Josh, I modified the code to pass plain output Mat objects, which are in CV_64F, but it still throws this assertion.
cv::Mat R, t, e, f;
double error = cv::stereoCalibrate(
objPoints, cali0.imgPoints, cali1.imgPoints,
camera0.intr.cameraMatrix, camera0.intr.distCoeffs,
camera1.intr.cameraMatrix, camera1.intr.distCoeffs,
cali0.imgSize, R, t, e, f,
cv::TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5));
Finally I solved this problem. As a reminder, make sure the camera parameters you pass in are not const....
Why go for assembly? OpenCV is open source and you can check the code you're calling here: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L3523
If you get assertion failures in OpenCV it's usually because you've passed a matrix with an incorrect shape. OpenCV is extremely picky. The assertion failure is on an OutputArray, so checking the function signature, there are four possible culprits:
OutputArray _Rmat, OutputArray _Tmat, OutputArray _Emat, OutputArray _Fmat
The sizing is done inside cv::stereoCalibrate here:
https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L3550
_Rmat.create(3, 3, rtype);
_Tmat.create(3, 1, rtype);
<-- snipped -->
if( _Emat.needed() )
{
    _Emat.create(3, 3, rtype);
    p_matE = &(c_matE = _Emat.getMat());
}
if( _Fmat.needed() )
{
    _Fmat.create(3, 3, rtype);
    p_matF = &(c_matF = _Fmat.getMat());
}
The assertion is being triggered in one of these calls, the code is here:
https://github.com/opencv/opencv/blob/master/modules/core/src/matrix.cpp#L2241
Try passing in plain Mat objects without preallocating their shape.
