Usage: OpenCV 4.5.3-dev
I followed this example: https://github.com/opencv/opencv/blob/master/samples/dnn/scene_text_spotting.cpp
It uses:
for the detection model: DB_TD500_resnet50.onnx
for the recognition model: crnn_cs.onnx
for the vocabulary: alphabet_94.txt
and the decode type is set to: "CTC-prefix-beam-search"
With these settings I get the following error:
OpenCV(4.5.3-dev) C:\DevTools\opencv\modules\dnn\src\model.cpp:745: error: (-2:Unspecified error) in function 'class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl cv::dnn::TextRecognitionModel_Impl::ctcPrefixBeamSearchDecode(const class cv::Mat &)'
> (expected: 'prediction.size[2] == (int)vocabulary.size() + 1'), where
> 'prediction.size[2]' is 96
> must be equal to
> '(int)vocabulary.size() + 1' is 95
Add one empty line at the end of the vocabulary or push_back one more item in the vocabulary vector.
I don't know if this is only a workaround or a real solution...
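A minimal sketch of that fix, shown with the OpenCV Python bindings for illustration (the model and vocabulary file names are the ones from the question; the same idea applies to the C++ sample): read alphabet_94.txt into a list and, if it comes up one entry short of what the CTC decoder expects, append an extra blank entry before calling setVocabulary.

import cv2

# Load the recognition model and select the decoder, as in the sample.
recognizer = cv2.dnn_TextRecognitionModel("crnn_cs.onnx")
recognizer.setDecodeType("CTC-prefix-beam-search")

# Read the vocabulary file into a list of strings.
with open("alphabet_94.txt", "r", encoding="utf-8") as f:
    vocabulary = [line.rstrip("\n") for line in f]

# The CTC prefix beam search decoder checks prediction.size[2] == len(vocabulary) + 1.
# Here the prediction is 96 wide, so the vocabulary needs 95 entries; append one
# extra entry (equivalent to the extra empty line suggested above) if the file
# only contains 94 symbols.
if len(vocabulary) == 94:
    vocabulary.append("")

recognizer.setVocabulary(vocabulary)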
I use TFF 0.12.0.
I have 4 clients; each client has 38 training images and 16 test images.
I wrote a simple piece of federated learning code:
.....
def create_compiled_keras_model():
    base_model = tf.keras.applications.resnet.ResNet50(
        include_top=False, weights='imagenet', input_shape=(224, 224, 3))
    global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
    prediction_layer = tf.keras.layers.Dense(2, activation='softmax')
    model = tf.keras.Sequential([
        base_model,
        global_average_layer,
        prediction_layer
    ])
    return model

def model_fn():
    keras_model = create_compiled_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        sample_batch,
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()])

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.001),
    client_weight_fn=None)

state = iterative_process.initialize()
I can't understand why I see these lines during execution:
2020-11-05 15:00:16.642666: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 32 in the outer inference context.
2020-11-05 15:00:16.642724: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 23 in the outer inference context.
2020-11-05 15:00:16.643344: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 51 in the outer inference context.
2020-11-05 15:00:16.643400: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 41 in the outer inference context.
2020-11-05 15:00:16.643545: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 69 in the outer inference context.
2020-11-05 15:00:16.643589: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 60 in the outer inference context.
2020-11-05 15:00:16.643696: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 97 in the outer inference context.
2020-11-05 15:00:16.643756: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 86 in the outer inference context.
2020-11-05 15:00:16.643923: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 106 in the outer inference context.
2020-11-05 15:00:16.643988: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 116 in the outer inference context.
2020-11-05 15:00:16.644071: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 79 in the outer inference context.
2020-11-05 15:00:16.644213: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 134 in the outer inference context.
Note that if I replace ResNet50 with VGG16, these lines disappear.
Help please! What does this mean?
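One way to check whether these warnings actually affect anything is to run a few federated rounds and inspect the reported metrics. A minimal sketch, assuming federated_train_data is the list of per-client tf.data.Dataset objects (it is not shown in the snippet above):

# Run a few federated averaging rounds and print the metrics for each round.
# `federated_train_data` is assumed to be a list with one tf.data.Dataset per client.
NUM_ROUNDS = 5
for round_num in range(1, NUM_ROUNDS + 1):
    state, metrics = iterative_process.next(state, federated_train_data)
    print('round {:2d}, metrics={}'.format(round_num, metrics))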
I am running OpenCV 4.1.0 on Windows 10 64-bit with Python 3.7.
My code is:
img = cv2.imread('C:/projects/kort/Hjerter/IMG_2383.jpg')
det = cv2.text.TextDetectorCNN_create("c:/projects/Caffe/textbox.prototxt", "c:/projects/Caffe/TextBoxes_icdar13.caffemodel")
rects, probs = det.detect(img)
Here is the error I get (below). Any hints?
rects, probs = det.detect(img)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv_contrib\modules\text\src\text_detectorCNN.cpp:66: error: (-2:Unspecified error) in function 'void cdecl cv::text::TextDetectorCNNImpl::detect(const class cv::InputArray &,class std::vector > &,class std::vector &)'
(expected: 'inputImage.channels() == inputChannelCount_'), where
'inputImage_.channels()' is 1
must be equal to
'inputChannelCount_' is 3
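The assertion says the detector expects a 3-channel image but was given a single-channel one. A hedged sketch of the usual checks (paths and names are the ones from the question; the grayscale conversion only applies if the image really ends up single-channel, e.g. because it was read with cv2.IMREAD_GRAYSCALE somewhere along the way):

import cv2

img = cv2.imread('C:/projects/kort/Hjerter/IMG_2383.jpg')

# imread returns None for a bad path, which is worth ruling out first.
if img is None:
    raise FileNotFoundError('Could not read the image file')

# TextDetectorCNN asserts inputImage.channels() == 3, so promote a
# single-channel image to 3-channel BGR before calling detect().
if img.ndim == 2 or img.shape[2] == 1:
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

det = cv2.text.TextDetectorCNN_create("c:/projects/Caffe/textbox.prototxt",
                                      "c:/projects/Caffe/TextBoxes_icdar13.caffemodel")
rects, probs = det.detect(img)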
I have a bunch of crash logs from iTunes Connect for my iOS Swift app, with the top of the stack trace showing the error message:
protocol witness for Strideable.distance(to : A) -> A.Stride in conformance Int64 + 124
This comes from an innocuous line in my code which looks like the following:
if (var1 - var2 > MyClass.THRESHOLD) {
// Do something
}
var1 and var2 are declared to be of type Int64, while THRESHOLD is:
static let THRESHOLD = 900 * 1000
I have a hunch that this is because THRESHOLD is not declared as Int64, though I still don't have a hypothesis as to how that could cause a problem. Also, the bug is not reproducible, so I can't verify it.
Any help on what this error message means, and what might be the issue here?
The mixed-type comparison can be the cause of the problem. The subtraction operator is inferred from the types of its operands as
#available(swift, deprecated: 3.0, obsoleted: 4.0, message: "Mixed-type subtraction is deprecated. Please use explicit type conversion.")
public func -<T>(lhs: T, rhs: T) -> T.Stride where T : SignedInteger
with T == Int64 and T.Stride == Int. Your code would cause a warning message with Xcode 8.3.2:
let THRESHOLD = 900 * 1000
let var1: Int64 = 0x100000000
let var2: Int64 = 0
// warning: '-' is deprecated: Mixed-type subtraction is deprecated. Please use explicit type conversion.
if var1 - var2 > THRESHOLD {
print("foo")
} else {
print("bar")
}
On a 32-bit device the difference can be too large for an Int, and the above example would abort with a runtime error:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
frame #0: 0x0223d428 libswiftCore.dylib`protocol witness for Swift.Strideable.distance (to : A) -> A.Stride in conformance Swift.Int64 : Swift.Strideable in Swift + 72
The solution is to explicitly convert the right-hand side:
if var1 - var2 > Int64(THRESHOLD) { ... }
I am trying to implement an HMM in OpenCV.
First I create arrays of double and copy them into Mat variables:
Mat INIT = Mat(0,3,CV_64F,trans).clone();
Then I try to access the individual pixel/position values of the matrix as:
cout << INIT.at<double>(r,c) << " ";//Where r and c are row and column values.
I get an error like:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file c:\opencv2.4.4\include\opencv2\core\mat.hpp, line 537
I searched the forums and could not find anything wrong with the code. Any ideas?
Thanks a lot in advance.
The original call Mat(0, 3, CV_64F, trans) creates a matrix with zero rows, so any at<double>(r, c) access is out of bounds and triggers that assertion. Declare the matrix INIT as:
Mat INIT=Mat(1,3,CV_64FC1,trans).clone();
Now access the individual pixel/position values from the matrix as:
cout << INIT.at<double>(r,c) << " ";
I created an OpenCV matrix:
CvMat * src = cvCreateMat(1, 2, CV_32FC2);
Then I want to set the element at row=0, col=1, channel=1.
According to the example in the documentation for CvMat,
I tried to set the element with the following code:
CV_MAT_ELEM(*src, float, 0, 1 * 2 + 1) = 123;
But the assert fires.
And the reason is obvious:
We have the following definitions in the OpenCV sources:
#define CV_MAT_ELEM_PTR_FAST( mat, row, col, pix_size ) \
(assert( (unsigned)(row) < (unsigned)(mat).rows && \
(unsigned)(col) < (unsigned)(mat).cols ), \
(mat).data.ptr + (size_t)(mat).step*(row) + (pix_size)*(col))
#define CV_MAT_ELEM( mat, elemtype, row, col ) \
(*(elemtype*)CV_MAT_ELEM_PTR_FAST( mat, row, col, sizeof(elemtype)))
In my case mat.cols == 2 and col == 1 * 2 + 1 == 3.
What is wrong: the documentation, or the assert in the OpenCV sources?
How do I deal with this?
How can I set an element of a multichannel matrix?
Thanks.
P.S. To the OpenCV developers, if anyone is here:
When I press "you can create one now" to create a new account to report a bug on the page http://opencv.willowgarage.com/wiki/Welcome?action=login, I get the error "Unknown action newaccount."
UPDATE:
I use OpenCV 2.1.
I have worked around the use of CV_MAT_ELEM:
float * src_ptr = (float*)src->data.ptr;
*(src_ptr + 1 * 2 + 1) = 123;
Not an answer to your question, but a good suggestion:
Neither the documentation nor the source code is wrong.
But why don't you use the C++ interface? I doubt you are using the C interface because you really need it (for example, because you are building for some unusual embedded platform that cannot compile C++).
Mat src(1, 2, CV_32FC2);
// isn't it nicer than CV_UGLY_AND_SCARY_MACRO()?
src.at<Vec2f>(0,1)[1] = 123; // (0,1) means row 0, col 1. [1] means channel 1.
EDIT
From OpenCV:
For single-channel matrices there is a macro CV_MAT_ELEM( matrix, elemtype, row, col ), i.e. for 32-bit floating point real matrix