netlink on ClearOS 7.3 issue - netfilter

I am trying the following simple program on ClearOS 7.3, 64-bit:
#include <sys/socket.h>
#include <linux/netlink.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
int main()
{
    int flags = 0;
    int bus = NETLINK_NETFILTER;
    int sock_fd = socket(AF_NETLINK, SOCK_RAW | flags, bus);
    if (sock_fd < 0)
    {
        printf("\nsocket failed with error no = %d and error msg = %s\n",
               errno, strerror(errno));
        return -1;
    }
    printf("\nOP completed successfully..!\n");
    return 0;
}
I am getting the following error:
socket failed with error no = 93 and error msg = Protocol not supported
My OS details are:
ClearOS release 7.3.0 (Final)
Linux 3.10.0-514.26.2.v7.x86_64 #1 SMP Wed Jul 5 10:37:54 MDT 2017
x86_64 x86_64 x86_64 GNU/Linux
Please help.

Works for me.
The NETLINK_NETFILTER protocol is registered by the nfnetlink module.
In my case, the kernel loads the module automatically when this code requests the NETLINK_NETFILTER protocol, but if yours doesn't, try loading it manually:
$ sudo modprobe nfnetlink
And then try opening the socket again.
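Once the socket opens, the usual next step is to bind it. Here is a minimal sketch continuing the program above (not part of the original post); the group value 0 is just an assumption meaning "no multicast groups", so pick whichever nfnetlink group your application actually needs:
/* continues from sock_fd in the program above; <linux/netlink.h> and <string.h> are already included */
struct sockaddr_nl addr;
memset(&addr, 0, sizeof(addr));
addr.nl_family = AF_NETLINK;
addr.nl_groups = 0; /* no multicast subscriptions */
if (bind(sock_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
{
    printf("\nbind failed with error no = %d and error msg = %s\n",
           errno, strerror(errno));
    return -1;
}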

Related

How to fix the compile error when including the OpenCV intrin_avx.hpp

Can you give me some advice on this problem?
System information (version)
OpenCV => 4.2.0
Operating System / Platform => ubuntu 16.04 64 Bit
Compiler => g++ 7.5.0
Detailed description
Compiling a file that includes just intrin_avx.hpp fails on an AVX/AVX2-capable device.
[ 50%] Building CXX object CMakeFiles/test.dir/test.cpp.o
In file included from /home/workspace/Test/test.cpp:2:0:
/usr/local/include/opencv4/opencv2/core/hal/intrin_avx.hpp:17:1: error: ‘CV_CPU_OPTIMIZATION_HAL_NAMESPACE_BEGIN’ does not name a type
CV_CPU_OPTIMIZATION_HAL_NAMESPACE_BEGIN
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/opencv4/opencv2/core/hal/intrin_avx.hpp:24:8: error: ‘__m256’ does not name a type
inline __m256 _v256_combine(const __m128& lo, const __m128& hi)
^~~~~~
/usr/local/include/opencv4/opencv2/core/hal/intrin_avx.hpp:27:8: error: ‘__m256d’ does not name a type
inline __m256d _v256_combine(const __m128d& lo, const __m128d& hi)
^~~~~~~
/usr/local/include/opencv4/opencv2/core/hal/intrin_avx.hpp:30:35: error: ‘__m256i’ does not name a type
inline int _v_cvtsi256_si32(const __m256i& a)
^~~~~~
...
Steps to reproduce
test.cpp
#include <stdio.h>
#include "opencv2/core/hal/intrin_avx.hpp"
int main() {
    printf("test\n");
    return 0;
}
CMakeLists.txt
CMAKE_MINIMUM_REQUIRED(VERSION 3.15)
PROJECT(test)
SET(CMAKE_CXX_STANDARD 11)
# avx
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mavx2")
# opencv
MESSAGE(STATUS "FIND OpenCV on LINUX.")
FIND_PACKAGE(OpenCV REQUIRED)
ADD_EXECUTABLE(test test.cpp)
TARGET_LINK_LIBRARIES(
    test
    ${OpenCV_LIBS}
)
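One observation that may help (not stated in the report above): intrin_avx.hpp is an internal backend header, while opencv2/core/hal/intrin.hpp is the public universal-intrinsics entry point that pulls in the AVX backend itself when the compiler flags allow it. A minimal test.cpp sketch under that assumption:
#include <stdio.h>
#include "opencv2/core/hal/intrin.hpp" // public dispatcher; selects the AVX backend when -mavx2 is in effect

int main() {
#if CV_SIMD
    printf("universal intrinsics enabled, register width = %d bytes\n", (int)CV_SIMD_WIDTH);
#else
    printf("universal intrinsics are not enabled in this build\n");
#endif
    return 0;
}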

'iterator' file not found when compiling zbar for iOS

I was using Xcode 11 to compile zbar for my iOS Device.
However, it gives me this error: 'iterator' file not found
Some code:
#ifndef _ZBAR_IMAGE_H_
#define _ZBAR_IMAGE_H_
/// @file
/// Image C++ wrapper
#ifndef _ZBAR_H_
# error "include zbar.h in your application, **not** zbar/Image.h"
#endif
#include <assert.h>
#include <iterator>
#include "Symbol.h"
#include "Exception.h"
I have tried to clean and rebuild and this didn't seem to work.
Environment:
Mac OS Catalina 10.15 Beta (19A558d)
Xcode 11.0(11A420a)
C11
<iterator> is a C++ standard library header, so either of the following should help:
a. renaming image.c to image.cpp so the file is compiled as C++, or
b. compiling with g++, NOT gcc.
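As a minimal sanity check (the file name and contents here are just an illustration, not from zbar itself): a translation unit compiled as C++ or Objective-C++ can include <iterator> directly.
// main.mm — because the extension is .mm (or the file is compiled with g++/clang++),
// this translation unit is Objective-C++, so C++ headers such as <iterator> resolve.
#include <algorithm>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> v;
    std::fill_n(std::back_inserter(v), 3, 42);   // back_inserter comes from <iterator>
    return (int)v.size();                        // 3
}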

How to link OpenCV 3.4.5 dnn module with custom cv_bridge using catkin? Error: undefined reference cv::dnn::experimental_dnn_v2::Net::Net()

I'm trying to use a publisher-subscriber node to run my object recognition program by streaming images from my webcam. In my program, I am using the readNetFromTensorflow function provided by the opencv_dnn library. But when I run catkin_make on my workspace, this error always pops up:
CMakeFiles/my_subscriber.dir/src/my_subscriber.cpp.o: In function `main':
my_subscriber.cpp:(.text+0x2ee): undefined reference to `cv::dnn::experimental_dnn_v2::readNetFromTensorflow(cv::String const&, cv::String const&)'
my_subscriber.cpp:(.text+0x30e): undefined reference to `cv::dnn::experimental_dnn_v2::Net::setPreferableBackend(int)'
my_subscriber.cpp:(.text+0x31a): undefined reference to `cv::dnn::experimental_dnn_v2::Net::setPreferableTarget(int)'
my_subscriber.cpp:(.text+0x4c4): undefined reference to `cv::dnn::experimental_dnn_v2::Net::~Net()'
my_subscriber.cpp:(.text+0x5f0): undefined reference to `cv::dnn::experimental_dnn_v2::Net::~Net()'
collect2: error: ld returned 1 exit status
image_transport_tutorial/CMakeFiles/my_subscriber.dir/build.make:176: recipe for target '/home/odroid/image_transport_ws/devel/lib/image_transport_tutorial/my_subscriber' failed
make[2]: *** [/home/odroid/image_transport_ws/devel/lib/image_transport_tutorial/my_subscriber] Error 1
CMakeFiles/Makefile2:1125: recipe for target 'image_transport_tutorial/CMakeFiles/my_subscriber.dir/all' failed
make[1]: *** [image_transport_tutorial/CMakeFiles/my_subscriber.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
And here is my program so far:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/calib3d.hpp>
using namespace std;
using namespace cv;
const size_t inWidth = 320;
const size_t inHeight = 320;
const char* classNames[] = {"background",
                            "A", "B", "C"};
cv_bridge::CvImagePtr cv_ptr;
int h, w;

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    try
    {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
        w = cv_ptr->image.cols;
        h = cv_ptr->image.rows;
        cv::waitKey(10);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Could not convert from '%s' to 'bgr8'.", msg->encoding.c_str());
    }
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "image_listener");
    ros::NodeHandle nh;
    cv::namedWindow("view");
    cv::startWindowThread();
    image_transport::ImageTransport it(nh);
    //! [Initialize network]
    cv::dnn::Net net = dnn::readNetFromTensorflow("/home/odroid/Desktop/Archive/frozen_inference_graph.pb", "/home/odroid/Desktop/Archive/graph.pbtxt");
    net.setPreferableBackend(3);
    net.setPreferableTarget(1);
    //! [Initialize network]
    image_transport::Subscriber sub = it.subscribe("camera/image", 1, &imageCallback);
    //cv::Mat inputBlob = cv::dnn::blobFromImage(cv_ptr->image, 1.0, Size(inWidth, inHeight), Scalar(127.5, 127.5, 127.5), true, false); //Convert Mat to batch of images
    //net.setInput(inputBlob);
    //Mat detection = net.forward();
    ros::spin();
    cv::destroyWindow("view");
}
If I comment out the cv::dnn::Net code, the build completes 100%.
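As an aside (a sketch, not part of the original program): the integer arguments 3 and 1 passed to setPreferableBackend/setPreferableTarget above map to named constants in recent OpenCV 3.4.x releases, and spelling them out makes the intent clearer. The helper function below is hypothetical.
#include <string>
#include <opencv2/dnn.hpp>

// Hypothetical helper showing the named enum values behind 3 and 1 (OpenCV 3.4.x).
cv::dnn::Net loadNet(const std::string& pb, const std::string& pbtxt)
{
    cv::dnn::Net net = cv::dnn::readNetFromTensorflow(pb, pbtxt);
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV); // enum value 3
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);   // enum value 1
    return net;
}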
And here is my CMakeLists.txt:
cmake_minimum_required(VERSION 3.1)
#Enable C++11
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
project(image_transport_tutorial)
find_package(catkin REQUIRED COMPONENTS cv_bridge image_transport message_generation sensor_msgs)
# add the resized image message
add_message_files(DIRECTORY msg
FILES ResizedImage.msg
)
generate_messages(DEPENDENCIES sensor_msgs)
catkin_package(CATKIN_DEPENDS cv_bridge image_transport message_runtime sensor_msgs)
find_package(OpenCV REQUIRED)
#Set OpenCV
set(OpenCV_INCLUDE_DIRS /usr/local/include/ /usr/local/include/opencv2/)
#Print some message showing some of them
message(STATUS "OpenCV library status:")
message(STATUS "config: ${OpenCV_DIR}")
message(STATUS "version: ${OpenCV_VERSION}")
message(STATUS "libraries: ${OpenCV_LIBRARIES}")
message(STATUS "include path: ${OpenCV_INCLUDE_DIRS}")
include_directories(include ${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})
# add the publisher example
add_executable(my_publisher src/my_publisher.cpp)
add_dependencies(my_publisher ${catkin_EXPORTED_TARGETS} ${${PROJECT_NAME}_EXPORTED_TARGETS})
target_link_libraries(my_publisher ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
# add the subscriber example
add_executable(my_subscriber src/my_subscriber.cpp)
add_dependencies(my_subscriber ${catkin_EXPORTED_TARGETS} ${${PROJECT_NAME}_EXPORTED_TARGETS})
target_link_libraries(my_subscriber ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
# add the plugin example
add_library(resized_publisher src/manifest.cpp src/resized_publisher.cpp src/resized_subscriber.cpp)
add_dependencies(resized_publisher ${catkin_EXPORTED_TARGETS} ${${PROJECT_NAME}_EXPORTED_TARGETS})
target_link_libraries(resized_publisher ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
# Mark executables and/or libraries for installation
install(TARGETS my_publisher my_subscriber resized_publisher
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
install(DIRECTORY launch/
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/launch
)
install(DIRECTORY model/
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/model
)
install(FILES resized_plugins.xml
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
UPDATES:
If I run catkin_make using OpenCV 3.3.1, which was built when I installed ROS Kinetic, it compiles 100% successfully. But when I rosrun the node, it causes an error. Here is the error:
Preprocessor/sub:Sub(Preprocessor/mul)(Preprocessor/sub/y) OpenCV Error: Unspecified error (Unknown layer type Sub in op Preprocessor/sub) in populateNet, file /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/dnn/src/tensorflow/tf_importer.cpp, line 1311
I have been browsing for a few days and found someone saying it's because cv_bridge and image_transport are built against the OpenCV that comes with ROS. Here is the link: https://answers.ros.org/question/318146/using-opencv-dnn-module-with-ros/
Please help me; any help will be appreciated.
I finally solved this issue. Rebuilding cv_bridge and the image transport plugins from source in the workspace makes them link against the same OpenCV 3.4.5 as my node, which resolves the undefined dnn symbols. Here is the solution:
$ cd ~/catkin_ws
$ git clone https://github.com/ros-perception/vision_opencv src/vision_opencv
$ git clone https://github.com/ros-perception/image_transport_plugins.git src/image_transport_plugins
$ catkin_make
Awesome!

Issues with creating an OpenCV window

I have Mac OS X Lion 10.7.5 and Xcode 4.6. I've downloaded an OpenCV template, then retrieved the framework, and want to create my own project. First I installed CMake 2.8.10 and added the framework, and that's it. In main.m I try to create a window:
#import <UIKit/UIKit.h>
#include "opencv2/highgui/highgui_c.h"
#import "AppDelegate.h"
int main(int argc, char *argv[])
{
    IplImage* img = cvLoadImage("1.png", 8);
    cvNamedWindow("Example", CV_WINDOW_AUTOSIZE); // Thread 1: signal SIGABRT
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
    }
}
And in the debug area I got the following:
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvNamedWindow, file /Volumes/minijHome/Documents/xcode_mini/hillegass/advancedIOS/postCourse/openCV/clean-downloads/openCVgitClone/opencv/modules/highgui/src/window.cpp, line 652
libc++abi.dylib: terminate called throwing an exception
What do I have to do?

Is it possible to compile an iPhone app via command-line gcc?

If I want to compile a minimal OS X app via command-line gcc, I can compile the file test.m:
#import <AppKit/AppKit.h>
int main( int argc, char** argv ) { return 0; }
via the following command:
gcc -c test.m
But how do I compile an iOS app the same way? I change test.m to refer to iOS Cocoa Touch:
#import <UIKit/UIKit.h>
int main( int argc, char** argv ) { return 0; }
And this no longer compiles, failing with the error:
test.m:1:24: error: UIKit/UIKit.h: No such file or directory
You probably want to use xcodebuild if you're building apps from the command line, as they consist of more than just Objective-C files.
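For example, something like this (the project and scheme names are placeholders, not taken from the question):
xcodebuild -project MyApp.xcodeproj -scheme MyApp -sdk iphoneos build
xcodebuild also takes care of resources, code signing, and packaging, which a bare compiler invocation does not.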

Resources