OpenCV and connection with an IP camera (my camera model)

I bought an IP camera with no brand on the box, but it works fine when I check it through the browser.
I want to use it to grab some frames. The box says it lets me grab the data as an MJPEG stream, but in reality I can't do that. I did it before with another IP camera and everything worked fine, until now.
This is my code, in case it helps you solve the problem or show me the way:
#include <OpenCV/cv.h>
#include <OpenCV/highgui.h>

CvCapture *kamera = NULL;
CvMemStorage *pamiec = NULL;
CvSeq *zakreslenia = NULL;
IplImage *klatka = 0;
IplImage *szary = 0;
char *nazwa1 = "Original frame";
char *nazwa2 = "After changes";

int main()
{
    // open the camera stream (credentials in the usual user:pass@host URL form)
    kamera = cvCaptureFromFile("http://kni:blashyrkh@83.15.3.69:80/image.jpg");
    if(kamera != NULL)
    {
        cvNamedWindow(nazwa1, CV_WINDOW_AUTOSIZE);
        cvNamedWindow(nazwa2, CV_WINDOW_AUTOSIZE);
        pamiec = cvCreateMemStorage(0);
        while((klatka = cvQueryFrame(kamera)) != NULL)
        {
            szary = cvCreateImage(cvGetSize(klatka), 8, 1);
            cvCvtColor(klatka, szary, CV_BGR2GRAY);
            cvSmooth(szary, szary, CV_GAUSSIAN, 9, 9, 0, 0); // CV_GAUSSIAN, not CV_GAUSSIAN_5x5
            cvCanny(szary, szary, 0, 20, 3);
            zakreslenia = cvHoughCircles(szary, pamiec, CV_HOUGH_GRADIENT, 2, szary->height/4, 100, 100, 0, 1000);
            cvShowImage(nazwa1, klatka);
            cvShowImage(nazwa2, szary);
            cvReleaseImage(&szary); // release every iteration, otherwise the loop leaks
            if(cvWaitKey(1) == (char)27) break;
        }
        // klatka must not be released: cvQueryFrame returns a buffer owned by the capture
        cvReleaseMemStorage(&pamiec);
        cvDestroyWindow(nazwa1);
        cvDestroyWindow(nazwa2);
        cvReleaseCapture(&kamera);
        cvWaitKey(0);
    }
    return 0; // being nice to the system and reporting that there were no errors
}
I've got no idea what to do - should I return the camera to the store, or write a custom app to grab the frames somehow?
I thought it might work with image.jpg/cachebust=117434456&a on the end, but that doesn't change anything.
The camera is at IP 83.15.3.69 with login kni and password blashyrkh, so you're welcome to check it.
Waiting for your response...

As far as I know, using OpenCV with IP cameras is an undocumented (and unexpected) feature; it works only because ffmpeg (its backend) supports RTP transport over IP.
The problem is that it works only with unauthenticated streams (so if your camera doesn't require a password, it should work). When you send a password, it is not processed correctly, and ffmpeg doesn't receive the expected path string.
You can test this by trying to connect with VLC, and you can also use Wireshark to check the message transfer between the camera and OpenCV (filter with ip.addr==your_camera_ip).
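For a quick check of the URL itself, VLC accepts it directly on the command line; for example, using the URL from the question in the conventional user:pass@host form:
vlc "http://kni:blashyrkh@83.15.3.69:80/image.jpg"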

Writing to the serial monitor on the Sparkfun ESP8266 Thing

Below is my current code:
#include <Wire.h>
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>

// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "836addccd2ee4f05b96f0f3ad831249e"; // ***Type in your Blynk Token

// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "_Fast&Furious"; // ***your wifi name
char pass[] = "Mclaren2018";   // ***and password

const int MOTION_PIN = 4; // Pin connected to motion detector
WidgetLCD lcd(V1);

void setup()
{
    Serial.begin(115200);
    Blynk.begin(auth, ssid, pass);
    pinMode(MOTION_PIN, INPUT_PULLUP);
    Serial.println("SETUP");
}

void loop()
{
    Blynk.run();
    int proximity = digitalRead(MOTION_PIN);
    if (proximity == LOW) // If the sensor's output goes low, motion is detected
    {
        Blynk.virtualWrite(5, 1023);
        lcd.clear();
        lcd.print(0, 0, "Motion detected");
        Serial.println("Motion detected!");
    }
    else
    {
        Blynk.virtualWrite(5, 0);
        lcd.clear();
        lcd.print(0, 0, "Motion NOT detected");
        Serial.println("Motion NOT detected!");
    }
}
I am currently just trying to write some text to the serial console, but when I upload my code it just writes a string of k's to the console. What am I doing wrong to produce such strange output?
This is a link to the tutorial I have been following: http://designinformaticslab.github.io/productdesign_tutorial/2017/01/24/motion_sensor.html
Any help would be much appreciated!
It all looks good to me - are you sure you have the baud rate set correctly on the serial monitor? I would quickly write a new program that ONLY does serial output and get that working (this simplifies the problem and makes issues like serial port speed more obvious), then go back to your complete program; it should then work.
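A minimal test sketch along those lines (nothing assumed beyond the board itself; the 115200 here must match the baud rate selected in the serial monitor):
void setup()
{
    Serial.begin(115200); // must match the serial monitor's baud rate
}

void loop()
{
    Serial.println("hello");
    delay(1000);
}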

IP camera capture

I am trying to capture the streams of two IP cameras directly connected to a mini PCIe dual gigabit expansion card in an nVidia Jetson TK1.
I managed to capture the stream of both cameras using gstreamer with the following command:
gst-launch-0.10 rtspsrc location=rtsp://admin:123456@192.168.0.123:554/mpeg4cif latency=0 ! decodebin ! ffmpegcolorspace ! autovideosink rtspsrc location=rtsp://admin:123456@192.168.2.254:554/mpeg4cif latency=0 ! decodebin ! ffmpegcolorspace ! autovideosink
It displays one window per camera, but prints this output as soon as the capture starts:
WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink1/GstXvImageSink:autovideosink1-actual-sink-xvimage: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2875): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink1/GstXvImageSink:autovideosink1-actual-sink-xvimage:
There may be a timestamping problem, or this computer is too slow.
---> TVMR: Video-conferencing detected !!!!!!!!!
The stream plays well, with "good" synchronization between the cameras too, but after a while one of the cameras suddenly stops, and usually a few seconds later the other one stops as well. Using an interface sniffer like Wireshark I can check that the RTSP packets are still being sent by the cameras.
My goal is to use these cameras as a stereo camera with OpenCV. I am able to capture the streams with OpenCV with the following calls:
camera[0].open("rtsp://admin:123456@192.168.2.254:554/mpeg4cif"); // right
camera[1].open("rtsp://admin:123456@192.168.0.123:554/mpeg4cif"); // left
It randomly starts the capture well or badly, synchronized or not, with delay or not, but after a while it is impossible to use the captured images, as you can observe in the image:
And the output while running the OpenCV program is usually this (I have copied the most complete one):
[h264 # 0x1b9580] slice type too large (2) at 0 23
[h264 # 0x1b9580] decode_slice_header error
[h264 # 0x1b1160] left block unavailable for requested intra mode at 0 6
[h264 # 0x1b1160] error while decoding MB 0 6, bytestream (-1)
[h264 # 0x1b1160] mmco: unref short failure
[h264 # 0x1b9580] too many reference frames
[h264 # 0x1b1160] pps_id (-1) out of range
The cameras are two SIP-1080J modules.
Does anyone know how to achieve a good capture using OpenCV? First of all, getting rid of those h264 messages and having stable images while the program runs.
If not, how can I improve the pipelines and buffers in gstreamer to get a good capture without the stream suddenly stopping? Although I have never captured through OpenCV using gstreamer, perhaps some day I will know how to do that and solve this problem.
Thanks a lot.
After some days of deep searching and several attempts, I turned directly to the gstreamer-0.10 API. First I learned how to use it with the tutorials from http://docs.gstreamer.com/pages/viewpage.action?pageId=327735
For most of the tutorials you just need to install libgstreamer0.10-dev and some other packages. I installed everything with:
sudo apt-get install libgstreamer0*
Then copy the code of the example you want to try into a .c file and, from the terminal, in the folder where the .c file is located, run (for some examples you have to pass more libs to pkg-config):
gcc basic-tutorial-1.c $(pkg-config --cflags --libs gstreamer-0.10) -o basic-tutorial-1
Once I no longer felt lost, I started to mix some C and C++ code. You can compile it using a proper g++ command, with a CMakeLists.txt, or however you prefer. As I am developing on an nVidia Jetson TK1, I use Nsight Eclipse Edition, and I had to configure the project properties properly to be able to use both the gstreamer-0.10 libs and the OpenCV libs.
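For instance, a mixed C/C++ capture file like the one below might build with a command along these lines (a hedged sketch; the pkg-config module names are assumed to be gstreamer-0.10, gstreamer-app-0.10 and opencv):
g++ capture.cpp $(pkg-config --cflags --libs gstreamer-0.10 gstreamer-app-0.10 opencv) -o capture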
Mixing some code, I am finally able to capture the streams of my two IP cameras in real time without appreciable delay, without bad decoding of any frame, and with both streams synchronized. The only thing I have not solved yet is getting the frames in color instead of grayscale (I have tried other CV_ values, with "segmentation fault" as the result):
v = Mat(Size(640, 360),CV_8U, (char*)GST_BUFFER_DATA(gstImageBuffer));
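One possible direction for the color problem (an untested sketch, not something I have working): ask ffmpegcolorspace for 24-bit RGB by setting caps on the appsink during pipeline init, then wrap the buffer as a 3-channel Mat. The caps use gstreamer-0.10 syntax and the sizes match the Mat above.
GstCaps *caps = gst_caps_new_simple ("video/x-raw-rgb",
    "bpp", G_TYPE_INT, 24,
    "depth", G_TYPE_INT, 24,
    NULL);
gst_app_sink_set_caps (GST_APP_SINK (data->gst_data.sink), caps);
gst_caps_unref (caps);
/* ...and in the pull loop: */
Mat rgb(Size(640, 360), CV_8UC3, (char*)GST_BUFFER_DATA(gstImageBuffer));
Mat bgr;
cvtColor(rgb, bgr, CV_RGB2BGR); /* OpenCV's imshow expects BGR channel order */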
The complete code follows. I capture using gstreamer, transform the capture into an OpenCV Mat object and then show it. The code captures just one IP camera; you can replicate the objects and methods to capture multiple cameras at the same time (see the sketch after the listing).
#include <opencv2/core/core.hpp>
#include <opencv2/contrib/contrib.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/video.hpp>
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappbuffer.h>
#include <glib.h>
#define DEFAULT_LATENCY_MS 1
using namespace cv;
typedef struct _vc_cfg_data {
char server_ip_addr[100];
} vc_cfg_data;
typedef struct _vc_gst_data {
GMainLoop *loop;
GMainContext *context;
GstElement *pipeline;
GstElement *rtspsrc,*depayloader, *decoder, *converter, *sink;
GstPad *recv_rtp_src_pad;
} vc_gst_data;
typedef struct _vc_data {
vc_gst_data gst_data;
vc_cfg_data cfg;
} vc_data;
/* Global data */
vc_data app_data;
static void vc_pad_added_handler (GstElement *src, GstPad *new_pad, vc_data *data);
#define VC_CHECK_ELEMENT_ERROR(e, name) \
if (!e) { \
g_printerr ("Element %s could not be created. Exiting.\n", name); \
return -1; \
}
/*******************************************************************************
Gstreamer pipeline creation and init
*******************************************************************************/
int vc_gst_pipeline_init(vc_data *data)
{
GstStateChangeReturn ret;
// Template
GstPadTemplate* rtspsrc_pad_template;
// Create a new GMainLoop
data->gst_data.loop = g_main_loop_new (NULL, FALSE);
data->gst_data.context = g_main_loop_get_context(data->gst_data.loop);
// Create gstreamer elements
data->gst_data.pipeline = gst_pipeline_new ("videoclient");
VC_CHECK_ELEMENT_ERROR(data->gst_data.pipeline, "pipeline");
//RTP UDP Source - for received RTP messages
data->gst_data.rtspsrc = gst_element_factory_make ("rtspsrc", "rtspsrc");
VC_CHECK_ELEMENT_ERROR(data->gst_data.rtspsrc,"rtspsrc");
printf("URL: %s\n",data->cfg.server_ip_addr);
g_print ("Setting RTSP source properties: \n");
g_object_set (G_OBJECT (data->gst_data.rtspsrc), "location", data->cfg.server_ip_addr, "latency", DEFAULT_LATENCY_MS, NULL);
//RTP H.264 Depayloader
data->gst_data.depayloader = gst_element_factory_make ("rtph264depay","depayloader");
VC_CHECK_ELEMENT_ERROR(data->gst_data.depayloader,"rtph264depay");
//ffmpeg decoder
data->gst_data.decoder = gst_element_factory_make ("ffdec_h264", "decoder");
VC_CHECK_ELEMENT_ERROR(data->gst_data.decoder,"ffdec_h264");
data->gst_data.converter = gst_element_factory_make ("ffmpegcolorspace", "converter");
VC_CHECK_ELEMENT_ERROR(data->gst_data.converter,"ffmpegcolorspace");
// i.MX Video sink
data->gst_data.sink = gst_element_factory_make ("appsink", "sink");
VC_CHECK_ELEMENT_ERROR(data->gst_data.sink,"appsink");
gst_app_sink_set_max_buffers((GstAppSink*)data->gst_data.sink, 1);
gst_app_sink_set_drop ((GstAppSink*)data->gst_data.sink, TRUE);
g_object_set (G_OBJECT (data->gst_data.sink),"sync", FALSE, NULL);
//Request pads from rtpbin, starting with the RTP receive sink pad,
//This pad receives RTP data from the network (rtp-udpsrc).
rtspsrc_pad_template = gst_element_class_get_pad_template (GST_ELEMENT_GET_CLASS (data->gst_data.rtspsrc),"recv_rtp_src_0");
// Use the template to request the pad
data->gst_data.recv_rtp_src_pad = gst_element_request_pad (data->gst_data.rtspsrc, rtspsrc_pad_template,
"recv_rtp_src_0", NULL);
// Print the name for confirmation
g_print ("A new pad %s was created\n",
gst_pad_get_name (data->gst_data.recv_rtp_src_pad));
// Add elements into the pipeline
g_print(" Adding elements to pipeline...\n");
gst_bin_add_many (GST_BIN (data->gst_data.pipeline),
data->gst_data.rtspsrc,
data->gst_data.depayloader,
data->gst_data.decoder,
data->gst_data.converter,
data->gst_data.sink,
NULL);
// Link some of the elements together
g_print(" Linking some elements ...\n");
if(!gst_element_link_many (data->gst_data.depayloader, data->gst_data.decoder, data->gst_data.converter, data->gst_data.sink, NULL))
g_print("Error: could not link all elements\n");
// Connect to the pad-added signal for the rtpbin. This allows us to link
//the dynamic RTP source pad to the depayloader when it is created.
if(!g_signal_connect (data->gst_data.rtspsrc, "pad-added",
G_CALLBACK (vc_pad_added_handler), data))
g_print("Error: could not add signal handler\n");
// Set the pipeline to "playing" state
g_print ("Now playing A\n");
ret = gst_element_set_state (data->gst_data.pipeline, GST_STATE_PLAYING);
if (ret == GST_STATE_CHANGE_FAILURE) {
g_printerr ("Unable to set the pipeline A to the playing state.\n");
gst_object_unref (data->gst_data.pipeline);
return -1;
}
return 0;
}
static void vc_pad_added_handler (GstElement *src, GstPad *new_pad, vc_data *data) {
GstPad *sink_pad = gst_element_get_static_pad (data->gst_data.depayloader, "sink");
GstPadLinkReturn ret;
GstCaps *new_pad_caps = NULL;
GstStructure *new_pad_struct = NULL;
const gchar *new_pad_type = NULL;
g_print ("Received new pad '%s' from '%s':\n", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src));
/* Check the new pad's name */
if (!g_str_has_prefix (GST_PAD_NAME (new_pad), "recv_rtp_src_")) {
g_print (" It is not the right pad. Need recv_rtp_src_. Ignoring.\n");
goto exit;
}
/* If our converter is already linked, we have nothing to do here */
if (gst_pad_is_linked (sink_pad)) {
g_print (" Sink pad from %s already linked. Ignoring.\n", GST_ELEMENT_NAME (src));
goto exit;
}
/* Check the new pad's type */
new_pad_caps = gst_pad_get_caps (new_pad);
new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
new_pad_type = gst_structure_get_name (new_pad_struct);
/* Attempt the link */
ret = gst_pad_link (new_pad, sink_pad);
if (GST_PAD_LINK_FAILED (ret)) {
g_print (" Type is '%s' but link failed.\n", new_pad_type);
} else {
g_print (" Link succeeded (type '%s').\n", new_pad_type);
}
exit:
/* Unreference the new pad's caps, if we got them */
if (new_pad_caps != NULL)
gst_caps_unref (new_pad_caps);
/* Unreference the sink pad */
gst_object_unref (sink_pad);
}
int vc_gst_pipeline_clean(vc_data *data) {
GstStateChangeReturn ret;
GstStateChangeReturn ret2;
/* Cleanup Gstreamer */
if(!data->gst_data.pipeline)
return 0;
/* Send the main loop a quit signal */
g_main_loop_quit(data->gst_data.loop);
g_main_loop_unref(data->gst_data.loop);
ret = gst_element_set_state (data->gst_data.pipeline, GST_STATE_NULL);
if (ret == GST_STATE_CHANGE_FAILURE) {
g_printerr ("Unable to set the pipeline A to the NULL state.\n");
gst_object_unref (data->gst_data.pipeline);
return -1;
}
g_print ("Deleting pipeline\n");
gst_object_unref (GST_OBJECT (data->gst_data.pipeline));
/* Zero out the structure */
memset(&data->gst_data, 0, sizeof(vc_gst_data));
return 0;
}
void handleKey(char key)
{
switch (key)
{
case 27:
break;
}
}
int vc_mainloop(vc_data* data)
{
GstBuffer *gstImageBuffer;
Mat v;
namedWindow("view",WINDOW_NORMAL);
while (1) {
gstImageBuffer = gst_app_sink_pull_buffer((GstAppSink*)data->gst_data.sink);
if (gstImageBuffer != NULL )
{
v = Mat(Size(640, 360),CV_8U, (char*)GST_BUFFER_DATA(gstImageBuffer));
imshow("view", v);
handleKey((char)waitKey(3));
gst_buffer_unref(gstImageBuffer);
}else{
g_print("gsink buffer didn't return buffer.");
}
}
return 0;
}
int main (int argc, char *argv[])
{
setenv("DISPLAY", ":0", 0);
strcpy(app_data.cfg.server_ip_addr, "rtsp://admin:123456#192.168.0.123:554/mpeg4cif");
gst_init (&argc, &argv);
if(vc_gst_pipeline_init(&app_data) == -1) {
printf("Gstreamer pipeline creation and init failed\n");
goto cleanup;
}
vc_mainloop(&app_data);
printf ("Returned, stopping playback\n");
cleanup:
return vc_gst_pipeline_clean(&app_data);
}
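As a hedged sketch of the two-camera replication mentioned above (not part of the original program): give each camera its own vc_data and pipeline, then pull one buffer per sink in each loop iteration.
/* inside main(), after gst_init(&argc, &argv) */
vc_data cam_right, cam_left;
strcpy(cam_right.cfg.server_ip_addr, "rtsp://admin:123456@192.168.2.254:554/mpeg4cif");
strcpy(cam_left.cfg.server_ip_addr,  "rtsp://admin:123456@192.168.0.123:554/mpeg4cif");
vc_gst_pipeline_init(&cam_right);
vc_gst_pipeline_init(&cam_left);
while (1) {
    GstBuffer *right = gst_app_sink_pull_buffer((GstAppSink*)cam_right.gst_data.sink);
    GstBuffer *left  = gst_app_sink_pull_buffer((GstAppSink*)cam_left.gst_data.sink);
    /* wrap each buffer in a Mat as in vc_mainloop(), process the stereo pair... */
    if (right) gst_buffer_unref(right);
    if (left)  gst_buffer_unref(left);
}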
I hope this helps!! ;)
A newer alternative, if your OpenCV build has GStreamer support (this example also assumes a Jetson-style omxh264dec decoder), is to hand the pipeline string straight to cv2.VideoCapture:
import cv2

uri = 'rtsp://admin:123456@192.168.0.123:554/mpeg4cif'
gst_str = ("rtspsrc location={} latency={} ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! "
           "video/x-raw, width=(int){}, height=(int){}, format=(string)BGRx ! "
           "videoconvert ! appsink sync=false").format(uri, 200, 3072, 2048)
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    _, frame = cap.read()
    if frame is None:
        break
    cv2.imshow("", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit; waitKey(0) would block on every frame
        break
cap.release()
cv2.destroyAllWindows()

Questions About How to Save Image from BlackMagicDesign Intensity Pro Using Decklink sdk 9.7.7

I am trying to use the DeckLink SDK to grab images from the HDMI input. I am using Windows 7 64-bit, Visual Studio C# Express 2010, and EmguCV 2.4.0.
My setup is:
_BMDDisplayModeSupport displayModeSupport;
IDeckLinkDisplayMode displayMode = null;
_BMDDisplayMode setDisplayMode = _BMDDisplayMode.bmdModeHD720p60;
_BMDPixelFormat setPixelFormat = _BMDPixelFormat.bmdFormat8BitYUV;
_BMDVideoInputFlags setInputFlag = _BMDVideoInputFlags.bmdVideoInputFlagDefault;
_deckLinkInput.DoesSupportVideoMode(setDisplayMode, setPixelFormat, setInputFlag, out displayModeSupport, out displayMode);
_deckLinkInput.SetScreenPreviewCallback(this);
try
{
    _deckLinkInput.DisableAudioInput();
    _deckLinkInput.EnableVideoInput(setDisplayMode, setPixelFormat, setInputFlag);
}
catch (Exception em)
{
    Console.WriteLine("deck link init failed: " + em.Message);
}
_deckLinkInput.SetCallback(this);
I have enabled the callback function, and I can see frames arriving at my computer:
int frameCount = 0;
public void VideoInputFrameArrived(IDeckLinkVideoInputFrame video, IDeckLinkAudioInputPacket audio)
{
    IntPtr pData;
    video.GetBytes(out pData);
    frameCount++;
    System.Runtime.InteropServices.Marshal.ReleaseComObject(video);
}
Now I would like to transfer pData to a MIplImage. How can I do that?
I guess I have to convert the frame from YUV to RGBA, and then transfer the RGBA frame to a MIplImage.
I saw a sample using DirectShow to solve the problem, but I am not familiar with it.
Any way to figure it out? Thanks.
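For what it's worth, a hedged sketch of that conversion step in plain OpenCV C++ (not the EmguCV API): bmdFormat8BitYUV is 4:2:2 UYVY at two bytes per pixel, so the buffer can be wrapped without copying and converted in one call. width, height and rowBytes are assumed to come from the IDeckLinkVideoInputFrame, and the UYVY conversion code is available in OpenCV 2.4.
#include <opencv2/imgproc/imgproc.hpp>

// Wrap the DeckLink UYVY buffer (no copy) and convert to BGR for processing/display.
cv::Mat uyvy(height, width, CV_8UC2, pData, rowBytes);
cv::Mat bgr;
cv::cvtColor(uyvy, bgr, CV_YUV2BGR_UYVY);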

Cannot upload data: xivelyclient.put returned -1

Background:
I am trying to upload data from a simple on/off sensor to learn the Xively methods. I am using an Arduino Uno with a WiFi Shield. I made some simple alterations to an example sketch from the Xively library to keep it very simple. I have read the documentation and the FAQs, and searched the web. There was a similar question (Can't connect to Xively using Arduino + wifi shield, "ret = -1 No sockets available"), and the answer was to reduce the number of libraries loaded; I'm using the same library list recommended in that post. I have also updated my Arduino IDE and downloaded the latest Xively and HTTP libraries. The code compiles without error. I re-created my device on the Xively website and got a new API key and feed number as well, and ensured the channel was added with the correct sensor name. I have also checked my router and verified that the WiFi shield connects properly.
Problem: I can't see the data on the Xively website and I keep getting the following error messages on the serial monitor from the Arduino:
xivelyclient.put returned -1
Also, after several tries, I get "No socket available".
Question: Is there a problem with the code, or is it something else?
Current Arduino code (actual ssid, password, API key and feed number removed):
#include <SPI.h>
#include <WiFi.h>
#include <HttpClient.h>
#include <Xively.h>

char ssid[] = "ssid";     // your network SSID (name)
char pass[] = "password"; // your network password (use for WPA, or use as key for WEP)
int keyIndex = 0;         // your network key Index number (needed only for WEP)
int status = WL_IDLE_STATUS;

// Your Xively key to let you upload data
char xivelyKey[] = "api_key";
// Analog pin which we're monitoring (0 and 1 are used by the Ethernet shield)
int sensorPin = 2;

// Define the strings for our datastream IDs
char sensorId[] = "sensor_id";
XivelyDatastream datastreams[] = {
    XivelyDatastream(sensorId, strlen(sensorId), DATASTREAM_INT),
};
// Finally, wrap the datastreams into a feed
XivelyFeed feed(feed no., datastreams, 1 /* number of datastreams */);

WiFiClient client;
XivelyClient xivelyclient(client);

void printWifiStatus() {
    // print the SSID of the network you're attached to:
    Serial.print("SSID: ");
    Serial.println(WiFi.SSID());
    // print your WiFi shield's IP address:
    IPAddress ip = WiFi.localIP();
    Serial.print("IP Address: ");
    Serial.println(ip);
    // print the received signal strength:
    long rssi = WiFi.RSSI();
    Serial.print("signal strength (RSSI):");
    Serial.print(rssi);
    Serial.println(" dBm");
}

void setup() {
    // put your setup code here, to run once:
    Serial.begin(9600);
    Serial.println("Starting single datastream upload to Xively...");
    Serial.println();
    // attempt to connect to Wifi network:
    while (status != WL_CONNECTED) {
        Serial.print("Attempting to connect to SSID: ");
        Serial.println(ssid);
        status = WiFi.begin(ssid, pass);
        // wait 10 seconds for connection:
        delay(10000);
    }
    Serial.println("Connected to wifi");
    printWifiStatus();
}

void loop() {
    int sensorValue = digitalRead(sensorPin);
    datastreams[0].setInt(sensorValue);
    Serial.print("Read sensor value ");
    Serial.println(datastreams[0].getInt());
    Serial.println("Uploading it to Xively");
    int ret = xivelyclient.put(feed, xivelyKey);
    Serial.print("xivelyclient.put returned ");
    Serial.println(ret);
    Serial.println();
    delay(15000);
}

How to load ClassifierCascade in Android using JavaCV?

I'm trying to use facial recognition on Android. Everything loads OK except the haarcascade_frontalface_alt2.xml file, which I don't know how to load using JavaCV.
This is the code I have:
public static void detectFacialFeatures()
{
    // This will redirect the OpenCV errors to the Java console to give you
    // feedback about any problems that may occur.
    new JavaCvErrorCallback();
    // Load the original image.
    // We need a grayscale image in order to do the recognition, so we
    // create a new image of the same size as the original one.
    IplImage grayImage = IplImage.create(iplImage.width(), iplImage.height(), IPL_DEPTH_8U, 1);
    // We convert the original image to grayscale.
    cvCvtColor(iplImage, grayImage, CV_BGR2GRAY);
    CvMemStorage storage = CvMemStorage.create();
    // We instantiate a classifier cascade to be used for detection, using the cascade definition.
    CvHaarClassifierCascade cascade = new CvHaarClassifierCascade(cvLoad("./haarcascade_frontalface_alt2.xml"));
    // We detect the faces.
    CvSeq faces = cvHaarDetectObjects(grayImage, cascade, storage, 1.1, 1, 0);
    Log.d("CARAS", "There are " + faces.total() + " faces right now");
}
The problem is here:
CvHaarClassifierCascade(cvLoad("./haarcascade_frontalface_alt2.xml"));
I have tried putting the xml file into the /assets folder, but I have no idea how to load it from there. It always gives me the following error:
03-26 17:31:25.385: E/cv::error()(14787): OpenCV Error: Null pointer (Invalid classifier cascade) in CvSeq* cvHaarDetectObjectsForROC(const CvArr*, CvHaarClassifierCascade*, CvMemStorage*, std::vector<int>&, std::vector<double>&, double, int, int, CvSize, CvSize, bool), file /home/saudet/projects/cppjars/OpenCV-2.4.4/modules/objdetect/src/haar.cpp, line 1514
...
Looking more closely at the error, it points to this line of code:
CvSeq faces = cvHaarDetectObjects(grayImage, cascade, storage, 1.1, 1, 0);
That's why I'm pretty sure the problem comes from loading haarcascade_frontalface_alt2.xml.
Thanks for your help.
P.S.: I want to include the cascade in the APK, not on the SD card.
If your cascade is on the SD card, you can use:
CascadeClassifier cascade = new CascadeClassifier(Environment.getExternalStorageDirectory().getAbsolutePath() + "/cascade.xml");
Environment.getExternalStorageDirectory().getAbsolutePath() gives you the right path to the SD card, and what follows it is the address of your file within the SD card.
You can also pack your file into the apk and then copy it to an external location so it is accessible by the OpenCV functions:
try {
    // assetManager comes from a Context (e.g. getAssets()); classifier is your cascade object
    File learnedInputFile = new File(Environment.getExternalStorageDirectory().getPath() + "/learnedData.xml");
    if (!learnedInputFile.exists()) {
        InputStream learnedDataInputStream = assetManager.open("learnedData.xml");
        FileOutputStream learnedDataOutputStream = new FileOutputStream(learnedInputFile);
        // copy file from asset folder to external location, i.e. sdcard
        byte[] buffer = new byte[300];
        int n = 0;
        while (-1 != (n = learnedDataInputStream.read(buffer))) {
            learnedDataOutputStream.write(buffer, 0, n);
        }
        // close the streams so the copy is flushed to disk
        learnedDataInputStream.close();
        learnedDataOutputStream.close();
    }
    classifier.load(learnedInputFile.getAbsolutePath());
} catch (IOException exception) {
    // there is no learned data; train the ML algorithm or show an error, etc.
}
