IP camera capture - opencv

I am trying to capture the streams of two IP cameras directly connected to a mini PCIe dual gigabit expansion card on an NVIDIA Jetson TK1.
I managed to capture the stream of both cameras using gstreamer with the following command:
gst-launch-0.10 rtspsrc location=rtsp://admin:123456@192.168.0.123:554/mpeg4cif latency=0 ! decodebin ! ffmpegcolorspace ! autovideosink rtspsrc location=rtsp://admin:123456@192.168.2.254:554/mpeg4cif latency=0 ! decodebin ! ffmpegcolorspace ! autovideosink
It displays one window per camera, but gives this output just when the capture starts:
WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink1/GstXvImageSink:autovideosink1-actual-sink-xvimage: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2875): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink1/GstXvImageSink:autovideosink1-actual-sink-xvimage:
There may be a timestamping problem, or this computer is too slow.
---> TVMR: Video-conferencing detected !!!!!!!!!
The stream plays well, with "good" synchronization between the cameras too, but after a while one of the cameras suddenly stops, and usually a few seconds later the other one stops as well. Using an interface sniffer like Wireshark I can check that the RTSP packets are still being sent from the cameras.
My goal is to use these cameras as a stereo pair with OpenCV. I am able to capture the streams with OpenCV using the following calls:
camera[0].open("rtsp://admin:123456@192.168.2.254:554/mpeg4cif");//right
camera[1].open("rtsp://admin:123456@192.168.0.123:554/mpeg4cif");//left
The capture randomly starts well or badly, synchronized or not, with or without delay, but after a while it becomes impossible to use the captured images, as you can observe in the image:
The output while running the OpenCV program is usually this (I have copied the most complete one):
[h264 @ 0x1b9580] slice type too large (2) at 0 23
[h264 @ 0x1b9580] decode_slice_header error
[h264 @ 0x1b1160] left block unavailable for requested intra mode at 0 6
[h264 @ 0x1b1160] error while decoding MB 0 6, bytestream (-1)
[h264 @ 0x1b1160] mmco: unref short failure
[h264 @ 0x1b9580] too many reference frames
[h264 @ 0x1b1160] pps_id (-1) out of range
The cameras used are two SIP-1080J modules.
Does anyone know how to achieve a good capture using OpenCV? First of all, getting rid of those h264 messages and having stable images while the program executes.
If not, how can I improve the pipelines and buffers using gstreamer to get a good capture without the stream suddenly stopping? Although I have never captured through OpenCV using gstreamer, perhaps some day I will learn how to do it and solve this problem.
Thanks a lot.

After some days of deep searching and several attempts, I turned directly to the gstreamer-0.10 API. First I learned how to use it with the tutorials from http://docs.gstreamer.com/pages/viewpage.action?pageId=327735
For most of the tutorials, you just need to install libgstreamer0.10-dev and some other packages. I installed them all with:
sudo apt-get install libgstreamer0*
Then copy the code of the example you want to try into a .c file and run the following from a terminal in the folder where the .c file is located (in some examples you have to add more libs to pkg-config):
gcc basic-tutorial-1.c $(pkg-config --cflags --libs gstreamer-0.10) -o basic-tutorial-1
After that, no longer feeling lost, I started to mix some C and C++ code. You can compile it using a proper g++ command, with a CMakeLists.txt, or however you want. As I am developing on an NVIDIA Jetson TK1, I use Nsight Eclipse Edition, and I need to configure the project properties properly to be able to use the gstreamer-0.10 libs and the OpenCV libs.
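For reference, a plain g++ one-liner along these lines should also work (the source file name and the pkg-config module names gstreamer-app-0.10 and opencv are my assumptions; adjust them to your setup):
g++ capture.cpp $(pkg-config --cflags --libs gstreamer-0.10 gstreamer-app-0.10 opencv) -o capture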
Mixing some code, I am finally able to capture the streams of my two IP cameras in real time, without appreciable delay, without bad decoding in any frame, and with both streams synchronized. The only thing I have not solved yet is obtaining the frames in color instead of grayscale from this call (I have tried other CV_ values, which resulted in a "segmentation fault"):
v = Mat(Size(640, 360),CV_8U, (char*)GST_BUFFER_DATA(gstImageBuffer));
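If you need color, one possible approach (a sketch I have not verified on this setup; the capsfilter element and the mask values are my assumptions for gstreamer-0.10) is to force the converter to output 24-bit BGR and wrap the buffer as a 3-channel Mat:
//Hypothetical: request BGR output from ffmpegcolorspace via a capsfilter
GstElement *capsfilter = gst_element_factory_make ("capsfilter", "filter");
GstCaps *caps = gst_caps_new_simple ("video/x-raw-rgb",
"bpp", G_TYPE_INT, 24,
"depth", G_TYPE_INT, 24,
"red_mask", G_TYPE_INT, 0x0000ff,
"green_mask", G_TYPE_INT, 0x00ff00,
"blue_mask", G_TYPE_INT, 0xff0000,
NULL);
g_object_set (G_OBJECT (capsfilter), "caps", caps, NULL);
gst_caps_unref (caps);
//link: ... ! converter ! capsfilter ! sink, then wrap the buffer as 3 channels:
v = Mat(Size(640, 360), CV_8UC3, (char*)GST_BUFFER_DATA(gstImageBuffer));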
The complete code follows; it captures with gstreamer, transforms the capture into an OpenCV Mat object and then shows it. The code captures just one IP camera. You can replicate the objects and methods to capture multiple cameras at the same time.
#include <opencv2/core/core.hpp>
#include <opencv2/contrib/contrib.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/video.hpp>
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappbuffer.h>
#include <glib.h>
#include <stdio.h> //printf
#include <stdlib.h> //setenv
#include <string.h> //strcpy, memset
#define DEFAULT_LATENCY_MS 1
using namespace cv;
typedef struct _vc_cfg_data {
char server_ip_addr[100];
} vc_cfg_data;
typedef struct _vc_gst_data {
GMainLoop *loop;
GMainContext *context;
GstElement *pipeline;
GstElement *rtspsrc,*depayloader, *decoder, *converter, *sink;
GstPad *recv_rtp_src_pad;
} vc_gst_data;
typedef struct _vc_data {
vc_gst_data gst_data;
vc_cfg_data cfg;
} vc_data;
/* Global data */
vc_data app_data;
static void vc_pad_added_handler (GstElement *src, GstPad *new_pad, vc_data *data);
#define VC_CHECK_ELEMENT_ERROR(e, name) \
if (!e) { \
g_printerr ("Element %s could not be created. Exiting.\n", name); \
return -1; \
}
/*******************************************************************************
Gstreamer pipeline creation and init
*******************************************************************************/
int vc_gst_pipeline_init(vc_data *data)
{
GstStateChangeReturn ret;
// Template
GstPadTemplate* rtspsrc_pad_template;
// Create a new GMainLoop
data->gst_data.loop = g_main_loop_new (NULL, FALSE);
data->gst_data.context = g_main_loop_get_context(data->gst_data.loop);
// Create gstreamer elements
data->gst_data.pipeline = gst_pipeline_new ("videoclient");
VC_CHECK_ELEMENT_ERROR(data->gst_data.pipeline, "pipeline");
//RTSP source - receives the RTP stream from the camera
data->gst_data.rtspsrc = gst_element_factory_make ("rtspsrc", "rtspsrc");
VC_CHECK_ELEMENT_ERROR(data->gst_data.rtspsrc,"rtspsrc");
printf("URL: %s\n",data->cfg.server_ip_addr);
g_print ("Setting RTSP source properties: \n");
g_object_set (G_OBJECT (data->gst_data.rtspsrc), "location", data->cfg.server_ip_addr, "latency", DEFAULT_LATENCY_MS, NULL);
//RTP H.264 Depayloader
data->gst_data.depayloader = gst_element_factory_make ("rtph264depay","depayloader");
VC_CHECK_ELEMENT_ERROR(data->gst_data.depayloader,"rtph264depay");
//ffmpeg decoder
data->gst_data.decoder = gst_element_factory_make ("ffdec_h264", "decoder");
VC_CHECK_ELEMENT_ERROR(data->gst_data.decoder,"ffdec_h264");
data->gst_data.converter = gst_element_factory_make ("ffmpegcolorspace", "converter");
VC_CHECK_ELEMENT_ERROR(data->gst_data.converter,"ffmpegcolorspace");
// Appsink
data->gst_data.sink = gst_element_factory_make ("appsink", "sink");
VC_CHECK_ELEMENT_ERROR(data->gst_data.sink,"appsink");
gst_app_sink_set_max_buffers((GstAppSink*)data->gst_data.sink, 1);
gst_app_sink_set_drop ((GstAppSink*)data->gst_data.sink, TRUE);
g_object_set (G_OBJECT (data->gst_data.sink),"sync", FALSE, NULL);
//Request the RTP receive source pad from the rtspsrc.
//This pad receives the RTP data from the network.
rtspsrc_pad_template = gst_element_class_get_pad_template (GST_ELEMENT_GET_CLASS (data->gst_data.rtspsrc),"recv_rtp_src_0");
// Use the template to request the pad
data->gst_data.recv_rtp_src_pad = gst_element_request_pad (data->gst_data.rtspsrc, rtspsrc_pad_template,
"recv_rtp_src_0", NULL);
// Print the name for confirmation
g_print ("A new pad %s was created\n",
gst_pad_get_name (data->gst_data.recv_rtp_src_pad));
// Add elements into the pipeline
g_print(" Adding elements to pipeline...\n");
gst_bin_add_many (GST_BIN (data->gst_data.pipeline),
data->gst_data.rtspsrc,
data->gst_data.depayloader,
data->gst_data.decoder,
data->gst_data.converter,
data->gst_data.sink,
NULL);
// Link some of the elements together
g_print(" Linking some elements ...\n");
if(!gst_element_link_many (data->gst_data.depayloader, data->gst_data.decoder, data->gst_data.converter, data->gst_data.sink, NULL))
g_print("Error: could not link all elements\n");
// Connect to the pad-added signal of the rtspsrc. This allows us to link
//the dynamic RTP source pad to the depayloader when it is created.
if(!g_signal_connect (data->gst_data.rtspsrc, "pad-added",
G_CALLBACK (vc_pad_added_handler), data))
g_print("Error: could not add signal handler\n");
// Set the pipeline to "playing" state
g_print ("Now playing A\n");
ret = gst_element_set_state (data->gst_data.pipeline, GST_STATE_PLAYING);
if (ret == GST_STATE_CHANGE_FAILURE) {
g_printerr ("Unable to set the pipeline A to the playing state.\n");
gst_object_unref (data->gst_data.pipeline);
return -1;
}
return 0;
}
static void vc_pad_added_handler (GstElement *src, GstPad *new_pad, vc_data *data) {
GstPad *sink_pad = gst_element_get_static_pad (data->gst_data.depayloader, "sink");
GstPadLinkReturn ret;
GstCaps *new_pad_caps = NULL;
GstStructure *new_pad_struct = NULL;
const gchar *new_pad_type = NULL;
g_print ("Received new pad '%s' from '%s':\n", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src));
/* Check the new pad's name */
if (!g_str_has_prefix (GST_PAD_NAME (new_pad), "recv_rtp_src_")) {
g_print (" It is not the right pad. Need recv_rtp_src_. Ignoring.\n");
goto exit;
}
/* If our converter is already linked, we have nothing to do here */
if (gst_pad_is_linked (sink_pad)) {
g_print (" Sink pad from %s already linked. Ignoring.\n", GST_ELEMENT_NAME (src));
goto exit;
}
/* Check the new pad's type */
new_pad_caps = gst_pad_get_caps (new_pad);
new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
new_pad_type = gst_structure_get_name (new_pad_struct);
/* Attempt the link */
ret = gst_pad_link (new_pad, sink_pad);
if (GST_PAD_LINK_FAILED (ret)) {
g_print (" Type is '%s' but link failed.\n", new_pad_type);
} else {
g_print (" Link succeeded (type '%s').\n", new_pad_type);
}
exit:
/* Unreference the new pad's caps, if we got them */
if (new_pad_caps != NULL)
gst_caps_unref (new_pad_caps);
/* Unreference the sink pad */
gst_object_unref (sink_pad);
}
int vc_gst_pipeline_clean(vc_data *data) {
GstStateChangeReturn ret;
GstStateChangeReturn ret2;
/* Cleanup Gstreamer */
if(!data->gst_data.pipeline)
return 0;
/* Send the main loop a quit signal */
g_main_loop_quit(data->gst_data.loop);
g_main_loop_unref(data->gst_data.loop);
ret = gst_element_set_state (data->gst_data.pipeline, GST_STATE_NULL);
if (ret == GST_STATE_CHANGE_FAILURE) {
g_printerr ("Unable to set the pipeline A to the NULL state.\n");
gst_object_unref (data->gst_data.pipeline);
return -1;
}
g_print ("Deleting pipeline\n");
gst_object_unref (GST_OBJECT (data->gst_data.pipeline));
/* Zero out the structure */
memset(&data->gst_data, 0, sizeof(vc_gst_data));
return 0;
}
void handleKey(char key)
{
switch (key)
{
case 27:
break;
}
}
int vc_mainloop(vc_data* data)
{
GstBuffer *gstImageBuffer;
Mat v;
namedWindow("view",WINDOW_NORMAL);
while (1) {
gstImageBuffer = gst_app_sink_pull_buffer((GstAppSink*)data->gst_data.sink);
if (gstImageBuffer != NULL )
{
v = Mat(Size(640, 360),CV_8U, (char*)GST_BUFFER_DATA(gstImageBuffer));
imshow("view", v);
handleKey((char)waitKey(3));
gst_buffer_unref(gstImageBuffer);
}else{
g_print("gsink buffer didn't return buffer.");
}
}
return 0;
}
int main (int argc, char *argv[])
{
setenv("DISPLAY", ":0", 0);
strcpy(app_data.cfg.server_ip_addr, "rtsp://admin:123456@192.168.0.123:554/mpeg4cif");
gst_init (&argc, &argv);
if(vc_gst_pipeline_init(&app_data) == -1) {
printf("Gstreamer pipeline creation and init failed\n");
goto cleanup;
}
vc_mainloop(&app_data);
printf ("Returned, stopping playback\n");
cleanup:
return vc_gst_pipeline_clean(&app_data);
}
I hope this helps!! ;)

import cv2

uri = 'rtsp://admin:123456@192.168.0.123:554/mpeg4cif'
gst_str = ("rtspsrc location={} latency={} ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, width=(int){}, height=(int){}, format=(string)BGRx ! videoconvert ! appsink sync=false").format(uri, 200, 3072, 2048)
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    _, frame = cap.read()
    if frame is None:
        break
    cv2.imshow("", frame)
    cv2.waitKey(1)
cap.release()
cv2.destroyAllWindows()
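Note that cv2.CAP_GSTREAMER only works if OpenCV was built with GStreamer support, and omxh264dec and nvvidconv are NVIDIA Jetson-specific elements, so this exact pipeline string will need adapting on other platforms.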

Related

arduino programming: not enough memory message

I am new to Arduino programming (Arduino Pro Mini 3.3v version) and I have some code like the below. I am connecting a 9DOF, an OLED screen and a BLE breakout to the Arduino Pro Mini.
I have already been through some of the memory optimization tips, but I still have issues. Even with the following code, I only have 9 bytes left of dynamic memory. If I enable BTLEserial.begin(); it kills the memory. Any suggestions will be appreciated.
#include <Wire.h>
#include <SPI.h>
#include <SparkFunLSM9DS1.h>
#include "Adafruit_BLE_UART.h"
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#define OLED_RESET 4
Adafruit_SSD1306 display(OLED_RESET);
LSM9DS1 imu;
#define LSM9DS1_M 0x1E // Would be 0x1C if SDO_M is LOW
#define LSM9DS1_AG 0x6B // Would be 0x6A if SDO_AG is LOW
#define ADAFRUITBLE_REQ 10
#define ADAFRUITBLE_RDY 2
#define ADAFRUITBLE_RST 9
Adafruit_BLE_UART BTLEserial = Adafruit_BLE_UART(ADAFRUITBLE_REQ, ADAFRUITBLE_RDY, ADAFRUITBLE_RST);
void setup(void) {
Serial.begin(9600);
display.begin(SSD1306_SWITCHCAPVCC, 0x3D); // initialize with the I2C addr 0x3D (for the 128x64)
display.display();
delay(2000);
display.clearDisplay();
display.drawPixel(10, 10, WHITE);
display.display();
delay(2000);
display.clearDisplay();
imu.settings.device.commInterface = IMU_MODE_I2C;
imu.settings.device.mAddress = LSM9DS1_M;
imu.settings.device.agAddress = LSM9DS1_AG;
if (!imu.begin())
{
while (1)
;
}
// BTLEserial.begin(); - if i uncomment this code, i will get a not enough memory error.
}
aci_evt_opcode_t laststatus = ACI_EVT_DISCONNECTED;
void loop() {
displayAllDOF();
}
void displayAllDOF(){
display.setTextSize(1);
display.setTextColor(WHITE);
imu.readGyro();
display.setCursor(0,0);
display.print("G:");
display.print(imu.calcGyro(imu.gx));
display.print(", ");
display.print(imu.calcGyro(imu.gy));
display.print(", ");
display.print(imu.calcGyro(imu.gz));
display.println(" ");
imu.readAccel();
display.print("A:");
display.print(imu.calcAccel(imu.ax));
display.print(", ");
display.print(imu.calcAccel(imu.ay));
display.print(", ");
display.print(imu.calcAccel(imu.az));
display.println(" ");
imu.readMag();
display.print("M:");
display.print(imu.calcMag(imu.mx));
display.print(", ");
display.print(imu.calcMag(imu.my));
display.print(", ");
display.print(imu.calcMag(imu.mz));
display.println(" ");
display.display();
display.clearDisplay();
}
To start, you'll need to figure out where your RAM is going - how much does each library take? Do you really need to run them all at the same time? You know that you can run the display library and the IMU code in your current setup - can you implement something that only enables the IMU code, pulls data, then disables it? And the same with the display and BTLE code? That way each library is only consuming RAM when it's needed, and frees it once its operation is finished.
Update 1
An example of what I mentioned above. I do not know if all the libraries implement the .end() function. They may have a similar method you can use.
// Simple data storage for the .gx and .gy values
typedef struct {
float x, y;
} GyroData_t;
GyroData_t getImuData() {
GyroData_t data;
// Create the IMU class, gather data from it, and then destroy it
LSM9DS1 *imu = new LSM9DS1();
imu->begin();
imu->readGyro();
data.x = imu->gx;
data.y = imu->gy;
imu->end();
// This will reclaim the RAM that was used by the IMU - We no longer need it
delete imu;
return data;
}
void displayAllDOF() {
// Gather the IMU data
GyroData_t data = getImuData();
// Create the display object, and print the data we received
Adafruit_SSD1306 *display = new Adafruit_SSD1306(OLED_RESET);
display->print(...);
....
display->end();
// Reclaim the display RAM used
delete display;
// Do any bluetooth operations now
doBluetoothStuff();
}
void doBluetoothStuff() {
Adafruit_BLE_UART *BTLEserial = new Adafruit_BLE_UART(ADAFRUITBLE_REQ, ADAFRUITBLE_RDY, ADAFRUITBLE_RST);
BTLEserial->begin();
...
BTLEserial->end();
delete BTLEserial;
}
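One further saving worth trying (a standard Arduino trick, not something from the code above): string literals passed to print() are copied into RAM at startup, but wrapping them in the F() macro keeps them in flash, which adds up with the number of literals in displayAllDOF():
display.print(", "); // the literal ", " occupies RAM
display.print(F(", ")); // F() keeps the literal in program memory (flash)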

how to write a DICOM file that was read using VTK or ITK libraries?

I used vtkDICOMImageReader to read the DICOM file and vtkImageThreshold to threshold a CT image. Now I want to write it back to my hard disk before further processing.
I tried the vtkImageWriter class to write it back, but the resulting file does not open in 3D Slicer. I would be grateful if anyone could suggest a methodology for writing DICOM files.
I have included my code here; I am thresholding a DICOM image and viewing it. Then I would like to save the thresholded image as a DICOM file, but I could not succeed in doing that. Please help me.
Thanks in advance.
#include <itkImageToVTKImageFilter.h>
#include <vtkSmartPointer.h>
#include <vtkImageData.h>
#include <vtkImageThreshold.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkInteractorStyleImage.h>
#include <vtkRenderer.h>
#include <vtkImageMapper3D.h>
#include <vtkImageActor.h>
#include <vtkImageCast.h>
#include <vtkNIFTIImageWriter.h>
#include <vtkImageMandelbrotSource.h>
#include <vtkImageViewer2.h>
#include <vtkDICOMImageReader.h>
int main(int argc, char* argv[])
{
std::string folder = argv[1];
vtkSmartPointer<vtkDICOMImageReader> reader =
vtkSmartPointer<vtkDICOMImageReader>::New();
reader->SetFileName(folder.c_str());
reader->Update();
vtkSmartPointer<vtkImageViewer2> imageViewer =
vtkSmartPointer<vtkImageViewer2>::New();
imageViewer->SetInputConnection(reader->GetOutputPort());
// threshold the images
vtkSmartPointer<vtkImageThreshold> imageThreshold =
vtkSmartPointer<vtkImageThreshold>::New();
imageThreshold->SetInputConnection(reader->GetOutputPort());
// unsigned char lower = 127;
int upper = 511; // note: 511 does not fit in an unsigned char
imageThreshold->ThresholdByLower(upper);
imageThreshold->ReplaceInOn();
imageThreshold->SetInValue(0);
imageThreshold->ReplaceOutOn();
imageThreshold->SetOutValue(511);
imageThreshold->Update();
// Create actors
vtkSmartPointer<vtkImageActor> inputActor =
vtkSmartPointer<vtkImageActor>::New();
inputActor->GetMapper()->SetInputConnection(
reader->GetOutputPort());
vtkSmartPointer<vtkImageActor> thresholdedActor =
vtkSmartPointer<vtkImageActor>::New();
thresholdedActor->GetMapper()->SetInputConnection(
imageThreshold->GetOutputPort());
// There will be one render window
vtkSmartPointer<vtkRenderWindow> renderWindow =
vtkSmartPointer<vtkRenderWindow>::New();
renderWindow->SetSize(600, 300);
// And one interactor
vtkSmartPointer<vtkRenderWindowInteractor> interactor =
vtkSmartPointer<vtkRenderWindowInteractor>::New();
interactor->SetRenderWindow(renderWindow);
// Define viewport ranges
// (xmin, ymin, xmax, ymax)
double leftViewport[4] = {0.0, 0.0, 0.5, 1.0};
double rightViewport[4] = {0.5, 0.0, 1.0, 1.0};
// Setup both renderers
vtkSmartPointer<vtkRenderer> leftRenderer =
vtkSmartPointer<vtkRenderer>::New();
renderWindow->AddRenderer(leftRenderer);
leftRenderer->SetViewport(leftViewport);
leftRenderer->SetBackground(.6, .5, .4);
vtkSmartPointer<vtkRenderer> rightRenderer =
vtkSmartPointer<vtkRenderer>::New();
renderWindow->AddRenderer(rightRenderer);
rightRenderer->SetViewport(rightViewport);
rightRenderer->SetBackground(.4, .5, .6);
leftRenderer->AddActor(inputActor);
rightRenderer->AddActor(thresholdedActor);
leftRenderer->ResetCamera();
rightRenderer->ResetCamera();
renderWindow->Render();
interactor->Start();
vtkSmartPointer<vtkNIFTIImageWriter> writer =
vtkSmartPointer<vtkNIFTIImageWriter>::New();
writer->SetInputConnection(reader->GetOutputPort());
writer->SetFileName("output");
writer->Write();
// writing the thresholded image to the hard drive.
//this is the part i am not able to code. Please can somebody help me please?
return EXIT_SUCCESS;
}
First, do you want to save out the thresholded image, or just the image as read in? If you want the thresholded one, replace reader with imageThreshold in
writer->SetInputConnection(reader->GetOutputPort());
so that it becomes
writer->SetInputConnection(imageThreshold->GetOutputPort());
From looking at the VTK tests of NIFTI readers and writers the following options may be required...
writer->SetNIFTIHeader(reader->GetNIFTIHeader());
writer->SetQFac(reader->GetQFac());
writer->SetTimeDimension(reader->GetTimeDimension());
writer->SetTimeSpacing(reader->GetTimeSpacing());
writer->SetRescaleSlope(reader->GetRescaleSlope());
writer->SetRescaleIntercept(reader->GetRescaleIntercept());
writer->SetQFormMatrix(reader->GetQFormMatrix());
I would test with adding these options and then see what you get. (Note that these getters come from vtkNIFTIImageReader; vtkDICOMImageReader, as used in the question, does not provide all of them.)
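For the commented-out part of the question ("writing the thresholded image"), the minimal change is to point the writer at the threshold filter instead of the reader. A sketch (note that vtkNIFTIImageWriter writes NIfTI, not DICOM, and the .nii file name is my assumption):
vtkSmartPointer<vtkNIFTIImageWriter> writer =
vtkSmartPointer<vtkNIFTIImageWriter>::New();
writer->SetInputConnection(imageThreshold->GetOutputPort()); // thresholded data, not the raw reader output
writer->SetFileName("output.nii");
writer->Write();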
I think VTK is a little bit awkward for processing images; I only use it to display them. I wrote a piece of code for you: the code below reads, thresholds and writes a DICOM image, and it uses only ITK. I think it fills the bill.
#include "itkBinaryThresholdImageFilter.h"
#include "itkImageFileReader.h"
#include "itkGDCMImageIO.h"
#include "itkImageFileWriter.h"
#include "itkImage.h"
int main () {
typedef unsigned char InputPixelType ; //Pixel Type of Input image
typedef unsigned char OutputPixelType; //Pixel Type of Output image
const unsigned int InputDimension = 2; //Dimension of image
typedef itk::Image < InputPixelType, InputDimension > InputImageType; //Type definition of Input Image
typedef itk::Image < OutputPixelType, InputDimension > OutputImageType;//Type definition of Output Image
typedef itk::ImageFileReader< InputImageType > ReaderType;//Type definition of Reader (single file, matching the itkImageFileReader.h include)
typedef itk::BinaryThresholdImageFilter<InputImageType, OutputImageType > FilterType; // Type definition of Filter
typedef itk::ImageFileWriter<OutputImageType> ImageWriterType; //Definition of Writer of Ouput image
typedef itk::GDCMImageIO ImageIOType; //Type definition of Image IO for Dicom images
//Starts Reading Process
ReaderType::Pointer reader = ReaderType::New(); //Creates reader
ImageIOType::Pointer gdcmImageIO_input = ImageIOType::New(); //Creates ImageIO object for input image
reader->SetImageIO( gdcmImageIO_input ); // Sets image IO of reader
reader->SetFileName( "example_image.dcm" ); // Sets filename on the reader
//Exceptional handling
try
{
reader->UpdateLargestPossibleRegion();
}
catch (itk::ExceptionObject & e)
{
std::cerr << "exception in file reader " << std::endl;
std::cerr << e << std::endl;
return EXIT_FAILURE;
}
// Start filtering process
FilterType::Pointer filter = FilterType::New(); //Creates filter
filter->SetInput( reader->GetOutput() );
filter->SetOutsideValue( 0); // Set pixel value which are out of lower and upper threshold value
filter->SetInsideValue( 255 );// Set pixel value which are within lower and upper threshold value
filter->SetLowerThreshold( 25 ); // Lower threshold value 25
filter->SetUpperThreshold( 150 );// Upper threshold value 150
filter->Update();
//Starts Writing Process
ImageWriterType::Pointer imageWriter = ImageWriterType::New(); // Creates writer
ImageIOType::Pointer gdcmImageIO_output = ImageIOType::New(); // Creates Image IO object for output image
imageWriter->SetImageIO( gdcmImageIO_output ); // Set image IO as dicom
imageWriter->SetInput(filter->GetOutput() ); // Connects output of filter with to input of writer
imageWriter->SetFileName("example_image_thresholded.dcm"); // Sets output file name
//Exceptional handling
try
{
imageWriter->Update();
}
catch ( itk::ExceptionObject &exception )
{
std::cerr << "Exception in file writer ! " << std::endl;
std::cerr << exception << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
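One caveat about this example (my observation, not part of the original code): CT data is usually stored as 16-bit signed values, so for a real CT image you will likely want short rather than unsigned char for the pixel types, with threshold values adjusted accordingly.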

Gstreamer - appsrc push model

I'm developing a C application (under linux) that receives a raw h.264 stream and should visualize this stream using gstreamer APIs.
I am a newbie with GStreamer so maybe I am doing huge stupid mistakes or ignoring well-known stuff, sorry about that.
I got a raw h264 video (I knew it was the exact same format I need) and developed an application that plays it. It works correctly with appsrc in pull mode (when the need-data signal fires, I get new data from the file and perform a push-buffer).
Now I'm trying to do the exact same thing but in push mode, this is basically because I don't have a video, but a stream. So I have a method inside my code that will be called every time new data (in the form of an uint8_t buffer) arrives, and this is my video source.
I googled my problem and had a look at the documentation, but I found no simple code snippets for my use case, even if it seems a very simple one. I understood that I have to init the pipeline and appsrc and then only push-buffer when I have new data.
Well, I developed two methods: init_stream() for pipeline/appsrc initialization and populateApp(void *inBuf, size_t len) to send data when it is available.
It compiles and correctly runs, but no video:
struct _App
{
GstAppSrc *appsrc;
GstPipeline *pipeline;
GstElement *h264parse;
GstElement *mfw_vpudecoder;
GstElement *mfw_v4lsin;
GMainLoop *loop;
};
typedef struct _App App;
App s_app;
App *app = &s_app;
static gboolean bus_message (GstBus * bus, GstMessage * message, App * app)
{
GST_DEBUG ("got message %s", gst_message_type_get_name (GST_MESSAGE_TYPE (message)));
switch (GST_MESSAGE_TYPE (message)) {
case GST_MESSAGE_ERROR:
g_error ("received error");
g_main_loop_quit (app->loop);
break;
case GST_MESSAGE_EOS:
g_main_loop_quit (app->loop);
break;
default:
break;
}
return TRUE;
}
int init_stream()
{
GstBus *bus;
gst_init (NULL, NULL);
fprintf(stderr, "gst_init done\n");
/* create a mainloop to get messages */
app->loop = g_main_loop_new (NULL, TRUE);
fprintf(stderr, "app loop initialized\n");
app->pipeline = GST_PIPELINE (gst_parse_launch("appsrc name=mysource ! h264parse ! mfw_vpudecoder ! mfw_v4lsin", NULL));
app->appsrc = GST_APP_SRC (gst_bin_get_by_name (GST_BIN(app->pipeline), "mysource"));
gst_app_src_set_stream_type(app->appsrc, GST_APP_STREAM_TYPE_STREAM);
gst_app_src_set_emit_signals(app->appsrc, TRUE);
fprintf(stderr, "Pipeline and appsrc initialized\n");
/* Create Bus from pipeline */
bus = gst_pipeline_get_bus(app->pipeline);
fprintf(stderr, "bus created\n");
/* add watch for messages */
gst_bus_add_watch (bus, (GstBusFunc) bus_message, app);
gst_object_unref(bus);
fprintf(stderr, "bus_add_watch done\n");
GstCaps* video_caps = gst_caps_new_simple ("video/x-h264",
"width", G_TYPE_INT, 800,
"height", G_TYPE_INT, 480,
"framerate", GST_TYPE_FRACTION, 25,
1, NULL);
gst_app_src_set_caps(GST_APP_SRC(app->appsrc), video_caps);
/* go to playing and wait in a mainloop. */
gst_element_set_state ((GstElement*) app->pipeline, GST_STATE_PLAYING);
fprintf(stderr, "gst_element_set_state play\n");
/* this mainloop is stopped when we receive an error or EOS */
g_main_loop_run (app->loop);
fprintf(stderr, "g_main_loop_run called\n");
gst_element_set_state ((GstElement*) app->pipeline, GST_STATE_NULL);
fprintf(stderr, "gst_element_set_state GST_STATE_NULL\n");
/* free the file */
// g_mapped_file_unref (app->file);
/* note: the bus was already unreffed right after gst_bus_add_watch() */
g_main_loop_unref (app->loop);
return 0;
}
void populateApp(void *inBuf , size_t len) {
guint8 *_buffer = (guint8*) inBuf;
GstFlowReturn ret;
GstBuffer *buffer = gst_buffer_new();
GstCaps* video_caps = gst_caps_new_simple ("video/x-h264",
"width", G_TYPE_INT, 800,
"height", G_TYPE_INT, 480,
"framerate", GST_TYPE_FRACTION, 25,
1, NULL);
gst_buffer_set_caps(buffer, video_caps);
gst_caps_unref(video_caps); /* the buffer now holds its own reference to the caps */
GST_BUFFER_DATA (buffer) = _buffer;
GST_BUFFER_SIZE (buffer) = len;
// g_signal_emit_by_name (app->appsrc, "push-buffer", buffer, &ret);
ret = gst_app_src_push_buffer(GST_APP_SRC(app->appsrc), buffer);
/* gst_app_src_push_buffer() takes ownership of the buffer, so it must not be unreffed here */
}
As said, I am a total newbie at GStreamer so there's a lot of cut-and-paste code from the internet, but IMHO it should work.
Do you see any issues?
It's not clear how you are calling populateApp, but you need to call that repeatedly as you have data to push to your pipeline. This can be done in a separate thread from the one blocked by g_main_loop_run, or you can restructure your program to avoid using GMainLoop.
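A minimal sketch of the separate-thread option (g_thread_new requires GLib >= 2.32, use g_thread_create on older versions; the wiring around your receive path is hypothetical):
static gpointer gst_loop_thread (gpointer user_data)
{
App *the_app = (App *) user_data;
g_main_loop_run (the_app->loop); /* blocks here until the loop is quit */
return NULL;
}
/* in init_stream(), replace the direct g_main_loop_run() call with: */
GThread *thread = g_thread_new ("gst-loop", gst_loop_thread, app);
/* your network-receive code can then call populateApp(buf, len) for every
chunk it gets, from its own thread, while this thread keeps the pipeline alive */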

Changing DCT coefficients

I decided to use libjpeg as the main library for working with JPEG files.
I've read the libjpeg.txt file, and I was pleased that the library allows reading and writing DCT coefficients in a convenient way, since writing my own decoder would take a long time.
My work is related to lossless embedding. Currently I need to read the DCT coefficients from a file, modify some of them and write the changed coefficients back to the same file.
Well, I found the jpeg_write_coefficients() function. I naively thought that I could apply it to a decompression object (struct jpeg_decompress_struct), but it does not work: it requires a compression object.
I can't believe that such a powerful library is not able to do this.
I think that most likely I'm missing something, although I tried to be attentive.
Perhaps the writing of coefficients can be done in a more sophisticated way, but I don't know how.
I will be very glad if you propose your ideas.
You can use jpeg_write_coefficients to write your changed DCT coefficients.
The following information is available in libjpeg.txt:
To write the contents of a JPEG file as DCT coefficients, you must provide
the DCT coefficients stored in virtual block arrays. You can either pass
block arrays read from an input JPEG file by jpeg_read_coefficients(), or
allocate virtual arrays from the JPEG compression object and fill them
yourself. In either case, jpeg_write_coefficients() is substituted for
jpeg_start_compress() and jpeg_write_scanlines(). Thus the sequence is
* Create compression object
* Set all compression parameters as necessary
* Request virtual arrays if needed
* jpeg_write_coefficients()
* jpeg_finish_compress()
* Destroy or re-use compression object
jpeg_write_coefficients() is passed a pointer to an array of virtual block
array descriptors; the number of arrays is equal to cinfo.num_components.
The virtual arrays need only have been requested, not realized, before
jpeg_write_coefficients() is called. A side-effect of
jpeg_write_coefficients() is to realize any virtual arrays that have been
requested from the compression object's memory manager. Thus, when obtaining
the virtual arrays from the compression object, you should fill the arrays
after calling jpeg_write_coefficients(). The data is actually written out
when you call jpeg_finish_compress(); jpeg_write_coefficients() only writes
the file header.
When writing raw DCT coefficients, it is crucial that the JPEG quantization
tables and sampling factors match the way the data was encoded, or the
resulting file will be invalid. For transcoding from an existing JPEG file,
we recommend using jpeg_copy_critical_parameters(). This routine initializes
all the compression parameters to default values (like jpeg_set_defaults()),
then copies the critical information from a source decompression object.
The decompression object should have just been used to read the entire
JPEG input file --- that is, it should be awaiting jpeg_finish_decompress().
jpeg_write_coefficients() marks all tables stored in the compression object
as needing to be written to the output file (thus, it acts like
jpeg_start_compress(cinfo, TRUE)). This is for safety's sake, to avoid
emitting abbreviated JPEG files by accident. If you really want to emit an
abbreviated JPEG file, call jpeg_suppress_tables(), or set the tables'
individual sent_table flags, between calling jpeg_write_coefficients() and
jpeg_finish_compress().
So to change a single DCT coefficient, you can use the following simple code.
To access any DCT coefficient, you need to change four indices: ci, bx, by, bi.
In my code, I used blockptr_one[bi]++; to increment one DCT coefficient.
#include <stdio.h>
#include <jpeglib.h>
#include <stdlib.h>
#include <iostream>
#include <string>
int write_jpeg_file(std::string outname,jpeg_decompress_struct in_cinfo, jvirt_barray_ptr *coeffs_array ){
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
FILE * infile;
if ((infile = fopen(outname.c_str(), "wb")) == NULL) {
fprintf(stderr, "can't open %s\n", outname.c_str());
return 0;
}
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_compress(&cinfo);
jpeg_stdio_dest(&cinfo, infile);
j_compress_ptr cinfo_ptr = &cinfo;
jpeg_copy_critical_parameters((j_decompress_ptr)&in_cinfo,cinfo_ptr);
jpeg_write_coefficients(cinfo_ptr, coeffs_array);
jpeg_finish_compress( &cinfo );
jpeg_destroy_compress( &cinfo );
fclose( infile );
return 1;
}
int read_jpeg_file( std::string filename, std::string outname )
{
struct jpeg_decompress_struct cinfo;
struct jpeg_error_mgr jerr;
FILE * infile;
if ((infile = fopen(filename.c_str(), "rb")) == NULL) {
fprintf(stderr, "can't open %s\n", filename.c_str());
return 0;
}
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_decompress(&cinfo);
jpeg_stdio_src(&cinfo, infile);
(void) jpeg_read_header(&cinfo, TRUE);
jvirt_barray_ptr *coeffs_array = jpeg_read_coefficients(&cinfo);
//change one dct:
int ci = 0; // between 0 and the number of image components
int by = 0; // between 0 and compptr_one->height_in_blocks
int bx = 0; // between 0 and compptr_one->width_in_blocks
int bi = 0; // between 0 and 63 (the 8x8 block)
JBLOCKARRAY buffer_one;
JCOEFPTR blockptr_one;
jpeg_component_info* compptr_one;
compptr_one = cinfo.comp_info + ci;
buffer_one = (cinfo.mem->access_virt_barray)((j_common_ptr)&cinfo, coeffs_array[ci], by, (JDIMENSION)1, FALSE);
blockptr_one = buffer_one[0][bx];
blockptr_one[bi]++;
write_jpeg_file(outname, cinfo, coeffs_array);
jpeg_finish_decompress( &cinfo );
jpeg_destroy_decompress( &cinfo );
fclose( infile );
return 1;
}
int main()
{
std::string infilename = "you_image.jpg", outfilename = "out_image.jpg";
/* Try opening a jpeg*/
if( read_jpeg_file( infilename, outfilename ) > 0 )
{
std::cout << "It's Okay..." << std::endl;
}
else return -1;
return 0;
}
You should really take a look at transupp.h and the sources for jpegtran that come with the library.
Anyway, here is my dirty code with comments, assembled partially from jpegtran. It lets you manipulate DCT coefficients one by one.
#include "jpeglib.h" /* Common decls for cjpeg/djpeg applications */
#include "transupp.h" /* Support routines for jpegtran */
struct jpeg_decompress_struct srcinfo;
struct jpeg_compress_struct dstinfo;
struct jpeg_error_mgr jsrcerr, jdsterr;
static jpeg_transform_info transformoption; /* image transformation options */
transformoption.transform = JXFORM_NONE;
transformoption.trim = FALSE;
transformoption.force_grayscale = FALSE;
jvirt_barray_ptr * src_coef_arrays;
jvirt_barray_ptr * dst_coef_arrays;
/* Initialize the JPEG decompression object with default error handling. */
srcinfo.err = jpeg_std_error(&jsrcerr);
jpeg_create_decompress(&srcinfo);
/* Initialize the JPEG compression object with default error handling. */
dstinfo.err = jpeg_std_error(&jdsterr);
jpeg_create_compress(&dstinfo);
FILE *fp;
if((fp = fopen(filePath, "rb")) == NULL) {
//Throw an error
} else {
//Continue
}
/* Specify data source for decompression */
jpeg_stdio_src(&srcinfo, fp);
/* Enable saving of extra markers that we want to copy */
jcopy_markers_setup(&srcinfo, JCOPYOPT_ALL);
/* Read file header */
(void) jpeg_read_header(&srcinfo, TRUE);
jtransform_request_workspace(&srcinfo, &transformoption);
src_coef_arrays = jpeg_read_coefficients(&srcinfo);
jpeg_copy_critical_parameters(&srcinfo, &dstinfo);
/* Do your DCT shenanigans here on src_coef_arrays like this (I've moved it into a separate function): */
moveDCTAround(&srcinfo, &dstinfo, 0, src_coef_arrays);
/* ..when done with DCT, do this: */
dst_coef_arrays = jtransform_adjust_parameters(&srcinfo, &dstinfo, src_coef_arrays, &transformoption);
fclose(fp);
//And write everything back
fp = fopen(filePath, "wb");
/* Specify data destination for compression */
jpeg_stdio_dest(&dstinfo, fp);
/* Start compressor (note no image data is actually written here) */
jpeg_write_coefficients(&dstinfo, dst_coef_arrays);
/* Copy to the output file any extra markers that we want to preserve */
jcopy_markers_execute(&srcinfo, &dstinfo, JCOPYOPT_ALL);
jpeg_finish_compress(&dstinfo);
jpeg_destroy_compress(&dstinfo);
(void) jpeg_finish_decompress(&srcinfo);
jpeg_destroy_decompress(&srcinfo);
fclose(fp);
And the function itself:
void moveDCTAround (j_decompress_ptr srcinfo, j_compress_ptr dstinfo, JDIMENSION x_crop_offset, jvirt_barray_ptr *src_coef_arrays)
{
size_t block_row_size;
JBLOCKARRAY coef_buffers[MAX_COMPONENTS];
JBLOCKARRAY row_ptrs[MAX_COMPONENTS];
//Allocate DCT array buffers
for (JDIMENSION compnum=0; compnum<srcinfo->num_components; compnum++)
{
coef_buffers[compnum] = (dstinfo->mem->alloc_barray)((j_common_ptr) dstinfo, JPOOL_IMAGE, srcinfo->comp_info[compnum].width_in_blocks,
srcinfo->comp_info[compnum].height_in_blocks);
}
//For each component,
for (JDIMENSION compnum=0; compnum<srcinfo->num_components; compnum++)
{
block_row_size = (size_t) sizeof(JCOEF)*DCTSIZE2*srcinfo->comp_info[compnum].width_in_blocks;
//...iterate over rows,
for (JDIMENSION rownum=0; rownum<srcinfo->comp_info[compnum].height_in_blocks; rownum++)
{
row_ptrs[compnum] = ((dstinfo)->mem->access_virt_barray)((j_common_ptr) dstinfo, src_coef_arrays[compnum], rownum, (JDIMENSION) 1, FALSE);
//...and for each block in a row,
for (JDIMENSION blocknum=0; blocknum<srcinfo->comp_info[compnum].width_in_blocks; blocknum++)
//...iterate over DCT coefficients
for (JDIMENSION i=0; i<DCTSIZE2; i++)
{
//Manipulate your DCT coefficients here. For instance, the code here inverts the image.
coef_buffers[compnum][rownum][blocknum][i] = -row_ptrs[compnum][0][blocknum][i];
}
}
}
//Save the changes
//For each component,
for (JDIMENSION compnum=0; compnum<srcinfo->num_components; compnum++)
{
block_row_size = (size_t) sizeof(JCOEF)*DCTSIZE2 * srcinfo->comp_info[compnum].width_in_blocks;
//...iterate over rows
for (JDIMENSION rownum=0; rownum < srcinfo->comp_info[compnum].height_in_blocks; rownum++)
{
//Copy the whole rows
row_ptrs[compnum] = (dstinfo->mem->access_virt_barray)((j_common_ptr) dstinfo, src_coef_arrays[compnum], rownum, (JDIMENSION) 1, TRUE);
memcpy(row_ptrs[compnum][0][0], coef_buffers[compnum][rownum][0], block_row_size);
}
}

Opencv cannot access my webcam

I have trouble accessing my webcam using OpenCV 2.4.3.
My System:
Hp Probook 4530s - HP Fixed HD Webcam
Ubuntu 12.10
OpenCV 2.4.3
If I try to capture from my built-in camera I get ERROR: capture is NULL.
I'm using the sample code from http://opencv.willowgarage.com/wiki/CameraCapture.
Sample code is:
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
// A Simple Camera Capture Framework
int main() {
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );
if ( !capture ) {
fprintf( stderr, "ERROR: capture is NULL \n" );
getchar();
return -1;
}
// Create a window in which the captured images will be presented
cvNamedWindow( "mywindow", CV_WINDOW_AUTOSIZE );
// Show the image captured from the camera in the window and repeat
while ( 1 ) {
// Get one frame
IplImage* frame = cvQueryFrame( capture );
if ( !frame ) {
fprintf( stderr, "ERROR: frame is null...\n" );
getchar();
break;
}
cvShowImage( "mywindow", frame );
// Do not release the frame!
//If ESC key pressed, Key=0x10001B under OpenCV 0.9.7(linux version),
//remove higher bits using AND operator
if ( (cvWaitKey(10) & 255) == 27 ) break;
}
// Release the capture device housekeeping
cvReleaseCapture( &capture );
cvDestroyWindow( "mywindow" );
return 0;
}
I also tried xawtv -hwscan from the terminal. I get this output:
looking for available devices
port 129-144
type : Xvideo, image scaler
name : Intel(R) Textured Video
/dev/video0: OK
[ -device /dev/video0 ]
type : libv4l
name : HP HD Webcam [Fixed]
flags: capture
Then I can access my webcam by typing xawtv video0. I think I have no trouble with my webcam.
My trouble is with OpenCV.
I solved my problem a few minutes ago, and I decided to share my solution for people handling a similar error.
First, I had not installed some of the packages below (I don't remember which of them, so I paste all of them):
libjpeg62-dev
libtiff4-dev
zlib1g-dev
libjasper-dev
libavcodec-dev
libdc1394-22-dev
libgstreamer0.10-dev
libgstreamer-plugins-base0.10-dev
libavformat-dev
libv4l-dev
libswscale-dev
Then you should configure your cmake run with this command:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D USE_V4L=ON -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D USE_GSTREAMER=ON ..
Please notice the USE_V4L=ON flag.
I hope you solve your problem after reading my solution.
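A quick way to verify that the rebuild actually picked up V4L support (a generic check, not from the original answer) is to print OpenCV's build information and look for the V4L/V4L2 entry in the Video I/O section:
#include <opencv2/core/core.hpp>
#include <iostream>
int main()
{
std::cout << cv::getBuildInformation() << std::endl; // look for "V4L/V4L2: YES"
return 0;
}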
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;
using namespace std;
int main()
{
VideoCapture webcam;
webcam.open(0);
if(!webcam.isOpened())//**EDITED**
{
std::cout<<"CANNOT OPEN CAM"<<std::endl;
return -1;
}
Mat frame;
while(true)
{
webcam >> frame;
imshow("TEST",frame);
waitKey(20);
}
return 0;
}
Try the above code...
In some cases it comes down to the built-in camera's response time (as it was in my case). I discovered that the built-in webcam on my HP G62 only "wakes up" after the first OpenCV cap.read(frame) call. Therefore, to get a positive read from the camera (and hence no error later in the code), I made the call several times before proceeding:
if (!cap.read(frame))
{
if(!cap.read(frame))
{
if(!cap.read(frame))
{
if(!cap.read(frame))
{
printf("Cam read error");
}
}
}
}
For me the optimum was 4 read calls, which ensured my camera was awake and on before running through the main block of code. It is possible that a simple "waitKey" call and only two read calls would work, although I haven't tried this.
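The nested ifs can also be collapsed into a retry loop; a sketch of the same idea (four attempts, matching the optimum found above):
cv::VideoCapture cap(0);
cv::Mat frame;
bool ok = false;
for (int i = 0; i < 4 && !ok; ++i) // retry until the camera wakes up
ok = cap.read(frame);
if (!ok)
printf("Cam read error");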
