SharpPcap missing packets - Wireshark

I'm using SharpPcap to collect IEC 61850-9-2LE Sampled Values over Ethernet.
IEC 61850-9-2LE Sampled Values traffic consists of several streams, each sending 4000 packets per second with an average packet size of 125 bytes.
Using SharpPcap I'm trying to capture 3 of those streams (3 x 4000 packets per second at 125 bytes each, i.e. roughly 12,000 packets per second or about 1.5 MB/s in total).
In the following code I set up the network interface card:
if (nicToUse != null)
{
    try
    {
        nicToUse.OnPacketArrival -= OnPackectArrivalLive;
        nicToUse.OnPacketArrival += OnPackectArrivalLive;
        try
        {
            if (nicToUse.Started)
                nicToUse.StopCapture();
            if (nicToUse.Opened)
                nicToUse.Close();
        }
        catch (Exception)
        {
            // no handling, just do it.
        }
        nicToUse.Open(OpenFlags.Promiscuous | OpenFlags.MaxResponsiveness, 10);
        var kernelBufferAssigned = false;
        uint kernelBufferSize = 200;
        while (!kernelBufferAssigned)
        {
            try
            {
                nicToUse.KernelBufferSize = kernelBufferSize * 1024 * 1024;
                kernelBufferAssigned = true;
            }
            catch (Exception)
            {
                kernelBufferSize--;
            }
        }
        nicToUse.Filter = "(ether[0:4] = 0x010CCD04)";
        watchdog.Enabled = true;
        counter = 0;
        nicToUse.StartCapture();
    }
    catch (Exception ex)
    {
        throw new Exception(Resources.SharpPCapPacketsProducer_Start_Error_while_starting_online_capture_, ex);
    }
}
This is the OnPacketArrival event handler:
private void OnPackectArrivalLive(object sender, CaptureEventArgs e)
{
    try
    {
        counter++;
        circularBuffer[circularBufferIndex] = e.Packet;
        circularBufferIndex++;
        if (circularBufferIndex > circularBufferSize - 1)
            circularBufferIndex = 0;
    }
    catch (Exception)
    {
    }
}
When the capture is over (the user stops it), the captured packets are decoded, and since they carry a sequential counter I've discovered that some samples are missing.
Connecting the same source to another PC running Wireshark, those samples are not missing.
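For reference, this is roughly the gap check I run on the decoded counters after the capture. It is only a sketch: the decodedSampleCounters sequence and the wrap at 4000 (one second of samples for a 50 Hz 9-2LE stream) are my own assumptions about the decoding stage, not part of the capture code above.
using System.Collections.Generic;

// Sketch only: counts gaps in a decoded smpCnt sequence, assuming the
// counter wraps at 4000 (50 Hz IEC 61850-9-2LE stream).
static int CountGaps(IEnumerable<int> decodedSampleCounters)
{
    int gaps = 0;
    int expected = -1;
    foreach (int smpCnt in decodedSampleCounters)
    {
        if (expected >= 0 && smpCnt != expected)
            gaps++;
        expected = (smpCnt + 1) % 4000;
    }
    return gaps;
}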
Any idea?

What version of SharpPcap are you using? There have been some pretty big performance improvements due to overhead reduction in the 3.x and 4.x series.
Your example seems to be wrapping the circular buffer around at the tail. What type is circularBuffer? How are you sure that you are processing the packets before your buffer has filled up?
Have you looked at this example, from the SharpPcap source distribution, that shows one technique for doing background packet processing?
QueueingPacketsForBackgroundProcessing/Main.cs
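As a rough illustration of that queueing approach (not the exact code from that example), here is a minimal C# sketch. It assumes the SharpPcap 4.x-era types CaptureEventArgs and RawCapture, and the class and queue names are my own; the idea is to keep the OnPacketArrival handler as cheap as possible and do all decoding on a separate thread.
using System.Collections.Concurrent;
using System.Threading;
using SharpPcap;

class BackgroundPacketProcessor
{
    // Bounded queue: enqueue is cheap, and a full queue is detectable
    // instead of silently overwriting older packets.
    private readonly BlockingCollection<RawCapture> queue =
        new BlockingCollection<RawCapture>(boundedCapacity: 100000);
    private Thread worker;

    public void Start()
    {
        worker = new Thread(ProcessLoop) { IsBackground = true };
        worker.Start();
    }

    // Keep the capture callback as short as possible: just enqueue and return.
    public void OnPacketArrival(object sender, CaptureEventArgs e)
    {
        queue.TryAdd(e.Packet);   // drops (and could count) rather than block when full
    }

    private void ProcessLoop()
    {
        foreach (var raw in queue.GetConsumingEnumerable())
        {
            // Decode the Sampled Values frame here, off the capture thread.
        }
    }

    public void Stop()
    {
        queue.CompleteAdding();   // let the worker drain the queue and exit
        worker?.Join();
    }
}
Compared with overwriting a circular buffer in the capture callback, a bounded queue at least makes any loss visible and countable.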

Related

Is it possible to send a LIN frame from CANoe without using the LDF file

I'm new to CANoe and CAPL, currently using the Vector CANoe 16 SP2 demo version. I'm able to send an 8-byte CAN message, as shown below, with a CAPL script and without a CAN database file.
/*#!Encoding:1252*/
includes
{
}

variables
{
  message 0x01 msg1;
  msTimer tm;
}

on start
{
  setTimer(tm, 10);
}

on timer tm
{
  Send_msg_1();
  setTimer(tm, 10);
}

void Send_msg_1()
{
  int i;
  msg1.msgChannel = 1;
  msg1.dlc = 8;
  for (i = 0; i < 8; i++)
  {
    msg1.byte(i) = 0x1;
  }
  output(msg1);
}
Now I want to send an 8-byte LIN frame with CAPL, without an LDF file. I've written the following code for it:
/*#!Encoding:1252*/
includes
{
}

variables
{
  linFrame 0x01 msg1;
  msTimer tm;
}

on start
{
  setTimer(tm, 10);
}

on timer tm
{
  msg1();
  setTimer(tm, 10);
}

void msg1()
{
  int i;
  msg1.msgChannel = 1;
  msg1.dlc = 8;
  for (i = 0; i < 8; i++)
  {
    msg1.byte(i) = 0x1;
  }
  msg1.rtr = 0;
  output(msg1);
  msg1.rtr = 1;
  output(msg1);
  write("Frame Transmitted");
}
But I'm not sure whether it is really being transmitted, because of how it shows up in the trace window (screenshot not included here).
If it's not correct, please let me know the right way to transmit a LIN frame in CANoe with CAPL scripting.
Thanks in advance :)

PIC32MZ2048ECH144 USART - junk characters transmitted

I am new to the MPLAB X Harmony framework and also to working with microcontrollers. I am working with a PIC32MZ2048ECH144. I wanted to transmit a simple string over USART and see it displayed in a RealTerm terminal (I have also tried HyperTerminal). Whatever string I send, I only see junk characters being displayed. When I searched for a solution to this junk-character problem, there were suggestions to check the baud rate. I have set the baud rate to 9600 in the MPLAB Harmony Configurator (Options -> Harmony Framework Configuration -> Drivers -> USART -> USART Driver Instance 0 -> Baud Rate -> 9600). So I used the following line in app.c to explicitly set the baud rate (PBCLK is 100 MHz), but no luck:
PLIB_USART_BaudRateSet(USART_ID_2, 100000000 ,9600);
The code for the app.c file:
/*******************************************************************************
 Start of File
 */

const char *string1 = "*** UART Interrupt-driven Application Example ***\r\n";
const char *string2 = "*** Type some characters and observe the LED turn ON ***\r\n";

APP_DATA appData =
{
};

APP_DRV_OBJECTS appDrvObject;

void APP_Initialize ( void )
{
    appData.state = USART_ENABLE;
    appData.InterruptFlag = false;
}

bool WriteString(void)
{
    if (*appData.stringPointer == '\0')
    {
        return true;
    }
    while (PLIB_USART_TransmitterIsEmpty(USART_ID_1))
    {
        PLIB_USART_TransmitterByteSend(USART_ID_1, *appData.stringPointer);
        appData.stringPointer++;
        if (*appData.stringPointer == '\0')
        {
            return true;
        }
    }
    return false;
}

bool PutCharacter(const char character)
{
    if (PLIB_USART_TransmitterIsEmpty(USART_ID_1))
    {
        PLIB_USART_TransmitterByteSend(USART_ID_1, character);
        return true;
    }
    else
        return false;
}

void APP_Tasks ( void )
{
    /* check the application state */
    switch ( appData.state )
    {
        case USART_ENABLE:
            /* Enable the UART module */
            PLIB_USART_BaudRateSet(USART_ID_1, 100000000, 9600);
            PLIB_USART_Enable(USART_ID_1);
            appData.stringPointer = string1;
            appData.state = USART_TRANSMIT_FIRST_STRING;
            break;
        case USART_TRANSMIT_FIRST_STRING:
            if (true == WriteString())
            {
                appData.state = USART_TRANSMIT_SECOND_STRING;
                appData.stringPointer = string2;
            }
            break;
        case USART_TRANSMIT_SECOND_STRING:
            if (true == WriteString())
            {
                appData.state = USART_RECEIVE_DONE;
            }
            break;
        case USART_RECEIVE_DONE:
            if (appData.InterruptFlag)
            {
                if (true == PutCharacter(appData.data))
                {
                    appData.InterruptFlag = false;
                }
            }
            break;
        default:
            while (1);
    }
}

/*******************************************************************************
 End of File
 */
I am sorry that I cannot attach an image of the output I receive in RealTerm, as I do not have enough reputation points.
I have no clue where else the problem could be that causes the baud-rate mismatch. Any hints or help would be greatly appreciated. Thanks in advance.
Please excuse any mistakes in the post.
You are correct that it is most likely the baud rate, but just to be sure: how is the USART hooked up to the computer? Do you have a level-translator chip, since the computer is expecting RS-232 voltage levels? As for the baud rate, check your clocking scheme and be aware that PBCLK is sometimes SYSCLK divided by 2. There is a great clocking schematic in the Harmony framework to double-check your clocking and CONFIG pragmas.

Improve Face Recognition

I am trying to develop a face-recognition app in Android. I am using the JavaCV FaceRecognizer, but so far I am getting very poor results. It recognizes images of the person it was trained on, but it also recognizes unknown images. For known faces it gives me a large distance value, most of the time 70-90, sometimes 90+, while unknown images also get 70-90.
So how can I improve the face-recognition performance? What techniques are there? What success rate can you normally get with this?
I have never worked with image processing. I would appreciate any guidelines.
Here is the code:
public class PersonRecognizer {

    public final static int MAXIMG = 100;
    FaceRecognizer faceRecognizer;
    String mPath;
    int count = 0;
    labels labelsFile;

    static final int WIDTH = 70;
    static final int HEIGHT = 70;
    private static final String TAG = "PersonRecognizer";
    private int mProb = 999;

    PersonRecognizer(String path)
    {
        faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(2, 8, 8, 8, 100);
        // path=Environment.getExternalStorageDirectory()+"/facerecog/faces/";
        mPath = path;
        labelsFile = new labels(mPath);
    }

    void changeRecognizer(int nRec)
    {
        switch (nRec) {
            case 0: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createLBPHFaceRecognizer(1, 8, 8, 8, 100);
                break;
            case 1: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createFisherFaceRecognizer();
                break;
            case 2: faceRecognizer = com.googlecode.javacv.cpp.opencv_contrib.createEigenFaceRecognizer();
                break;
        }
        train();
    }

    void add(Mat m, String description)
    {
        Bitmap bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(m, bmp);
        bmp = Bitmap.createScaledBitmap(bmp, WIDTH, HEIGHT, false);
        FileOutputStream f;
        try
        {
            f = new FileOutputStream(mPath + description + "-" + count + ".jpg", true);
            count++;
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, f);
            f.close();
        } catch (Exception e) {
            Log.e("error", e.getCause() + " " + e.getMessage());
            e.printStackTrace();
        }
    }

    public boolean train() {
        File root = new File(mPath);
        FilenameFilter pngFilter = new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.toLowerCase().endsWith(".jpg");
            };
        };
        File[] imageFiles = root.listFiles(pngFilter);
        MatVector images = new MatVector(imageFiles.length);
        int[] labels = new int[imageFiles.length];
        int counter = 0;
        int label;
        IplImage img = null;
        IplImage grayImg;
        int i1 = mPath.length();
        for (File image : imageFiles) {
            String p = image.getAbsolutePath();
            img = cvLoadImage(p);
            if (img == null)
                Log.e("Error", "Error cVLoadImage");
            Log.i("image", p);
            int i2 = p.lastIndexOf("-");
            int i3 = p.lastIndexOf(".");
            int icount = 0;
            try
            {
                icount = Integer.parseInt(p.substring(i2 + 1, i3));
            }
            catch (Exception ex)
            {
                ex.printStackTrace();
            }
            if (count < icount) count++;
            String description = p.substring(i1, i2);
            if (labelsFile.get(description) < 0)
                labelsFile.add(description, labelsFile.max() + 1);
            label = labelsFile.get(description);
            grayImg = IplImage.create(img.width(), img.height(), IPL_DEPTH_8U, 1);
            cvCvtColor(img, grayImg, CV_BGR2GRAY);
            images.put(counter, grayImg);
            labels[counter] = label;
            counter++;
        }
        if (counter > 0)
            if (labelsFile.max() > 1)
                faceRecognizer.train(images, labels);
        labelsFile.Save();
        return true;
    }

    public boolean canPredict()
    {
        if (labelsFile.max() > 1)
            return true;
        else
            return false;
    }

    public String predict(Mat m) {
        if (!canPredict())
            return "";
        int n[] = new int[1];
        double p[] = new double[1];
        // convert Mat to black and white
        /*Mat gray_m = new Mat();
        Imgproc.cvtColor(m, gray_m, Imgproc.COLOR_RGBA2GRAY);*/
        IplImage ipl = MatToIplImage(m, WIDTH, HEIGHT);
        faceRecognizer.predict(ipl, n, p);
        if (n[0] != -1)
        {
            mProb = (int) p[0];
            Log.v(TAG, "Distance = " + mProb + "");
            Log.v(TAG, "N = " + n[0]);
        }
        else
        {
            mProb = -1;
            Log.v(TAG, "Distance = " + mProb);
        }
        if (n[0] != -1)
        {
            return labelsFile.get(n[0]);
        }
        else
        {
            return "Unknown";
        }
    }

    IplImage MatToIplImage(Mat m, int width, int heigth)
    {
        Bitmap bmp;
        try
        {
            bmp = Bitmap.createBitmap(m.width(), m.height(), Bitmap.Config.RGB_565);
        }
        catch (OutOfMemoryError er)
        {
            bmp = Bitmap.createBitmap(m.width() / 2, m.height() / 2, Bitmap.Config.RGB_565);
            er.printStackTrace();
        }
        Utils.matToBitmap(m, bmp);
        return BitmapToIplImage(bmp, width, heigth);
    }

    IplImage BitmapToIplImage(Bitmap bmp, int width, int height) {
        if ((width != -1) || (height != -1)) {
            Bitmap bmp2 = Bitmap.createScaledBitmap(bmp, width, height, false);
            bmp = bmp2;
        }
        IplImage image = IplImage.create(bmp.getWidth(), bmp.getHeight(),
                IPL_DEPTH_8U, 4);
        bmp.copyPixelsToBuffer(image.getByteBuffer());
        IplImage grayImg = IplImage.create(image.width(), image.height(),
                IPL_DEPTH_8U, 1);
        cvCvtColor(image, grayImg, opencv_imgproc.CV_BGR2GRAY);
        return grayImg;
    }

    protected void SaveBmp(Bitmap bmp, String path)
    {
        FileOutputStream file;
        try
        {
            file = new FileOutputStream(path, true);
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, file);
            file.close();
        }
        catch (Exception e) {
            // TODO Auto-generated catch block
            Log.e("", e.getMessage() + e.getCause());
            e.printStackTrace();
        }
    }

    public void load() {
        train();
    }

    public int getProb() {
        // TODO Auto-generated method stub
        return mProb;
    }
}
I have faced similar challenges recently; here are the things which helped me get better results:
Crop the faces from the images - this removes unnecessary pixels at inference time.
Resize the cropped face images - this affects face-landmark detection; try different scales on a test set to see what works best. It also affects inference time: the smaller the image, the faster the inference.
Improve the brightness of the face images - I found this really helpful; landmark detection in darker images was not very good, mainly because the model was pre-trained on mostly white faces. Understanding the training data helps when dealing with bias.
Convert to grayscale - I have seen this suggested in many forums, where it is said to help find edges efficiently, and processing time is lower than for colour images (3 RGB channels); however, it did not help much in my case.
Capture (register) as many images as possible per person, at different angles, in different lighting and with other variations - this really helps, since recognition compares against the encodings of the stored images.
Try 1-to-1 comparison for face verification - for example, in my system I capture 10 pictures per person, and at verification time I compare against those 10 pictures instead of against the encodings of every person stored in the system. This can still produce false positives, and the use cases for this setup are limited; I use it for face authentication, comparing the new face against the existing faces registered under the same mobile number.
My understanding as of today: face recognition works well but is not 100% accurate; we have to understand the model architecture, the training data and our requirements, and deploy accordingly to get a better outcome. Here are some points which helped me improve the overall system:
Implement a fallback method - give the user an option when the system fails to recognize them correctly; for example, if face authentication fails for some reason, show a PIN entry option.
In critical systems, add periodic human intervention to confirm the system's result - for example, if a user is rejected based on the FR result, have a human agent verify the failure and allow the user if appropriate.
Implement multiple authentication factors - deploy face recognition in addition to an existing system; for example, after the user logs in with credentials, verify that it is the intended person using face recognition.
Design your user interface so that, at verification time, it guides how the user should behave (eyes open, mouth closed, etc.) without hurting the user experience.
Provide clear instructions to users when they interact with the system - for example, let them know an FR system is in use and that they need to show their face in good lighting conditions.

Image Processing with Kinect and AForge

I'm working on a project where I want to track a die with the Microsoft Kinect, using the AForge.NET library.
The project itself contains only the fundamentals, such as initializing the Kinect, obtaining a color frame and applying one color filter, but the problem already occurs there.
So here is the main part of the program:
void ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            colorFrameManager.Update(colorFrame);
            BitmapSource thresholdedImage =
                diceDetector.GetThresholdedImage(colorFrameManager.Bitmap);
            if (thresholdedImage != null)
            {
                Display.Source = thresholdedImage;
            }
        }
    }
}
The Update method of the colorFrameManager object looks like this:
public void Update(ColorImageFrame colorFrame)
{
    byte[] colorData = new byte[colorFrame.PixelDataLength];
    colorFrame.CopyPixelDataTo(colorData);
    if (Bitmap == null)
    {
        Bitmap = new WriteableBitmap(colorFrame.Width, colorFrame.Height,
            96, 96, PixelFormats.Bgr32, null);
    }
    int stride = Bitmap.PixelWidth * Bitmap.Format.BitsPerPixel / 8;
    imageRect.X = 0;
    imageRect.Y = 0;
    imageRect.Width = colorFrame.Width;
    imageRect.Height = colorFrame.Height;
    Bitmap.WritePixels(imageRect, colorData, stride, 0);
}
And the GetThresholdedImage method looks like this:
public BitmapSource GetThresholdedImage(WriteableBitmap colorImage)
{
    BitmapSource thresholdedImage = null;
    if (colorImage != null)
    {
        try
        {
            Bitmap bitmap = BitmapConverter.ToBitmap(colorImage);
            HSLFiltering filter = new HSLFiltering();
            filter.Hue = new IntRange(335, 0);
            filter.Saturation = new Range(0.6f, 1.0f);
            filter.Luminance = new Range(0.1f, 1.0f);
            filter.ApplyInPlace(bitmap);
            thresholdedImage = BitmapConverter.ToBitmapSource(bitmap);
        }
        catch (Exception ex)
        {
            System.Console.WriteLine(ex.Message);
        }
    }
    return thresholdedImage;
}
Now, the program slows down a lot or stops responding when this line is executed:
filter.ApplyInPlace(bitmap);
I have already read this thread (C# image processing on Kinect video using AForge) and I tried Emgu CV, but I couldn't get it to work because of inner exceptions, and since the thread starter hasn't been online for four months, my question asking to have a look at his working code wasn't answered.
Firstly, I'm interested in how the slow execution can be caused by this line:
filter.ApplyInPlace(bitmap);
Is this image processing really so complex, or could this be a problem with my environment?
Secondly, I would like to ask whether skipping frames is a good solution, or whether it is better to use polling and open frames only every 500 milliseconds, for instance.
Thank you very much!
The HSL filter should not slow down the computation; it is not a complex filter.
I'm using it on 320x240 images at 30 fps without problems.
The problem may be the resolution of the processed image or a frame rate that is too high.
If the resolution of the image is high, I suggest resizing it before applying any filter (see the sketch below).
And I think a frame rate of 20 fps (maybe less) is enough for tracking a die.
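To illustrate the resize-before-filter suggestion, here is a minimal, untested C# sketch using AForge's ResizeBilinear before the same HSLFiltering settings as in the question. The ThresholdDownscaled name and the 320x240 target size are just examples, and the Clone to 24 bpp is an assumption to keep both filters happy with the pixel format of the Kinect frame.
using System.Drawing;
using System.Drawing.Imaging;
using AForge;
using AForge.Imaging.Filters;

static Bitmap ThresholdDownscaled(Bitmap source)
{
    // Convert to 24 bpp RGB, which both AForge filters accept
    // (the bitmap converted from the Kinect's Bgr32 frame may be 32 bpp).
    Bitmap rgb = AForge.Imaging.Image.Clone(source, PixelFormat.Format24bppRgb);

    // Downscale first so the HSL filter has far fewer pixels to touch;
    // 320x240 is only an example target size.
    Bitmap small = new ResizeBilinear(320, 240).Apply(rgb);

    // Same HSL thresholding as in the question, applied to the smaller image.
    var filter = new HSLFiltering
    {
        Hue = new IntRange(335, 0),
        Saturation = new Range(0.6f, 1.0f),
        Luminance = new Range(0.1f, 1.0f)
    };
    filter.ApplyInPlace(small);
    return small;
}
Compared with skipping frames, downscaling keeps every frame but makes each one much cheaper to process; combining both (for example, processing at most 15-20 fps) is also reasonable for tracking a die.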

How to get traps with the Indy TIdSNMP component

I'm using C++Builder XE with Indy 10.5.7 and I'm trying to receive traps from another SNMP agent.
I have no information describing how to write the program to receive traps.
Below you can find the snippet of code which I'm trying to use now.
The ReceiveTrap() method always returns 0, which means no data was received.
I tested the PC configuration with another program I made several years ago using a different API, and that one receives the trap, so I don't think it is a configuration problem.
Do you have any suggestions about what I'm doing wrong in the routine below?
void __fastcall TForm1::LabelReceiveTrapClick(TObject * Sender)
{
    static bool status = false;
    int ists;
    String Fun = "[SimpleReceiveTrap] ";
    TSNMPInfo * infoSnmp = 0;
    try
    {
        status = !status;
        if (status)
        {
            std::auto_ptr< TIdSNMP > clientSnmp(new TIdSNMP(NULL));
            clientSnmp->Community = "public";
            clientSnmp->ReceiveTimeout = 1000;
            clientSnmp->Binding->Port = 162;
            while (status)
            {
                Application->ProcessMessages();
                ists = clientSnmp->ReceiveTrap();
                Mylog(L"%s ReceiveTrap status = [%d]", Fun.c_str(), ists);
                if (ists > 0)
                {
                    infoSnmp = clientSnmp->Trap;
                }
            }
        }
    }
    catch (Exception & ex)
    {
        Mylog(L"%s ERROR: [%s]", Fun.c_str(), ex.Message.c_str());
    }
}
That is not the correct way to set the listening Port for receiving traps. Reading the Binding property allocates and binds a socket to a local IP/Port using the TIdSNMP::BoundIP and TIdSNMP::BoundPort properties. You can't change that socket's local Port after it has already been bound, so your assignment of the Binding->Port property is effectively a no-op.
For that matter, you are trying to manipulate the wrong socket anyway. The Binding socket is used for sending queries to the remote SNMP system. TIdSNMP uses a separate socket for receiving traps. TIdSNMP has a separate TrapPort property for specifying the listening Port of that socket. When the Binding is accessed, the trap socket is allocated and bound to Binding->IP and TIdSNMP::TrapPort. The TrapPort property defaults to 162.
std::auto_ptr< TIdSNMP >clientSnmp(new TIdSNMP(NULL));
clientSnmp->Community = "public";
clientSnmp->ReceiveTimeout = 1000;
clientSnmp->TrapPort = 162; // <--
...
ists = clientSnmp->ReceiveTrap();
Looking at Indy's changelog, there have been some trap-related changes to the listening socket since 10.5.7 was released, so you may need to upgrade to a newer Indy version to get bug fixes. Or you could download the latest version and then just add IdSNMP.pas to your project directly, at least.
Using only the Indy component I can't read SNMP v2c traps.
But I found a solution using TWSocket and TSNMPInfo which seems to work well.
Below is the code I used.
To get the data I use a TWSocket from the FPiette component suite:
void __fastcall TForm1::LabelStartServerTracSnmpClick(TObject * Sender)
{
    String Fun = "[LabelStartServerTracSnmp] ";
    try
    {
        if (WSocket1->State == wsClosed)
        {
            WSocket1->Proto = "udp";
            WSocket1->Addr = "0.0.0.0";
            WSocket1->Port = 162;
            WSocket1->Listen();
        }
        else
        {
            WSocket1->Close();
        }
    }
    catch (Exception & ex)
    {
        Mylog(L"%s ERROR: [%s]", Fun.c_str(), ex.Message.c_str());
    }
}
To analyze the received data I use Indy:
void __fastcall TForm1::WSocket1DataAvailable(TObject * Sender, WORD ErrCode)
{
    char buffer[1024];
    int len, cnt, srcLen;
    TSockAddrIn srcSocket;
    String rcvmsg, remHost, s1, s2, Fun = "[WSocket1DataAvailable] ";
    TIdSNMP * clientSnmp = NULL;
    TSNMPInfo * infoSnmp = NULL;
    try
    {
        srcLen = sizeof(srcSocket);
        len = WSocket1->ReceiveFrom(buffer, sizeof(buffer), srcSocket, srcLen);
        if (len >= 0)
        {
            buffer[len] = 0;
            rcvmsg = String(buffer, len);
            __try
            {
                clientSnmp = new TIdSNMP(NULL);
                infoSnmp = new TSNMPInfo(clientSnmp);
                infoSnmp->DecodeBuf(rcvmsg);
                cnt = infoSnmp->ValueCount;
                if (cnt > 0)
                {
                    // ---------------------------------------------------
                    for (int idx = 0; idx < cnt; ++idx)
                    {
                        s1 = infoSnmp->ValueOID[idx];
                        s2 = infoSnmp->Value[idx];
                        Mylog(L"%s Trap : [%s] => [%s]", Fun.c_str(), s1.c_str(), s2.c_str());
                    }
                }
            }
            __finally
            {
                if (infoSnmp)
                {
                    delete infoSnmp;
                    infoSnmp = 0;
                }
                if (clientSnmp)
                {
                    delete clientSnmp;
                    clientSnmp = 0;
                }
            }
        }
    }
    catch (Exception & ex)
    {
        Mylog(L"%s ERROR: [%s]", Fun.c_str(), ex.Message.c_str());
    }
}
