Problem with pinging on STM32F767 + FreeRTOS + lwIP

I have a FreeRTOS application on an STM32F767. TCP/IP with the lwIP middleware was running without problems. But after adding some new code in other threads, none of it related to lwIP, pinging became unreliable: the MCU no longer responds to every ping, and some ping replies arrive so late that a timeout occurs. The packet loss happens randomly, and when I comment out the added code, everything is fine and every ping reply arrives without any problem.
The strange part is that there is no relation between the added code and the lwIP thread. In other words, adding code in certain places causes the problem regardless of what that code does (I tested this).
I guessed there might be a stack or heap overflow, but increasing the FreeRTOS task stacks and the global stack did not solve the problem.
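For reference, here is roughly how I checked the task stacks (a minimal sketch; it assumes configCHECK_FOR_STACK_OVERFLOW = 2 and INCLUDE_uxTaskGetStackHighWaterMark = 1 in FreeRTOSConfig.h, and the function name is mine):

```c
/* Minimal stack-overflow check, assuming configCHECK_FOR_STACK_OVERFLOW = 2
 * and INCLUDE_uxTaskGetStackHighWaterMark = 1 in FreeRTOSConfig.h. */
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* The kernel calls this when it detects that a task overflowed its stack. */
void vApplicationStackOverflowHook(TaskHandle_t xTask, char *pcTaskName)
{
    (void)xTask; (void)pcTaskName;
    taskDISABLE_INTERRUPTS();
    for (;;) { /* trap here and inspect pcTaskName in the debugger */ }
}

/* Log how close a task has come to exhausting its stack; pass NULL to
 * check the calling task. A value near 0 means the stack is the problem
 * even if increasing one task's stack "didn't help" (the overflow may be
 * in a different task). */
void check_stack(TaskHandle_t task)
{
    UBaseType_t free_words = uxTaskGetStackHighWaterMark(task);
    printf("minimum free stack ever: %lu words\r\n",
           (unsigned long)free_words);
}
```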
I suspect a bug in the lwIP implementation, but I don't know how to track it down.
Please help me.

Related

Debug Packet Loss In TCP Communication in iOS/iPad Application

I have an iOS application that remotely connects to 3 sockets (on some hardware). Each socket has its own priority: one channel is used only for transferring messages between the iPad app & hardware, one for Tx/Rx of images, and another for Tx/Rx of videos. I had implemented all three sockets using the GCDAsyncSocket API & things worked fine while using MSGSocket/ImageSocket (or) MSGSocket/VideoSocket, but when I started using VideoSocket/ImageSocket/MSGSocket simultaneously, things went a little haywire: I lose packets of data (actually, a chunk of a file goes missing :-( ). I went through the API & found a bug in it, "Unable to complete Read Stream", which I assumed could be the cause of the problem. Hence, I switched to threads & implemented the same using the NSThreads/CFSocket API.
I changed only the implementation of the ImageSocket/VideoSocket code to use the NSThreads/CFSocket API (the implementation was shared via Dropbox). I'm just unable to understand where things are going wrong, whether at the iOS app end or at the server side. In my understanding, there should be no loss of packets in TCP communication.
Is there a way to debug this issue? I would also ask you to go through the code & let me know if anything is wrong (I know this may be too much to ask, but I need some assurance that the code implementation is correct). Any help resolving this issue will be highly appreciated.
EDIT 1: After @JoeMcMahon's comment, I referred to this Technical Q&A & captured a TCP dump (trace.pcap). I opened the dump with Wireshark & it does show me the bytes transferred between the ports of the hardware & the iPad.
Also in the terminal when I stopped the tcp dump capture I saw these messages:
12463 packets captured
36469 packets received by filter
0 packets dropped by kernel
Can someone point out the difference between packets captured & packets received by filter?
Note - The TCP dump attached is not for a failed scenario.
EDIT 1.1: Found the answer to the difference between packets captured & packets received by filter here.
TCP communication is not guaranteed to be reliable at the wire level: the basic SYN/ACK handshake and acknowledgement scheme can break down, which is why there are retransmission mechanisms and the like. Wireshark reports such problems in your packet capture session.
When using Wireshark/tcpdump, you generally want to provide a capture filter, since the amount of traffic going through the wire is overwhelming (ping, NTP, etc.); a basic filter lets you see only the packets that are relevant to you. The packets that are filtered out are not captured, hence the numerical difference.
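To make the two counters concrete: they come straight from the capture library. A minimal libpcap sketch in C (the device name "en0" and the filter string are placeholders):

```c
/* Minimal libpcap sketch: capture with a filter, then print the same
 * counters tcpdump shows on exit. Device and filter are placeholders. */
#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char err[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("en0", 65535, 1, 1000, err);
    if (!p) { fprintf(stderr, "%s\n", err); return 1; }

    struct bpf_program fp;
    /* Only packets matching this filter reach the capture. */
    if (pcap_compile(p, &fp, "tcp port 5000", 1, PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(p, &fp);

    for (int i = 0; i < 1000; i++) {          /* "packets captured" */
        struct pcap_pkthdr *h; const u_char *d;
        if (pcap_next_ex(p, &h, &d) < 0) break;
    }

    struct pcap_stat st;
    if (pcap_stats(p, &st) == 0)
        /* ps_recv = "received by filter", ps_drop = "dropped by kernel" */
        printf("%u received by filter, %u dropped by kernel\n",
               st.ps_recv, st.ps_drop);
    pcap_close(p);
    return 0;
}
```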
If a chunk of a file went missing, I doubt the issue is at the TCP level. Most likely something went wrong at a higher level. I would run a fixed-size file repeatedly through the channel until I can reliably reproduce the loss.
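One classic higher-level cause: TCP is a byte stream, not a message stream, so a single send() may arrive as several short recv() results, and a reader that assumes one read per message will appear to "lose" chunks under load. A minimal C sketch of the read loop every TCP receiver needs (the function name is mine):

```c
/* TCP delivers a byte stream: a single send() on one side may arrive as
 * several short recv() results on the other. Loop until the full
 * application-level message has been read. */
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

/* Read exactly `len` bytes from `fd`, handling short reads.
 * Returns 0 on success, -1 on error or peer close. */
int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;   /* error, or connection closed mid-message */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}
```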

Azure server got error "system lacked sufficient buffer space or because a queue was full"

I have hosted my ASP.NET MVC project on an Azure server, and I use Azure SQL. It works fine, but a number of times, while performing an operation, i.e. when firing calls from a controller, it gives an error like:
"An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full"
and after a few minutes it starts to work fine again.
Can anyone tell me why this error is thrown, or whether there is any solution for this?
This is most probably a client-side issue (on the ASP.NET app side). It can happen if you open a lot of simultaneous socket connections or do not dispose of them properly. Please double-check your application and make sure that:
You properly close all database connections (use using() or call Dispose()).
You properly close any other socket connections (if any).
If your code is fine, you can try the Transient Fault Handling Application Block. It won't solve the issue itself, but it could help your app work around it.
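As a language-neutral illustration of the "not disposing connections" failure mode, here is a C sketch (the address and port are placeholders); the equivalent in ASP.NET, a connection opened per request and never disposed, exhausts buffer space or ephemeral ports the same way:

```c
/* Each iteration opens a TCP connection and never closes it. After tens
 * of thousands of iterations the OS runs out of ephemeral ports/buffers
 * and connect() starts failing (WSAENOBUFS on Windows, EADDRNOTAVAIL or
 * ENOBUFS on POSIX). The server address here is a placeholder. */
#include <stdio.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(1433);                 /* e.g. a SQL endpoint */
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);

    for (;;) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("socket/connect");           /* eventually: no buffer space */
            return 1;
        }
        /* ... use the connection ... */
        close(fd);   /* <-- omitting this line reproduces the error */
    }
}
```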

TCP connection timeout is 20 or 21 seconds on *some* PCs when set to 500ms

I was given 10 new PCs, all (supposedly) with Windows 7 Pro freshly installed and nothing else done to them.
I have a program, coded in Delphi XE2, using Indy 10 components for the networking. I set the "connect timeout" and "read timeout" properties of my TIdTCPClient to 500 ms, set "reuse socket" to 'OS dependent' (I also tried a build with it set to No), and left "use Nagle" (whatever that is) set to True (I also tried False).
Here's the problem: when I run the same .EXE on these PCs and test the case where I pull the network cable, on some machines my debug trace shows the connect attempt / connect timeout happening in the same second or the next second (with a granularity of 1 second), but on others it is 20 or 21 seconds before I see the connection timeout.
It would seem that some of the PCs are not totally "fresh installs" as claimed, although I see no apps installed. Maybe someone installed something and then removed it, or maybe they tried to tweak performance.
Before I reinstall Windows on 10 PCs, can anyone suggest where to look? Does 20 (or 21) seconds ring a bell with regard to TCP client connect timeouts?
[update] I am attempting to connect directly to a specific IP address, so I am not sure if @Nikolai's suggestion to check DNS is relevant. Sorry for not mentioning this originally.
[update] The program does not attempt to keep the socket open. It connects, sends some data & disconnects, repeatedly, for each new piece of data.
Sadly, this is working as intended. The connect did already time out: Indy determined, within the 500 milliseconds you asked for, that the connect would fail. However, that does not guarantee the function will return within that time.
After the connect times out, Indy spins down the connection to release all of its resources, and it does this synchronously. This means you wind up waiting for the underlying TCP operation to fail, which typically takes 20 seconds.
The solution is to call connect in a thread. Believe it or not, this is what Indy already does to implement the timeout. However, when it times out waiting for that thread, it tries to shut down the connection in the main thread. You need to defer that to a worker thread as well.
As for why it fails immediately on some systems and in 20 seconds on others, it depends on the precise networking configuration. For example, if IPv6 is enabled, the stack may attempt an IPv6-to-IPv4 connection, and that may not report down even if the physical interface is down. Immediate detection that a connection is impossible is never guaranteed, and you shouldn't rely on it.
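To illustrate the pattern outside of Delphi, here is a C sketch of a connect-with-timeout using a non-blocking socket (roughly the idea Indy implements internally; the function name is mine):

```c
/* General shape of a connect-with-timeout, sketched with BSD sockets. */
#include <sys/socket.h>
#include <sys/select.h>
#include <fcntl.h>
#include <errno.h>

/* Returns 0 on success, -1 on failure or timeout. timeout_ms e.g. 500. */
int connect_timeout(int fd, const struct sockaddr *sa, socklen_t salen,
                    int timeout_ms)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, sa, salen) == 0)
        return 0;                        /* connected immediately */
    if (errno != EINPROGRESS)
        return -1;                       /* immediate failure */

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };

    if (select(fd + 1, NULL, &wfds, NULL, &tv) != 1)
        return -1;                       /* timed out after timeout_ms */

    int err = 0;
    socklen_t len = sizeof err;          /* did the connect succeed? */
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
    return err == 0 ? 0 : -1;
}
```

When this path times out, the caller gets control back right away; the close() of the half-open socket, the part that eats the 20 seconds in the failing case, can then be handed off to a worker thread.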
I've had the same problems with Indy in the past (while using Delphi 6, around 1998-2000). I switched to the IP*Works components. At that time they were an external component set, but as far as I know it is included in XE2. IP*Works is a bit hard to understand at the beginning, but their approach to the communication structure is quite different.
I think it would be worth giving it a try.

MPI: How many sockets?

I am working on an MPI application which makes threaded MPI calls between processes. Threads are added and removed as per the load requirements. Now I have a question to which I could not find an answer in the Open MPI forum.
If a set of MPI processes ("ranks") already has a connection, i.e. they are already making send/receive calls, and a new thread then comes up (in either process) which also makes send/receive calls between the same MPI peers, would MPI open up a new set of sockets?
I know that the details are implementation dependent, so there may not be a general answer. But is there a way to find out?
There are questions about the scalability of this technique, which was chosen for other reasons, so it would be great to get some stats on the number of new sockets per connection.
Does anyone know how to do this? For instance, how to query which socket a particular instance of MPI_Send is writing to?
I already tried adding --mca btl self,sm,tcp --mca btl_base_verbose 30 -v -report-pid -display-map -report-bindings -leave-session-attached
Thanks a lot.
To answer my own question, here is what I learnt from the brilliant folks at Open MPI:
On Jan 24, 2012, at 5:34 PM, devendra rai wrote:
I am trying to find out how many separate connections are opened by MPI as messages are sent. Basically, I have threaded-MPI calls to a bunch of different MPI processes (who, in turn have threaded MPI calls).
The point is, with every thread added, are new ports opened (even if the sender-receiver pairs already have a connection between them)?
In Open MPI: no. The underlying connections are independent of how many threads you have.
Is there any way to find out? I went through MPI APIs, and the closest thing I found was related to cartographic information. This is not sufficient, since this only tells me the logical connections (or does it)?
MPI does not have a user-level concept of a connection. You send a message, a miracle occurs, and the message is received on the other side. MPI doesn't say anything about how it got there (e.g., it may have even been routed through some other process).
Reading Open MPI FAQ, I thought adding "--mca btl self,sm,tcp --mca btl_base_verbose 30 -display-map" to mpirun would help. But I am not getting what I need. Basically, I want to know how many ports each process is accessing (reading as well as writing).
For Open MPI's TCP implementation, it's basically one TCP socket per peer (plus a few other utility fd's). But TCP sockets are only opened lazily, meaning that we won't open the socket until you actually send to a peer.
--
Jeff Squyres
Credits to Jeff Squyres.
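To check the "one lazily-opened TCP socket per peer" claim empirically, one option is to count open file descriptors around the first send. A Linux-only diagnostic sketch (the /proc trick and the two-rank pairing are just for illustration):

```c
/* Linux-only diagnostic: count this process's open descriptors before
 * and after the first MPI_Send, to observe Open MPI's lazy TCP socket
 * creation. Run with e.g.: mpirun -np 2 --mca btl self,tcp ./a.out */
#include <mpi.h>
#include <dirent.h>
#include <stdio.h>

static int count_fds(void)
{
    int n = 0;
    DIR *d = opendir("/proc/self/fd");
    if (!d) return -1;
    while (readdir(d)) n++;
    closedir(d);
    return n - 3;   /* ignore ".", "..", and opendir()'s own fd */
}

int main(int argc, char **argv)
{
    int rank, provided, x = 42;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("rank %d before first send: %d fds\n", rank, count_fds());
    if (rank == 0)
        MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* opens the socket */
    else
        MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d after first send:  %d fds\n", rank, count_fds());

    MPI_Finalize();
    return 0;
}
```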

Windows service using a lot of CPU (VB.NET)

Hi,
We have a device in the field which sends TCP packets to our server once a day. I have a Windows service which constantly listens for those packets. The code in the service is pretty much a carbon copy of the MSDN example (Asynchronous Server Socket Example), the only difference being that our implementation doesn't send anything back: it just receives, processes the data, and closes the socket. The service simply starts a thread which immediately runs the code linked above.
The problem is that when I go to Task Manager on the server where it runs, the service seems to be using all of the CPU (it says 99) all the time. I was notified of this by IT. But I don't understand what those CPU cycles are being used for; the thread just blocks on allDone.WaitOne(), doesn't it?
I also made a console application with the same code, and that works just fine, i.e. it uses CPU only while data is being processed. The task completes successfully each time in both cases, but the service implementation, from the looks of it, seems very inefficient. What could I be doing wrong here?
Thanks.
Use a profiler to find out where your CPU time is spent. That should have been your first thought; profilers are one of the main tools for programmers.
It will tell you pretty much exactly which part of the code is burning the CPU.
The code, by the way, looks terrible: like an example of how to use async sockets, not like a good architecture for a multi-connection server. Sorry to say, you may have to rewrite it.
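For contrast, sketched in C rather than VB.NET: a server that waits correctly sits blocked in the kernel at ~0% CPU while idle. If the profiler instead shows the wait returning immediately in a loop (for instance, an event that is never reset), that is where the 99% comes from. Port 9000 is a placeholder:

```c
/* A receive-only server whose accept loop blocks in the kernel, so an
 * idle service consumes no CPU. One connection is handled at a time,
 * which matches the "one packet a day" workload described above. */
#include <sys/types.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(9000);
    bind(ls, (struct sockaddr *)&a, sizeof a);
    listen(ls, 16);

    for (;;) {
        int c = accept(ls, NULL, NULL);   /* blocks: 0% CPU while idle */
        if (c < 0) continue;
        char buf[4096];
        ssize_t n;
        while ((n = recv(c, buf, sizeof buf, 0)) > 0) {
            /* ... process the received data ... */
        }
        close(c);
    }
}
```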
