Wait for some milliseconds after a statement executes in Objective-C (iOS)

I have a BLE value set, after which I need to wait 6.25 ms for the other device to write into its buffer.
So I have been using usleep(6250). As I understand it, usleep takes its value in microseconds, so I am treating 6250 microseconds as 6.25 ms. Is this the right API to use? There are various posts saying usleep should never be used on iOS. I am also not able to observe the wait time with a breakpoint in the debugger; I think it is too short to be visible the way sleep(2) is. Please confirm whether this is the right API and whether I am passing the right value; if not, please suggest an alternative.

In general, you should never sleep a thread. Sleeping blocks the thread and wastes system resources.
Instead, use dispatch_after() or a similar API.
Also, do you really need to wait at all? Does the device send some kind of acknowledgement that the write was successful? That is, is there some signal from the device that you can react to, so that you know the write happened?
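For example, here is a minimal sketch of the dispatch_after approach; writeNextValue is a hypothetical stand-in for whatever must run after the 6.25 ms delay:

// A minimal sketch: schedule the follow-up 6.25 ms later instead of
// blocking the thread with usleep(6250).
// `writeNextValue` is a hypothetical method name, not a real API.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6.25 * NSEC_PER_MSEC)),
               dispatch_get_main_queue(), ^{
    [self writeNextValue];
});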

Related

gRPC iOS stream: send only when GRXWriter.state is started?

I'm using gRPC on iOS with bidirectional streams.
For the stream that I write to, I subclassed GRXWriter and I'm writing to it from a background thread.
I want to be as quick as possible. However, I see that GRXWriter's state switches between started and paused, and I sometimes get an exception when I write to it during the paused state. I found that before writing, I have to wait for GRXWriter.state to become started. Is this really a requirement? Is GRXWriter only allowed to write when its state is started? It switches between started and paused very often, and this feels like it may be slowing me down.
Another issue with this state check is that it makes my code ugly. Is there any other way to use bidirectional streams more cleanly? In C# gRPC, I just get a stream that I write to freely.
Edit: I guess the reason I'm asking is this: in the thread that writes to the GRXWriter, I have a while loop that keeps checking whether the state is started and does nothing if it is not. Is there a better way to do this than polling the state?
The GRXWriter pauses because the gRPC Core only accepts one write operation pending at a time. The next one has to wait until the first one completes. So the GRPCCall instance will block the writer until the previous write is completed, by modifying its state!
As for the exception, I am not sure why you are seeing it. GRXWriter is more of an abstract class, and it seems you made your own implementation by inheriting from it. If you really want to do that, it may help to refer to GRXBufferedPipe, which is an internal implementation. In particular, if you want to avoid waiting in a loop before writing, writing again from the setter of GRXWriter's state is a good option.
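For illustration, a minimal sketch of that idea, assuming a GRXWriter subclass that queues outgoing messages and backs the state property with its own ivar; pendingMessages, writeable, and flushPendingMessages are hypothetical names, not part of the library:

// A minimal sketch, not the library's own implementation: react to the
// state change instead of polling. `pendingMessages` (an NSMutableArray)
// and `writeable` are hypothetical ivars of the GRXWriter subclass.
- (void)setState:(GRXWriterState)state {
    _state = state;
    if (state == GRXWriterStateStarted) {
        [self flushPendingMessages];  // the call unpaused us; resume writing
    }
}

- (void)flushPendingMessages {
    while (self.state == GRXWriterStateStarted && self.pendingMessages.count > 0) {
        id message = self.pendingMessages.firstObject;
        [self.pendingMessages removeObjectAtIndex:0];
        // Each write may flip the state back to Paused until it completes.
        [self.writeable writeValue:message];
    }
}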

How well do Erlang timers scale?

I have a timer requirement in my web server project. Some effects caused by client operations at the server need to be reset some time after they occur. To do this, I intend to use the erlang:start_timer/3 function to send a reset message to a process that does the resetting for each effect. This is fine with only a few client operations coming in. The question is: do Erlang timers scale well as the number of concurrent effects awaiting reset increases?
Don't guess and don't ask; try it and measure. Nobody knows your use case and requirements better than you do. Is it for profit? Then you are paid to find out. Is it a hobby? Then get used to doing this: measuring is an integral part of the job.
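For what it's worth, here is a minimal measurement sketch in that spirit, assuming the reset work itself is negligible; the module name timer_bench is made up. It starts N timers at once and times how long it takes for all of their timeout messages to arrive:

%% A minimal sketch, not a rigorous benchmark: start N timers and
%% measure how long until every {timeout, Ref, Msg} message is received.
-module(timer_bench).
-export([run/1]).

run(N) ->
    Self = self(),
    {Micros, ok} = timer:tc(fun() ->
        %% erlang:start_timer/3 sends {timeout, TimerRef, Msg} after 100 ms.
        [erlang:start_timer(100, Self, {reset, I}) || I <- lists:seq(1, N)],
        collect(N)
    end),
    io:format("~p timers fired and handled in ~p ms~n", [N, Micros div 1000]).

collect(0) -> ok;
collect(N) ->
    receive
        {timeout, _Ref, {reset, _}} -> collect(N - 1)
    end.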

How to run an operation and stop it if it doesn't complete in 6 seconds?

I am trying to receive information from a telnet connection in Lua using LuaSocket. I have all of that up and running, except that if I receive anything less than the maximum number of bytes, it takes 5 seconds, and if I receive anything more than the number of bytes on the screen, it takes upwards of half an hour.
My current idea for a solution is to try receiving, for instance, 750 bytes; if that doesn't complete within 6-7 seconds, try 700 bytes, then 650, and so on until the receive returns quickly. I need to parse the information and find two specific phrases, so if it's possible to do that inside my telnet connection and return just those instead of the entire screen, that would work as well. I don't need ALL of it, but I need as much of the received information as possible to raise the chances that my information is in that block, which is why I'm only decrementing by 50 in my example.
I can't find any function that lets you start reading (call a function) and then abandon it after a certain time interval. If anybody knows how to do this, or has any other solution to my problem, please let me know! :) Thanks!
Here is what I need repeated:
info = conn:receive(x)
with x decrementing each time it takes longer than 6 seconds to complete.
The solution you are proposing looks a bit strange, as there are more straightforward ways to deal with asynchronous communication. First, you can use settimeout to limit the amount of time that send and receive calls will wait for results (be careful, as receive may return partial results in this case). A second option is to use select, which allows you to check whether a socket has something to read or write before issuing a blocking call.
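For example, a minimal sketch of the settimeout approach, assuming conn is the established LuaSocket connection from the question (the 750-byte size is just the example figure from above):

conn:settimeout(6)                            -- give up after 6 seconds
local data, err, partial = conn:receive(750)  -- may return early on timeout
if data == nil and err == "timeout" then
    data = partial                            -- keep whatever arrived in time
end

With the timeout in place, the decrementing loop is unnecessary: a single oversized receive returns whatever was available within 6 seconds as the third, partial result.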

How to get the CPU time of a Delphi program?

Problem I'm trying to solve: My program uses System.Win.ScktComp.TServerSocket to communicate with another local process via Ethernet. Between receiving a packet from the local process and sending a response, 100 ms elapse, which is longer than it should take. I'm trying to step through my program with the debugger to see where that 100 ms is being spent.
The problem is that if I get the current time while I'm in the debugger, it will obviously include the time spent paused in the debugger. Another problem is that the relevant part of my app is TTimer- and event-driven, so when a routine returns you're not sure which routine will be called next.
My attempt: I can forgo the debugger and print the current time everywhere, such as in all the OnTimer procedures and other event handlers.
A much better solution: step through with the debugger, getting the CPU time (which isn't affected by time spent paused in the debugger) here and there to pinpoint where that 100 ms is being lost.
I don't believe that you are tackling your problem the correct way, and I have made that point in the comments. Leaving that aside, the function you are asking for is GetProcessTimes.
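For illustration, a minimal Delphi sketch built on GetProcessTimes; the helper name CurrentProcessCPUTimeMs is made up, and Winapi.Windows must be in the uses clause:

// A minimal sketch, not a drop-in unit. Kernel + user time is CPU time
// consumed by this process; it does not advance while the process sits
// paused at a debugger breakpoint.
function CurrentProcessCPUTimeMs: Int64;
var
  Creation, Finish, Kernel, User: TFileTime;
begin
  Result := 0;
  if GetProcessTimes(GetCurrentProcess, Creation, Finish, Kernel, User) then
    Result := ((Int64(Kernel.dwHighDateTime) shl 32 or Kernel.dwLowDateTime) +
               (Int64(User.dwHighDateTime) shl 32 or User.dwLowDateTime)) div 10000;
  // FILETIME ticks are 100 ns, so div 10000 yields milliseconds.
end;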
I'm trying to ... see where that 100ms is being spent.
A debugger will not be able to tell you that very easily. You need to use a profiler instead, like AQTime or similar, and let it clock your code in real time and report the results, such as how much time was spent in specific functions and class methods.

NSURLConnection getting limited to a single connection at a time?

OK, let's rephrase this whole question, shall we?
Is there any way to tell if iOS is holding onto an NSURLConnection after it has finished and returned its data?
I've got two NSURLConnections that I instantiate and call into a server with. The first initiates the connection with the server and then goes into a COMET-style long-polling wait while another user interacts with the request. The second goes into the server and triggers a cancel mechanism, which safely ends the first request and causes both to return successfully with a "Cancelled by you" message.
In the happy-path case the Cancel button is never clicked, but it's possible to click it and exit the current action.
This whole scenario works GREAT once. And then it never works again (until the app is reset).
It's as though one of the connections is never released the first time through, and from then on we are limited to a single connection because one of them is locked.
BTW, I've tried NSURLConnection, AFNetworking, MKNetworkKit, ASIHTTPRequest; no luck whatsoever with any of the frameworks. NSURLConnection should do what I want. It's just ... not letting go of one of my connections.
I suspect the cancellation request in Step 2 is leaving the HTTP connection open.
I don't know exactly how the NS* classes work with respect to the HTTP/1.1 recommendation of at most two simultaneous connections, but let's assume they're enforcing at most two connections. Let's suppose the triggering code in Instance A (steps 1 and 3 of your example) cleans up after itself, but the cancellation code in Instance B (steps 2 and 4) leaves the connection open. That might explain what you are observing.
If I were you, I'd compare the code that runs in step 1 against the code that runs in step 2. I bet there's a difference between them in terms of the way they clean up after themselves.
If I'm not wrong, iOS/Mac holds on to an NSURLConnection for as long as the "Keep-Alive" header dictates.
But as an iOS developer you shouldn't need to worry about that. Is there any reason why you want to know?
So, unfortunately, with no real solution to this issue found in all my testing, I've had to implement simple polling to work around it.
I've also had to implement iOS-only APIs on the server.
What this comes down to is an API to send up a command and put it into a queue on the server, then using an NSTimer on the client to check the status of the queued item at a regular interval, as sketched below.
Until I can find out how to make multiple connections on iOS with long-polling, this is the only working solution. Once I have a decent amount of points I'll gladly bounty them away for a solution to this :(
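Here is a minimal sketch of the client side of that workaround; pollTimer, pollQueuedItemStatus:, and the status URL are all hypothetical placeholders, not a real API:

// A minimal sketch of the polling workaround, not a drop-in solution.
// `pollTimer` is a hypothetical property; the URL below is a placeholder.
- (void)startPollingQueuedItem {
    self.pollTimer = [NSTimer scheduledTimerWithTimeInterval:2.0
                                                      target:self
                                                    selector:@selector(pollQueuedItemStatus:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)pollQueuedItemStatus:(NSTimer *)timer {
    // One short-lived request per tick, so no connection is ever held open.
    NSURLRequest *request = [NSURLRequest requestWithURL:
        [NSURL URLWithString:@"https://example.com/queue/status"]];
    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
        // Parse `data`; invalidate self.pollTimer once the command completes.
    }];
}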
