How to know when `perf record -p <pid>` actually starts recording - perf

When I attach perf to a running process to trace some tracepoints, there is a lag of roughly 2 seconds between invoking the perf command and the actual start of recording. I know this because if I send the payload to the process too early (i.e. without waiting ~2 seconds after starting perf), the execution doesn't appear in the recording.
My question is: is there a way to determine when `perf record` actually starts recording? Waiting for some arbitrary amount of time seems very brittle.
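One way to remove the guesswork (a sketch, not from the question itself): recent perf versions (roughly v5.9 and later) support `--control`, which lets you start `perf record` with events disabled and enable them explicitly over a FIFO; perf writes an acknowledgement once the command has taken effect, so you get positive confirmation that recording is live before sending the payload. The FIFO paths, the `PID` variable, and the tracepoint below are placeholders:

```shell
#!/bin/sh
# Sketch: start perf with events disabled (-D -1), then enable them over a
# control FIFO and wait for perf's "ack" before sending the payload.
mkfifo ctl.fifo ack.fifo

perf record -p "$PID" -e sched:sched_switch -D -1 \
    --control fifo:ctl.fifo,ack.fifo &

echo enable > ctl.fifo    # ask perf to start recording
read -r reply < ack.fifo  # blocks until perf acknowledges -- recording is live

# ... send the payload to the traced process here ...

echo disable > ctl.fifo   # optionally stop recording again
```

If your perf is older than that, a cruder fallback is polling until perf.data appears and starts growing, but the control-FIFO handshake is the only positive confirmation I know of.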

Related

How do I start a program from where it left off after shutdown and reboot

I would like to know how to resume a program from where it left off after a system shutdown, once the system is powered back on.
To be clear, I have a program that needs the system to shut down and restart before it can execute a condition. I want to know how to continue execution from the condition that is supposed to run once the system has rebooted.
It's completely up to the program to implement that functionality. There is very little, if anything, you can do if the program itself does not provide it.
If it's a program you are writing, then you need to be more specific about what it does and what it is written in. (Even then, it's quite hard to explain.)

Is there any way not to get CPU-throttled in the background?

I have a CPU-bound task that needs to occur when the app is running in the background (triggered either by a background fetch or a silent notification). This task takes about 1s when running in the foreground but about 9s in the background. It's basically saving out ~100K textual entries to a database. Whether I use FileHandle operations or a Core Data sqlite solution, the performance profile is about the same (surprisingly, Core Data is a little slower).
I don't really want to get into the specifics of the code. I've already profiled the hell out of it, and in the foreground it's quite performant. But clearly, when the app is running in the background it's being throttled by iOS, to the tune of a 9x slowdown. This wouldn't be such a big issue, except that in response to a silent notification iOS gives the app only 30-40s to complete, and this 9s task can put it over the limit. (The rest of it is waiting on subsystems that I have no control over.)
So the question:
Is there any way to tell iOS "Hi, yes, I'm in the background, but I really need this chunk of code to run quickly; please don't throttle it"? FWIW, I'm already running in a .userInitiated QoS dispatch queue:
DispatchQueue.global(qos: .userInitiated).async {
    // code to run faster goes here
}
Thanks!
First, no. The throttling is on purpose, and you can't stop it. I'm curious whether using a .userInitiated queue actually improves performance much over a default queue when you're in the background. Even if that's true today, I wouldn't bet on it, and as a rule you shouldn't mark something user-initiated that clearly is not user-initiated. I wouldn't put it past Apple to run that queue slower when in the background.
Rather than asking to run more quickly, you should start by asking the OS for more time. You do that by calling beginBackgroundTask(expirationHandler:) when you start processing data, and then call endBackgroundTask(_:) when you're done. This tells the OS that you're doing something that would be very helpful if you could complete, and the OS may give you several minutes. When you run out of whatever time it gives you, then it'll call your expirationHandler, and you can save off where you were at that point to resume work later.
When you run out of time, you're only going to get a few seconds to complete your expiration handler, so you may not be able to write a lot of data to disk at that point. If the data is coming from the network, then you address this by downloading the data first (using a URLSessionDownloadTask). These are very energy efficient, and your app won't even be launched until the data is finished downloading. Then you start reading and processing, and if you run out of time, you squirrel away where you were in user defaults so you can pick it up again when you launch next. When you're done, you delete the file.
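The begin/end pairing described above can be sketched like this (the helpers saveCheckpoint and writeEntriesToDatabase are illustrative stand-ins for your own code, not API):

```swift
import UIKit

func saveCheckpoint() { /* persist a resume marker, e.g. the last index written */ }
func writeEntriesToDatabase() { /* the ~100K-entry save */ }

func processEntriesInBackground() {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = UIApplication.shared.beginBackgroundTask(withName: "SaveEntries") {
        // Expiration handler: only a few seconds remain, so record where we
        // were and end the task promptly; resume on the next launch.
        saveCheckpoint()
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }

    DispatchQueue.global(qos: .utility).async {
        writeEntriesToDatabase()
        // Always balance beginBackgroundTask with endBackgroundTask.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
}
```

Note that every beginBackgroundTask call must be balanced by endBackgroundTask, including inside the expiration handler, or iOS may kill the app.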

Suspending already executing task NSOperationQueue

I have a problem suspending the currently executing task. I have tried setting NSOperationQueue setSuspended=YES to pause and setSuspended=NO to resume the process.
According to Apple's docs, I cannot suspend an already executing task:
If you want to issue a temporary halt to the execution of operations, you can suspend the corresponding operation queue using the setSuspended: method. Suspending a queue does not cause already executing operations to pause in the middle of their tasks. It simply prevents new operations from being scheduled for execution. You might suspend a queue in response to a user request to pause any ongoing work, because the expectation is that the user might eventually want to resume that work.
My app needs to suspend a time-consuming upload operation when the internet is unavailable and resume the same operation once it becomes available again. Is there any workaround for this, or do I just need to restart the currently executing task from zero?
I think you need to start from zero; otherwise two problems arise. If you resume the current upload, you can't be sure no packets were missed. And if the connection only becomes available after a long period of time, the server may have deleted the data you uploaded previously because of the incomplete operation.
Whether or not you can resume or pause an operation queue is not your issue here.
Even if it worked the way you imagine (and it doesn't), by the time you got back to servicing the TCP connection it may very well be in a bad state: it could have timed out, or been closed remotely.
You will want to find out what your server supports and use the parts of a REST (or similar) service to resume a stalled upload on a brand-new connection.
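What that looks like depends entirely on the server, but as a hedged sketch (the endpoint, the use of Content-Range on a PUT, and the resumeUpload name are all hypothetical; check what your server actually supports): after reconnecting, ask the server how many bytes it already received, skip past them locally, and upload only the remainder.

```swift
import Foundation

// Sketch: `receivedBytes` is the byte count the server reports it already
// has (querying for it is server-specific and out of scope here).
func resumeUpload(fileURL: URL, endpoint: URL, receivedBytes: UInt64) throws {
    let handle = try FileHandle(forReadingFrom: fileURL)
    handle.seek(toFileOffset: receivedBytes)     // skip what the server has
    let remainder = handle.readDataToEndOfFile()

    var request = URLRequest(url: endpoint)
    request.httpMethod = "PUT"
    // How the offset is communicated is protocol-specific; a Content-Range
    // header is shown purely as an example.
    request.setValue("bytes \(receivedBytes)-", forHTTPHeaderField: "Content-Range")

    URLSession.shared.uploadTask(with: request, from: remainder).resume()
}
```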
If you haven't yet, print out this and put it on the walls of your cube, make t-shirts for your family members to wear... maybe add it as a screensaver?

Erlang wait() and blocking

Does the following function block on its running core?
wait(Sec) ->
    receive
    after (1000 * Sec) -> ok
    end.
A great answer will detail the internal working of Erlang and/or the CPU.
The process which executes that code will block; the scheduler currently running that process will not. The code you posted is equivalent to a yield, but with a timeout.
The Erlang VM scheduler for that core will continue to execute other processes until that timeout fires and that process will be scheduled for execution again.
Short answer: this will block only the current (lightweight) process, and will not block the whole VM. For more details you should read about the Erlang scheduler. A nice description comes from the book "Erlang Programming" by Francesco Cesarini and Simon Thompson.
...snip...
When a process is dispatched, it is assigned a number of reductions it is allowed to execute, a number which is reduced for every operation executed. As soon as the process enters a receive clause where none of the messages matches or its reduction count reaches zero, it is preempted. As long as BIFs are not being executed, this strategy results in a fair (but not equal) allocation of execution time among the processes.
...snip...
Nothing Erlang-specific here; it's a pretty classical situation: timeouts can only fire on a system clock interrupt. Same answer as above: that process is blocked waiting for the clock interrupt, and everything else keeps working just fine.
There is a separate discussion about the actual time the process will wait, which is not exact, because it depends on the clock period (and that is system-dependent), but that's another topic.
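The behaviour both answers describe is easy to see from the shell: a process sitting in receive ... after blocks only itself, so a sibling process runs immediately. A minimal sketch (module and function names are illustrative):

```erlang
-module(wait_demo).
-export([run/0]).

wait(Sec) ->
    receive
    after 1000 * Sec -> ok
    end.

run() ->
    Parent = self(),
    spawn(fun() -> wait(5), Parent ! slept end),  % blocks only itself
    spawn(fun() -> Parent ! immediate end),       % scheduled right away
    receive Msg -> Msg end.                       % 'immediate' arrives long
                                                  % before the sleeper wakes
```

Calling wait_demo:run() returns 'immediate' right away, while the sleeping process quietly finishes five seconds later without ever having stalled the scheduler.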

Problem stopping an Erlang SSH channel

NOTE: I'll use the ssh_sftp channel as an example here, but I've noticed the same behaviour when using different channels.
After starting a channel:
{ok, ChannelPid} = ssh_sftp:start_channel(State#state.cm),
(where cm is my Connection Manager), I'm performing an operation through the channel. Say:
ssh_sftp:write_file(ChannelPid, FilePath, Content),
Then, I'm stopping the channel:
ssh_sftp:stop_channel(ChannelPid),
Since, as far as I know, the channel is implemented as a gen_server, I was expecting the requests to be serialized.
Well, after a bit of tracing, I've noticed that the channel is somehow stopped before the file write is completed and before the result of the operation is sent back through the channel. As a consequence, the response is never delivered, since the channel no longer exists.
If I don't stop the channel explicitly, everything works fine and the file write (or any other operation performed through the channel) completes correctly. But I would prefer not to leave channels open. On the other hand, I would also prefer to avoid implementing my own receive handler that waits for the result before the channel can be stopped.
I'm probably missing something trivial here. Do you have any idea why this is happening and/or how I could fix it?
I repeat, the ssh_sftp is just an example. I'm using my own channels, implemented using the existing channels in the Erlang SSH application as a template.
As you can see in ssh_sftp.erl, stop_channel forcefully kills the channel after a 5-second timeout with exit(Pid, kill), which interrupts the process regardless of whether it is in the middle of handling a request.
Related quote from erlang man:
If Reason is the atom kill, that is if exit(Pid, kill) is called, an untrappable exit signal is sent to Pid which will unconditionally exit with exit reason killed.
I had a similar issue with ssh_connection:exec/4. The problem is that these ssh sibling modules (ssh_connection, ssh_sftp, etc.) all appear to behave asynchronously, so closing the channel or the ssh connection itself will cut off the ongoing action.
The options are:
1) Do not close the connection: this may lead to a resource leak (the purpose of my question here).
2) After the sftp operation, introduce a monitoring function that waits by checking the file you are transferring on the remote server (a checksum check). This can be based on ssh_connection:exec, polling the file you are transferring. Once the checksum matches what you expect, you can free the main module.
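A minimal sketch of option 2, using ssh_sftp itself rather than a remote checksum (this assumes the expected byte count is known up front; wait_for_upload is an illustrative name, not part of the ssh application):

```erlang
-include_lib("kernel/include/file.hrl").

%% Poll the remote file until it reaches the expected size; only then is it
%% safe to call ssh_sftp:stop_channel/1.
wait_for_upload(_ChannelPid, _Path, _ExpectedSize, 0) ->
    {error, timeout};
wait_for_upload(ChannelPid, Path, ExpectedSize, Retries) ->
    case ssh_sftp:read_file_info(ChannelPid, Path) of
        {ok, #file_info{size = ExpectedSize}} ->
            ok;                    % remote side has all the bytes
        _ ->
            timer:sleep(200),      % not there yet; poll again
            wait_for_upload(ChannelPid, Path, ExpectedSize, Retries - 1)
    end.
```

A size check is weaker than a checksum, but it avoids opening a second exec channel; use the checksum variant if partial writes of the right length are a concern.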
