What are asynchronous circuits? - digital

There are combinational and sequential circuits. In sequential circuits a memory element is used. Do asynchronous circuits also use flip-flop-like memory elements? And why are they unstable, which makes them a poor choice for a circuit? How can the instability in an asynchronous circuit be explained?

Probably you are referring to a system that is clock driven. The data is latched at a certain point in the clock period (normally the rising/falling edge). If you are using an asynchronous circuit, it can change its state at any time, including at the moment the data is latched. This leads to instability.
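As a rough software analogy (a toy C++ sketch, not real hardware): a clocked register only takes a new value at the clock edge, while an asynchronous element's output can change at any moment, so downstream logic may see it change right at the instant it would otherwise be sampled.

```cpp
// Toy sketch only: contrasts an edge-triggered register with an asynchronous element.
#include <iostream>

struct ClockedRegister {
    bool q = false;
    void clock_edge(bool d) { q = d; }   // state changes only on the clock edge
};

struct AsyncElement {
    bool q = false;
    void drive(bool d) { q = d; }        // output follows the input at any time
};

int main() {
    ClockedRegister reg;
    AsyncElement async_el;

    bool data = false;
    reg.clock_edge(data);                // clock edge: both sample a stable 0
    async_el.drive(data);

    data = true;                         // input changes between clock edges...
    async_el.drive(data);                // ...the async output has already changed,
                                         // while the registered value is still the old one
    std::cout << "async q = " << async_el.q
              << ", registered q = " << reg.q << '\n';
}
```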

Related

Multitasking techniques

I have 2 tasks that need to be performed:
One is synchronized with refresh rate via the Present call; does fancy graphics.
The other does a bunch of computations on a virtually infinite workload; does not need to be synchronous with the first task; really does not like being interrupted (encourages coarser workload granularity).
Is there a way to optimally use the GPU in this situation with DirectX?
Perhaps the solution would:
issue Dispatch (or Draw) calls in a way that allows them to run/finish asynchronously.
signal the current shader to stop.
use hardware or driver scheduling.
Right now my solution is to try and predict how long it would take to run the shaders, which is unreliable, unless I add a bunch of downtime...
Trying to avoid the th**ad word as it means a different thing on GPUs
Create two separate D3D11 devices. Use one for the rendering, and another one (driven from another CPU thread with lower priority) for the computations.
Rework your low-priority computations so that each Dispatch() takes a couple of milliseconds of GPU time to complete. Don’t submit many compute calls at once: use 2 queries or a single fence so that no more than 2 compute calls are ever pending. Dispatch 2 calls initially; when the first one completes, dispatch the 3rd one, and so on.
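A minimal sketch of that throttling pattern using D3D11 event queries (assuming the device, immediate context, and compute shader already exist and are bound correctly; the dispatch size and the termination condition are placeholders):

```cpp
// Sketch: never keep more than 2 compute dispatches in flight, using event queries.
#include <windows.h>
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void compute_pump(ID3D11Device* device, ID3D11DeviceContext* context,
                  ID3D11ComputeShader* cs)
{
    ComPtr<ID3D11Query> pending[2];
    D3D11_QUERY_DESC qd = { D3D11_QUERY_EVENT, 0 };
    device->CreateQuery(&qd, &pending[0]);
    device->CreateQuery(&qd, &pending[1]);

    context->CSSetShader(cs, nullptr, 0);

    // Two dispatches up front; each followed by an event query marking its completion.
    for (int i = 0; i < 2; ++i) {
        context->Dispatch(64, 1, 1);        // placeholder workload size
        context->End(pending[i].Get());
        context->Flush();
    }

    int oldest = 0;
    for (;;) {                               // replace with your real termination condition
        // Wait until the oldest pending dispatch has finished on the GPU.
        while (context->GetData(pending[oldest].Get(), nullptr, 0, 0) == S_FALSE)
            Sleep(1);

        // That slot is free again: submit the next chunk of work.
        context->Dispatch(64, 1, 1);
        context->End(pending[oldest].Get());
        context->Flush();
        oldest = 1 - oldest;
    }
}
```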
While rendering the 3D scene on your main thread, lock an std::mutex and release it once you have rendered the scene, before Present. On the background thread, lock that mutex when submitting more compute tasks, but keep it unlocked while waiting for a query or fence.
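A sketch of that handshake (hypothetical function names; the actual D3D11 submission calls are elided):

```cpp
// Sketch: the render thread owns the mutex while recording the frame;
// the compute thread only takes it to submit work, never while waiting on a query.
#include <mutex>

std::mutex g_submit_mutex;   // serializes GPU work submission between the two threads

void render_frame(/* context, scene, swap chain, ... */) {
    {
        std::lock_guard<std::mutex> lock(g_submit_mutex);
        // ... record and submit all draw calls for the scene ...
    }                        // released before Present
    // swapChain->Present(1, 0);
}

void compute_thread_loop(/* context, queries, ... */) {
    for (;;) {
        // ... wait for the oldest query/fence WITHOUT holding the mutex ...
        {
            std::lock_guard<std::mutex> lock(g_submit_mutex);
            // ... Dispatch() the next compute chunk and End() its query ...
        }
    }
}
```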
You will still get some interference between these two tasks, but it might be good enough for your use case.
Ideally, consider using timestamp queries to measure the GPU time spent computing your background tasks. Then adjust the size of a single task dynamically based on these numbers; this should let you achieve the right granularity regardless of the GPU’s performance. Don’t forget to apply a rolling average over the last 5-10 completed tasks before using the number for these adjustments.
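A sketch of that measure-and-adjust loop with D3D11 timestamp queries (assuming the device, context, and compute state already exist; the ~2 ms target, the 8-sample window, and the clamping bounds are arbitrary example values):

```cpp
// Sketch: measure one dispatch with timestamp queries, then resize the task
// based on a rolling average of recent measurements.
#include <windows.h>
#include <d3d11.h>
#include <algorithm>
#include <deque>
#include <numeric>

double measureDispatchMs(ID3D11Device* device, ID3D11DeviceContext* context, UINT groupsX)
{
    D3D11_QUERY_DESC dqd = { D3D11_QUERY_TIMESTAMP_DISJOINT, 0 };
    D3D11_QUERY_DESC tqd = { D3D11_QUERY_TIMESTAMP, 0 };
    ID3D11Query *disjoint = nullptr, *tsBegin = nullptr, *tsEnd = nullptr;
    device->CreateQuery(&dqd, &disjoint);
    device->CreateQuery(&tqd, &tsBegin);
    device->CreateQuery(&tqd, &tsEnd);

    context->Begin(disjoint);
    context->End(tsBegin);
    context->Dispatch(groupsX, 1, 1);
    context->End(tsEnd);
    context->End(disjoint);

    // Blocks until results are ready; a real implementation would poll next frame.
    D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj = {};
    UINT64 t0 = 0, t1 = 0;
    while (context->GetData(disjoint, &dj, sizeof(dj), 0) != S_OK) Sleep(0);
    while (context->GetData(tsBegin, &t0, sizeof(t0), 0) != S_OK) Sleep(0);
    while (context->GetData(tsEnd,   &t1, sizeof(t1), 0) != S_OK) Sleep(0);

    disjoint->Release(); tsBegin->Release(); tsEnd->Release();
    if (dj.Disjoint) return -1.0;                         // measurement invalid, discard
    return double(t1 - t0) * 1000.0 / double(dj.Frequency);
}

// Keep each dispatch near ~2 ms of GPU time, smoothed over the last 8 valid samples.
UINT adjustGroups(std::deque<double>& history, double lastMs, UINT groupsX)
{
    if (lastMs <= 0.0) return groupsX;
    history.push_back(lastMs);
    if (history.size() > 8) history.pop_front();
    const double avg =
        std::accumulate(history.begin(), history.end(), 0.0) / history.size();
    const double targetMs = 2.0;
    return (UINT)std::clamp(groupsX * targetMs / avg, 1.0, 65535.0);
}
```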

The reasons why rps procedure use spinlock with local_irq_disable

These days I'm studying the kernel's internal network code, especially the RPS code. There are a lot of functions involved, but I am focusing on the ones that handle SMP queue processing, such as enqueue_to_backlog and process_backlog.
I wonder about the synchronization between two cores (or within a single core) achieved by these two functions, enqueue_to_backlog and process_backlog.
In these functions, one core (A) holds the spin_lock of another core (B) while queueing packets into input_pkt_queue and scheduling core B's NAPI. Core B also holds its own spin_lock while splicing input_pkt_queue onto its process_queue and clearing its own NAPI schedule. I know that the spin_lock must be held to prevent the two cores from accessing the same queue at the same time while it is being processed.
But I can't understand why the spin_lock is taken with local_irq_disable (or local_irq_save). I think that interrupt context (TH) never accesses core B's queues or rps_lock when an interrupt (TH) preempts the current context (softirq, BH). Of course, the napi struct can be accessed by TH to schedule NAPI, but it keeps interrupts disabled until the packet has been queued. So I wonder why the spin_lock is taken with interrupts disabled.
I think it is impossible for another BH, such as a tasklet, to preempt the current context (NAPI, softirq). Is that true? And I want to know whether local_irq_disable disables interrupts on all cores or literally just on the current core. I have read a book about kernel development, but I don't think I understand preemption well enough.
Would someone explain the reasons why the RPS code uses spin_lock with local_irq_disable?
Disabling interrupts affects the current core (only). When disabled, therefore, no other code on the same core will be able to interfere with an update to a data structure. The point of spinlocks is to extend the "lock-out" to other cores (although it's cooperative, not hardware-enforced).
It's dangerous/irresponsible to take a spin lock in the kernel without disabling interrupts because, when an interrupt then occurs, the current code is suspended, and now you are preventing other cores from making progress while some unrelated interrupt handler runs (even though another user process or tasklet on the original core won't be able to preempt). Other cores might themselves be in an interrupt or BH context, and now you're delaying the entire system. Spin locks are supposed to be held only for very brief periods, to do critical updates to shared data structures.
It's also a good way to generate deadlocks. Consider if the scenario above were replicated in another subsystem (or possibly another device in the same subsystem, but I'll describe the former).
Here, core A takes a spinlock in subsystem 1 without disabling interrupts. At the same time, core B takes a spinlock in subsystem 2, also without disabling interrupts. Now what happens if an interrupt related to subsystem 2 occurs on core A, and while executing the subsystem 2 interrupt handler, core A needs to update a structure protected by the spinlock held by core B? At about the same time, a subsystem 1 interrupt occurs on core B, and its handler needs to update a data structure protected by the spinlock held by core A. Now both cores are busy-waiting for a spinlock held by the other core, and the entire system is frozen until you do a hard reset.
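Here is a user-space analogue of that ABBA scenario (a deliberately broken C++ sketch: two threads stand in for the two cores, and the second lock each thread takes stands in for the lock its interrupt handler would need; the program deadlocks on purpose and never exits):

```cpp
// Sketch: classic ABBA deadlock with two spinlocks. WARNING: hangs forever by design.
#include <atomic>
#include <chrono>
#include <thread>

struct SpinLock {
    std::atomic<bool> taken{false};
    void lock()   { while (taken.exchange(true, std::memory_order_acquire)) {} }
    void unlock() { taken.store(false, std::memory_order_release); }
};

SpinLock subsystem1Lock;   // stands in for the lock taken by core A
SpinLock subsystem2Lock;   // stands in for the lock taken by core B

int main() {
    std::thread coreA([] {
        subsystem1Lock.lock();                                   // "core A" holds subsystem 1's lock
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        subsystem2Lock.lock();                                   // its "interrupt" now needs subsystem 2's lock
        subsystem2Lock.unlock();
        subsystem1Lock.unlock();
    });
    std::thread coreB([] {
        subsystem2Lock.lock();                                   // "core B" holds subsystem 2's lock
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        subsystem1Lock.lock();                                   // its "interrupt" now needs subsystem 1's lock
        subsystem1Lock.unlock();
        subsystem2Lock.unlock();
    });
    coreA.join();                                                // neither join ever returns
    coreB.join();
}
```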

How does a gnuradio source block know how many samples to output?

I'm trying to understand how gnuradio source blocks work. I know how to make a simple one that outputs a constant and I understand what sample rate means, but I'm not sure how (or where) to combine the two.
Is the source block in charge of regulating the amount of data to output? Or does the amount that it outputs depend upon the other blocks in the flow graph and how much they consume? Some source blocks take sample_rate as an input, which makes me think it's the former. But other blocks don't, which makes me think it's the latter.
If a source block is in charge of its sample rate, how does it regulate it? Do they check the system clock and output samples based upon that?
Do they check the system clock and output samples based upon that?
Definitely not. All GNU Radio blocks operate at the maximum speed the processor can give.
However, GNU Radio relies on the fact that each flowgraph may have a source and/or sink device (e.g. a USRP, another SDR device, or a sound card) that produces/consumes samples at a constant rate. Consequently, the flowgraph is throttled at the rate of the hardware.
If none of these hardware devices exists, GNU Radio provides the Throttle block to avoid CPU saturation. It tries (not very accurately) to limit the throughput to the given samples-per-second rate by sleeping for a suitable amount of time between the samples that pass through it.
As far as the sample_rate parameter is concerned, apart from the Throttle and device-specific blocks, it is used only for graphical representation or internal calculations.
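To make the Throttle mechanism above concrete, here is a conceptual C++ sketch of the idea (not GNU Radio's actual implementation): copy samples through in chunks and sleep just long enough that, on average, sample_rate items per second get through.

```cpp
// Conceptual sketch of a Throttle-style block: pace a copy loop to a target rate.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

void throttled_copy(const std::vector<float>& in, std::vector<float>& out,
                    double sample_rate)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    std::size_t produced = 0;

    const std::size_t chunk = 1024;                       // work in chunks, not per sample
    while (produced < in.size()) {
        const std::size_t n = std::min(chunk, in.size() - produced);
        out.insert(out.end(), in.begin() + produced, in.begin() + produced + n);
        produced += n;

        // Time at which `produced` samples *should* have been emitted.
        const auto due = start + std::chrono::duration<double>(produced / sample_rate);
        std::this_thread::sleep_until(due);               // sleep off any surplus speed
    }
}
```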

Should I programmatically put computation-heavy tasks on a separate thread on iOS to utilize multi-core?

I am making a real-time image processing app on iOS with my team. I am handling the custom computation kernel (mostly on CPU rather than GPU) and my teammates deal with the GUI. When I tested my kernel in a toy app, the core (ignoring any IO overhead) runs steadily at 100 ms per image. However, when put into the full-functioning app, it slows down to 500 ms per image.
I have checked that the data is pretty much the same and I am only measuring the time consumed within the kernel, on the same iPhone 6. There is hardly any other computation in the full-functioning app, so I am not sure what is dragging it down. Though GPU processing is definitely an alternative and I am working on it, I would like to know if there are any tricks I can use for now.
Currently, there is no explicit multi-threading in the computation part, so my simple guess is: should I programmatically put the computation part on a separate thread so the second core can be utilized?
[Update]
It turns out that I made some mistakes when packaging my code as a library, since copying the source code over directly works fine. I have not figured out my problem yet and am going to post it as a separate question.
GPU Acceleration
This massively depends on the tasks you're performing: the GPU is good at a specific subset of tasks, and simply using it can sometimes even slow things down.
A lot of image-based tasks that are part of the Quartz framework etc. are GPU-accelerated (like blurring). Also, if you use a library like OpenCV, you get GPU acceleration on certain tasks out of the box.
Unless you're a real pro, I would avoid using the GPU directly and let the frameworks and libraries you use do that for you.
Concurrency
It will certainly help to put intensive tasks on a background thread. Just be aware of what that entails (i.e. you can't make any UIKit calls from a background thread).
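As a minimal sketch of that idea using libdispatch's plain C API (callable from C++; process_image, update_ui, and Job are hypothetical stand-ins for your kernel and the GUI-side work):

```cpp
// Sketch: offload the heavy kernel to a background queue, hop back to the main
// queue for anything that touches the UI.
#include <dispatch/dispatch.h>

struct Job { /* input/output buffers for one frame */ };

static void update_ui(void* ctx) {
    Job* job = static_cast<Job*>(ctx);
    // UIKit calls are only safe here, on the main queue.
    delete job;
}

static void process_image(void* ctx) {
    Job* job = static_cast<Job*>(ctx);
    // ... run the CPU-heavy kernel on job's buffers (off the main thread) ...
    dispatch_async_f(dispatch_get_main_queue(), job, update_ui);
}

void submit_frame() {
    dispatch_async_f(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0),
                     new Job{}, process_image);
}
```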
The answer heavily depends on how you do the processing. Some methods in the SDK perform their job in a background thread, while others require the caller to create and use one.
In general, in the case of drawing, most methods require you to create one explicitly. This is especially important for the ones that perform their work on the CPU (e.g. using CoreGraphics to draw within a drawRect method). If you're using methods that use the GPU for the processing, then creating threads won't be of much use, since the CPU won't be the bottleneck.
If you want to determine why your app slows down, use Instruments. (Time Profiler for CPU and Core Animation for drawing)

NSThread, NSOperation or GCD for CoreMotion and accurate timing purposes?

I'm looking to do some high precision core motion reading (>=100Hz if possible) and motion analysis on the iPhone 4+ which will run continuously for the duration of the main part of the app. It's imperative that the motion response and the signals that the analysis code sends out are as free from lag as possible.
My original plan was to launch a dedicated NSThread based on the code in the metronome project as referenced here: Accurate timing in iOS, along with a protocol for motion analysers to link in and use the thread. I'm wondering whether GCD or NSOperation queues might be better?
My impression after copious reading is that they are designed to handle a quantity of discrete, one-off operations rather than a small number of operations performed over and over again on a regular interval and that using them every millisecond or so might inadvertently create a lot of thread creation/destruction overhead. Does anyone have any experience here?
I'm also wondering about the performance implications of an endless while loop in a thread (such as in the code in the above link). Does anyone know more about how things work under the hood with threads? I know that the iPhone 4 (and earlier) has a single-core processor and uses some sort of intelligent multitasking (pre-emptive?) which switches threads based on various timing and I/O demands to create the effect of parallelism...
If you have a thread that has a simple "while" loop running endlessly but only doing any additional work every millisecond or so, does the processor's switching algorithm consider the endless loop a "high demand" on resources thus hogging them from other threads or will it be smart enough to allocate resources more heavily towards other threads in the "downtime" between additional code execution?
Thanks in advance for the help and expertise...
IMO the bottleneck is rather the sensors. The actual update frequency is most often not equal to what you have specified. See "update frequency set for deviceMotionUpdateInterval it's the actual frequency?" and "Actual frequency of device motion updates lower than expected, but scales up with setting".
Some time ago I made a couple of measurements using Core Motion and the raw sensor data as well. I needed a high update rate too, because I was doing a Simpson integration and thus wanted to minimise errors. It turned out that the real frequency is always lower and that there is a limit at about 80 Hz. It was an iPhone 4 running iOS 4. But as long as you don't need this for scientific purposes, in most cases 60-70 Hz should fit your needs anyway.
