Distribution of computations to different cores - iOS

I have a program with multiple computations that are very reliant on time, meaning that I need the computations to be handled almost at the same time.
I am referring to the iPhone 4S - 5S dual-core processor. Is there a way that I can delegate certain computations to certain cores in the processor? For example, if I have two computations (1c and 2c) that are very reliant on time and I need them to be executed together, can I run computation 1c on the "left" core and 2c on the "right" core?
I've read a little about parallel processing but I cannot find anything about it for iOS development. If somebody could point me in the right direction or answer this directly I would appreciate it a lot! Thanks!
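As far as I know, iOS does not expose a public API for pinning work to a particular core; the kernel's scheduler decides core placement. What you can do is express 1c and 2c as independent tasks, for example with Grand Central Dispatch or, as in the minimal C++ sketch below, with std::async, so the scheduler is free to run them on both cores at the same time. The compute1c()/compute2c() functions here are placeholder stand-ins, not anything from the question.

#include <cstdio>
#include <future>

// Placeholder stand-ins for the two time-critical computations 1c and 2c.
static long compute1c() { long s = 0; for (long i = 0; i < 100000000; ++i) s += i; return s; }
static long compute2c() { long s = 0; for (long i = 0; i < 100000000; ++i) s -= i; return s; }

int main()
{
    // Launch 1c on another thread; 2c runs on the calling thread in the meantime.
    // On a dual-core device the scheduler will typically place them on different
    // cores, but there is no guarantee of which core runs which task.
    auto future1c = std::async(std::launch::async, compute1c);
    long result2c = compute2c();
    long result1c = future1c.get();   // wait for 1c to finish
    std::printf("1c=%ld 2c=%ld\n", result1c, result2c);
}

With GCD the same idea is usually written by dispatching both blocks onto a concurrent queue and waiting on a dispatch group.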

Related

If a computer can be Turing complete with one instruction what is the purpose of having many instructions?

I understand the concept of a computer being Turing complete (having a MOV command or a SUBNEG command and therefore being able to "synthesize" other instructions). If that is true, what is the purpose of having hundreds of instructions like x86 has, for example? Is it to increase efficiency?
Yes.
Equally, any logical circuit can be made using just NANDs. But that doesn't make other components redundant. Crafting a CPU from NAND gates would be monumentally inefficient, even if that CPU performed only one instruction.
An OS or application has a similar level of complexity to a CPU.
You COULD compile it so it just used a single instruction. But you would just end up with the world's most bloated OS.
So, when designing a CPU's instruction set, the choice is a tradeoff: keep the CPU small and cheap, which allows more instructions per second because they are simpler, and a smaller die is easier to cool (RISC); or increase the capabilities of the CPU, including instructions that take multiple clock cycles to complete, making it larger and more cumbersome to cool (CISC).
This tradeoff is why math co-processors were a thing back in the 486 days. Floating point math could be emulated without the instructions. But it was much, much faster if it had a co-processor designed to do the heavy lifting on those floating point things.
Remember that a Turing Machine is generally understood to be an abstract concept, not a physical thing. It's the theoretical minimal form a computer can take that can still compute anything. Theoretically. Heavy emphasis on theoretically.
An actual Turing machine that did something so simple as decode an MP3 would be outrageously complicated. Programming it would be an utter nightmare as the machine is so insanely limited that even adding two 64-bit numbers together and recording the result in a third location would require an enormous amount of "tape" and a whole heap of "instructions".
When we say something is "Turing Complete" we mean that it can perform generic computation. It's a pretty low bar in all honesty, crazy things like the Game of Life and even CSS have been shown to be Turing Complete. That doesn't mean it's a good idea to program for them, or take them seriously as a computational platform.
In the early days of computing people would have to type in machine codes by hand. Adding two numbers together and storing the result is often one or two operations at most. Doing it in a Turing machine would require thousands. The complexity makes it utterly impractical on the most basic level.
As a challenge try and write a simple 4-bit adder. Then if you've successfully tackled that, write a 4-bit multiplier. The complexity ramps up exponentially once you move to things like 32 or 64-bit values, and when you try and tackle division or floating point values you're quickly going to drown in the outrageousness of it all.
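To make the NAND point and the adder challenge above concrete, here is a small sketch (my own illustration, not from the answer) that builds a 4-bit ripple-carry adder out of nothing but a NAND primitive:

#include <cstdio>

// The only "gate" we allow ourselves.
static int NAND(int a, int b) { return !(a && b); }

// Everything else is synthesized from NAND.
static int NOT(int a)        { return NAND(a, a); }
static int AND(int a, int b) { return NOT(NAND(a, b)); }
static int OR (int a, int b) { return NAND(NOT(a), NOT(b)); }
static int XOR(int a, int b) { int t = NAND(a, b); return NAND(NAND(a, t), NAND(b, t)); }

// One-bit full adder: sum and carry-out from a, b and carry-in.
static void fullAdder(int a, int b, int carryIn, int &sum, int &carryOut)
{
    int axb  = XOR(a, b);
    sum      = XOR(axb, carryIn);
    carryOut = OR(AND(a, b), AND(axb, carryIn));
}

// 4-bit ripple-carry adder: bit i of each operand is (x >> i) & 1.
static int add4(int x, int y, int &carryOut)
{
    int carry = 0, result = 0;
    for (int i = 0; i < 4; ++i) {
        int s;
        fullAdder((x >> i) & 1, (y >> i) & 1, carry, s, carry);
        result |= s << i;
    }
    carryOut = carry;
    return result;
}

int main()
{
    int carry;
    int sum = add4(9, 5, carry);   // 1001 + 0101 = 1110, no carry out
    std::printf("9 + 5 = %d (carry %d)\n", sum, carry);
}

Even this toy version needs five derived gates and a carry chain for just 4 bits; doing the same at the level of a Turing machine's tape is another large step down in abstraction.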
You don't tell the CPU which transistors to flip when you're typing in machine code; the instructions act as macros that do that for you. But when you're writing Turing Machine code, it's up to you to specify how to flip each and every single bit.
If you want to learn more about CPU history and design there's a wealth of information out there, and you can even implement your own using transistor logic or an FPGA kit where you can write it out using a higher level design language like Verilog.
The Intel 4004 chip was intended for a calculator so the operation codes were largely geared towards that. The subsequent 8008 built on that, and by the time the 8086 rolled around the instruction set had taken on that familiar x86 flavor, albeit a 16-bit version of same.
There's an abstraction spectrum here between defining the behaviour of individual bits (Turing Machine) and some kind of hypothetical CPU with an instruction for every occasion. RISC and CISC designs from the 1980s and 1990s differed in their philosophy here, with RISC generally having fewer instructions and CISC more, but those differences have largely been erased as RISC gained more features and CISC became more RISC-like for the sake of simplicity.
The Turing Machine is the "absolute zero" in terms of CPU design. If you can come up with something simpler or more reductive you'd probably win a prize.

Fastest image processing library?

I'm working on a robot vision system whose main purpose is to detect objects. I want to choose one of these libraries (CImg, OpenCV), and I have knowledge of both of them.
The robot I'm using runs Linux with a 1 GHz CPU and 1 GB of RAM, I'm using C++, and the image size is 320p.
I want near real-time image processing, around 20 out of 25 frames per second.
In your opinion, which library is more powerful? I have tested both and they have about the same processing time; OpenCV is slightly better, and I think that's because I use pointers in my OpenCV code.
Please share your opinion and your reasoning.
Thanks.
I think you can possibly get the best performance by integrating OpenCV with IPP.
See this reference: http://software.intel.com/en-us/articles/intel-integrated-performance-primitives-intel-ipp-open-source-computer-vision-library-opencv-faq/
Here is another reference: http://experienceopencv.blogspot.com/2011/07/speed-up-with-intel-integrated.html
Further, once you freeze an algorithm that works well, you can usually isolate it and work your way towards serious optimization (such as memory optimization, porting to assembly, etc.), which may not be available ready to use.
It really depends on what you want to do (what kind of objects you want to detect, the accuracy, which algorithm you are using, etc.) and how much time you have. If it is for generic computer vision/image processing, I would stick with OpenCV. As Dipan said, do consider further optimization. In my experience with optimization for computer vision, the bottleneck is usually memory interconnect bandwidth (or memory itself), so you might have to trade in cycles (computation) to save on communication. Do understand the algorithm really well in order to optimize it further (which at times can give huge improvements compared to what compilers achieve).
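Whichever library you choose, measure the actual pipeline on the robot. Below is a minimal OpenCV timing sketch (the frame file and the Canny step are placeholders for your real detection code); the same loop is easy to reproduce with CImg for a fair comparison.

#include <opencv2/opencv.hpp>
#include <cstdint>
#include <cstdio>

int main()
{
    // Placeholder input frame; in practice this would come from the robot's camera.
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) { std::fprintf(stderr, "could not load frame.png\n"); return 1; }

    cv::Mat edges;
    const int iterations = 100;

    int64_t start = cv::getTickCount();
    for (int i = 0; i < iterations; ++i)
        cv::Canny(frame, edges, 50, 150);          // stand-in for the detection step
    double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();

    std::printf("%.2f ms per frame -> %.1f fps\n",
                1000.0 * seconds / iterations, iterations / seconds);
}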

iPad 2 NEON floating point speed versus iPad 1

When testing NEON instructions on the iPad 1 and iPad 2, I notice almost no speed-up in the NEON code on the iPad 2, whereas most other functions actually run much faster on the iPad 2 than on the iPad 1.
This is for instructions like VMUL, VLD1, VADD and VSUB, using quad-word registers like q0, for things like FFTs.
In addition, I notice that Apple's own FFT function vDSP_fft_zrip does not speed up on the iPad 2 either.
So the question is: does the iPad 2's NEON engine execute quad-word SIMD instructions faster than the iPad 1's?
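For reference, this is roughly what those quad-word operations look like when written as NEON intrinsics rather than assembly (a sketch of my own; it only compiles for ARM targets with NEON enabled): VLD1 maps to vld1q_f32, VMUL to vmulq_f32, VADD to vaddq_f32, and VST1 to vst1q_f32.

#include <arm_neon.h>
#include <cstdio>

// out[i] = a[i] * b[i] + c[i], processing four floats per iteration.
// n is assumed to be a multiple of 4 to keep the sketch short.
static void mulAdd(const float *a, const float *b, const float *c, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);   // VLD1.32: load 4 floats into a q register
        float32x4_t vb = vld1q_f32(b + i);
        float32x4_t vc = vld1q_f32(c + i);
        float32x4_t vm = vmulq_f32(va, vb);  // VMUL.F32 q, q, q
        float32x4_t vr = vaddq_f32(vm, vc);  // VADD.F32 q, q, q
        vst1q_f32(out + i, vr);              // VST1.32: store 4 floats
    }
}

int main()
{
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4] = {1, 1, 1, 1}, out[4];
    mulAdd(a, b, c, out, 4);
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
}

As the answers below note, a loop like this is usually limited by the loads rather than the arithmetic, which is part of why a faster core barely changes the numbers.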
The NEON unit on the A4 was extraordinarily powerful compared to the rest of the core. The rest of the core on the A5 is much improved from A4, but the NEON unit's performance is more-or-less unchanged. What you are observing is expected.
Of course, there are now two cores, so if you can take advantage of both of them, you can still see significant speedups. Also, double-precision computation on the A5 is vastly improved from the A4, as it is now fully pipelined.
NEON will remain the same for quite a while, even on the recently introduced 64-bit ARM.
NEON doesn't benefit much from increased clock speed. NEON is already so fast that it spends the majority of the function execution time waiting for the data from memory.

Fastest method to compute convolution

I have to apply a convolution filter on each row of many images. The classic case is 360 images of 1024x1024 pixels; in my use case it is 720 images of 560x600 pixels.
The problem is that my code is much slower than what is advertised in articles.
I have implemented the naive convolution, and it takes 2m 30s. I then switched to an FFT using FFTW. I used complex-to-complex transforms, filtering two rows in each transform. I'm now at around 20s.
The thing is that articles advertise around 10s, and even less, for the classic case.
So I'd like to ask the experts here if there could be a faster way to compute the convolution.
Numerical Recipes suggests avoiding the reordering done in the DFT and adapting the frequency-domain filter function accordingly, but there is no code example of how this could be done.
Maybe I lose time copying data. With a real-to-real transform I wouldn't have to copy the data into complex values, but I have to zero-pad anyway.
EDIT: see my own answer below for progress feedback and further information on solving this issue.
Question (precise reformulation):
I'm looking for an algorithm or piece of code to apply a very fast convolution to a discrete, non-periodic function (512 to 2048 values). Apparently the discrete Fourier transform is the way to go, though I'd like to avoid data copies and conversion to complex values, and to avoid the butterfly reordering.
FFT is the fastest technique known for convolving signals, and FFTW is the fastest free library available for computing the FFT.
The key for you to get maximum performance (outside of hardware; the GPU is a good suggestion) will be to pad your signals to a power of two. When using FFTW, use the 'patient' setting when creating your plan to get the best performance. It's highly unlikely that you will hand-roll a faster implementation than what FFTW provides (forget about N.R.). Also be sure to use the real version of the forward 1D FFT and not the complex version, and use single (floating point) precision if you can.
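A sketch of that advice in code, using FFTW's single-precision real transforms with zero-padding to a power of two (FFTW_MEASURE is used so the example plans quickly; swap in FFTW_PATIENT for production runs; the signal and kernel contents are placeholders). Link with -lfftw3f.

#include <fftw3.h>
#include <cstdio>

int main()
{
    const int signalLen = 1024, kernelLen = 33;

    // Pad to the next power of two >= signalLen + kernelLen - 1 (linear convolution).
    int padded = 1;
    while (padded < signalLen + kernelLen - 1) padded <<= 1;
    const int spectrumLen = padded / 2 + 1;          // r2c output length

    float *sig = (float*) fftwf_malloc(sizeof(float) * padded);
    float *ker = (float*) fftwf_malloc(sizeof(float) * padded);
    float *out = (float*) fftwf_malloc(sizeof(float) * padded);
    fftwf_complex *S = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * spectrumLen);
    fftwf_complex *K = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * spectrumLen);

    fftwf_plan fwdSig = fftwf_plan_dft_r2c_1d(padded, sig, S, FFTW_MEASURE);
    fftwf_plan fwdKer = fftwf_plan_dft_r2c_1d(padded, ker, K, FFTW_MEASURE);
    fftwf_plan inv    = fftwf_plan_dft_c2r_1d(padded, S, out, FFTW_MEASURE);

    // Fill the arrays after planning: measuring/patient planning overwrites them.
    for (int i = 0; i < padded; ++i) { sig[i] = 0.0f; ker[i] = 0.0f; }
    for (int i = 0; i < signalLen; ++i) sig[i] = 1.0f;              // placeholder row data
    for (int i = 0; i < kernelLen; ++i) ker[i] = 1.0f / kernelLen;  // placeholder box filter

    fftwf_execute(fwdSig);
    fftwf_execute(fwdKer);

    // Pointwise complex multiply, scaled by 1/padded (FFTW transforms are unnormalized).
    const float scale = 1.0f / padded;
    for (int i = 0; i < spectrumLen; ++i) {
        float re = S[i][0] * K[i][0] - S[i][1] * K[i][1];
        float im = S[i][0] * K[i][1] + S[i][1] * K[i][0];
        S[i][0] = re * scale;
        S[i][1] = im * scale;
    }

    fftwf_execute(inv);   // out[] now holds the filtered (convolved) row

    std::printf("out[100] = %f\n", out[100]);

    fftwf_destroy_plan(fwdSig); fftwf_destroy_plan(fwdKer); fftwf_destroy_plan(inv);
    fftwf_free(sig); fftwf_free(ker); fftwf_free(out); fftwf_free(S); fftwf_free(K);
}

In the per-row setting of the question, the plans would be created once and reused for every row, so the (expensive) planning cost is paid only once.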
If FFTW is not cutting it for you, then I would look at Intel's (very affordable) IPP library. They have hand-tuned FFTs for Intel processors that have been optimized for images with various bit depths.
Paul
CenterSpace Software
You may want to add image processing as a tag.
But this article may be of interest, especially with the assumption that the image is a power of 2. You can also see where they optimize the FFT. I expect that the articles you are looking at made some assumptions and then optimized the equations for those.
http://www.gamasutra.com/view/feature/3993/sponsored_feature_implementation_.php
If you want to go faster you may want to use the GPU to actually do the work.
This book may be helpful for you, if you go with the GPU:
http://www.springerlink.com/content/kd6qm361pq8mmlx2/
This answer is to collect progress report feedback on this issue.
Edit, 11 Oct.:
The execution time I measured doesn't reflect the effective time of the FFT. I noticed that when my program ends, the CPU is still busy in system time, up to 42%, for 10 s. If I wait until the CPU is back to 0% before restarting my program, I then get a 15.35 s execution time, which comes from the GPU processing. I get the same time if I comment out the FFT filtering.
So the FFT is in fact currently faster than the GPU and was simply hindered by a competing system task. I don't know yet what this system task is. I suspect it results from the allocation of a huge heap block where I copy the processing result before writing it to disk. For the input data I use a memory map.
I'll now change my code to get an accurate measurement of the FFT processing time. Making it faster is still relevant, because there is room to optimize the GPU processing, for instance by pipelining the transfer of the data to process.

Reasons for NOT scaling-up vs. -out?

As a programmer I make revolutionary findings every few years. I'm either ahead of the curve, or behind it by about π in phase. One hard lesson I learned was that scaling OUT is not always better; quite often the biggest performance gains came when we regrouped and scaled up.
What reasons do you have for scaling out vs. up? Price, performance, vision, projected usage? If so, how did this work for you?
We once scaled out to several hundred nodes that would serialize and cache necessary data out to each node and run maths processes on the records. Many, many billions of records needed to be (cross-)analyzed. It was the perfect business and technical case to employ scale-out. We kept optimizing until we processed about 24 hours of data in 26 hours wallclock. Really long story short, we leased a gigantic (for the time) IBM pSeries, put Oracle Enterprise on it, indexed our data and ended up processing the same 24 hours of data in about 6 hours. Revolution for me.
So many enterprise systems are OLTP and the data are not sharded, but the desire of many is to cluster or scale out. Is this a reaction to new techniques or to perceived performance?
Do applications in general today, or our programming mantras, lend themselves better to scaling out? Should we always take this trend into account in the future?
Because scaling up:
Is limited ultimately by the size of box you can actually buy.
Can become extremely cost-ineffective, e.g. a machine with 128 cores and 128 GB of RAM is vastly more expensive than 16 machines with 8 cores and 8 GB of RAM each.
Some things don't scale up well, such as IO read operations.
By scaling out, if your architecture is right, you can also achieve high availability. A 128-core, 128 GB RAM machine is very expensive, but having a second redundant one is extortionate.
And also to some extent, because that's what Google do.
Scaling out is best for embarrassingly parallel problems. It takes some work, but a number of web services fit that category (thus the current popularity). Otherwise you run into Amdahl's law, which means that to gain speed you have to scale up, not out. I suspect you ran into that problem. IO-bound operations also tend to do well with scaling out, largely because waiting for IO increases the percentage of the work that is parallelizable.
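To get a feel for the Amdahl's law point, here is a tiny sketch (the 95% parallel fraction is just an illustrative assumption):

#include <cstdio>

// Amdahl's law: speedup for parallel fraction p on n workers is 1 / ((1 - p) + p / n).
static double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main()
{
    const double p = 0.95;   // assumed fraction of the job that can be scaled out
    const int workers[] = {1, 2, 4, 8, 16, 64, 256, 1024};
    for (int n : workers)
        std::printf("%5d workers -> %.2fx speedup\n", n, amdahl(p, n));
    // The ceiling is 1 / (1 - p) = 20x, no matter how many boxes you add.
}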
The blog post Scaling Up vs. Scaling Out: Hidden Costs by Jeff Atwood has some interesting points to consider, such as software licensing and power costs.
Not surprisingly, it all depends on your problem. If you can easily partition it into subproblems that don't communicate much, scaling out gives trivial speedups. For instance, searching for a word in 1B web pages can be done by one machine searching 1B pages, or by 1M machines doing 1000 pages each, without a significant loss in efficiency (so with a 1,000,000x speedup). This is called "embarrassingly parallel".
Other algorithms, however, do require much more intensive communication between the subparts. Your example requiring cross-analysis is the perfect example of where communication can often drown out the performance gains of adding more boxes. In these cases, you'll want to keep communication inside a (bigger) box, going over high-speed interconnects, rather than something as 'common' as (10-)Gig-E.
Of course, this is a fairly theoretical point of view. Other factors, such as I/O, reliability, and ease of programming (one big shared-memory machine usually gives a lot fewer headaches than a cluster), can also have a big influence.
Finally, due to the (often extreme) cost benefits of scaling out using cheap commodity hardware, the cluster/grid approach has recently attracted much more (algorithmic) research. As a result, new ways of parallelizing have been developed that minimize communication and thus do much better on a cluster, whereas common knowledge used to dictate that these types of algorithms could only run effectively on big iron machines...

Resources