Is there a way to speed up the Xilinx ISE build process? I have multiple Verilog HDL files in my project. Sometimes I make a minor change in one place in one file, yet the build time is the same as if the whole project had changed. The software does not seem to take any advantage of already-built modules.
I know it's hardware, but is there some way out? I am really troubled by my slow progress. Any other tips to speed up the process would be appreciated.
Yours Truly
Abu Bakar
There are quite a few things you can do to speed up an FPGA build. Among them:
- floorplanning
- design partitioning (Xilinx and Altera have some differences)
- adding false-path and multicycle-path constraints
- playing with synthesis and physical implementation tool options
- choice of the reset scheme can also affect the build time
- not over-constraining timing
I discuss this very topic in more detail in my book.
Thanks.
You can partition the design to help speed up the place and route process in a large design. But to be honest, FPGA builds are always going to be pretty lengthy :(
That's why most of us start out doing builds and debugging on the bench, very quickly move to debugging the code in a simulator (which is very fast to compile - seconds), and only do the loooong build for silicon (hours) once it works there.
Do you have any experience with the SD Erlang project?
It seems to implement many interesting concepts regarding communication-mesh optimizations, and I'm curious whether any of you have used it in production already, or at least in some real project.
SD Erlang repo
Thanks!
The project finished a week ago. The main ideas behind SD Erlang are reducing the number of connections Erlang nodes maintain, while keeping transitivity and a common namespace for groups of nodes. The benchmarks we used (Orbit, Ant Colony Optimization (ACO), and Instant Messenger) showed very promising results. Unfortunately, we didn't have enough human resources to refactor the Sim-Diasca simulation engine. So, no, SD Erlang hasn't been used in a real application yet.
At the moment we are writing up the last deliverable, which will provide an overview of what has been achieved. It will appear here in a few weeks (D6.2). In general we are happy with the results we get using SD Erlang, so there are plans for a follow-up project to continue the work, but currently this is work in progress.
This is not a direct answer, but I will use SD Erlang in an embedded application that needs to scale to hundreds of nodes (small embedded CPUs). From what I have seen, it's ready to be tried out in a real application. To evaluate it further, let's consider the alternatives:
You have only a few distributed nodes: then you probably don't need it and can just connect all the nodes, and for a name registry use either the global module (slow but sturdy) or gproc with the new locks_leader branch, which avoids the quite broken gen_leader that so far prevented using gproc in distributed mode in production (see the sketch after this list).
You need many nodes (how many depends on your hardware and requirements but you start to get into interesting territory with > 70 nodes)
Use SD Erlang and fix whatever problems you encounter in production, or at least report them. It certainly solves a lot of the problems you get with normal Erlang distribution.
Roll your own solution, either by playing with different cookie values or with hidden nodes (hint: you can set different cookie values for different peer nodes). But then you need to roll your own global name registry and management code: this looks like a variant of Greenspun's tenth rule, or closer to Virding's first rule of Erlang: you will probably end up implementing half of SD Erlang yourself.
Don't use Erlang distribution at all. That seems to be the industry standard: for anything involving many nodes or crossing data centers, you shouldn't use Erlang distribution at all but run your own protocols. My personal opinion is to fix Erlang distribution's problems rather than just ditch it. It's much too useful and time-saving when it works for a use case to just give up on it. And I see SD Erlang as the fix for the "too many nodes" problem; it's at least the right starting point.
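For the first and third alternatives, a minimal Erlang sketch may make this concrete. The node names, the cookie, and the registered name my_service are made up for illustration; this is a sketch, not production code.

%% registry_sketch.erl -- the "few nodes" option: fully mesh the nodes
%% and use the stock global module as the cluster-wide name registry.
-module(registry_sketch).
-export([join_and_register/0]).

join_and_register() ->
    %% Connect to the other nodes (connections are transitive by default).
    true = net_kernel:connect_node('worker1@host1'),
    true = net_kernel:connect_node('worker2@host2'),

    %% Register the calling process under a cluster-wide name ...
    yes = global:register_name(my_service, self()),

    %% ... which any connected node can then look up.
    Pid = global:whereis_name(my_service),

    %% For the "roll your own" option: cookies can be set per peer node,
    %% which controls which nodes will talk to each other.
    true = erlang:set_cookie('worker1@host1', some_secret_cookie),
    Pid.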
I really wonder whether using OpenCV's setNumThreads() really allows my code to run in parallel. I've searched a lot on the internet without finding any answer to my question.
Does anyone out there have an answer?
The effect depends greatly on the configuration options you select at CMake configure time; see for example CMakeLists.txt, plus the caveats of the different configuration options:
/* IMPORTANT: always use the same order of defines
1. HAVE_TBB - 3rdparty library, should be explicitly enabled
2. HAVE_CSTRIPES - 3rdparty library, should be explicitly enabled
3. HAVE_OPENMP - integrated to compiler, should be explicitly enabled
4. HAVE_GCD - system wide, used automatically (APPLE only)
5. HAVE_CONCURRENCY - part of runtime, used automatically (Windows only - MSVS 10, MSVS 11)
*/
With those in mind, you can understand the code itself. All that said, the parallelising engine won't do much if you're running an inherently sequential algorithm, which is practically everything under OpenCV... My guess is that if you had several OpenCV programs running in parallel, you could see a meaningful difference.
I feel the need to build on miguelao's answer: most of OpenCV's functionality is NOT multithreaded. setNumThreads only affects multithreaded functions, such as calcOpticalFlowPyrLK.
Normally by default, OpenCV will use as many threads as you have cores. So setNumThreads won't give you a speed gain.
My main use for it is disabling multithreading, so that I can do my own threading with coarser granularity.
I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and JQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my up-time and to allow me to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, GIT, TeamCity and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all now pretty comfortable with it. I'm wondering whether a process like we have is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang, is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a windows network) but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches using GIT, that get their own mini-QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9%-and-better kool-aid is a bit dangerous to drink blindly. Yes, you can get some astounding stability figures, but you need to write your program in a way that facilitates this. It does not come for free, and it does not happen by magic either. Your application must be designed so that its subsystems can recover when something fails. OTP will help you a lot here - but it still takes time to learn.
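To make "designed so that subsystems recover" concrete, here is a minimal, hypothetical OTP supervisor; the child module my_worker is made up, and a real system would compose several of these into a supervision tree.

%% my_sup.erl -- a one_for_one supervisor: if my_worker crashes, it is
%% restarted without taking the rest of the application down.
-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    SupFlags = {one_for_one, 5, 10},    %% at most 5 restarts within 10 seconds
    Child = {my_worker,
             {my_worker, start_link, []},
             permanent, 5000, worker, [my_worker]},
    {ok, {SupFlags, [Child]}}.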
Continuous integration: Easily done. If you can call rebar or make through your build-bot you are probably set here already. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters) - all can be run from rebar.
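To illustrate how little ceremony the test side needs, a hypothetical eunit module for a made-up mymath module could look like this; rebar picks up modules ending in _tests when you run its eunit command.

%% mymath_tests.erl -- a hypothetical eunit suite.
-module(mymath_tests).
-include_lib("eunit/include/eunit.hrl").

add_test() ->
    ?assertEqual(4, mymath:add(2, 2)).

divide_by_zero_test() ->
    ?assertError(badarith, mymath:divide(1, 0)).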
QA cycle: I have not had any problems here. Again, if you use rebar you can build embedded releases, which are minimized Erlang VMs you can copy anywhere and run (they are self-contained). You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit so you have an overlay of newer fixes. Your options are numerous. Git already helps you a lot here.
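The code-path overlay mentioned above can be as simple as the following, typed into the running node's shell; the patch directory and module name are made up.

%% Put a directory of patched .beam files ahead of the release's own
%% code path, then reload the affected module on the live node.
true = code:add_patha("/opt/myapp/patches/ebin"),
code:purge(my_module),
{module, my_module} = code:load_file(my_module).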
Environmentalization: Easily done.
Logging centralization: Look into SASL and the error_logger. You can do anything you want here.
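As a small, hedged example of what "anything you want" means here: emit standard reports on any node and install your own report handler to forward them to a central collector (my_central_handler is a made-up gen_event callback module you would write yourself).

%% Standard reports that SASL and the error_logger will pick up.
error_logger:info_msg("node ~p started ~p workers~n", [node(), 8]),
error_logger:error_report([{subsystem, billing}, {reason, timeout}]),

%% Centralization is then a matter of adding your own handler.
error_logger:add_report_handler(my_central_handler).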
Alerting: The system can be probed for all you need (introspection is strong in Erlang). But you might have to code a bit to hook it up to the system of your choice.
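A sketch of the kind of probe meant here; the thresholds are arbitrary, and a real setup would push the numbers into SCOM, Nagios, or whatever you already run.

%% check_node.erl -- poll a few VM counters and flag anything odd.
-module(check_node).
-export([check/0]).

check() ->
    Procs  = erlang:system_info(process_count),
    MemMb  = erlang:memory(total) div (1024 * 1024),
    RunQ   = erlang:statistics(run_queue),
    Report = [{node, node()}, {processes, Procs},
              {memory_mb, MemMb}, {run_queue, RunQ}],
    case Procs > 100000 orelse MemMb > 4096 of
        true  -> {alert, Report};    %% hand this to your alerting system
        false -> {ok, Report}
    end.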
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
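Concretely, a config file can be a list of plain Erlang terms that you read back with file:consult/1; the file name and keys below are made up.

%% myapp.config -- one term per line, each ending in a dot:
%%   {listen_port, 8080}.
%%   {db_nodes, ['db1@host1', 'db2@host2']}.
%%   {log_level, info}.

%% Reading and checking it is ordinary Erlang:
{ok, Terms} = file:consult("myapp.config"),
{listen_port, Port} = lists:keyfind(listen_port, 1, Terms).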
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
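For instance, a quick fprof run over a single call looks roughly like this (lists:sort on a reversed sequence stands in for your own hot function):

%% Profile one call, then turn the trace into a per-function analysis.
fprof:apply(lists, sort, [lists:seq(10000, 1, -1)]),
fprof:profile(),
fprof:analyse([{dest, "fprof.analysis"}]).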
Windows interaction: Dunno. (Bias: last time I used windows professionally was in 1998 or so).
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process, and it might resist. It is a new environment, so new approaches will be needed in places, and you should expect to adapt and work around limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands, and get a constant high from working with an interesting language solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then better cut your teeth on some smaller subsystems first and then work towards the larger goal. Getting the full benefit of the system takes months if not years. Getting partial benefit can be had in weeks though.
We are planning to develop a data-mining package for Windows. The program core / calculation engine will be developed in F#, with GUI stuff / DB bindings etc. done in C# and F#.
However, we have not yet decided on the model implementations. Since we need high performance, we probably can't use managed code here (any objections?). The question is: is it reasonable to develop the models in FORTRAN, or should we stick to C (or maybe C++)? We are looking into using OpenCL at some point for suitable models - it feels funny having to go from managed code -> FORTRAN -> C -> OpenCL invocation in these situations.
Any recommendations?
F# compiles to the CLR, which has a just-in-time compiler. It's a dialect of ML, which is strongly typed, allowing all of the nice optimisations that go with that type of architecture; this means you will probably get reasonable performance from F#. For comparison, you could also try porting your code to OCaml (IIRC this compiles to native code) and see if that makes a material difference.
If it really is too slow, then see how far scaling hardware will get you. With the performance available through a modern PC or server, it seems unlikely that you would need to go to anything exotic unless you are working with truly brobdingnagian data sets. Users with smaller data sets may well be OK on an ordinary PC.
Workstations give you perhaps an order of magnitude more capacity than a standard desktop PC. A high-end workstation like an HP Z800 or XW9400 (similar kit is available from several other manufacturers) can take two 4- or 6-core CPU chips, tens of gigabytes of RAM (up to 192 GB in some cases), and has various options for high-speed I/O like SAS disks, external disk arrays or SSDs. This type of hardware is expensive but may be cheaper than a large body of programmer time. Your existing desktop support infrastructure should be able to handle this sort of kit. The most likely problem is compatibility issues running 32-bit software on a 64-bit O/S. In this case you have various options like VMs or KVM switches to work around the compatibility issues.
The next step up is a 4- or 8-socket server. Fairly ordinary Wintel servers go up to 8 sockets (32-48 cores) and perhaps 512 GB of RAM - without having to move off the Wintel platform. This gives you a fairly wide range of options within your platform of choice before you have to go to anything exotic.
Finally, if you can't make it run quickly in F#, validate the F# prototype and build a C implementation using the F# prototype as a control. If that's still not fast enough you've got problems.
If your application can be structured in a way that suits the platform, then you could look at a more exotic platform. Depending on what will work with your application, you might be able to host it on a cluster or cloud provider, or build the core engine on a GPU, Cell processor, or FPGA. However, in doing this you're taking on (quite substantial) additional costs and exotic dependencies that might cause support issues. You will probably also have to bring in a third-party consultant who knows how to program the platform.
After all that, the best advice is: suck it and see. If you're comfortable with F# you should be able to prototype your application fairly quickly. See how fast it runs and don't worry too much about performance until you have some clear indication that it really will be an issue. Remember, Knuth said that premature optimisation is the root of all evil about 97% of the time. Keep a weather eye out for issues and re-evaluate your strategy if you think performance really will cause trouble.
Edit: If you want to make a packaged application then you will probably be more performance-sensitive than otherwise. In this case performance will probably become an issue sooner than it would with a bespoke system. However, this doesn't affect the basic 'suck it and see' principle.
For example, at the risk of starting a game of buzzword bingo, if your application can be parallelized and made to work on a shared-nothing architecture you might see if one of the cloud server providers [ducks] could be induced to host it. An appropriate front-end could be built to run locally or through a browser. However, on this type of architecture the internet connection to the data source becomes a bottleneck. If you have large data sets then uploading these to the service provider becomes a problem. It may be quicker to process a large dataset locally than to upload it through an internet connection.
I would advise not to bother with optimizations yet. First try to get a working prototype, then find out where computation time is spent. You can probably move the biggest bottlenecks out into C or Fortran when and if needed -- then see how much difference it makes.
As they say, often 90% of the computation is spent in 10% of the code.
The software development team in my organization (which develops APIs - middleware) is gearing up to adopt at least one best practice at a time. The following are on the list:
Unit Testing (in its real sense),
Automated unit testing,
Test Driven Design & Development,
Static code analysis,
Continuous integration capabilities, etc..
Can someone please point me to a study that shows which 'best' practices, when adopted, have a better ROI and improve software quality faster? Is there such a study out there?
This should help me (support my claim to) prioritize the implementation of these practices.
"a study that shows which 'best' practices when adopted have a better ROI, and improves software quality faster"
Wouldn't that be great! If there was such a thing, we'd all be doing it, and you'd simply read it in DDJ.
Since there isn't, you have to make a painful judgement.
There is no "do X for an ROI of 8%". Some of the techniques require a significant investment. Others can be started for free.
Unit Testing (in its real sense) - Free - ROI starts immediately.
Automated unit testing - not free - requires automation.
Test Driven Design & Development - Free - ROI starts immediately.
Static code analysis - requires tools.
Continuous integration capabilities - inexpensive, but not free
You can't know the ROI. So you can only prioritize on investment. Some things are easier for people to adopt than others. You have to factor in your team's willingness to embrace the technique.
Edit. Unit Testing is Free.
"time spend coding the test could have been taken to code the next feature on the list"
True, testing means developers do more work, but support does less work debugging. I think this is not a 1:1 trade. A little more time spent writing (and passing) formal unit tests dramatically reduces support costs.
"What about legacy code?"
The point is that free is a matter of managing cost. If you add unit tests to legacy code, the cost isn't free. So don't do that. Instead, add unit tests as part of maintenance, bug-fixing and new development -- then it's free.
"Traning is an issue"
In my experience, it's a matter of a few solid examples, and management demand for unit tests in addition to code. It doesn't require more than an all-hands meeting to explain that unit tests are required and here are the examples. Then it requires everyone report their status as "tests written/tests passed". You aren't 60% done, you're 232 out of 315 tests.
"it's only free on average if it works for a given project"
Always true, good point.
"require more time, time aren't free for the business"
You can either write bad code that barely works and requires a lot of support, or you can write good code that works and doesn't require a lot of support. I think that the time spent getting tests to actually pass reduces support, maintenance and debugging costs. In my experience, the value of unit tests for refactoring dramatically reduces the time to make architectural changes. It reduces the time to add features.
"I do not think either that it's ROI immediately"
Actually, one unit test has such a huge ROI that it's hard to characterize. The first test to pass becomes the one thing that you can really trust. Having just one trustworthy piece of code is a time-saver, because it's one less thing you have to spend a lot of time thinking about.
War Story
This week I had to finish a bulk data loader; it validates and loads 30,000-row files we accept from customers. We have a nice library that we use for uploading some internally developed files. I wanted to use that module for the customer files. But the customer files are different enough that I could see the library module's API wasn't really suitable.
So I rewrote the API, reran the tests and checked the changes in. It was a significant API change. Much breakage. Much grepping the source to find every reference and fix them.
After running the relevant tests, I checked it in. And then I reran what I thought was a not-closely-related test. Oops. It had a failure. It was testing something that wasn't part of the API, which also broke. Fixed. Checked in again (an hour late).
Without basic unit testing, this would have broken in QA, required a bug report, and required debugging and rework. Look at the labor: 1 hour of a QA person's time to find and report the bug + 2 hours of developer time to reconstruct the QA scenario and locate the problem + 1 hour to determine what to fix.
With unit testing: 1 hour to realize that a test didn't pass, and fix the code.
Bottom Line. Did it take me 3 hours to write the test? No. But the project got three hours back for my investment in writing the test.
Are you looking for something like this?
The ROI of Software Process Improvement: A New 36-Month Case Study, by Capers Jones
Agile Practices with the Highest Return on Investment
You're assuming that the list you present constitutes a set of "best practices" (although I'd agree that it probably does, btw).
Rather than try to cherry-pick one process change, why not examine your current practices?
Ask yourself this:
Where are you feeling the most pain? What might you change to reduce/eliminate it?
Repeat until pain-free.
You don't mention code reviews in your list. For our team, this is probably what gave us the greatest ROI (yes, the investment was steep, but the return was even greater). I know Code Complete (the original version, at least) mentioned statistics on the efficiency of reviews in finding defects vs. testing.
There are some references for ROI with respect to unit testing and TDD. See my response to this related question; Is there hard evidence of the ROI of unit testing?.
There is such a thing as a "local optimum". You can read about it in Goldratt's book The Goal. It says that an innovation is of value only if it improves overall throughput. The decision to implement a new technology should be related to the critical paths inside your projects. If a technology speeds up a process that is already fast enough, it only creates an unnecessary backlog of finished modules, which does not necessarily improve the overall speed of project development.
I wish I had a better answer than the other answers, but I don't, because what I think really pays off is not conventional at present. That is, in design, to minimize redundancy. It is easy to say but takes experience.
In data it means keeping the data normalized, and when it cannot be, handling it in a loose fashion that can tolerate some inconsistency, not relying on tightly-bound notifications. If you do this, it simplifies the code a lot and reduces the need for unit tests.
In source code, it means if some of your "input data" changes at a very slow rate, you could consider code generation, as a way to simplify source code and get additional performance. If the source code is simpler, it is easier to review, and the need for testing it is reduced.
Not to be a grump, but I'm afraid, from the projects I've seen, there is a strong tendency to over-design, with way too many "layers of abstraction" whose correctness would not have to be questioned if they weren't even there.
One practice at a time is not going to give the best ROI. The practices are not independent.