We currently use SonarQube 4.3.3 on a fairly big project with 30+ developers, 300k LOC and 2200 unit tests, and it currently helps us track issues in our continuous integration cycle (merge requests, etc.).
With the increase in custom XPath rules, we're seeing a big increase in analysis time. Moreover, we currently can't run more than one analysis at a time (the second one hangs on "waiting for a checkpoint from Job #xxx"), which usually slows things down. We're seeing build times (Jenkins + JUnit + SonarQube) of around 30 minutes. With the release of the 5.x branch, we'd like to know whether the effort to update everything is worthwhile, so we've got some questions:
Are Java rules faster than the old XPath rules to process?
Will we be able to run more than one analysis at the same time?
I saw on SonarSource's JIRA that the 5.2 release (which will detach the analysis from the DB) is due on the 23rd of September. Will this really happen? Are users expected to see a big performance increase from this?
We're curious to hear real-world cases on this, so any input will be highly appreciated.
Thanks!
I work on a large open source project based on Ruby on Rails. We use GitHub, Travis, Code Climate and others. Our test suite takes a long time to run, and we have many pull requests opened and updated throughout the day, which creates a large backlog. We even implemented a build killer in our bot to prevent unnecessary builds, but we still have a backlog. Is it possible for us to host our own runner to increase the number of workers?
There's Travis CI Enterprise (https://enterprise.travis-ci.com/), which lets people host their own runners, but that's probably only for paying customers. Have you guys swapped over to the container-based builds? That might speed things up a bit. What's the project?
We have around 32 data marts loading 200+ tables, of which 50% are on an Oracle 11g database, 30% are on 10g, and the remaining 20% are flat files.
Lately we have been facing performance issues while loading the data marts.
Database parameters as well as network parameters look fine, and since throughput is decreasing drastically, we are now of the opinion that it is Informatica that has the problem.
Recently, when throughput had gone down and the server was at about 90% utilization, the Informatica application was restarted, and performance thereafter was a little better than before.
So my question is: should we have an Informatica restart as a scheduled activity? Does a restart actually improve the application's performance, or are there other factors that could play a role here?
What you have here is a systemic problem, but you have not established which component(s) of the system are the cause.
Are all jobs showing exactly the same degradation in performance? If not, what is the common characteristic of those that are? Not all jobs will have the same reliance on the Informatica server -- some will depend more on the performance of their target system(s), some on their source system(s) -- so I would be amazed if all showed exactly the same level of degradation.
What you have here is an exercise in data gathering, and then turning that data into useful information.
If you can isolate the problem to only certain jobs then I would take a log file from a time when the system is performing well, and from a time when it is not, and compare them directly, looking for differences in the performance of their components. You can also look at any database monitoring tools for changes in execution plan.
Rebooting servers? Maybe, but that is not necessarily the solution -- the real problem is the lack of data you have to diagnose your system.
Yes, it is good to do a restart every quarter.
It will refresh the Integration Service cache.
Delete files from the cache and storage directories before you restart.
Since you said you have recently seen reduced performance, it might be due to various reasons.
Some tips that may help:
1. Ensure all indexes are in a valid and compiled state.
2. If you are calling a procedure via a workflow, check the EXPLAIN plan and cost to ensure it is not doing a full table scan (the cost should be low).
3. Gather stats on the source or target tables (especially those which have deletes): this helps with defragmentation by reclaiming unallocated space (see DBMS_STATS).
4. It is always good to have a housekeeping job scheduled weekly to do the above checks on indexes, remove temp/unnecessary files, and gather stats (analyze indexes and tables).
Some best practices are covered here: performance tips
I've been looking around online at ways of improving our build time (which is currently ~30-40 minutes, depending on which build agent gets the task), and one common theme I've seen is to use CI builds.
I understand the logic behind this, and it makes sense that it would reduce the time each build takes. Our problem, however, is that building on every check-in is a pointless use of our resources, because in our development branch we only keep the latest successful build. This means that if two people check in within a short space of time, whoever checked in last will be the one whose build is kept.
It's for this reason (along with disk space limitations) that we changed to using Rolling Builds, so that we only build the development branch at most once every 45 minutes (obviously we could manually trigger builds on top of that).
What I want to know (and haven't been able to find anywhere) is whether there's a way of combining rolling builds AND continuous integration: keep building only once every 45 minutes, but only get and build the files that have changed.
I'm not even sure it's possible, and if not then I'll look into other ways, but this seems like something that should be possible.
I have noticed that after my Grails app has been deployed for about 2 weeks, performance degrades significantly, and I have to redeploy. I am using the Spring Security plugin and caching users. My first inclination is that it has something to do with this and the session cache size, but I'm not sure how to go about verifying this.
Does it sound like I'm on the right track? Has anyone else experienced this and narrowed down the problem? Any help would be great.
Thanks!
Never guess where to optimize; your guess will be wrong.
Get a heap dump and profile it a little (VisualVM worked fine for me).
It might be a memory leak, like the one that happened to me. What is your environment: OS, web server, Grails version?
I would recommend getting YourKit (VisualVM provides limited information) and using it to profile your application in production (if possible).
Alternatively, you could create a performance test (with JMeter, for example) and test the pieces of your application that you suspect are causing the performance degradation.
Monitoring memory, CPU, threads, GC and so on while running some simple JMeter performance tests will definitely find the culprit. This way you can easily re-test your system over time and see if you have introduced new "performance killing" bugs.
Performance testing tools/services:
JMeter
Grinder
Selenium (can performance test with Selenium Grid, though that needs hardware)
BrowserMob (commercial; uses Selenium + Selenium Grid)
NeoLoad by Neotys (commercial, trial version available)
HP LoadRunner (commercial, the big fish on the market, trial version available)
I'd also look into installing the app-info plugin and turning on a bunch of its options (especially around Hibernate) to see if things are getting out of control there. It could be something that's filling the Hibernate session but never closing a transaction.
Another area to look at is whether you're doing anything with the Groovy template engine. It has a known memory leak that's essentially unfixable unless you cache the class/results. We recently fixed a problem around this in our app. If you're seeing any perm gen errors, this could be the case.
Try installing the JavaMelody plugin. In our case it helped us find a problem with GC.
I have 357 tests (534 assertions) for my app (using Shoulda). The whole test suite runs in around 80 seconds. Is this time OK? I'm just curious, since this is one of my first apps where I write tests extensively. No fancy stuff in my app.
Btw: I tried to use an in-memory SQLite3 database, but the results were surprisingly worse (around 83 seconds). Any clues here?
I'm using a MacBook with 2GB of RAM and a 2GHz Intel Core Duo processor as my development machine.
I don't feel this question is Rails-specific, so I'll chime in.
The main thing about testing is that it should be fast enough for you to run the tests a lot (as in, all the time). Also, you may wish to split your tests into a few different sets, such as 'long-running tests' and 'unit tests'.
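If you end up on RSpec, one way to carve out a slow set is tag filtering; this is only a sketch, and the :slow tag, the SLOW environment variable, and the example names are made up for illustration:

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Skip examples tagged :slow unless SLOW=1 is set in the environment, so the
      # everyday run stays fast and the full set runs on demand (or in CI).
      config.filter_run_excluding :slow => true unless ENV['SLOW']
    end

    # spec/models/report_spec.rb
    describe "monthly report generation", :slow => true do
      it "crunches a year of data" do
        # the long-running, database-heavy example lives here
      end
    end

Then a plain run executes only the fast set, and running with SLOW=1 executes everything.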
One last option to consider, if your database setup is time-consuming, would be to create your domain by restoring from a backup, rather than doing a whole bunch of inserts.
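A rough sketch of what that restore could look like; the database name, dump path and helper name below are hypothetical:

    # test/test_helper.rb (hypothetical helper)
    # Restore a prepared baseline dump once, instead of rebuilding the schema and
    # inserting seed rows in every setup block.
    def restore_test_database
      db   = "myapp_test"                              # assumed test database name
      dump = File.expand_path("../fixtures/baseline.sql", __FILE__)
      system("mysql -u root #{db} < #{dump}") or raise "test database restore failed"
    end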
Good luck!
You should try this method: https://github.com/dchelimsky/rspec/wiki/spork---autospec-==-pure-bdd-joy- It uses Spork to spin up a couple of processes that stay running and batch out your tests. I found it to be pretty quick.
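Roughly, a Spork-ified spec_helper.rb splits the work into a one-off expensive part and a cheap per-run part; this is only a sketch, and the exact requires depend on your Rails/RSpec versions:

    # spec/spec_helper.rb
    require 'rubygems'
    require 'spork'

    Spork.prefork do
      # Loaded once, when the Spork server starts: the expensive Rails/gem boot.
      ENV["RAILS_ENV"] ||= 'test'
      require File.expand_path("../../config/environment", __FILE__)
      require 'rspec/rails'
    end

    Spork.each_run do
      # Runs before each batch of specs: keep this block cheap.
    end

You then start spork in one terminal and point your spec runs at it (e.g. with the --drb flag) in another.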
It really depends on what your tests are doing. Test code can be written efficiently or not in exactly the same way as any other code can.
One obvious optimisation in many cases is to write your test code in such a way that everything (or as much as possible) is done in memory, as opposed to doing many reads from and writes to the database. However, you may have to change your application code to have the right interfaces to achieve this.
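For example (Shoulda-style, with a hypothetical Order model and total_with_discount method), building objects in memory instead of persisting them avoids the INSERTs entirely when the test only exercises a calculation:

    class OrderTest < ActiveSupport::TestCase
      context "discount calculation" do
        should "apply a 10% discount without touching the database" do
          order = Order.new(:subtotal => 200)      # built in memory, no INSERT
          order.line_items.build(:price => 50)     # association built in memory too
          assert_equal 225, order.total_with_discount
        end

        # Slower equivalent: every create! is a round-trip to the database.
        should "apply a 10% discount (persisted version)" do
          order = Order.create!(:subtotal => 200)
          order.line_items.create!(:price => 50)
          assert_equal 225, order.total_with_discount
        end
      end
    end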
Large test suites can take some time to run.
I generally use "autospec -f" when developing; this only runs the specs that have changed since the last run, which makes it much more efficient to keep your tests running.
Of course, if you are really serious, you will run a continuous integration setup like CruiseControl: this will automate your build process and run in the background, checking out your latest code and running the suite.
If you're looking to speed up the runtime of your test suite, then I'd use a test server such as this one from Roman Le Négrate.
You can experiment with preloading fixtures, but it will be harder to maintain and, IMHO, not worth its speed improvement (20% maximum, I think, but it depends).
It's known that SQLite is slower than MySQL/PostgreSQL, except for very small, tiny DBs.
As someone already said, you can put the MySQL (or other DB) data files on some kind of RAM disk (I use tmpfs on Linux).
PS: we have 1319 RSpec examples now, and the suite runs in 230 seconds on a 3GHz Core 2 Duo with 4GB of RAM, which I think is fine. So yours is fine too.
As an alternative to in-memory SQLite, you can put a MySQL database on a RAM disk (on Windows) or on tmpfs (on Linux).
MySQL has very efficient buffering, so putting the database in memory does not help much unless you update a lot of data really often.
More significant is how you handle test isolation and data preparation for each test.
You can use transactional fixtures. That means each test is wrapped in a transaction, so the next test starts from the initial state.
This is faster than cleaning up the database before each test.
There are situations when you want to use both transactions and explicit data erasing; here is a good article about it: http://www.3hv.co.uk/blog/2009/05/08/switching-off-transactions-for-a-single-spec-when-using-rspec/
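A minimal sketch of what that looks like in a spec helper; the exact configure call differs between RSpec versions, and the "bulk import" group and Import model are made up to illustrate the per-group override from the linked article:

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Wrap each example in a transaction that is rolled back afterwards, so every
      # example starts from the same initial state without a full table cleanup.
      config.use_transactional_fixtures = true
    end

    # For the rare spec that needs real commits (e.g. code that manages its own
    # transactions), switch it off for that group and erase the data explicitly.
    describe "bulk import" do
      self.use_transactional_fixtures = false
      after(:each) { Import.delete_all }
    end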