What is the impact of the following configuration option?
<!-- Update in-memory cache if xml file is changed -->
<XmlContentCheckForDiskChanges>False</XmlContentCheckForDiskChanges>
Questions I have in mind:
Does this impact performance, and how?
Does it improve it?
Does it cause an app recycle every time this happens?
The documentation (http://our.umbraco.org/documentation/Using-Umbraco/Config-files/umbracoSettings/) does not provide much useful help.
I am working on an application which has multi-level caching: Guava cache -> Redis -> database, so that lookups do not have to go directly to the database. But I read somewhere that multi-level caching might not always be a good idea, and I am totally confused now, because in my understanding this should decrease the load on the database. Can someone help me understand this?
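To make the read path concrete, here is a rough sketch of what I mean (the class and helper names are hypothetical placeholders; I am assuming Guava's Cache and a Redis client such as Jedis behind the loadFromRedis/storeInRedis helpers):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import java.util.concurrent.TimeUnit;

    // Read path: local Guava cache -> Redis -> database. The loadFromRedis/
    // storeInRedis/loadFromDatabase helpers are placeholders for the real code.
    public class TieredLookup {
        private final Cache<String, String> localCache = CacheBuilder.newBuilder()
                .maximumSize(10_000)
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .build();

        public String get(String key) {
            String value = localCache.getIfPresent(key);   // 1. in-process, fastest
            if (value != null) {
                return value;
            }
            value = loadFromRedis(key);                    // 2. shared across app instances
            if (value == null) {
                value = loadFromDatabase(key);             // 3. authoritative source
                storeInRedis(key, value);                  // e.g. jedis.setex(key, 300, value)
            }
            localCache.put(key, value);
            return value;
        }

        private String loadFromRedis(String key) { return null; /* e.g. jedis.get(key) */ }
        private void storeInRedis(String key, String value) { /* placeholder */ }
        private String loadFromDatabase(String key) { return "value-for-" + key; }
    }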
PS: I am new to this platform, so please excuse the vague way of asking this question.
Since Neo4j works primarily in memory, I was wondering if it would be advantageous to enable hugepages (https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt) in my Linux kernel, and then use -XX:+UseLargePages or maybe -XX:+UseHugeTLBFS in the (OpenJDK 8) JVM?
If so, what rule of thumb should I use to decide how many hugepages to configure?
The Neo4j Performance Guide (http://neo4j.com/docs/stable/performance-guide.html) does not mention this, and Google didn't turn up anyone else discussing it (in the first couple of search pages anyway), so I thought I'd ask.
I'm struggling to get acceptable performance from my new Neo4j instance (2.3.2-community), so every little bit will help. I want to know whether this is worth trying before I bring down the database to change JVM flags... I'm hoping someone else has already done some experiments along these lines.
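For what it's worth, my own back-of-the-envelope assumption (please correct me if this is wrong) would be to cover at least the Java heap with 2 MB huge pages plus a little headroom, e.g. for an 8 GB heap:

    8 GB / 2 MB per huge page = 4096 pages
    sysctl -w vm.nr_hugepages=4224    # 4096 plus ~3% headroom

but I have no idea whether Neo4j's off-heap page cache should be counted as well, which is part of why I'm asking.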
Since Neo4j does its own file paging and doesn't rely on the OS to do this, it should be advantageous or at least not hurt. Huge pages will reduce the probability of TLB cache misses when you use a large amount of memory, which Neo4j often would like to do when there's a lot of data stored in it.
However, Neo4j does not directly use hugepages even though it could and it would be a nice addition. This means you have to rely on transparent huge pages and whatever features the JVM provides. The transparent huge pages can cause more-or-less short stalls when smaller pages are merged.
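If you do want to experiment, the relevant HotSpot flags are, as far as I know (verify them against your exact JVM build):

    -XX:+UseLargePages            # use pre-allocated hugetlbfs pages; requires vm.nr_hugepages to be configured
    -XX:+UseTransparentHugePages  # Linux only; relies on THP and inherits its possible stalls
    -XX:+AlwaysPreTouch           # touch the whole heap at startup so the cost is paid once, up front

For Neo4j 2.x these would go into the JVM options in conf/neo4j-wrapper.conf, if I remember the file name correctly.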
If you have a representative staging environment then I advise you to make the changes there first, and measure their effect.
Transparent huge pages are mostly a problem for programs that use mmap because I think it can lead to changing the size of the unit of IO, which will make the hard-pagefault latency much higher. I'm not entirely sure about this, though, so please correct me if I'm wrong.
The JVM actually does use mmap for telemetry and tooling, through a file in /tmp, so make sure this directory is mounted on tmpfs to avoid gnarly IO stalls, for instance during safe-points (!!!). Always do this, even if you don't use huge pages.
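Something along these lines should do it (adjust the size to taste), or alternatively you can turn the shared performance file off entirely:

    # /etc/fstab entry to keep /tmp in memory
    tmpfs  /tmp  tmpfs  defaults,noatime,size=1G  0  0

    # or, if you can live without jps/jstat seeing the process:
    -XX:+PerfDisableSharedMem

The file in question is the hsperfdata file; -XX:+PerfDisableSharedMem removes the mmap entirely, at the cost of the monitoring tools that read it.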
Also make sure you are using the latest Linux kernel and the latest Java version.
You may be able to squeeze some percentage points out of it with tuning G1, but this is a bit of a black art.
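Common starting points, for what they are worth (the effect is very workload dependent, so measure before and after):

    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=200
    -XX:+ParallelRefProcEnabled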
I am attempting to determine what could be causing 20+ second response times from a Rails 3 application running in EC2 and using ElastiCache. I have reason to believe the problem is in fact cache related, but I have no numbers to prove it. I'd like to get those numbers. For the sake of completeness, we're running the application atop Ubuntu 12.04.
Searching Google, I found nothing directly relevant to my situation, and no StackOverflow topics I could find were even remotely relevant to my situation. If anyone can point me to some documentation on the matter, I'd be quite appreciative. Thank you!
I've found the best tool for this to be New Relic.
http://newrelic.com/
I don't work for them and get no benefit from you trying them.
They have a free level that you can start with. If you go up to the non-free version you can literally trace all your requests through different models and into the database telling you how long the app spent in each section. It's a great tool for profiling.
Do you, by any chance have access to standard web logs including URLs and response times?
I faced a similar situation, searched the web, found nothing relevant, and eventually decided to roll my own, which I shared in this SO post:
Profiling a multi-tiered, distributed, web application (server side)
While it is far from perfect and may be too high level for some use cases, it gave me a pretty quick and broad insight into where the application I was trying to profile was spending most of its time, and what the slowest parts were (a toy sketch of the log-crunching idea follows the list below). HTH.
The best parts of it are that:
It is 100% platform and programming language independent.
It is a 100% free software solution.
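Purely as an illustration of the idea (this is not the tool from the linked post), a toy version can be sketched in a few lines of Java. It assumes a pre-digested log with one "url response_time_ms" pair per line, an assumption you would adapt to your real log format:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    // Reads "url response_time_ms" lines and prints the 20 URLs with the highest
    // total time, together with average time and hit count.
    public class SlowestUrls {
        public static void main(String[] args) throws IOException {
            Map<String, long[]> stats = new HashMap<>();   // url -> {hits, totalMs}
            for (String line : Files.readAllLines(Paths.get(args[0]))) {
                String[] parts = line.trim().split("\\s+");
                if (parts.length < 2) continue;
                long[] s = stats.computeIfAbsent(parts[0], k -> new long[2]);
                s[0]++;
                s[1] += Long.parseLong(parts[1]);
            }
            stats.entrySet().stream()
                 .sorted((a, b) -> Long.compare(b.getValue()[1], a.getValue()[1]))
                 .limit(20)
                 .forEach(e -> System.out.printf("%-60s total=%dms avg=%dms hits=%d%n",
                         e.getKey(), e.getValue()[1],
                         e.getValue()[1] / e.getValue()[0], e.getValue()[0]));
        }
    }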
This is not a question... I'm sorry if that is against the rules, but Google rates this site very high and this information would have helped me a lot, so I'm sharing it. If it is not acceptable we can delete this message and the 'harm' stays limited.
I switched from MyFaces-2.0.10 with RichFaces-3.3.3 to MyFaces-2.1.6 with RichFaces-4.2-Final and the memory usage of our application dropped enormously: from a staggering 50MB+ per session to almost none. We used to consume at least 1GB for every 20 users, and that dropped to less than 200MB for any number of users (<50 tested). Another effect is that it all seems faster, but we did not benchmark that.
It was a lot of work to migrate and it took two programmers about 4 months (total 30 hours/week) to learn the new ways and get it implemented. But that obviously will depend on the size of the project. We had to cope with a lot of bugs/issues in RF and MyFaces that are now fixed. I think that I could do it in a third of the time with what I know now. BalusC would do it in a week :)
So my advice is that if you have memory issues, it might be an idea to start upgrading. It has to be done someday, so why not now?
MAG,
Milo
It is great that people have started to notice the big improvements made in MyFaces Core 2.1.6. A lot of cool tricks have been done, but only in 2.1.6 were the last pieces added, and the final effect is a big improvement in memory usage, code speed and session size. MyFaces Core 2.1.7 will contain another batch of improvements too, so stay tuned by following the MyFaces Team Twitter.
State saving has been improved since JSF 2.0: "partial state saving" was introduced, which saves the state of only the relevant components (UIForm, UIInput, etc.) instead of the entire component tree (UIViewRoot). As the view state is by default saved in the server-side session, this will indeed drop the memory usage, certainly if you have relatively large views.
While RichFaces 3.3.x, which is designed for JSF 1.x, works on JSF 2.0 (with some hacks), it didn't utilize the new JSF 2.0 partial state saving at all. RichFaces 4.x, which is designed for JSF 2.x, supports it, so you would surely see a drop in memory usage when it is done right.
To improve it further, you could consider setting the state saving method to client, at the cost of only a minimal increase in network bandwidth. This way the memory usage is reduced further and any potential ViewExpiredException would be eliminated.
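For completeness, the state saving method is switched with a standard context parameter in web.xml (partial state saving itself is already enabled by default in JSF 2.x):

    <context-param>
        <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
        <param-value>client</param-value>
    </context-param>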
See also:
Why JSF saves the state of UI components on server?
We have decided to use Rails/Oracle for a new project. My understanding is that ActiveRecord does not support bind variables, that this hamstrings Oracle's ability to cache queries, and that it leads to significant performance problems. Cursor sharing has been said to help, but not completely solve, this problem.
If this description is reasonably accurate, what is the actual impact? Is it simply a bad idea to use ActiveRecord with Oracle, or is there a set of best practices that can reduce the impact to some acceptable level?
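To illustrate what I mean: without bind variables, each distinct literal yields a new SQL text that Oracle has to hard-parse and cache separately, e.g.

    SELECT * FROM users WHERE id = 41
    SELECT * FROM users WHERE id = 42

whereas with a bind variable the text stays constant and the cached plan can be reused:

    SELECT * FROM users WHERE id = :id

(the table and column names here are made up purely for the example).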
It doesn't appear that any support has been released for bind variables on Oracle with ActiveRecord. This Oracle tutorial describes the cursor sharing approach:
http://www.oracle.com/technology/pub/articles/mearelli-optimizing-oracle-rails.html
Whether you will have significant performance problems really depends on your application and underlying hardware.
Setting cursor sharing to SIMILAR should improve performance a good deal compared to doing nothing, but you will really have to test your application with production data and production load to see what your performance will be and whether it will be satisfactory.
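If you do try it, cursor sharing is an Oracle initialization parameter that can also be set per session, for example:

    ALTER SESSION SET cursor_sharing = 'SIMILAR';

With SIMILAR (or FORCE), Oracle replaces the literals in the statement text with system-generated binds before looking it up in the shared pool, which is what lets otherwise-distinct ActiveRecord statements share a cached plan.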