Portable Development Environment - grails

I am working on a large Grails project, and more and more people are going to be working on it.
The problem is that I have been spending ages setting up people's machines (Java, etc.).
I heard there is something like a VM that you can set up on your own machine once and then transfer to other people's computers.
Also, what about performance? If I install a VM on other people's machines, won't that slow things down?

Yes, performance will suffer, because a VM runs as a full-fledged machine with its own OS on top of the host. Only if every machine has a very high-end configuration, say a quad-core i7 with at least 8 GB of RAM, will the systems be able to handle that load.
Even then it is a bad idea. While running Grails you already have multiple Java instances running; in my case, with NetBeans, I have two resource-hungry Java processes using about 700 MB and 500 MB respectively.
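If the real pain is installing Java and Grails on every new machine, a lighter-weight alternative to shipping a whole VM is a shared setup script that everyone runs once. A minimal sketch, assuming SDKMAN! is acceptable on the developers' machines (this is not from the original answer; pin whichever versions your project actually uses):

    #!/usr/bin/env bash
    # One-time developer machine setup: install SDKMAN!, then a JDK and Grails through it.
    set -e
    curl -s "https://get.sdkman.io" | bash
    source "$HOME/.sdkman/bin/sdkman-init.sh"
    sdk install java     # pin the exact JDK version the project needs
    sdk install grails   # likewise pin the Grails version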

Related

Neo4j enterprise cluster master not fully utilizing CPU

We have a Neo4j v3.0.4 Enterprise cluster running on a machine in AWS with 16 cores, and when we issue a lot of requests to it, it seems to utilize at most ~40% of the CPU (looking at the box with htop, it only seems to use 6 cores). Disk and network I/O on that box both look negligible during the test.
Screen capture of the CPU profile (the flat part is when we hit it with load).
Requests are routed to the DB via a cluster of Spring Boot apps using Spring Data Neo4j 4 and from our investigations it does not look like those servers are forming any bottlenecks, from a memory, CPU, and network IO POV.
We currently are NOT using Bolt, nor are we using causal clustering; however we are planning on moving towards both. In the interim though, is there anything that may cause this type of behavior? Could our DB be misconfigured? Could this be a JVM level problem?
Any advice is much appreciated - thanks!
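Since the question asks whether this could be a JVM-level problem, one cheap check (not from the original thread) is to watch garbage-collection activity on the database's JVM while the load test runs; if the collector is busy, part of the missing CPU may be going there, or request threads may simply be blocked. A rough sketch, assuming a standard Neo4j process you can find with pgrep:

    # Watch GC activity on the Neo4j JVM once per second while under load.
    # Rapidly climbing FGC/FGCT (full GC count/time) points at a JVM-level problem.
    NEO4J_PID=$(pgrep -f neo4j | head -n 1)
    jstat -gcutil "$NEO4J_PID" 1000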

Ruby on Rails server requirements

I use Rails for small applications, but I'm not at all an expert. I'm hosting them on a Digital Ocean server with 512 MB of RAM, which seems to be insufficient.
I was wondering what the Ruby on Rails server requirements (in terms of RAM) are for a single app.
Also, how can I measure whether my server is able to support the number of applications I'm running on it?
Many thanks.
It depends on how much traffic you expect to handle. We have two machines (32 GB RAM; usage see below) with 32 Unicorn workers to serve one app with lots of traffic, and we have one machine full of 2-worker apps that get very little traffic.
We also have to consider the database (which needs the most RAM by far in our case, due to the big caches we granted it). And on top of all that, *nix caches the filesystem in otherwise unused RAM.
Conclusion: It is very hard to tell without you telling us what sort of traffic you expect.
Our memory usage on one of the two servers for the big app: https://gist.github.com/2called-chaos/bc2710744374f6e4a8e9b2d8c45b91cf
The output is from a little ruby script I made called unistat: https://gist.github.com/2called-chaos/50fc5412b34aea335fe9
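If you just want a rough number for one app rather than the full script, summing the resident set size of its workers gets you most of the way there. A quick sketch, assuming Unicorn workers whose command lines contain "unicorn worker" (adjust the pattern to your process names):

    # Sum the RSS (reported in KB) of all Unicorn worker processes and print MB.
    ps -eo rss,args | grep '[u]nicorn worker' \
      | awk '{ total += $1 } END { printf "unicorn workers: %.0f MB resident\n", total / 1024 }'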

Memory requirements for WildFly

I never found official documentation about it, and I generally install WildFly 8.x on servers with at least 4 GB.
How much memory should my server have in order to run a WildFly instance?
Is there a minimum recommended?
The minimum value of Xmx is whatever lets you start an empty instance of WildFly; in my test it was 24 MB. There is no other meaningful "minimum" Xmx. 4 GB is a completely arbitrary value; it depends entirely on your application, on the number of users, and so on.
You have to run a stress test against your application and measure the memory. That is the only way to know the minimum for your application.
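To see that floor for yourself, boot an empty WildFly with a deliberately tiny heap and raise it until the server starts cleanly. A sketch, relying on the stock standalone.sh behaviour of only applying its own defaults when JAVA_OPTS is unset:

    # Probe the smallest heap that still boots an empty WildFly instance.
    # 24m is the figure quoted above; the real floor varies by version and JVM.
    JAVA_OPTS="-Xms24m -Xmx24m" ./bin/standalone.sh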
That depends on your application's requirements.
In general I would recommend 4 GB as a minimum. Note that you should also leave enough memory for the OS and its caches.
Some small applications run perfectly well with <1 GB for WildFly; some need >32 GB because they handle lots of data.
So it's up to you, and you should test and measure it.
Along with the other answers, I'd like to add a few points.
This also depends on:
Are you running the server in domain mode or standalone mode?
Do you need all the components of the profile you are running the server with? If not, you can create a custom profile by removing unwanted components.
How many apps do you plan to deploy on the server?
What are your performance/availability requirements?
You don't always need 4 GB of RAM; we run WildFly on our production servers with the minimum heap set to 512 MB and the maximum at 1 GB, and to date we've had no memory issues :)
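For reference, heap limits like the 512 MB / 1 GB mentioned above normally go in bin/standalone.conf. A minimal sketch (the perm-gen or metaspace flag to pair with it depends on which JDK you run):

    # bin/standalone.conf -- these defaults apply only when JAVA_OPTS isn't already set
    JAVA_OPTS="-Xms512m -Xmx1024m"
    JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"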

How to deploy a [Ruby on Rails] site in a scalable way?

I have been working on my [first] startup for a month now, and while it's probably at least one more month away from an alpha release, I want to know how to deploy it the right way. The site puts a high initial load (network + CPU) on the servers for each new user, so I am thinking of having a separate server/queue for this initial processing so that it doesn't slow down the site for existing users.
Based on my research so far, I am currently leaning towards nginx + haproxy + unicorn/thin + memcached + mysql, and deploying on Linode. However, I have no prior experience in any of the above; hence I am hoping to tap the community's experience.
Does the above architecture seem reasonable? Any suggestions/articles/books that you would recommend?
Is Linode a good choice? Heroku/EY seem too expensive for me (at least until I have enough revenue), but am I missing some other, better option? MediaTemple?
Any good suggestions on the load balancing architecture? I am still reading up on this.
Is it better to have two separate Rails server instances on two separate Linodes, or to run one instance on a Linode with twice the capacity (in terms of RAM/storage/bandwidth)? How many Linodes should I start with?
Which Linux distribution should I choose? (Linode offers 8 different ones - http://www.linode.com/faq.cfm) Are there any relative advantages/disadvantages between them for a Rails site?
I apologize if any of my questions are stupid or contradictory; please attribute it to my inexperience.
Architecture
You're on the right track. I personally prefer Passenger over thin/unicorn (having run nginx to thin backends for a long while) just for the convenience, but your proposed setup is fairly standard. If you're on Ruby 1.8.7, I'd recommend that you consider REE + Passenger for framework memory savings, though.
Hosting & Load Balancing
Linode is fantastic, and I use them for just about everything I can, but you will need to be aware of RAM limits. Each Rails processes uses a nontrivial amount of RAM, and you'll want to avoid getting the machine into swap. Plan on running enough Rails instances per machine so that your memory allocation is about 90% of the memory on the Linode. You'll likely want another Linode dedicated to your database, though you can start with them both on the same machine; just be prepared to split off MySQL as you grow. You can set up communications between Linodes in the same data center on private IPs, which don't count against your bandwidth quota.
Your scaling strategy should be as horizontal as possible, so I'd recommend just getting a second Linode and adding it to your haproxy pool when you need more horsepower. Linode charges you $20 for 512 MB more RAM, or you can get a whole other Linode (with its own CPU, RAM, HDD, and bandwidth quota) for that same $20, which seems a no-brainer. In Rails' case, an instance is an instance is an instance, so it really doesn't matter whether it's on the same VM or not, as long as the time to connect to your database machine and so on is more or less the same. You could be running 10 Linodes with 10 Rails processes apiece without much of an issue. Linode also offers IP failover, so if your primary Linode (the one running haproxy) goes down, it can fail over automatically to a secondary Linode, which you would also have haproxy running on, ready to act in the same capacity as the first.
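On the haproxy side, adding that second Linode really is just one more server line in the backend. A minimal sketch with placeholder private IPs and ports, not a tuned configuration:

    # haproxy.cfg fragment: pool of Rails app servers reached over Linode private IPs
    backend rails_app
        balance roundrobin
        server app1 192.168.133.10:8080 check
        server app2 192.168.133.11:8080 check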
Distribution
Honestly, this is up to you! Many folks go with Ubuntu or Redhat (CentOS/Fedora) distros - I like CentOS myself - but it's really just about what you feel most comfortable with. If you don't have a favorite distro, I would recommend trying Ubuntu/CentOS, as they tend to be quite friendly to the beginner, and have extremely robust community support.
You will probably want to pick a 32-bit distro unless you have a compelling reason to pick a 64-bit distro; 64-bit executables require more RAM than their 32-bit counterparts, and since RAM is likely to be your most precious resource, it makes sense to save it where you can.

Make full use of 24 GB of memory for JBoss

We have a 64-bit Solaris SPARC box running JBoss. It has 24 GB of memory, but because of a JVM limitation I can only set JAVA_OPTS="-server -Xms256m -Xmx3600m -XX:MaxPermSize=3600m".
I don't know the exact cap, but if I set it to 4000m, Java won't accept it.
Is there any way to use this 24 GB of memory fully, or at least more efficiently?
If I run a cluster on one machine, is it stable? I've heard it requires rewriting some parts of the code.
All 32-bit processes are limited to 4 gigabytes of addressable memory: 2^32 bytes = 4 GiB.
If you can run jboss as a 64-bit process (usually just adding "-d64" to JAVA_OPTS), then you can tune jboss with increased thread and object pools to make use of that memory. As others have mentioned, you might have horrific garbage collection pauses, but you may be able to figure out how to avoid those with some load testing and the right choice of garbage collection algorithms.
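Concretely, on the Solaris JVMs of that era the switch is a single extra flag, after which the heap can go well past the 32-bit ceiling. A sketch only; the actual sizes should come out of your load testing:

    # run.conf / JAVA_OPTS sketch: select the 64-bit JVM and allow a much larger heap
    JAVA_OPTS="-server -d64 -Xms4g -Xmx16g -XX:MaxPermSize=512m"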
If you have to run as a 32-bit process, then yes, you can at least run multiple instances of JBoss on the same server. In your case, there are three options: zones, virtual interfaces, and port binding.
Solaris Zones
Since you're running solaris, it's relatively easy to create virtual servers ("non-global zones") and install jboss in each one just like you would the real server.
Multi-homing
Configure an extra IP address for each jboss instance you want to run (usually by adding virtual interfaces, but you could also install extra NICs) and bind each instance of jboss to its own IP address with the "-b" startup option.
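With the virtual interfaces in place, each instance is simply started with its own bind address. A sketch with placeholder addresses and server configuration names:

    # Two independent JBoss instances, each bound to its own virtual IP
    ./run.sh -c node1 -b 192.168.1.11
    ./run.sh -c node2 -b 192.168.1.12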
Service Binding Manager
This is the most complicated / brittle option, but at least it requires no OS changes.
Whether or not to actually configure the jboss instances as a cluster depends on your application. One benefit is the ability to use http session failover. One downside is cluster merge issues if your application is unstable or tends to become unresponsive for periods of time.
You mention your application is slow; before you go too far down any of these paths, try to understand where your bottlenecks are. Use jvisualvm or jstat to observe if you're doing frequent garbage collections. If you're not, then adding heap memory or extra jboss instances may not resolve your performance woes.
You can't use the full physical memory in a single 32-bit JVM; the JVM needs its maximum heap to fit in one contiguous chunk of address space. Try java -Xmxnnnnm -version to test the maximum heap available on your box.
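That test is easy to script so you can find the ceiling in one pass. A small sketch:

    # Probe the largest heap this (32-bit) JVM will accept on this box
    for m in 2048 2560 3072 3584 4096; do
        java -Xmx${m}m -version > /dev/null 2>&1 && echo "${m}m OK" || echo "${m}m rejected"
    done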
