Does anyone have any experience using CloudFoundry with Grails?

I am at the point with my Grails app that I want to deploy it up onto Amazon EC2 so I can start showing it to people. I have a little experience using EC2 "manually" to put other projects onto the web, but I was hoping to find something which made the deployment cycle a little more manageable. In steps CloudFoundry.
I have read through the web site and watched the tutorials, and on the face of it I like what I see. However, before I commit to it I wondered whether any of you have experiences to share from the coal face.
Specifically, I am going to be deploying a MySQL database along with the app, and it's not clear exactly what you need to supply (SQL scripts?) or how best to configure my project to deploy through CloudFoundry so that the host name is configured correctly. I also have a small number of standard rows which I insert in my BootStrap.groovy, and I wonder whether that data makes it through deployment.
Lastly, it is free at the moment, but they are saying they will introduce charging later. Are there any open source alternatives that might be better to investigate, in case CloudFoundry ends up being expensive?
Thanks

I have a little experience with CloudFoundry. They have been kind enough to sponsor the GR8Conf website, which is deployed through their service.
For configuring the SQL side, it appears to me that the simplest solution is to use the CloudFoundry plugin and enter
cloudFoundry.db.schemaName="myName"
in the config/CloudFoundry.groovy file.
In your config/DataSource.groovy you should have:
production {
    dataSource {
        driverClassName = 'com.mysql.jdbc.Driver'
        dbCreate = "update"
        url = "jdbc:mysql://localhost/myName" // or: url = "jdbc:mysql://${System.getProperty("dbHostName", "localhost")}/myName"
        dialect = 'org.hibernate.dialect.MySQLDialect'
        username = "myName_user"
        password = "myName_password"
    }
}
(I got some of this info from: http://www.cloudfoundry.com/getting_started.html)
I do not think that you have to supply additional SQL scripts. What you define in your BootStrap will make it through deployment.
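For illustration, a minimal BootStrap.groovy along those lines might look like this (a sketch; the Role domain class and its rows are hypothetical, and the count() guard keeps the seed rows from being inserted twice across redeploys):

class BootStrap {
    def init = { servletContext ->
        // Seed the standard rows; the guard makes this idempotent,
        // so a redeploy does not duplicate them.
        if (!Role.count()) {
            new Role(name: "admin").save()
            new Role(name: "user").save()
        }
    }
    def destroy = {}
}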
On pricing, I have no idea; I'd suggest writing to their support to ask.
On a side note: the www.gr8conf.org website is not running on EC2 yet, but that is because I have not yet figured out how to back up my database from EC2 to S3. That's rather important, because when an EC2 instance terminates, everything on it is lost if it hasn't been backed up.
/Søren

Related

Is it possible to have Centralised Logging for ElasticBeanstalk Docker apps?

We have a custom Docker web app running in an Elastic Beanstalk Docker container environment.
We would like the application logs to be available for viewing outside, without downloading them through the instances or the AWS console.
So far none of the solutions below has been acceptable. Has anyone achieved centralised logging for Elastic Beanstalk Dockerized apps?
Solution 1: AWS Console log download
Not acceptable: requires downloading and extracting the logs every time, and it is not real-time.
Solution 2: S3 + Elasticsearch + Fluentd
Fluentd does not have a plugin to retrieve logs from S3.
There's an excellent S3 plugin, but it is only for log output to S3, not for reading logs from S3.
Solution 3: S3 + Elasticsearch + Logstash
Cons: it can only pull all logs from the entire bucket, or nothing.
The problem lies with the Elastic Beanstalk S3 log storage structure: you cannot specify a file name pattern, so it's either all logs or nothing.
Elastic Beanstalk saves logs on S3 in a path containing random instance and environment ids:
s3.bucket/resources/environments/logs/publish/e-<random environment id>/i-<random instance id>/my.log
The Logstash s3 plugin can only be pointed at resources/environments/logs/publish/. When you try to point it at environments/logs/publish/*/my.log, it does not work.
This means you cannot pull a particular log and tag/type it so it can be found in Elasticsearch. Since AWS saves logs from all your environments and instances in the same folder structure, you cannot even choose the instance.
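For reference, the most the plugin lets you express is roughly this (a sketch; the bucket name and region are hypothetical, and the prefix cannot contain wildcards):

input {
  s3 {
    bucket => "my-eb-logs-bucket"                      # hypothetical bucket name
    prefix => "resources/environments/logs/publish/"   # plain prefix only; no wildcards
    region => "us-east-1"
  }
}

Everything under that prefix arrives as one undifferentiated stream, which is exactly the limitation described above.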
Solution 4: AWS CloudWatch Console log viewer
It is possible to forward your custom logs to the CloudWatch console. To achieve that, put configuration files in the .ebextensions path of your app bundle:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
There's a file called cwl-webrequest-metrics.config which allows you to specify log files along with alerts, etc.
Great!? Except that the configuration file format is neither YAML, XML, nor JSON, and it's not documented. There are absolutely zero mentions of that file or its format, either on the AWS documentation website or anywhere else on the net.
And getting one log file to appear in CloudWatch is not as simple as adding a configuration line.
The only possible way to get this working seems to be trial and error. Great!? Except that for every attempt you need to re-deploy your environment.
There's only one reference to how to make this work with a custom log: http://qiita.com/kozayupapa/items/2bb7a6b1f17f4e799a22 I have no idea how that person reverse-engineered the file format.
Cons:
CloudWatch does not seem to be able to split logs into columns when displaying them, so you can't easily filter by priority, etc.
The AWS console log viewer does not have auto-refresh to follow logs.
A nightmare of an undocumented configuration file format, with no way of testing; trial and error requires re-deploying the whole environment.
Perhaps an AWS Lambda function is applicable?
Write some JavaScript that dumps all notifications, then see what you can do with those.
After an object is written, you could rename it within the same bucket?
Or notify your own log-management service about the creation of a new object?
Lots of possibilities there...
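As a starting point, a minimal Node.js Lambda handler subscribed to the bucket's ObjectCreated notifications could simply log what arrives (a sketch; whether you then copy, rename, or forward each object is up to you):

// Minimal sketch (Node.js 8.10+): log every S3 object-created notification received.
exports.handler = async (event) => {
    for (const record of event.Records) {
        // Each record describes one object that was just written to the bucket.
        console.log(record.s3.bucket.name, record.s3.object.key);
    }
};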
I've started using Sumologic for the moment. There's a free trial and then a free tier (500 MB/day, 7-day retention). I'm not out of the trial period yet, and my EB app does literally nothing (it's just a few HTML pages served by Nginx in a Docker container). It looks like it could get expensive once you hit any serious volume of logs, though.
It works ok so far. You need to create an IAM user that has access to the S3 bucket you want to read from and then it sucks the logs over to Sumologic servers and does all the processing and searching over there. Bit fiddly to set up, but I don't really see how it could be simpler and it's reasonably well-documented.
It lets you provide different path expressions with wildcards, then assign a "sourceCategory" to those different paths. You then use those sourceCategories to filter your log searching to a specific type of logging.
My plan long-term is to use something like your solution 3, but this got me going in very short order so I can move on to other things.
You can use a multicontainer environment, sharing the log folder with another Docker container running the tool of your preference to centralize the logs. In our case we connected Apache Flume to move the files to HDFS. Hope this helps.
The easiest method I found to do this was using Papertrail via rsyslog and .ebextensions; however, it is very expensive if you log everything.
The good part is that with rsyslog you can send your logs essentially anywhere; you are not tied to Papertrail.
example ebextension
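That example isn't reproduced here, but a minimal .ebextensions config in the same spirit might look like the following (a sketch; the file name is arbitrary, and the Papertrail host and port are placeholders you would get from your account):

# .ebextensions/10-rsyslog-papertrail.config (hypothetical file name)
files:
  "/etc/rsyslog.d/90-papertrail.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Forward all syslog traffic to Papertrail over UDP (host and port are placeholders)
      *.* @logsN.papertrailapp.com:XXXXX
commands:
  01_restart_rsyslog:
    command: "service rsyslog restart"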
I've found loggly to be the most convenient.
It is a hosted service, which might not be what you want. However, if you check out their setup page, you can see a number of ways your situation is supported (Docker-specific solutions, as well as around ten Amazon-specific options). Even if Loggly isn't to your taste, you can look at those solutions and easily see how some of them could be applied to almost any centralized logging solution you might use or write.

Changing Service Account Passwords

I have been tasked with changing the password for all service accounts within the organization, and would appreciate a few pointers from somebody who has tackled this before.
I have identified each service account, as well as each machine and service using that account. What I would like is some guidance as to how this process is actually executed. This is a production environment, and I don't want to go breaking things during work hours.
Is the process as simple/tedious as changing the service account password, then logging onto each server, locating each service, and changing the relevant info under the "Log on" tab?
Is there a better way of doing this? Thank you for the advice/guidance.
That's pretty much it.
What I would suggest, though, is duplicating the accounts with the same permissions (but affixing '2013' to the end or something) and then, while you go around changing the passwords, redirecting the services to the new accounts as well.
The reason for this is that, at least a few times, some random legacy application has gone down during service account resets purely because no one knew it was using the account, it had been missed in the refresh, or nobody knew about it at all. This way everything you touch should be OK, and you can then monitor the now-'legacy' accounts for any remaining use.
/edit
Actually changing the username/password CAN be scripted, but that all depends on how cautious you want to be about the change and whether you want to be able to easily halt the execution! See http://gallery.technet.microsoft.com/scriptcenter/79644be9-b5e1-4d9e-9cb5-eab1ad866eaf for an example. (You will also need to think about what range of OSes you need to do this on - PowerShell will only work on some, VBS will work for the others but then you have further considerations, and NT4...... ;) )
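As an illustrative sketch (not the linked script), the core of the scripted approach is the Change() method on the Win32_Service WMI class; the server, account names, and password below are placeholders:

# Find every service on a machine that runs under the old account, then repoint it.
# In Change()'s positional arguments, the seventh is the logon account and the
# eighth is its password; $null leaves a setting unchanged.
$services = Get-WmiObject Win32_Service -ComputerName "SERVER01" `
    -Filter "StartName='DOMAIN\\svc_old'"
foreach ($svc in $services) {
    $svc.Change($null,$null,$null,$null,$null,$null,"DOMAIN\svc_new","NewP@ssw0rd") | Out-Null
    # Each service still needs a restart to pick up the new credentials.
}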

emails sent from production server end up in spam

I use sendmail to send emails from my application. I always send the emails from SOME_NAME@MY_DOMAIN.com, but they always end up in the spam folder.
I know that I should do some things on the DNS side to keep my emails from being marked as spam, but I don't know what those things are.
I am a newbie, and this is my first time setting up a production server, a domain, and everything else myself. I'd appreciate it if someone could help me.
What sort of environment are you deploying to?
This frequently happens to applications deployed to cloud services like Amazon or Rackspace. Their entire IP blocks are registered as spam sources at services like Spamhaus, which is a sensible precaution, or else we'd be getting even more spam than usual. You can look up your server's IP address there to see if you're listed as a spammer.
If you are, you can ask Spamhaus to have the block lifted. Getting in touch with Amazon's support staff also helps. Finally, you can get around the issue entirely by using a mail-sending service of some sort -- Amazon SES is pretty good, and there's even a gem out there that provides integration for Rails apps.
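On the DNS side the question asks about, the usual first step is publishing an SPF record, which tells receiving servers which hosts are allowed to send mail for your domain. A typical TXT record looks like this (hypothetical domain and IP; reverse DNS and DKIM are the usual follow-ups):

; Allow mail from this server's IP and the domain's MX hosts; soft-fail everything else
MY_DOMAIN.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 mx ~all"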

How to configure login when using multiple servers running a distributed service (HAProxy, Apache, Ruby on Rails)

I have 3 servers running a website. I now need to implement a login system, and I am having problems with it: a user gets different behavior (logged in or logged out) depending on which server they connect to.
I am using memcached for session storage in Rails:
config.action_controller.session_store = :mem_cache_store
ActiveSupport::Cache::MemCacheStore.new("server1","server2","server3")
I thought the second line would either keep the caches in sync or something like that...
Each server has its own DB, with 1 master and 2 slaves. I have tried going the route of storing sessions in SQL, but that really hurts the SQL servers, and the replication load becomes really heavy.
Is there an easy way to say, use this Memcache for all session store on all 3 servers?
Will that solve my problem?
I will really appreciate it.
I haven't used memcached to store sessions before (I feel like Redis is a better solution), but I think that as long as you have the
ActiveSupport::Cache::MemCacheStore.new("server1","server2","server3")
line, with the same server list in the same order, on each of your application servers, your sessions should stay in sync, since the memcached client then hashes each session key to the same server on every machine.
I've had a lot of success with just using regular cookie sessions using the same setup you've described.
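For what it's worth, the cookie-store variant needs no shared backend at all; the only requirement is that every server uses the identical session key and secret, so a cookie written by one server verifies on the others. A Rails 2.x-era sketch matching the config syntax in the question (key and secret are placeholders):

# config/environment.rb on every app server
config.action_controller.session_store = :cookie_store
config.action_controller.session = {
  :key    => '_myapp_session',
  :secret => 'long-random-string-shared-across-all-three-servers'
}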

Best Practices for a Web App Staging Server (on a budget) [closed]

I'd like to set up a staging server for a Rails app. I use git & github, Cap, and have a VPS with Apache/Passenger. I'm curious as to the best practices for a staging setup, as far as both the configuration of the staging server as well as the processes for interacting with it. I do know it should be as identical to the production server as possible, but restricting public access to it will limit that, so tips on securing it only for my use would also be great.
Another specific question would be whether I could just create a virtual host on the VPS, so that the staging server could reside alongside the production one. I have a feeling there may be reasons to avoid this, though.
Cheap and Easy answer:
1) Point staging.domainname.com at your VPS.
2) Add in a virtual host for staging, pointing to the staging copy of the app.
3) Add in a staging environment setting. (Did you know you could define new environments in Rails? Fun stuff!) I think this is as simple as copying production.rb to staging.rb and tweaking as necessary, plus updating database.yml.
4) In your ApplicationController, add code similar to the following:
if ENV["RAILS_ENV"] == "staging"
  before_filter :verifies_admin
end
Where verifies_admin can be anything you want. I suggest using HTTP basic authentication -- cheap and easy.
def verifies_admin
  authenticate_or_request_with_http_basic do |username, password|
    username == "foo" && password == "bar"
  end
end
Note that this may bork your connection to a payment site if it is making inbound requests to you, although that is simple enough to fix (just turn off the before_filter for the appropriate controllers and/or actions).
Better answer:
1) Buy a second VPS configured from the same image as your regular VPS, and/or configured from the same install-from-the-bare-metal script (I like Capistrano & Deprec for this).
2) Point staging.domainname.com at it.
3) Otherwise it's the same as the other option.
Things to think about:
1) Should I have a staging database as well? Probably, especially if you're going to be testing schema changes.
2) Should I have some facility for moving data between the staging and production systems?
3) Can catastrophic failure of my staging application take down the main application? Best hope the answer is no.
I would add to this that Jamis Buck, who created Capistrano, also created a gem specifically for setting up multi-stage environments with Capistrano. You can do it without the gem, but the gem makes it even easier. You can find his post on it with instructions here: http://weblog.jamisbuck.org/2007/7/23/capistrano-multistage
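For reference, the gem-based setup from that post boils down to a few lines in config/deploy.rb, with one file per stage under config/deploy/ (a sketch; the stage names are assumptions):

# config/deploy.rb
set :stages, %w(staging production)   # expects config/deploy/staging.rb and production.rb
set :default_stage, "staging"         # plain 'cap deploy' targets staging
require 'capistrano/ext/multistage'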
There's also a very helpful article in the Pragmatic Programmer book "Advanced Rails Recipes" that walks you through exactly how to set this up. I found that the answers to this post, combined with the Rails Recipes book made this incredibly easy to set up.
StackOverflow won't let me add another link, but if you google Advanced Rails Recipes, the book is the first result.
I guess it depends on whether the staging server needs to be accessible by anyone but you. If other people need to be able to access it, then you'll need another small slice somewhere, and you can use htaccess or firewall rules to limit who gets access to it. If no one else needs to access it, I would suggest using VMware. You can run it on your own machine, on a spare box you have around, or on a very cheap PC. We use the free VMware Server 2 for our staging and deployment test servers and it works great. It also makes it very easy to create new test servers by just duplicating your base VM setup. If you are on a Mac you can use VMware Fusion; it costs money, but I have to use it anyway to test IE.
I'll probably get shot for saying this, but for small sites on tight budgets, I see nothing wrong with running the staging site right alongside the production one.
You're using Rails, Apache, and Passenger. Set up different Rails configurations (and databases), and set each one up as a named VirtualHost. Protect one with htaccess. Create an A record from your domain (staging.*) and point it there.
Sure, they're not completely insulated from each other. You might crash everything. Oops! It probably won't matter. :)
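A minimal sketch of such a vhost (hypothetical domain and paths; assumes Passenger's RailsEnv directive and an htpasswd file you have created):

<VirtualHost *:80>
    ServerName staging.example.com
    DocumentRoot /var/www/myapp_staging/current/public
    RailsEnv staging    # run this vhost's app in the staging environment

    <Location />
        # Cheap and easy: HTTP basic auth over the whole staging site
        AuthType Basic
        AuthName "Staging"
        AuthUserFile /etc/apache2/staging.htpasswd
        Require valid-user
    </Location>
</VirtualHost>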
Use two separate servers (VPS or whatever) as similar as you can make them (hardware and software) at the base image. Automate all configuration of your production environment so nothing is done by hand. Use that automation to produce a staging server that's identical to your production environment. Maintain the automation to ensure both environments stay in sync and can be replicated on demand.
Solves both your staging-out-of-sync problem and your first-order scaling problem.
As far as cost goes, VPSes are cheap as chips. The number of production downtime-inducing failures you'll avoid by having a staging server will pay for your staging environment in no time (unless you're not actually making any money at all, in which case downtime isn't so much of a problem and you can go nuts with the breakage).
