I have an application developed with Grails 2.5.1, and I need a PaaS provider to deploy it for production use, but it must offer these options:
An SMTP server, as my application needs to send emails
Access to the file system (preferred, but not necessary)
MySQL DB
Ability to deploy PHP applications
Easy deployment of application packages
Good customer support
Some advised Jelastic, but unfortunately they don't have an SMTP server, and deploying to Heroku is a little bit hard.
Any recommendations?
I would recommend Heroku or Elastic Beanstalk, with Amazon RDS for MySQL as a service, or Aurora, which is MySQL-compatible.
Not sure what you mean by "able to deploy PHP". You won't be able to run a PHP app and a Grails/Java app on the same PaaS server, but you could spin up a separate PHP app on the same PaaS.
You could use Amazon SES to send emails, or a SendGrid account. Email server really shouldn't be a deciding factor.
It is very easy to add SMTP to Jelastic.
All you need to set up your SMTP server is here.
Also, how to use an external SMTP server in your environment is here.
With reference to the rest of your list, Jelastic provides:
Access to the file system (FTP/FTPS, SFTP/FISH, WebDAV, Dashboard)
MySQL DB
Ability to deploy PHP applications
Easy deployment of application packages (direct, Git/SVN, Bitbucket)
Good customer support (you can choose a hoster by the criteria that matter most to you here).
Have a nice day,
Jelastic Support
I've successfully deployed my Rails application to DigitalOcean by configuring a git post-receive hook and running my Puma server through screen (screen rails server).
It seems to be working and accessible at http://178.128.12.158:3000/
Do I still need to set up nginx? My purpose is only to serve my API and a CMS website on the same domain.
And what about deployment tools like Capistrano/Mina? Why should I care about them if the git hook is serving me well?
Thank you in advance
If you're going to handle a large amount of traffic, nginx will help with load balancing. You can also add constraints, such as blocking certain sets of IP addresses, etc.
For more, refer to the following link: https://www.nginx.com/resources/glossary/application-server-vs-web-server/
If you want static resources to be served by a web server, which is often faster, you'll want to front your Rails app with something like nginx. Nginx will also offer a lot more flexibility for tuning how you serve your app.
Capistrano is for deployments, and again, is more flexible than the basic hook approach. For instance, if you intend to have different hosts (for db, web, assets, etc.), or multiples of them, then Cap is your friend.
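To give a feel for what Capistrano involves, here is a minimal config/deploy.rb sketch (Capistrano 3; the application name, repo URL, server address, and deploy path are illustrative placeholders, not values from your setup):

# config/deploy.rb -- minimal Capistrano 3 sketch; names and paths are placeholders.
lock "~> 3.11"

set :application, "my_app"
set :repo_url,    "git@github.com:me/my_app.git"
set :deploy_to,   "/var/www/my_app"
set :keep_releases, 5

# Hosts live in config/deploy/production.rb, e.g.:
# server "example.com", user: "deploy", roles: %w[app web db]

Then cap production deploy builds a new timestamped release, symlinks it into place, and keeps previous releases around for quick rollback, which is most of what the bare git hook can't give you.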
I want to know how to build a Rails server and host an app on it locally. I know I can use Heroku or AWS, but in the case of this app I can't: the database should be hosted locally in the company for security reasons; they do not want to store their data on servers that are not theirs.
How do I start?
What are the main things to consider?
Do I host on Heroku and link the local database to the site, or do I host everything in the same place?
How much power does the machine need for around 10-20k users?
What OS should I use, Ubuntu or something else?
I would really appreciate any tutorials or article links.
You can just use ngrok. It doesn't require any separate server deployment.
I have developed a Rails application which I now want to deploy to an Amazon server. How can I do this?
Also, I have registered a domain name from GoDaddy.
The easiest way to do so is to have your application stored in a Git repository, GitHub for example. Then, on your Amazon machine, clone your repo and you will have all your files on the cloud, (almost) ready to use. Take a look at this documentation from Amazon for more info.
For your domain, you must create a redirection to your Amazon machine's IP. Here is a link you should see.
I would like to discuss AngularJS and Ruby on Rails working together and deployed on AWS (Amazon Web Services).
So far, I have a development environment with an AngularJS frontend that sends requests to a Ruby on Rails API backend. These are two separate applications (they live in separate git repositories).
The AngularJS app is running on a Node.js server listening on one port, and Rails is running on a WEBrick server listening on another port.
Although they work together, AngularJS is not physically integrated in the RoR app.
Now it's time to deploy to a production environment. For that, I will use an AWS EC2 instance (currently deploying using Elastic Beanstalk). As far as I understand, I can't have the same architecture here.
I would like to know your suggestions on this point. Do you see any advantages or disadvantages?
Should I update my development environment, so the AngularJS app is integrated inside the RoR application (and deploy just one application)?
This is something I don't like, because I guess I have to modify many things.
On the other hand, is it possible to run both applications separately, the same way I do in development?
Can I manually install Node.js and Unicorn (or whichever server) in production on the same instance?
I finally deployed the two applications separately, as described above. The main difference is around the servers: my AngularJS frontend now runs on nginx, and my Rails API runs on Unicorn.
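For reference, the Unicorn side of such a setup is just a small Ruby config file that nginx proxies to over a Unix socket. A minimal sketch (the app path, socket location, and worker count are assumptions, not my exact production values):

# config/unicorn.rb -- minimal sketch; paths and worker count are assumptions.
app_dir = "/var/www/rails_api"

working_directory app_dir
worker_processes  2
timeout           30

# nginx's upstream points at this socket via proxy_pass.
listen "#{app_dir}/tmp/sockets/unicorn.sock", backlog: 64

pid         "#{app_dir}/tmp/pids/unicorn.pid"
stderr_path "#{app_dir}/log/unicorn.stderr.log"
stdout_path "#{app_dir}/log/unicorn.stdout.log"

Start it with bundle exec unicorn -c config/unicorn.rb -E production, and the Node.js/nginx frontend runs alongside it on the same instance.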
I am currently using Heroku's Memcached in a Rails 3 app and would like to move over to ElastiCache because the pricing is much more favorable. Is this possible? Is the configuration relatively straightforward? Is there anything I should be aware of regarding performance?
No, it isn't recommended that you use ElastiCache, as there is no authentication mechanism with it. As such, anyone can access your cache! This is normally fine, as you would use AWS security rules to restrict access to your own machines. However, this obviously doesn't work with Heroku, since your app runs on a randomly chosen machine of Heroku's.
You could deploy memcached yourself with SASL authentication on an EC2 machine. ElastiCache doesn't really give you anything more than an EC2 machine with memcached pre-installed anyway.
There is another option: MemCachier
(Full disclaimer, I work for MemCachier).
There is another memcached provider on Heroku that is significantly cheaper than the Membase-provided one. It's called MemCachier; the add-on home page is here.
It's comparable in price to ElastiCache, depending on your cache size and whether or not you use reserved instances (at very large cache sizes ElastiCache is cheaper).
Update (June 2013): The Membase memcache add-on has shut down, so MemCachier is the only provider of memcache on Heroku.
Please reach out to me if you need any help even if you go with ElastiCache.
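For completeness, the Rails-side configuration for a SASL-authenticated memcached (whether MemCachier or your own SASL-enabled server on EC2) looks roughly like this with the dalli gem. The MEMCACHIER_* names are the config vars the add-on sets; treat them as placeholders if you run your own server:

# config/environments/production.rb -- sketch using the dalli gem.
config.cache_store = :dalli_store,
  (ENV["MEMCACHIER_SERVERS"] || "").split(","),
  username: ENV["MEMCACHIER_USERNAME"],
  password: ENV["MEMCACHIER_PASSWORD"],
  failover: true,
  socket_timeout: 1.5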
DANGER: I do NOT recommend using this solution for production use. While this does work, #btucker pointed out that it allows any Heroku-hosted app to access your ElastiCache cluster.
Yes you can. The setup is similar to the guide Heroku has on Amazon RDS. The steps that differ go like this:
Follow the "Get Started with Amazon ElastiCache" guide to create a cache cluster and node
Install the ElastiCache Command Line Toolkit
Allow Heroku's servers ingress to your ElastiCache cluster as the RDS guide explains, but replace the rds- commands with elasticache- ones:
# If your AWS_CREDENTIAL_FILE environment variable is configured,
# the --aws-credential-file option is not necessary.
elasticache-authorize-cache-security-group-ingress \
  --cache-security-group-name default \
  --ec2-security-group-name default \
  --ec2-security-group-owner-id 098166147350 \
  --aws-credential-file ../credential-file-path.template
Set a Heroku config value for your production app with your cluster's hostname:
heroku config:set MEMCACHE_SERVERS=elasticachehostname.amazonaws.com
After that, follow the Memcache Rails setup, and you're set.
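The Rails 3 side of that setup is just the standard memcached cache store pointed at the config var; a sketch assuming the dalli gem (ElastiCache has no authentication, so only the hostname is needed):

# Gemfile
gem "dalli"

# config/environments/production.rb
config.cache_store = :dalli_store, ENV["MEMCACHE_SERVERS"]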
It's worth noting that while #ssorallen's answer above will work as described, it also allows ANY Heroku-deployed app to access your memcached server. So if you store anything at all confidential, or you're concerned about other people making use of your ElastiCache cluster, don't do it. In the context of RDS you have access control built into the database, but memcached has no such authentication supported by ElastiCache. So opening up the security group to all of Heroku is a pretty big risk.
There are several Heroku addons that will kinda solve this problem. They provide a SOCKS5 proxy with a static IP address that you can whitelist.
https://elements.heroku.com/addons/proximo
https://elements.heroku.com/addons/quotaguardstatic
https://elements.heroku.com/addons/fixie-socks
You can also do this yourself by setting up your own SOCKS5 proxy on ec2.
Note the caveats here though:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Access.Outside.html
It's slower, unencrypted, and some NAT monkey business will be required to get it working.
If you are using Heroku Private Spaces, then it should be possible to do this using VPC peering. Follow the instructions here so that your AWS VPC and Heroku VPC can access each other's resources:
https://devcenter.heroku.com/articles/private-space-peering
Once you have the above setup working, just create an ElastiCache cluster in the AWS VPC and allow access in the AWS security group from the dyno CIDR ranges (or from the complete Heroku VPC CIDR), and your dynos will be able to reach the ElastiCache URLs. I was able to get a working setup for Redis, and it should work for any other resource in the AWS VPC.
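On the application side nothing special is required once peering and the security group are in place; the dynos reach the cluster endpoint directly. A sketch for a Rails cache store (the endpoint is a placeholder, and :redis_cache_store needs Rails 5.2+ with the redis gem):

# config/environments/production.rb -- sketch; set REDIS_URL to your ElastiCache
# primary endpoint, e.g. redis://<cluster-endpoint>:6379/0
config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] }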