I am deploying my first Vert.x project to OpenShift.
The project runs fine on my laptop.
When I push it to OpenShift with git and try to load a web page, I get a 503 error.
I ran rhc tail:
==> vertx/logs/console.log <==
GOT CONFIG: {"index_page":"index.html","host":"localhost","port":443,"ssl":true,"key_store_password":"0449666331abcdef$","key_store_path":"colomedia.tv-key.keystore","bridge":true,"inbound_permitted":[{"address":"colo.listboxes"},{"address":"colo.rebootbox"},{"address":"colo.addbox"}]}
/usr/bin/tail: vertx/logs/vertx.log: file truncated
==> vertx/logs/console.log <==
Succeeded in deploying verticle
==> vertx/logs/vertx.log <==
[vert.x-eventloop-thread-0] 2014-11-29T13:58:43.786-05:00 INFO [org.vertx.java.platform.impl.cli.Starter] Succeeded in deploying verticle
==> vertx/logs/console.log <==
colo: Connector deployed
All of which looks fine.
Can anyone help?
Many thanks
James
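One thing that stands out in the config above is that the server binds to localhost:443. On an OpenShift v2 gear an application generally cannot bind there; it has to bind to the gear's internal IP and port, which the platform exposes through environment variables. A minimal sketch of resolving the bind address that way, with local fallbacks; the OPENSHIFT_VERTX_IP / OPENSHIFT_VERTX_PORT names are an assumption based on OpenShift's cartridge naming convention, so verify with `env | grep OPENSHIFT` on the gear:

```shell
# Sketch: resolve the bind address from OpenShift env vars, with local
# fallbacks so the same config works on a laptop. The variable names are
# an assumption; check `env | grep OPENSHIFT` on the gear to confirm.
HOST="${OPENSHIFT_VERTX_IP:-localhost}"
PORT="${OPENSHIFT_VERTX_PORT:-8080}"
echo "binding to ${HOST}:${PORT}"
```

Run outside a gear (no OPENSHIFT_* variables set), this falls back to localhost:8080; on the gear it would pick up the internal address the router proxies to.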
I have a Rails app (Rails 3.2.12, Ruby 1.9.3p547) running on an AWS Ubuntu cloud server, with Unicorn as the app server and nginx as the reverse proxy.
I made a few changes to a view file, but those changes are not reflected in the browser. The code is currently live (env=production).
I tried
sudo service my_app restart
[sudo] password for me:
Shutting down my_app: Starting my_app: Already running
[ OK ]
$ sudo service nginx restart
Stopping nginx: [ OK ]
Starting nginx: [ OK ]
$
but it still renders the previous version. I tried commenting out the whole controller file for that view, but the app still doesn't show any error. I have confirmed that the app is running from the same folder in which I am making changes.
I'm stuck at this point; please help. Thanks in advance.
I got the issue resolved with the help of the comments, which gave me the idea.
I located the .pid file at tmp/pids/unicorn.pid, which contained the current process id:
$ kill -QUIT 5454 # process id
Then I started the Unicorn and nginx services again, and the changes started showing.
I have been struggling with this issue for days. I can authenticate with the mongo shell, but when I access my application from the browser I get the "not authorized" error shown in the logs below.
Ruby on rails logs:
2016-07-05T04:29:34.415943099Z app[web.1]: MONGODB | xx.xx.xx.xx:4121 | [db].count | STARTED | {"count"=>"listings", "query"=>{}}
2016-07-05T04:29:34.418337913Z app[web.1]: MONGODB | xx.xx.xx.xx:4121 | [db].count | FAILED | not authorized on [db] to execute command { count: "listings", query: {} } (13) | 0.0021065790000000004s
Background
Hosting on Digital Ocean, with the One-Click Deployment of Dokku.
Dokku Version : 0.6.4
MongoDB : 3.2.6
Ruby: 2.2.4
Rails 4.2.6
I have added a user (with the dbOwner role) to MongoDB and put the same credentials in mongoid.yml.
Here is my mongoid.yml:
production:
  clients:
    default:
      database: sample
      hosts:
        - ip:4121
      user: "user"
      password: "password"
      options:
        read:
          mode: :primary
        max_pool_size: 5
OK, I found the solution for this, and I thought I would close the loop and save someone a couple of hours or days trying to figure it out.
Basically, the one-click image from DigitalOcean DOES work!
But you need the following in your mongoid.yml: add the uri in and remove the database, user, and password.
To generate the URI, run the following in your SSH terminal:
dokku mongo:info <<db name>>
Update the uri value with what is displayed on screen.
production:
  clients:
    default:
      uri: <<URI>>
      options:
        read:
          mode: :primary
        max_pool_size: 1
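If it helps, the URI line can be pulled out of the plugin's output automatically rather than copied by hand. A sketch with the `dokku mongo:info` output stubbed in; the `Dsn:` label and the credential values are assumptions about the plugin's output format, not copied from a real deployment:

```shell
# Sketch: extract the connection URI from (stubbed) `dokku mongo:info` output.
# The "Dsn:" label and all values below are assumed/made-up for illustration.
info_output="       Dsn:                 mongodb://sample:s3cret@dokku-mongo-sample:27017/sample"
uri=$(echo "$info_output" | awk '/Dsn:/ {print $2}')
echo "$uri"
```

The extracted value is what goes into the `uri:` key of mongoid.yml.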
I don't know if this solves your problem, but a general piece of advice: don't go for the one-click deployment of Dokku. Spin up a droplet and install Dokku manually; it doesn't take much effort.
The one-click deployment creates unwanted config errors. That approach is only suggested if you want to do some R&D and play around with it.
The Dokku one-click images normally work fine; there is no problem with them.
Have you installed the MongoDB plugin for Dokku?
https://github.com/jeffutter/dokku-mongodb-plugin
If you have already installed the plugin, you don't upload the yml file; you just need to run the link command on your server:
mongodb:create <app>
mongodb:link <app> <database>
or you can create and link in the same command
mongodb:create <app> <database>
So I'm migrating from Heroku to AWS Elastic Beanstalk and testing the waters. I'm following this documentation:
AWS Docs :: Deploy Rails app to AWS
However, after following the documentation I keep receiving a 502 Bad Gateway error.
Here are the specs of my app:
Rails 4.1.8
Ruby 2.1.7
Server: Puma
So I checked my /log/nginx/error.log and here is what I see:
2015/11/24 06:44:12 [crit] 2689#0: *4719 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.13.129, server: _, request: "GET / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "my-app-env-mympay5afd.elasticbeanstalk.com"
From this AWS Forum thread it appears as though Puma is not starting correctly.
So the three log files that I have taken a look at are:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
and none of them seem to indicate any errors except for the "secret_key_base" error which I fixed (I used the eb setenv SECRET_KEY_BASE=[some_special_key] command).
One thing that could hint at the source of the issue is /var/log/nginx/rotated/error.log1448330461.gz has the following content
2015/11/24 01:06:55 [warn] 2680#0: duplicate MIME type "text/html" in /etc/nginx/nginx.conf:39
2015/11/24 01:06:55 [warn] 2680#0: conflicting server name "localhost" on 0.0.0.0:80, ignored
But these seem to be warnings rather than severe showstoppers.
Are there any other files that I should be taking a look at?
As another point of reference, I've looked at this SO Post which would seem to imply that I need to enable SSL in order for all of this to work.
Thanks in advance!
Got it.
In my production.rb I had a force_ssl setting, but I hadn't set up SSL yet since I was just starting out.
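For anyone hitting the same 502, a quick first sanity check is whether Puma ever created the socket nginx is trying to reach; the path below comes from the nginx error log above:

```shell
# Sketch: check for the puma socket nginx expects (path from the error log).
# If the socket is missing, puma never started; look at the app's own log
# rather than nginx's.
SOCK=/var/run/puma/my_app.sock
if [ -S "$SOCK" ]; then
  echo "puma socket present"
else
  echo "puma socket missing"
fi
```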
I have hosted a RoR app on AWS EC2. The development environment works fine, but when I start it in production mode it says: "We're sorry, but something went wrong. We've been notified about this issue and we'll take a look at it shortly."
I checked production.log:
[2015-07-02T16:37:21.257777 #12834] INFO -- : Migrating to <Table Name> (20150608154559)
All the table names are shown with a migration message like this.
Is this an error? How do I resolve it?
I tried running production mode on localhost, where it showed that the error was due to secret_key_base.
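In case it helps: the migration lines are just INFO messages, and the missing secret can be generated without Rails at all. A sketch using only standard tools; inside the app, `bundle exec rake secret` produces the same kind of value, which you then set in the server's environment:

```shell
# Sketch: generate a 128-hex-char secret and export it for the app to use.
# Inside a Rails app, `bundle exec rake secret` does the same job.
SECRET_KEY_BASE=$(head -c 64 /dev/urandom | od -An -tx1 | tr -d ' \n')
export SECRET_KEY_BASE
echo "${#SECRET_KEY_BASE}"   # length of the generated secret
```

64 random bytes become 128 hex characters, the length Rails' own generator produces.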
I have installed Cloud Foundry on Ubuntu and tried to push a sample hello-world application. I am getting the exception below. Has anyone faced the same issue? Please let me know how to resolve this problem. Spring applications are pushed correctly, but this exception is raised when I push Rails or Sinatra applications.
root@CFDemo1:~/helloworld# vmc push myapp03
Would you like to deploy from the current directory? [Yn]:
Application Deployed URL: 'myapp03.vcap.me'?
Detected a Sinatra Application, is this correct? [Yn]:
Memory Reservation [Default:128M] (64M, 128M, 256M, 512M, 1G or 2G)
Creating Application: OK
Would you like to bind any services to 'myapp03'? [yN]:
Uploading Application:
Checking for available resources: OK
Processing resources: OK
Packing application: OK
Uploading (0K): OK
Push Status: OK
Staging Application: OK
Starting Application: .
Error: Application [myapp03] failed to start, logs information below.
====> /logs/staging.log <====
Logfile created on 2011-08-02 16:56:28 +0530 by logger.rb/25413
Adding rack-1.3.1.gem to app...
Adding sinatra-1.2.6.gem to app...
Adding tilt-1.3.2.gem to app...
Adding bundler-1.0.10.gem to app...
====> logs/stderr.log <====
/usr/local/rvm/rubies/ruby-1.8.7-p352/bin/ruby: No such file or directory -- ./rubygems/ruby/1.8/bin/bundle (LoadError)
This issue is resolved in the latest Cloud Foundry source at http://github.com/cloudfoundry