I am trying out TorqueBox and having issues with my deployment descriptor. I'm using 2.0-beta2 with jruby-1.6.5. When I deploy using the torquebox deploy command, the application gets deployed within the application server; however, it is always at the root context (/) instead of the context I am specifying in my config. Here is my config/torquebox.rb:
TorqueBox.configure do |cfg|
  cfg.environment do
    RACK_ENV "qa"
  end
  cfg.web do |web|
    web.host "localhost"
    web.context "/my_application"
  end
  cfg.ruby do |ruby|
    ruby.version "1.9"
  end
end
I tried it with and without the host defined as well, and nothing changed. It's interesting because I know it's reading my config, as I see the following in the run log:
14:53:00,497 INFO [org.torquebox.core] (MSC service thread 1-2) evaling: "/Users/ejlevin1/Documents/Workspace/my_application/config/torquebox.rb"
However, the log line a few lines further down suggests it isn't honoring my context:
14:53:01,499 INFO [org.torquebox.core.runtime] (Thread-95) Creating ruby runtime (ruby_version: RUBY1_9, compile_mode: JIT, app: my_application, context: web)
Does anyone know what I am doing wrong? I tried deploying 2 applications to see if the server only honored this in the case of multiple applications running; however, that just gave me an error that seemed to be because they were both mounting off of root (/).
I think what's happening is your "external" descriptor is overriding your "internal" one. Your internal one is what you have above. But the 'torquebox deploy' command generates an external descriptor that tries to deploy your app at the root by default. Try running 'torquebox deploy /path/to/your/app --context-path=/my_application'
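If you'd rather not pass the flag every time, the external descriptor itself can carry the context. Roughly (a sketch only; the file name and root path are placeholders, and the web/context keys mirror what torquebox.rb exposes):
# my_application-knob.yml -- an external descriptor (file name illustrative)
application:
  root: /path/to/your/app
web:
  context: /my_application
environment:
  RACK_ENV: qa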
I have a Rails 6 application running on Debian buster. In one place I am using "low-level" caching. Here is the relevant code:
# Get the value.
def self.ae_enabled?()
  Rails.cache.fetch("ae_enabled", expires_in: 1.hour)
end

# Change the value.
def self.ae_toggle()
  ac = AdminConfiguration.find_by(name: "ae-enabled")
  ac.value = !ac.value
  ac.save()
  # Invalidate the cache.
  Rails.cache.delete("ae_enabled")
  return ac
end
This works fine ... for a while. At some point, and for reasons I cannot figure out, the cache directory tmp/cache/3F1/ where the above value is cached changes ownership from www-data:www-data (the user Apache runs under) to root:root. Once this happens Apache can no longer read this cached value and the application throws an error.
The odd thing is that none of the other directories under tmp/cache/ have their ownership changed; it is only the one associated with this low-level cache.
Why is that particular cache directory changing ownership?
Technical details: Rails version 6.0.3.3.
Apache does not usually touch the Rails cache at all, unless you're using Passenger, in which case it may be a Passenger bug or misconfiguration; check whether user sandboxing is enabled and configured correctly.
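If Passenger is in the picture, a quick sanity check of its user-switching settings (Apache flavour shown; the directive names are Passenger's standard ones, the values here are only illustrative):
PassengerUserSwitching on        # run each app as the owner of its config.ru
PassengerDefaultUser www-data    # fallback user when switching cannot be applied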
A typical Rails deployment has multiple processes:
a front-end web server handling static files and proxying requests to Rails (usually nginx; you've mentioned Apache)
the Rails application server (with Passenger this runs "inside" the previous one, but there is still a separate child process)
some background workers or processes run from cron
File ownership confusion most probably originates from one of the above writing to disk while running under a different OS user.
Look into how your processes are started. The first suspect is a cron job configured system-wide; those run as root.
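For example (hypothetical path and task name), a system-wide entry like the following runs as root, and anything it writes under tmp/cache/ will end up owned by root:root:
# /etc/cron.d/myapp -- the sixth field is the user the job runs as
0 * * * * root cd /var/www/myapp && bundle exec rake myapp:refresh_settings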
I have an already-deployed server on Jelastic. The problem is that whenever I deploy, rails_env in /etc/nginx/ruby.env sets itself to ROOT, not production as I want. My application then crashes, because Rails cannot find a ROOT environment in database.yml. I do not want to add a ROOT entry to database.yml; I want to keep it as clean as possible, so I want to deploy my app in the production environment.
When I click on "edit project" I get something like this:
It says that the context production is already in use (and it is, because THIS app is deployed in production mode, as you can see in the previous screen). When I do not choose any application deployment type, I get a blank select box with a ROOT placeholder (which is apparently used as the deployment type, because rails_env in ruby.env ends up set that way).
I also tried deploying the app from the deployment manager:
which allows me to choose the environment and deployment type, and
tells me that this context is already in use and asks whether I want to redeploy the context,
but it still deploys as ROOT, and I have to manually change the nginx ruby.env and restart nginx to make it work.
Do you have any idea what I am doing wrong?
Any suggestions?
Two more questions: why do my deploy hooks not save (for example, it runs the previous hooks even if I delete them and replace them with a simple "echo")?
And the last question: can I somehow create a new deployment type called "staging"? As we know, "development", "production" and "test" are meant for other things; I need a staging environment for different purposes, for example disabling mailers on client test servers.
My Travis tests for a Rails app have been working fine, but have suddenly started failing about one time in three with:
$ bundle exec rails test
FATAL: Listen error: unable to monitor directories for changes.
Visit
https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
for info on how to fix this.
That suggested URL proposes ways to increase the number of inotify watchers, but they require sudo to change the limit. That's fine on my dev machine (though I'm actually not experiencing the error there), but I don't know if that's possible (or advisable) in the Travis environment.
I looked at the Travis docs to see if there's a config setting for increasing the number of watchers, but I couldn't find anything.
So: what's the best way to deal with this error in a Travis CI test?
If you're running this on Travis CI or any CI/staging/testing server, you should not need to watch files for changes. The code is deployed to the server, then bundle exec rails test runs, and that's it. No file watching necessary.
What I suspect is that the config for your environment is not set up correctly, and the listen gem is somehow active in the testing environment when it should only be active in the development environment.
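For reference, a stock Rails 6 Gemfile keeps listen in the development group only (a sketch; the version constraint in your app may differ):
group :development do
  gem 'listen', '~> 3.2'  # powers file watching and code reloading; not needed in test or CI
end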
Try running the tests locally with the same environment as TravisCI (testing in this example):
RAILS_ENV=testing bundle exec rails test
and see what it says. If it gives you that error, check the config/environments/testing.rb file and look for config.cache_classes.
When config.cache_classes is set to true, the classes are cached and the listen/file-watcher will not be active. In your local development environment, config/environments/development.rb, the config.cache_classes setting should be set to false so that file watching and reloading happen.
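As an illustration, these are the stock Rails 6 settings involved (file names assume the default environment names; substitute testing.rb if that is what your environment is actually called):
# config/environments/test.rb
Rails.application.configure do
  config.cache_classes = true    # no code reloading, so no file watcher is started
end

# config/environments/development.rb
Rails.application.configure do
  config.cache_classes = false   # reload code on change
  config.file_watcher = ActiveSupport::EventedFileUpdateChecker  # this is what pulls in listen/inotify
end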
I have a Ruby on Rails app running on Elastic Beanstalk, and I want to upload some large files, possibly around 5 GB.
To do this, I added a config file at .ebextensions/nginx/01_upload_file_size.config with the following content:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20G;
After I deploy the code to EB, I restart the nginx server using the command sudo service nginx reload. This seems to work for a while.
Uploading large files the next day gives me a 'Connection is reset' error. The nginx error log (log/nginx/error.log) tells me: client intended to send too large body: 24084848 bytes
I have no idea why this occurs. It seems like the config file is ignored or reset after a while, but I can't find any reference to this happening in the documentation. Note that when I SSH into the EB environment and restart nginx again, I can upload large files without a problem.
After looking into everything, I saw these events in my EB console:
Added instance [i-076127f714faac566] to your environment.
Removed instance [i-0c51791325b54873c] from your environment.
I also notice that the IP address of the host changes when the config resets.
I think that when the instances were automatically added and removed from EB, it didn't apply the config file or didn't restart the nginx server like I did manually via SSH.
So the question is: how do I make sure that client_max_body_size is always set to 20G, even after an instance is removed and re-added? Or, how do I make the config persistent so I don't have to manually restart the nginx server?
I think you have two questions here - why is EB replacing your instance, and how can you automate the restart of nginx.
Answering the first question will take a bit of research on your part, but I suspect it may be the default CloudWatch alarm that kills instances when network traffic drops below a certain threshold.
The second question should be fairly straightforward; following the documentation, you should be able to add a section to 01_upload_file_size.config that automatically restarts nginx during the deployment process:
container_commands:
  01_restart_nginx:
    command: "service nginx reload"
I would also check to make sure that the /etc/nginx/conf.d/proxy.conf file is actually being created - I don't know if folders under .ebextensions are supported. You might need to move your config file to .ebextensions/01_upload_file_size.config.
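Putting the two pieces together, the whole file could look roughly like this (a sketch; it assumes the default Amazon Linux platform, where nginx reads /etc/nginx/conf.d/ and responds to service nginx reload):
# .ebextensions/01_upload_file_size.config
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20G;

container_commands:
  01_reload_nginx:
    command: "service nginx reload"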
I'm new to RoR.
I was able to install Rails and host it on WEBrick (a sample app with a "Welcome" controller) on my Windows machine.
Now I have a Unix WebLogic server along with a dedicated domain.
After exporting the .WAR file using Warbler, I accessed the Oracle Admin Console, from where I deployed the .WAR file to the dedicated domain. I did all this for the sample app with only the Welcome controller in it.
But even after deploying the WAR file, accessing the domain with the port number (:9002) ended in a 404 file not found error. Looking at the server logs, there were no records relating to any error, so the application must have been deployed properly. I assume I must have missed some basic configuration in routes.rb or similar files before deploying. Can anyone guess what the possibilities are, and if possible, point me to any tutorials that cover the configuration steps to carry out before deployment? Do I need to install both JRuby and Rails on the server before deployment?
I can't really guess from the 404 error alone.
You can try mapping your Rails app's Rack config to a different base URI.
All you need to do is wrap the existing 'run' command in a map block. Try this in your Rails 'config.ru' file:
map '/mydepartment' do
  run Myapp::Application
end
Now when you run 'rails server', the app should be at localhost:3000/mydepartment.
Not sure if this will give you the desired outcome, but worth a try.
One more thing: you should also add this to your config/environments/production.rb and config/environments/development.rb (whichever mode you are running in):
config.action_controller.asset_path = proc { |path| "/mydepartment#{path}" }
otherwise, when you call helpers such as stylesheet_link_tag in your views, they will generate links without the "/mydepartment" prefix.
Also, here are some guides you may refer to for good support:
JRubyOnRailsOnBEAWeblogic.
Use JRuby with JMX for Oracle WebLogic Server 11g
Let me know if it is not resolved.