New Relic Developer Mode - wrong base URL when run in production

I am trying to use New Relic to investigate a performance issue with my Rails 4 app. On my development machine, I run the app using Thin, and can successfully use the New Relic console by accessing http://localhost:3000/newrelic.
However, in production my app runs at a different base URL (http://server/app) and is served by Nginx and Passenger. When I try to access the New Relic console by visiting http://server/app/newrelic, I do get a response - but the console page is blank, with no data. Looking at the source of the page, I can see that it references resources at http://server/newrelic, WITHOUT the necessary base URL prefix of /app.
Can I configure New Relic to use the correct base URL?

The Ruby agent's "developer mode" feature should never be used in a production environment. The memory and CPU overhead of developer mode is significantly higher than the production monitoring performed by the agent, and it contains no mechanism for restricting access to the information it gathers, because it is only intended to be run locally.
In production, you should use the regular monitor_mode instead (configurable in your newrelic.yml), and view the graphs at your New Relic dashboard (rpm.newrelic.com/accounts/xxx/applications/xxx).
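As a sketch, the relevant switches live in newrelic.yml; the key names below match the Ruby agent's standard generated config, but check your own file for the exact layout:

```yaml
# Hypothetical newrelic.yml excerpt - illustrative only.
common: &default_settings
  license_key: '<YOUR_LICENSE_KEY>'
  monitor_mode: true

development:
  <<: *default_settings
  developer_mode: true    # serves the local console at /newrelic

production:
  <<: *default_settings
  developer_mode: false   # never expose the console in production
  monitor_mode: true      # report data to your New Relic dashboard instead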

Ok, I did work around the problem by adding this to nginx.conf on the production server:
location /newrelic/ {
    # trailing slash on proxy_pass so /newrelic/foo maps to /app/newrelic/foo
    # (without it, nginx would produce /app/newrelicfoo)
    proxy_pass http://127.0.0.1/app/newrelic/;
}
But this is only good for one application, so the original question still stands.

Related

Docker (rails) - Changes in server side code, require restart app

I'm using a Windows 10 machine, and I'm running a Rails application in a Docker container. Whenever I make a change to any server-side code (i.e. controllers or models), I have to run docker restart app.
However, my friend is using the same container on his Apple machine, and when he makes changes to server-side code he does not have to restart his app.
Why is this?
Rails has a configuration option (config.cache_classes) that specifies whether or not your application code should be cached in memory between requests. Having this option set to true will require you to restart your app if you make changes; having it set to false reloads your code on every request, so you don't have to restart.
It is recommended to set this to false in the development environment only. In production, you should leave it set to true, because Rails is faster when it doesn't have to reload your code every time it processes a request.
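As a sketch, the setting lives in the per-environment config files (the paths below are the Rails defaults):

```ruby
# config/environments/development.rb
Rails.application.configure do
  # Reload application code on every request; no restart needed after edits.
  config.cache_classes = false
end

# config/environments/production.rb
Rails.application.configure do
  # Cache code in memory between requests; faster, but changes need a restart.
  config.cache_classes = true
end
```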

More efficient way to view changes on a mobile device without pushing rails app to heroku?

Probably a dumb question:
Right now, to see changes made in development, I run rails s and view the local version of my site. To see how changes look on my phone, I currently commit to Git (no matter how small the change) and then push to Heroku. This takes time and results in lots of commits and deployments for minor changes (e.g. CSS tweaks).
What is a more efficient way to test changes for rails web apps on mobile?
NOTE: I am aware I can shrink my browser but it never fails I get different outcomes on my phone.
Any help is appreciated.
RELATED: How do I run a development Rails app/website on an iPod?
You can use Nitrous.io, which is a cloud development environment. I like it because not only can I view my work on mobile, but since it's a hosted URL, I can share it with others while my server is running.
1) Connect your phone to the same network your local server is running on and point it to http://[your server's ip]:3000
2) Use the Xcode iOS Simulator and/or the Android Emulator
You can also use ngrok (https://ngrok.com/), which gives you a way to make an external tunnel to the outside world (for free), so you can reach the app from outside your local network.
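For option 1) above, the dev server has to listen on an external interface before other devices (or ngrok) can reach it. A sketch of the commands - the ngrok CLI syntax shown is from ngrok 2.x and may differ in your version:

```shell
# Bind the Rails dev server to all interfaces so devices on the LAN can reach it
rails server -b 0.0.0.0 -p 3000

# In another terminal: tunnel local port 3000 to a public URL
ngrok http 3000
```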

Apache sub-route/symlink exceptions-to-the-rule

I don't know how to describe this problem without a specific setup, so I'm not sure if the post title is very meaningful.
Anyway, we have a Rails app called marketing running on Phusion Passenger. We have the Passenger VirtualHost set up with a RailsBaseURI /marketing entry, and the /var/www/html/marketing symlink that points to the Rails directory in a separate part of the file system (say /home/user1/marketing). This all works fine, and we can hit the app at ourdomain.com/marketing. The Rails app has a couple of routes like /marketing/businesses and /marketing/certificates that point to different "subapps" corresponding to various functionalities of our marketing division.
Now, I have a standalone Adobe AIR app called MarketingPeanuts that supports the AIR autoupdate feature. In short, the autoupdate requires the AIR installer package and a config file on the server ("autoupdate contents"), and the AIR application code points to this URL. I would like the URL for the autoupdate to be ourdomain.com/marketing/peanuts to maintain semantics. However, I do not want to put the autoupdate contents into the Rails directory because 1) the MarketingPeanuts AIR app is not related in any way to the Rails app (other than being another function of the marketing division), and 2) any time I need to update the AIR app, I would have to redeploy the Rails app just to get the most recent autoupdate contents onto the server.
So what I want to do is put the AIR autoupdate contents in a completely different part of the filesystem (say /home/user2/marketing_peanuts), and somehow tell Apache that if it sees the specific sub-URL /marketing/peanuts, point to this location, otherwise, send all other /marketing/* sub-URLs to the Rails directory (/home/user1/marketing from above). All while not having Rails complain about the non-existent route (although if Apache can solve this problem, then the Rails problem shouldn't even exist).
Is it possible to do this kind of thing in Apache (I'm guessing yes), and if yes, how?
You should be able to do this with mod_alias (which ships with Apache). Enable the module, then put this somewhere in your Apache config:
Alias /marketing/peanuts /home/user2/marketing_peanuts
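A slightly fuller sketch, using the directory names from the question. The Directory block grants Apache access to the new location, and the PassengerEnabled line is an assumption - it may be needed so Passenger doesn't try to route the aliased sub-URL through the Rails app:

```apache
# Serve the AIR autoupdate contents from outside the Rails tree
Alias /marketing/peanuts /home/user2/marketing_peanuts

<Directory /home/user2/marketing_peanuts>
    Options -Indexes
    Require all granted          # Apache 2.4; use Order/Allow directives on 2.2
</Directory>

<Location /marketing/peanuts>
    PassengerEnabled off         # keep Passenger from claiming this sub-URL
</Location>
```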

Setting Up a Test Environment For an ASP.NET MVC3 Website

I've been working for a client's website over the past year. I usually test things locally and then deploy straight to the production website. This has caused us some issues lately so I thought I should create a test/staging environment in which we could thoroughly test new features before pushing them into production.
Anyway, we have a VPS hosting account, and I usually use Remote Desktop to manage the website in IIS. To create a test environment, I copied the production website's folder into the same directory (so the two sit at the same level) and renamed the copy. Then I created a new website in IIS and mapped its physical path to the httpdocs folder inside the copied folder. After that, I set up a new application pool with essentially the same settings as the production website's pool. I also changed the test website's connection string.
But when I tried to view the test website, it did not work the way I expected it to. I keep getting &ReturnUrl=%2f appended to the query string, and the website is stripped of its styles (the CSS). I remember this used to happen before when we were still using a shared hosting account, but I have no idea how to fix it.
I really do not know what's wrong. I basically have the same exact setup except I'm using a different port and a different database. I even tried running the test website with the application pool of the production website, but that did not work either...
Any suggestions?
This looks like a permissions problem to me. Check that the user your new application pool runs as has the correct privileges on the new folder.
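A hedged example of granting read access with icacls - the site path and app pool name here are made up, so substitute your own, and if the pool runs as NETWORK SERVICE grant that account instead:

```shell
icacls "C:\sites\test-site\httpdocs" /grant "IIS AppPool\TestSitePool":(OI)(CI)RX /T
```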

Can't resolve "UnauthorizedAccessException" with MVC 2 application running under IIS7

We use MVC controllers that access the file system via System.IO in our application, and they work fine on localhost (under the Cassini development server). After deploying to IIS7, the controllers fail because they throw UnauthorizedAccessExceptions.
We have done the following to try to resolve the issue:
- Set NETWORK SERVICE and IUSR accounts to have permission on the files and folders in question
- Ensured the App Pool is running under NETWORK SERVICE and loading the user profile
- Application is running under full trust
- Tried adding impersonation to web.config and giving NETWORK SERVICE write permissions (not a great idea, because that's not what we actually want to do)
Now, we alternate between getting UnauthorizedAccessException and an IIS7 404 page that suggests the routes are being ignored completely (for example we serve "/favicon.ico" via a controller when the physical file actually lives at /content/images/favicon.ico). We used ProcessMonitor to try to track down the issue but weren't successful.
UPDATE:
This issue is intermittent. We had a brief few minutes where everything worked without making any configuration changes. We're running on EC2, so this could be related to a distributed file system. We're also using a separate drive to store all web site data, we're not using inetpub/wwwroot.
UPDATE 2:
The site works without incident under IIS 7.5, with no configuration changes needed but this is likely due to running with the new AppPoolIdentity. Otherwise it's an identical deployment. Unfortunately we can't run R2 on this EC2 instance.
One way to identify the cause is to use the Procmon tool from Sysinternals.
Procmon will show the reason a file could not be opened, and it will also show which process is holding the file.
The issue turned out to be the controller factory we were using not handling file requests properly.
