We use Ruby's default Logger to log errors, and we use delayed_job, which runs many worker processes, so we cannot manage the log files from a single place.
We need a Ruby-based log server with a rolling file appender and an archiving facility, so that we can push logs from every process to the log server and let it manage the logging.
Is there a Ruby-based solution, or another recommended approach, to this problem?
Have you looked at Ruby's syslog in the standard library? Normally the docs are non-existent, but the Rails docs seem to cover it, kinda. http://stdlib.rubyonrails.org/libdoc/syslog/rdoc/index.html
Otherwise you can find out some info by looking through the syslog files themselves and reading http://glu.ttono.us/articles/2007/07/25/ruby-syslog-readme, which is what I did when I started using it.
You don't say what OS you are on, but Mac OS and Linux have a very powerful syslog built in, which Ruby's syslog piggybacks on, so you should be able to send to the system's syslog and have it split out your stream, roll the files, forward them, and do lots of other things with them.
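For what it's worth, using it takes only a few lines. A minimal sketch, assuming the stdlib Syslog module (the program name and facility here are arbitrary):

    require 'syslog'

    # Open a connection to the system syslog; the OS then handles
    # rolling, archiving, and forwarding according to its own config.
    Syslog.open('delayed_job_worker', Syslog::LOG_PID, Syslog::LOG_LOCAL7) do |log|
      log.info('worker started')
      log.err('job failed: %s', 'details')   # printf-style formatting
    end

Each delayed_job worker process can open its own connection (LOG_PID tags every line with the worker's pid), so all processes can log to the same place without coordinating file access.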
Is there a way to pick up log messages that are written to a log file when using Docker's syslog log driver?
Whatever I write to stdout gets picked up by rsyslog, but anything logged to a file does not. I don't see any option on the syslog log driver that would let me point it at a log file.
Docker's logging interface is defined as stdout and stderr, so the best way is to modify the log settings of your process to send any log data to stdout and stderr.
Some applications can configure logging to go directly to syslog. Java processes using log4j are a good example of this.
If logging to a file is the only option available, scripts, Logstash, Fluentd, rsyslog, and syslog-ng can all ingest text files and output syslog. This can either be done inside the container with an additional service, or by using a shared, standardised logging area on each Docker host and running the ingestion from there.
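For a Rails process, for example, pointing the logger at stdout is a small change. A minimal sketch, assuming a reasonably recent Rails (the environment file is just the conventional place to put it):

    # config/environments/production.rb
    Rails.application.configure do
      # Write logs to stdout so the Docker log driver collects them.
      config.logger = ActiveSupport::Logger.new($stdout)
    end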
I have a Rails application on the Passenger web server, running in a Docker container. I'm trying to redirect application logs to Logstash: I redirect the Rails logs to STDOUT and configure the container to use the gelf log driver, which forwards STDOUT to the given Logstash server. But a problem arises: Passenger writes its own logs to STDOUT too, so I get a mixture of the two logs, which makes them difficult to separate and analyze.
What is best practice in such a situation? How can I label each log stream so I can separate them in Logstash?
If you really wanted, you could configure Passenger to write its own logs somewhere other than STDOUT, but I would avoid using STDOUT as an intermediary for Logstash in any case.
Try a library like logstash-logger. You could then write to a separate file, socket, or database. I think that's a cleaner approach, and potentially faster depending on the log destination.
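A rough sketch with logstash-logger, assuming a Logstash UDP input listening on port 5228 (the host and port are placeholders):

    # Gemfile: gem 'logstash-logger'
    require 'logstash-logger'

    # Ship log events straight to Logstash over UDP, bypassing STDOUT.
    logger = LogStashLogger.new(type: :udp, host: 'logstash.example.com', port: 5228)
    logger.info('request handled')

In a Rails app you could assign it as the application logger (config.logger = LogStashLogger.new(...)), which keeps your app's events out of the container's stdout stream and away from Passenger's own output.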
My problem is similar to "AWS: None of the Instances are sending data", but with a slightly different error message.
I have a Rails application running on ElasticBeanstalk, and it appears to be running correctly. Periodically, Enhanced Health Monitoring sends me error messages such as:
Environment health has transitioned from Ok to Degraded. 20.0 % of the
requests are failing with HTTP 5xx.
where the percentage varies up to 100%. Even though I've made no changes, a minute later I get a follow-up message telling me that everything is back to normal:
Environment health has transitioned from Degraded to Ok.
I've downloaded the full logs from ElasticBeanstalk but I don't know exactly where to look (there are around 20 different log files in various directories).
I'm currently using the free AWS tier with the smallest instances of database, server, etc. Could this be the cause? Which of the log files should I be looking in, and what should I be looking for?
I run Rails apps on Elastic Beanstalk and have found it helpful to think of Beanstalk as a computer (in this case an Amazon EC2 instance) running your Rails app and a web server (either Passenger or Puma). When you get a 500 error, it could be because your Rails app didn't deploy properly (in which case Passenger or Puma will return an error), or because your app deployed properly but encountered an error at runtime, just as it might on your local machine.
In either case, to diagnose an error, download the full logs from your AWS console (open the correct app environment and then choose Logs > Request Logs > Full logs > Download). Deployment errors are harder to diagnose, but I recommend starting by looking in var-XX/logs/log/eb-activity.log. I suspect your error is coming from your rails app itself, in which case I recommend looking in var-XX/app/support/logs/passenger.log and production.log. To find a 500 error, search for "500 Internal" and then treat the error like you would any other rails error.
You can go to the EC2 instance and run the application just as you would on your local machine, and then inspect the logs.
You can SSH into your EC2 instance using the command eb ssh and go to the /opt/python/ directory (it will be different for Ruby or other programming languages).
/opt/python/run is the directory where you will find the version of your application that is run from the EC2 instance. Look for the venv and app directories inside the run directory.
Note: the above folder structure is for Python, but a similar structure can be found after deployment for any other programming language. Just look for the standard deployment directory layout for your language.
For Python:
/opt/python: Root of where your application will end up.
/opt/python/current/app: The current application that is hosted in the environment.
/opt/python/on-deck/app: The app is initially put in on-deck and then, after the deployment is complete, moved to current. If you are getting failures in your container_commands, check out the on-deck folder, not the current folder.
/opt/python/current/env: All the env variables that eb will set up for you. If you are trying to reproduce an error, you may first need to source /opt/python/current/env to get things set up as they would be when eb deploy is running.
/opt/python/run/venv: The virtual env used by your application; you will also need to run source /opt/python/run/venv/bin/activate if you are trying to reproduce an error.
I know it is a little late, but I wanted to share the trick I use to find the error: I connect via SSH and then, once in the app directory, try to run rails console. It usually fails, but it normally shows the error you're making. This little trick has saved my life several times. Hope it helps!
Is it feasible to have a Ruby on Rails app, which is:
a) deployed on Heroku, and
b) working with a remote SQL Server database?
I take it that I'll need unixODBC installed on Heroku, but I cannot find a way to do so. Is this possible?
Or, is there any other way (without ODBC?) to accomplish this?
Updated:
Some info on the subject:
1) Heroku pre-installs both unixODBC and FreeTDS by default, so you already have them.
2) Also, it is possible to run shell commands via Heroku Console in backticks, e.g.:
heroku console
`odbcinst`
(runs "odbcinst" command in Heroku shell and shows the result)
3) You do not have access to the filesystem outside of your slice, where the packages are installed. If you only need a driver path, Heroku support can provide it (/usr/lib/odbc/libtdsodbc.so in my case).
4) You cannot run sudo commands in the Heroku shell.
At the moment, to connect to MS SQL Server you at least need to append to the freetds.conf file, even when using TinyTDS (there is an open ticket #2 on the TinyTDS GitHub issues page). The DSN-less connection instructions from wiki.rubyonrails.org/database-support/ms-sql didn't work for me; I guess that kind of connection requires some extra configuration as well.
freetds.conf cannot be modified without sudo. Therefore, I conclude that there is currently no way to make MS SQL and Heroku work together.
I’ve managed to set up this connection with EngineYard and activerecord-sqlserver-adapter.
I followed these instructions:
https://github.com/rails-sqlserver/activerecord-sqlserver-adapter/wiki/Platform-Installation---Ubuntu
(there are only some file path differences, e.g. odbc.ini is located in /etc/unixODBC, not in /etc; this is easy to work out).
I installed the unixODBC and FreeTDS packages using the EY Unix Packages feature, and made all the configurations manually over SSH. Sudo is available on EY (no password required). There is also a Chef Recipes feature to automate those configurations (it seems to be pretty easy; I'm going to try it tomorrow).
Hope this is helpful.
It is possible.
Because Heroku copies/symlinks its own config/database.yml over whatever you supply in your repository, you may need to take additional steps (e.g. in config/environments/production.rb or in config/initializers/remote_mssql_from_heroku.rb) to set up your application appropriately.
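For example, a minimal sketch of such an initializer, assuming activerecord-sqlserver-adapter is installed and the connection details live in environment variables (the variable names here are placeholders):

    # config/initializers/remote_mssql_from_heroku.rb
    # Re-establish the connection so Heroku's generated database.yml is bypassed.
    ActiveRecord::Base.establish_connection(
      adapter:  'sqlserver',
      host:     ENV['MSSQL_HOST'],
      port:     (ENV['MSSQL_PORT'] || 1433).to_i,
      database: ENV['MSSQL_DATABASE'],
      username: ENV['MSSQL_USERNAME'],
      password: ENV['MSSQL_PASSWORD']
    )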
You will face the challenge, however, that traffic from Heroku to your MSSQL database will traverse the public internet, and by default it will not be encrypted. Potentially anyone in the world could monitor the traffic between your Heroku application and your database, and even alter it in flight, whether for benign or malicious purposes, without you being able to detect it. MS SQL offers the capability to connect over SSL. This capability requires explicit configuration on the MSSQL server, so you must be able to access and modify that configuration, and it requires a client library that is up to date and capable of talking to MSSQL over SSL. Note that MSSQL server will enforce that your server certificate list a Common Name or Subject Alternative Name that exactly matches or wildcard-matches the server's FQDN (at least, the FQDN the server knows about), and that the client use an FQDN for the server that exactly matches or wildcard-matches one of the names on the certificate.
I've successfully used the following article, which uses Heroku's newer buildpack feature with TinyTDS to connect remotely to SQL Server 2008 R2. I'm still investigating how I could encrypt the traffic. Hope this helps others!
http://blog.firmhouse.com/connecting-to-sql-server-from-heroku-with-freetds-here-is-how-on-cedar#
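For reference, a DSN-less TinyTDS connection driven purely by runtime arguments might look like this sketch (host, credentials, and database are placeholders):

    require 'tiny_tds'

    # No freetds.conf needed: everything is passed at connect time.
    client = TinyTds::Client.new(
      host:     'mssql.example.com',
      port:     1433,
      username: 'app_user',
      password: ENV['MSSQL_PASSWORD'],
      database: 'legacy_db'
    )
    result = client.execute('SELECT TOP (1) * FROM orders')
    result.each { |row| puts row.inspect }
    client.close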
We're having a similar problem: we need to import old data from a SQL Server database into our new app. The data isn't a straight table import; it needs to undergo some processing and conversion. We've built an import layer for this, which lives in a private gem so as not to pollute the new app with the old data-conversion issues. This approach is also designed to permit incremental updates: as we get closer to launch, we'll keep syncing records up to the moment of switch-over.
Heroku told us that it's not trivial to connect to SQL Server, in particular because they don't support FreeTDS. Their support staff recommended running an instance with the import gem from a laptop in our office and configuring it to connect to their database (which requires a dedicated DB, not the free shared one). This sounded like the most palatable approach to us.
Secondly, regarding the security concerns that @Justice mentioned, we discussed configuring SSL for SQL Server with the hosting company, and they pointed out the complexities of this. They recommended a VPN as an easier solution. As we don't have office-side VPN hardware, the simplest (and free) solution proved to be an SSH tunnel.
We've set up an SSH tunnel from the laptop to the SQL Server Windows box, which was straightforward. We had CopSSH installed on Windows (which comes with a Linux shell, by the way), and we were able to simply set up a tunnel, having the laptop talk to localhost for its SQL Server connection, i.e.:
ssh -L 1433:localhost:1433 user@windows_server_name
I did not know Heroku has FreeTDS on it; I was told it did not. TinyTDS, if used with FreeTDS 0.91, can have zero freetds.conf dependency and be driven by runtime connection args. We are looking into building an Ubuntu 10.04 native gem that statically links 0.91 with OpenSSL, so you can just drop it into Heroku and use it to connect to Azure and/or your own outside DB.
I've got everything working with Ferret and acts_as_ferret in development (or with a localhost DRb), but I can't get my multiple-host deployment working. All of the remote systems get ECONNREFUSED when accessing the port. On the Ferret server, the daemon is listening on localhost only, despite the configuration listing the FQDN as the host.
I also tried switching to a UNIX socket to share data between the Ferret DRb daemon and the app code, but it too gets ECONNREFUSED. (The socket is available to all of the machines via an NFS mount.)
Is there a better way to do this, or should I be looking for another search indexer?
I did figure out that if the address is changed to druby://0.0.0.0:port, the daemon listens on all IPs on the DRb server; however, that doesn't provide any protection against bad code being injected into the DRb process.
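One mitigation that at least restricts who can reach the daemon is a DRb ACL from the standard library. A minimal sketch, assuming the app servers live on 192.168.1.* (the addresses, port, and front object are placeholders):

    require 'drb'
    require 'drb/acl'

    # Install the ACL before the service starts: deny everything,
    # then allow localhost and the known app servers.
    acl = ACL.new(%w[deny all allow 127.0.0.1 allow 192.168.1.*])
    DRb.install_acl(acl)

    front = Object.new  # stand-in for the object the Ferret DRb server exposes
    DRb.start_service('druby://0.0.0.0:9010', front)
    DRb.thread.join

This limits which hosts can connect, though it is not a substitute for real authentication; anything that can reach the port can still call methods on the exposed object.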
Basically, don't use Ferret. I've moved on to Xapian with acts_as_xapian for RoR. It supports multiple processes reading but only one writing, so it's an offline index. However, I will be able to share the same index between multiple servers via the shared file system (NFS).
Check out Pitfalls of acts_as_ferret, with DrbServer to the rescue
http://www.subelsky.com/2007/03/pitfalls-of-actsasferret-with-drbserver.html
It worked pretty well for me. The only thing I'd add is: be sure to set the host value to wherever your Ferret daemon is running.