My Rails 3 site won't start on Ubuntu/Apache2/Passenger

The server runs Ubuntu 10.04 on a Linode VPS.
Passenger Error:
A source file that the application requires, is missing.
It is possible that you didn't upload your application files correctly. Please check whether all your application files are uploaded.
A required library may not be installed. Please install all libraries that this application requires.
Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem.
Error message:
no such file to load -- bundler
Exception class:
LoadError
Application root:
/srv/rails_app/current
I do have Bundler installed; I know this because I've run "bundle" successfully.
Here are my Apache configs:
/etc/apache2/sites-enabled/000-default.conf:
<VirtualHost *:80>
PassengerRoot /usr/local/rvm/gems/ruby-1.9.2-p0/gems/passenger-2.2.15
PassengerRuby /usr/local/rvm/rubies/ruby-1.9.2-p0/bin/ruby
ServerName 173.230.152.41
DocumentRoot /srv/rails_app/current/public
<Directory "/srv/rails_app/current/public">
AllowOverride all
Options -MultiViews
</Directory>
</VirtualHost>
/etc/apache2/apache2.conf:
#
# Based upon the NCSA server configuration files originally by Rob McCool.
#
# This is the main Apache server configuration file. It contains the
# configuration directives that give the server its instructions.
# See http://httpd.apache.org/docs/2.2/ for detailed information about
# the directives.
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
# The configuration directives are grouped into three basic sections:
# 1. Directives that control the operation of the Apache server process as a
# whole (the 'global environment').
# 2. Directives that define the parameters of the 'main' or 'default' server,
# which responds to requests that aren't handled by a virtual host.
# These directives also provide default values for the settings
# of all virtual hosts.
# 3. Settings for virtual hosts, which allow Web requests to be sent to
# different IP addresses or hostnames and have them handled by the
# same Apache server process.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log"
# with ServerRoot set to "" will be interpreted by the
# server as "//var/log/apache2/foo.log".
#
### Section 1: Global Environment
#
# The directives in this section affect the overall operation of Apache,
# such as the number of concurrent requests it can handle or where it
# can find its configuration files.
#
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE! If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the LockFile documentation (available
# at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
ServerRoot "/etc/apache2"
#
# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
#
#<IfModule !mpm_winnt.c>
#<IfModule !mpm_netware.c>
LockFile /var/lock/apache2/accept.lock
#</IfModule>
#</IfModule>
#
# PidFile: The file in which the server should record its process
# identification number when it starts.
# This needs to be set in /etc/apache2/envvars
#
PidFile ${APACHE_PID_FILE}
#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300
#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On
#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100
#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 15
##
## Server-Pool Size Regulation (MPM specific)
##
# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_worker_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
# event MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_event_module>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
#
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
#
AccessFileName .htaccess
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<Files ~ "^\.ht">
Order allow,deny
Deny from all
Satisfy all
</Files>
#
# DefaultType is the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value. If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are
# text.
#
DefaultType text/plain
#
# HostnameLookups: Log the names of clients or just their IP addresses
# e.g., www.apache.org (on) or 204.62.129.132 (off).
# The default is off because it'd be overall better for the net if people
# had to knowingly turn this feature on, since enabling it means that
# each client request will result in AT LEAST one lookup request to the
# nameserver.
#
HostnameLookups Off
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog /var/log/apache2/error.log
#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn
# Include module configuration:
Include /etc/apache2/mods-enabled/*.load
Include /etc/apache2/mods-enabled/*.conf
# Include all the user configurations:
Include /etc/apache2/httpd.conf
# Include ports listing
Include /etc/apache2/ports.conf
#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
# If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
#
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
#
# Define an access log for VirtualHosts that don't define their own logfile
CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined
# Include of directories ignores editors' and dpkg's backup files,
# see README.Debian for details.
# Include generic snippets of statements
Include /etc/apache2/conf.d/
# Include the virtual host configurations:
Include /etc/apache2/sites-enabled/
LoadModule passenger_module /usr/local/rvm/gems/ruby-1.9.2-p0/gems/passenger-2.2.15/ext/apache2/mod_passenger.so
So how do I go about getting this working?

You need to follow RVM's instructions for generating a passenger wrapper script to use on your PassengerRuby line. Without it, you won't have the proper environment variables set, and Apache won't be able to find the gems installed in that RVM install.
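For reference, a minimal sketch of generating such a wrapper with RVM (the ruby version string matches the one in the question; adjust it to yours, and the /usr/local/rvm prefix assumes a system-wide RVM install):
rvm wrapper ruby-1.9.2-p0 passenger
# creates /usr/local/rvm/bin/passenger_ruby; point Apache at it:
# PassengerRuby /usr/local/rvm/bin/passenger_ruby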

A follow-up to @Chris's answer. If you're on Ubuntu and using Apache, do the following:
rvm install 1.9.2
rvm 1.9.2 --passenger
gem install passenger
rvmsudo passenger-install-apache2-module
Then in your apache2.conf add/modify the following lines:
LoadModule passenger_module /usr/local/rvm/gems/ruby-1.9.2-p0/gems/passenger-2.2.15/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/rvm/gems/ruby-1.9.2-p0/gems/passenger-2.2.15
PassengerRuby /usr/local/bin/passenger_ruby
This worked for me, after hours of trawling SO and the rest of the internet.
James
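To confirm the module and wrapper are actually picked up, a quick sanity check with the standard Apache tooling on Ubuntu:
# validate the configuration, restart, then watch the error log
apache2ctl configtest
sudo /etc/init.d/apache2 restart
tail -f /var/log/apache2/error.log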

It should be noted that Passenger 3 supports RVM natively without special instructions. The Passenger instructions on the RVM website will become obsolete.

Related

increase fastCGI idle-timeout on MAMP Pro

If you use MAMP in CGI mode (to support OPcache, for instance) and your page takes more than 30 seconds to load, you get an error similar to:
FastCGI: comm with server "/Applications/MAMP/fcgi-bin/php7.4.12.fcgi" aborted: idle timeout (30 sec)
How to increase that?
Solution No 1
Enable Xdebug. Xdebug is for debugging purposes: you might be stepping through a couple of breakpoints and moving around for several minutes, so does it make sense for your debugger to stop after 30 seconds and say "Oh well, I have to go, we can't do this anymore"?
That's why turning Xdebug on works. But should you do it?
If you get stuck once during development and want a fast workaround, use Xdebug; otherwise don't. Xdebug makes your requests a lot slower and turns the development environment into hell. Don't ever leave Xdebug on constantly; use it only when you need to debug.
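For reference, enabling Xdebug in MAMP usually just means uncommenting the zend_extension line in the php.ini of the PHP version you run; the exact path below is illustrative and varies per PHP version:
; in the php.ini for your PHP version (illustrative path)
zend_extension="/Applications/MAMP/bin/php/php7.4.12/lib/php/extensions/no-debug-non-zts-20190902/xdebug.so"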
Solution No 2
Add -idle-timeout.
First, let's talk about this nifty little menu (MAMP Pro's menu entry for editing httpd.conf).
It lets you modify an httpd.conf file, but where is that file? Is it the httpd.conf that will actually be used? No.
It is just a file full of placeholders; MAMP replaces the placeholders in this file and generates a new file, and that generated file is what is actually used.
Knowing that is important, because we are going to look at the actual output and see how to edit it to add idle-timeout support.
Step 1. Find the generated file
The generated httpd.conf file is somewhere on your computer, and you have to find it first. On Linux or macOS you can use:
locate httpd.conf
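(If the locate database hasn't been built yet, a plain find works as a fallback:)
find / -name httpd.conf 2>/dev/null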
I'm on macOS and found mine at:
/Library/Application Support/appsolute/MAMP PRO/conf/httpd.conf
At the top of this file you will read that it is auto-generated by MAMP PRO:
# !!!!!!! DO NOT EDIT THIS FILE !!!!!!!
# It is machine-generated by MAMP PRO, any changes made here will be lost!
Step 2. Find the section about fastcgi in the generated output
Now look for mod_fastcgi.c in this file; you will find something like:
<IfModule mod_fastcgi.c>
# URIs that begin with /fcgi-bin/, are found in /var/www/fcgi-bin/
Alias /fcgi-bin/ "/Applications/MAMP/fcgi-bin/"
# Anything in here is handled as a "dynamic" server if not defined as "static" or "external"
<Directory "/Applications/MAMP/fcgi-bin/">
SetHandler fastcgi-script
Options +ExecCGI
</Directory>
# Anything with one of these extensions is handled as a "dynamic" server if not defined as
# "static" or "external". Note: "dynamic" servers require ExecCGI to be on in their directory.
AddHandler fastcgi-script .fcgi .fpl
FastCgiIpcDir /Applications/MAMP/Library/logs/fastcgi
FastCgiServer /Applications/MAMP/fcgi-bin/php8.1.1.fcgi -socket fgci8.1.1.sock
FastCgiServer /Applications/MAMP/fcgi-bin/php7.4.21.fcgi -socket fgci7.4.21.sock
</IfModule>
Step 3. Add idle-timeout to the FastCgiServer lines in the generated file
Great. Now all we need to do is add the -idle-timeout option (e.g. -idle-timeout 3600) at the end of the lines that start with FastCgiServer, so in this example we should have:
FastCgiServer /Applications/MAMP/fcgi-bin/php8.1.1.fcgi -socket fgci8.1.1.sock -idle-timeout 3600
FastCgiServer /Applications/MAMP/fcgi-bin/php7.4.21.fcgi -socket fgci7.4.21.sock -idle-timeout 3600
But remember, this is the generated file; to make the change stick, we must modify the source file.
Step 4. Put the lines we wrote inside the source httpd.conf file
Using the MAMP Pro menu we open the source httpd.conf file and again search for mod_fastcgi.c;
this time we'll find:
<IfModule mod_fastcgi.c>
# URIs that begin with /fcgi-bin/, are found in /var/www/fcgi-bin/
Alias /fcgi-bin/ "/Applications/MAMP/fcgi-bin/"
# Anything in here is handled as a "dynamic" server if not defined as "static" or "external"
<Directory "/Applications/MAMP/fcgi-bin/">
SetHandler fastcgi-script
Options +ExecCGI
</Directory>
# Anything with one of these extensions is handled as a "dynamic" server if not defined as
# "static" or "external". Note: "dynamic" servers require ExecCGI to be on in their directory.
AddHandler fastcgi-script .fcgi .fpl
MAMP_ActionPhpCgi_MAMP
FastCgiIpcDir /Applications/MAMP/Library/logs/fastcgi
MAMP_FastCgiServer_MAMP
</IfModule>
Matching this against the output, you'll see that MAMP_FastCgiServer_MAMP is the placeholder that gets replaced by the FastCgiServer lines. So let's get rid of this placeholder and add those lines ourselves. To do that, change the placeholder name slightly, e.g. to # M#A#M#P_FastCgiServer_MAMP, and add the FastCgiServer lines with the idle timeout under or above it, e.g.:
# M#A#M#P_FastCgiServer_MAMP
FastCgiServer /Applications/MAMP/fcgi-bin/php8.1.1.fcgi -socket fgci8.1.1.sock -idle-timeout 3600
FastCgiServer /Applications/MAMP/fcgi-bin/php7.4.21.fcgi -socket fgci7.4.21.sock -idle-timeout 3600
So the full section becomes:
<IfModule mod_fastcgi.c>
# URIs that begin with /fcgi-bin/, are found in /var/www/fcgi-bin/
Alias /fcgi-bin/ "/Applications/MAMP/fcgi-bin/"
# Anything in here is handled as a "dynamic" server if not defined as "static" or "external"
<Directory "/Applications/MAMP/fcgi-bin/">
SetHandler fastcgi-script
Options +ExecCGI
</Directory>
# Anything with one of these extensions is handled as a "dynamic" server if not defined as
# "static" or "external". Note: "dynamic" servers require ExecCGI to be on in their directory.
AddHandler fastcgi-script .fcgi .fpl
MAMP_ActionPhpCgi_MAMP
FastCgiIpcDir /Applications/MAMP/Library/logs/fastcgi
# M#A#M#P_FastCgiServer_MAMP
FastCgiServer /Applications/MAMP/fcgi-bin/php8.1.1.fcgi -socket fgci8.1.1.sock -idle-timeout 3600
FastCgiServer /Applications/MAMP/fcgi-bin/php7.4.21.fcgi -socket fgci7.4.21.sock -idle-timeout 3600
</IfModule>
Step 5. Test our changes by checking the newly generated file
Save, restart the servers, and look at the generated file:
<IfModule mod_fastcgi.c>
# URIs that begin with /fcgi-bin/, are found in /var/www/fcgi-bin/
Alias /fcgi-bin/ "/Applications/MAMP/fcgi-bin/"
# Anything in here is handled as a "dynamic" server if not defined as "static" or "external"
<Directory "/Applications/MAMP/fcgi-bin/">
SetHandler fastcgi-script
Options +ExecCGI
</Directory>
# Anything with one of these extensions is handled as a "dynamic" server if not defined as
# "static" or "external". Note: "dynamic" servers require ExecCGI to be on in their directory.
AddHandler fastcgi-script .fcgi .fpl
FastCgiIpcDir /Applications/MAMP/Library/logs/fastcgi
# M#A#M#P_FastCgiServer_MAMP
FastCgiServer /Applications/MAMP/fcgi-bin/php8.1.1.fcgi -socket fgci8.1.1.sock -idle-timeout 3600
FastCgiServer /Applications/MAMP/fcgi-bin/php7.4.21.fcgi -socket fgci7.4.21.sock -idle-timeout 3600
</IfModule>
The operation was successful. If you change your PHP versions, just put the placeholder back and repeat these steps.
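To verify the new timeout actually applies, a deliberately slow page is an easy test. This sketch assumes MAMP's default htdocs directory and port 8888; adjust both to your setup:
# create a page that sleeps past the old 30-second limit, then request it
cat > /Applications/MAMP/htdocs/slow.php <<'PHP'
<?php sleep(60); echo "survived the idle timeout";
PHP
curl http://localhost:8888/slow.php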

filebeat not running with elastic search and kibana on ec2 not using logstash

I want to display nginx logs in Kibana.
Elasticsearch and Kibana are running fine.
The nginx logs are stored in /var/log/nginx/*.log.
I installed Filebeat and enabled the nginx module with it.
filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
# filestream is an input for collecting log messages from files.
- type: filestream
# Change to true to enable this input configuration.
enabled: false
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#prospector.scanner.exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
# Glob pattern for configuration loading
path: ${path.config}/modules.d/*.yml
# Set to true to enable config reloading
reload.enabled: false
# Period on which files under path should be checked for changes
#reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
host: "localhost:3000"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["localhost:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
#username: "temp"
#password: "temp#1234"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
- add_host_metadata:
when.not.contains.tags: forwarded
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
And the /etc/filebeat/modules.d/nginx.yml file:
# Module: nginx
access:
enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
var.paths: ["var/logs/nginx/*access.log"]
# Error logs
error:
enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
var.paths: ["var/logs/nginx/*error.log"]
# Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
ingress_controller:
enabled: false
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
After I ran the filebeat setup -e command I got the following output:
2021-12-18T19:19:44.352+0530 INFO cfgfile/reload.go:262 Loading of config files completed.
2021-12-18T19:19:44.352+0530 INFO [load] cfgfile/list.go:129 Stopping 2 runners ...
Loaded Ingest pipelines
After this, when I ran systemctl restart filebeat, I got the following error:
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sat 2021-12-18 19:03:53 IST; 17min ago
Docs: https://www.elastic.co/beats/filebeat
Main PID: 49606 (code=exited, status=1/FAILURE)
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: Unit filebeat.service entered failed state.
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: filebeat.service failed.
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: filebeat.service holdoff time over, scheduling restart.
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: start request repeated too quickly for filebeat.service
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: Unit filebeat.service entered failed state.
Dec 18 19:03:53 ip-10-249-5-178.ap-south-1.compute.internal systemd[1]: filebeat.service failed.
Any help?
Solved by deleting the registry directory:
rm -r /var/lib/filebeat/registry
I don't know the reason behind it, but after this the service started successfully.
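When Filebeat refuses to start under systemd like this, its built-in checks usually surface the real error; these are standard Filebeat subcommands:
filebeat test config   # validate filebeat.yml
filebeat test output   # check the connection to Elasticsearch
filebeat -e            # run in the foreground and log the startup error to stderr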

Apache 2.4.7 error client denied by server configuration: /home/appname

So I am using DigitalOcean to host my Ruby on Rails application.
I used this tutorial on how to deploy it: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-passenger-and-apache-on-ubuntu-14-04
After completing it I tried to access my server through its IP address; it gives me a 403 Access Denied error, and checking the Apache error log I find the following message:
AH01630: client denied by server configuration: /home/appname, referer: http://IPADDRESS/
My /etc/apache2/sites-available/appname.conf file is as follows:
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port t$
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
ServerName www.peerparking.com
ServerAlias www.peerparking.com
ServerAdmin webmaster@localhost
DocumentRoot /home/peerparking/peerparking/peerparking/public
RailsEnv development
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
<Directory /home/peerparking/peerparking/peerparking/public>
Require all granted
Options FollowSymLinks
</Directory>
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
My Ruby on Rails app is located at /username/home/appname/appname/appname.
Don't ask why I have three successive folders named the same; I messed up pulling from my repository.

Deploy to EC2 with Rubber

I have been having issues trying to deploy with Rubber.
In the terminal I ran:
rubber vulcanize complete_passenger_postgresql
and this is my rubber.yml:
# REQUIRED: The name of your application
app_name: app-name
# REQUIRED: The system user to run your app servers as
app_user: app
# REQUIRED: Notification emails (e.g. monit) get sent to this address
#
admin_email: "root##{full_host}"
# OPTIONAL: If not set, you won't be able to access web_tools
# server (graphite, graylog, monit status, haproxy status, etc)
# web_tools_user: admin
# web_tools_password: sekret
# REQUIRED: The timezone the server should be in
timezone: US/Eastern
# REQUIRED: the domain all the instances should be associated with
#
domain: foo.com
# OPTIONAL: See rubber-dns.yml for dns configuration
# This lets rubber update a dynamic dns service with the instance alias
# and ip when they are created. It also allows setting up arbitrary
# dns records (CNAME, MX, Round Robin DNS, etc)
# OPTIONAL: Additional rubber file to pull config from if it exists. This file will
# also be pushed to remote host at Rubber.root/config/rubber/rubber-secret.yml
#
rubber_secret: "#{File.expand_path('~') + '/.ec2' + (Rubber.env == 'production' ? '' : '_dev') + '/rubber-secret.yml' rescue 'rubber-secret.yml'}"
# OPTIONAL: Encryption key that was used to obfuscate the contents of rubber-secret.yml with "rubber util:obfuscation"
# Not that much better when stored in here, but you could use a ruby snippet in here to fetch it from a key server or something
#
# rubber_secret_key: "XXXyyy=="
# REQUIRED All known cloud providers with the settings needed to configure them
# There's only one working cloud provider right now - Amazon Web Services
# To implement another, clone lib/rubber/cloud/aws.rb or make the fog provider
# work in a generic fashion
#
cloud_providers:
aws:
# REQUIRED The AWS region that you want to use.
#
# Options include
# ap-northeast-1 # Asia Pacific (Tokyo) Region
# ap-southeast-1 # Asia Pacific (Singapore) Region
# ap-southeast-2 # Asia Pacific (Sydney) Region
# eu-west-1 # EU (Ireland) Region
# sa-east-1 # South America (Sao Paulo) Region
# us-east-1 # US East (Northern Virginia) Region
# us-west-1 # US West (Northern California) Region
# us-west-2 # US West (Oregon) Region
#
region: us-east-1
# REQUIRED The amazon keys and account ID (digits only, no dashes) used to access the AWS API
#
access_key: XXX
secret_access_key: YYY
account: ZZZ #entered in
# REQUIRED: The name of the amazon keypair and location of its private key
#
# NOTE: for some reason Capistrano requires you to have both the public and
# the private key in the same folder, the public key should have the
# extension ".pub". The easiest way to get your hand on this is to create the
# public key from the private key: ssh-keygen -y -f gsg-keypair > gsg-keypair.pub
#
key_name: guy
key_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/*' + cloud_providers.aws.key_name].first}"
# OPTIONAL: Needed for bundling a running instance using rubber:bundle
#
# pk_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/pk-*'].first}"
# cert_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/cert-*'].first}"
# image_bucket: "#{app_name}-images"
# OPTIONAL: Needed for backing up database to s3
# backup_bucket: "#{app_name}-backups"
# REQUIRED: the ami and instance type for creating instances
# The Ubuntu images at http://alestic.com/ work well
# Ubuntu 14.04.1 Trusty instance-store 64-bit: ami-92f569fa
#
# m1.small or m1.large or m1.xlarge
image_type: m3.medium
image_id: ami-1ecae776
# OPTIONAL: Provide fog-specific options directly. This should only be used if you need a special setting that
# Rubber does not directly expose. Since these settings will be passed directly through to fog, we can't make any
# guarantee about how they work (if fog renames an attribute, e.g., your config will break). Please see the fog
# source code for the option names.
# fog_options:
# EBS I/O optimized instance
# EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options
# between 500 Mbps and 1000 Mbps depending on the instance type used.
# Read more and make sure that your image_type supports ebs_optimized function at: http://aws.amazon.com/ec2/instance-types/
# ebs_optimized: false
# OPTIONAL: EC2 spot instance request support.
#
# Enables the creation of spot instance requests. Rubber will wait synchronously until the request is fulfilled,
# at which point it will begin initializing the instance, unless spot_instance_request_timeout is set.
# spot_instance: true
#
# The maximum price you would like to pay for your spot instance.
# spot_price: "0.085"
#
# If a spot instance request can't be fulfilled in 3 minutes, fallback to on-demand instance creation. If not set,
# the default is infinite.
# spot_instance_request_timeout: 180
# digital_ocean:
# REQUIRED: The Digital Ocean region that you want to use.
#
# Options include
# New York 1
# Amsterdam 1
# San Francisco 1
# New York 2
# Amsterdam 2
# Singapore 1
#
# These change often. Check https://www.digitalocean.com/droplets/new for the most up to date options.
# Default to New York 2 since this is the only region that currently supports private networking
# region: New York 2
# REQUIRED: The image name and type for creating instances.
# image_id: Ubuntu 14.04 x64
# image_type: 512MB
# Optionally enable private networking for your instances.
# This is currently only supported in New York 2.
# private_networking: true
# Use an alternate cloud provider supported by fog. This doesn't fully work
# yet due to differences in providers within fog, but gives you a starting
# point for contributing a new provider to rubber. See rubber/lib/rubber/cloud(.rb)
# fog:
# credentials:
# provider: rackspace
# rackspace_api_key: 'XXX'
# rackspace_username: 'YYY'
# image_type: 123
# image_id: 123
# REQUIRED the cloud provider to use
#
cloud_provider: aws
# OPTIONAL: Where to store instance data.
#
# Allowed forms are:
# filesystem: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"
# cloud storage (s3): "storage:#{cloud_providers.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}.yml"
# cloud table (simpledb): "table:RubberInstances_#{app_name}_#{Rubber.env}"
#
# If you need to port between forms, load the rails console then:
# Rubber.instances.save(location)
# where location is one of the allowed forms for this variable
#
# instance_storage: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"
# OPTIONAL: Where to store a backup of the instance data
#
# This is most useful when using a remote store in case you end up
# wiping the single copy of your instance data. When using the file
# store, the instance file is typically under version control with
# your project code, so that provides some safety.
#
# instance_storage_backup: "storage:#{cloud_providers.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}-#{Time.now.strftime('%Y%m%d-%H%M%S')}.yml"
# OPTIONAL: Default ports for security groups
web_port: 80
web_ssl_port: 443
web_tools_port: 8080
web_tools_ssl_port: 8443
# OPTIONAL: Define security groups
# Each security group is a name associated with a sequence of maps where the
# keys are the parameters to the ec2 AuthorizeSecurityGroupIngress API
# source_security_group_name, source_security_group_owner_id
# ip_protocol, from_port, to_port, cidr_ip
# If you want to use a source_group outside of this project, add "external_group: true"
# to prevent group_isolation from mangling its name, e.g. to give access to graphite
# server to other projects
#
# security_groups:
# graphite_server:
# description: The graphite_server security group to allow projects to send graphite data
# rules:
# - source_group_name: yourappname_production_collectd
# source_group_account: 123456
# external_group: true
# protocol: tcp
# from_port: "#{graphite_server_port}"
# to_port: "#{graphite_server_port}"
#
security_groups:
default:
description: The default security group
rules:
- source_group_name: default
source_group_account: "#{cloud_providers.aws.account}"
- protocol: tcp
from_port: 22
to_port: 22
source_ips: [0.0.0.0/0]
web:
description: "To open up port #{web_port}/#{web_ssl_port} for http server on web role"
rules:
- protocol: tcp
from_port: "#{web_port}"
to_port: "#{web_port}"
source_ips: [0.0.0.0/0]
- protocol: tcp
from_port: "#{web_ssl_port}"
to_port: "#{web_ssl_port}"
source_ips: [0.0.0.0/0]
web_tools:
description: "To open up port #{web_tools_port}/#{web_tools_ssl_port} for internal/tools http server"
rules:
- protocol: tcp
from_port: "#{web_tools_port}"
to_port: "#{web_tools_port}"
source_ips: [0.0.0.0/0]
- protocol: tcp
from_port: "#{web_tools_ssl_port}"
to_port: "#{web_tools_ssl_port}"
source_ips: [0.0.0.0/0]
# OPTIONAL: The default security groups to create instances with
assigned_security_groups: [default]
roles:
web:
assigned_security_groups: [web]
web_tools:
assigned_security_groups: [web_tools]
# OPTIONAL: Automatically create security groups for each host and role
# EC2 Classic doesn't allow one to change what groups an instance belongs to after
# creation, so it's good to have some empty ones predefined. EC2 with VPC, however,
# does allow changing security groups after instance creation and allows far fewer
# security groups per instance, so you shouldn't enable this setting if using VPC.
auto_security_groups: false
# OPTIONAL: Automatically isolate security groups for each appname/environment
# by mangling their names to be appname_env_groupname
# This makes it safer to have staging and production coexist on the same EC2
# account, or even multiple apps. NB: due to the security group limits per instance
# in EC2 with VPCs, this option should only be enabled if you're using EC2 Classic.
isolate_security_groups: false
# OPTIONAL: Prompts one to sync security group rules when the ones in amazon
# differ from those in rubber
prompt_for_security_group_sync: false
# OPTIONAL: A list of CIDR address blocks that represent private networks for your cluster.
# Set this to open up wide access to hosts in your network. Consequently, setting the CIDR block
# to anything other than a private, unroutable block would be a massive security hole.
private_networks: [10.0.0.0/8]
# OPTIONAL: The packages to install on all instances
# You can install a specific version of a package by using a sub-array of pkg, version
# For example, packages: [[rake, 0.7.1], irb]
packages: [postfix, build-essential, git-core, libxslt-dev, ntp]
# OPTIONAL: The package manager mirror to use for installation of primary packages (i.e., those not explicitly
# sourced from a different repository). If not specified, whatever mirror configured by your server image
# will be used.
#
# Note that Ubuntu has a special URL that can be used to auto-select the mirror based upon geoip. To use
# it, specify 'mirror://mirrors.ubuntu.com/mirrors.txt' as the value.
# package_manager_mirror: 'mirror://mirrors.ubuntu.com/mirrors.txt'
# OPTIONAL: The command used to identify your particular OS version. This will be used for configurations
# in Rubber templates that are parameterized by OS version (e.g., package lists). If not specified, Ubuntu
# will be assumed.
os_version_cmd: 'lsb_release -sr'
# OPTIONAL: gem sources to setup for rubygems
# gemsources: ["https://rubygems.org"]
# OPTIONAL: The gems to install on all instances
# You can install a specific version of a gem by using a sub-array of gem, version
# For example, gem: [[rails, 2.2.2], open4, aws-s3]
gems: [open4, aws-s3, bundler, [rubber, "#{Rubber.version}"]]
# OPTIONAL: A string prepended to shell command strings that cause multi
# statement shell commands to fail fast. You may need to comment this out
# on some platforms, but it works for me on linux/osx with a bash shell
#
stop_on_error_cmd: "function error_exit { exit 99; }; trap error_exit ERR"
# OPTIONAL: The default set of roles to use when creating a staging instance
# with "cap rubber:create_staging". By default this uses all the known roles,
# excluding slave roles, but this is not always desired for staging, so you can
# specify a different set here
#
# staging_roles: "web,app,db:primary=true"
# Auto detect staging roles
staging_roles: "#{known_roles.reject {|r| r =~ /slave/ || r =~ /^db$/ }.join(',')}"
# OPTIONAL: Lets one assign amazon elastic IPs (static IPs) to your instances
# You should typically set this on the role/host level rather than
# globally , unless you really do want all instances to have a
# static IP
#
# use_static_ip: true
# OPTIONAL: Specifies an instance to be created in the given availability zone
# Availability zones are sepcified by amazon to be somewhat isolated
# from each other so that hardware failures in one zone shouldn't
# affect instances in another. As such, it is good to specify these
# for instances that need to be redundant to reduce your chance of
# downtime. You should typically set this on the role/host level
# rather than globally. Use cap rubber:describe_zones to see the list
# of zones
# availability_zone: us-east-1a
# OPTIONAL: If you want to use Elastic Block Store (EBS) persistent
# volumes, add them to host specific overrides and they will get created
# and assigned to the instance. On initial creation, the volume will get
# attached _and_ formatted, but if your host disappears and you recreate
# it, the volume will only get remounted thereby preserving your data
#
# hosts:
# production15:
# availability_zone: us-east-1b
# volumes:
# - size: 100 # size of vol in GBs
# zone: us-east-1b # zone to create volume in, needs to match host's zone
# device: /dev/sdh # OS device to attach volume to
# mount: /mnt/postgresql # The directory to mount this volume to
# filesystem: ext4 # the filesystem to create on volume
#
# # OPTIONAL: Provide fog-specific options directly. This should only be used if you need a special setting that
# # Rubber does not directly expose. Since these settings will be passed directly through to fog, we can't make any
# # guarantee about how they work (if fog renames an attribute, e.g., your config will break). Please see the fog
# # source code for the option names.
# fog_options:
# type: gp2 # type of volume, standard (EBS magnetic), io1 (provisioned IOPS - SSD), or gp2 (general purpose - SSD).
# iops: 500 # The number of I/O operations per second (IOPS) that the volume supports.
# # Required when the volume type is io1; not used with non-provisioned IOPS volumes.
# - size: 10
# zone: us-east-1a
# device: /dev/sdi
# mount: /mnt/logs
# filesystem: ext4
# fog_options:
# type: io1
# iops: 500
#
# # volumes without mount/filesystem can be used in raid arrays
#
# - size: 50
# zone: us-east-1a
# device: /dev/sdx
# fog_options:
# type: gp2
# iops: 500
# - size: 50
# zone: us-east-1a
# device: /dev/sdy
# fog_options:
# type: gp2
# iops: 500
#
# # Use some ephemeral volumes for raid array
# local_volumes:
# - partition_device: /dev/sdb
# zero: false # zeros out disk for improved performance
# - partition_device: /dev/sdc
# zero: false # zeros out disk for improved performance
#
# # for raid array, you'll need to add mdadm to packages. Likewise,
# # xfsprogs is needed for xfs filesystem support
# #
# packages: [xfsprogs, mdadm]
# raid_volumes:
# - device: /dev/md0 # OS device to to create raid array on
# mount: /mnt/fast # The directory to mount this array to
# mount_opts: 'nobootwait' # Recent Ubuntu versions require this flag or SSH will not start on reboot
# filesystem: xfs # the filesystem to create on array
# filesystem_opts: -f # the filesystem opts in mkfs
# raid_level: 0 # the raid level to use for the array
# # if you're using Ubuntu 11.x or later (Natty, Oneiric, Precise, etc)
# # you will want to specify the source devices in their /dev/xvd format
# # see https://bugs.launchpad.net/ubuntu/+source/linux/+bug/684875 for
# # more information.
# # NOTE: Only make this change for raid source_devices, NOT generic
# # volume commands above.
# source_devices: [/dev/sdx, /dev/sdy] # the source EBS devices we are creating raid array from (Ubuntu Lucid or older)
# source_devices: [/dev/xvdx, /dev/xvdy] # the source EBS devices we are creating raid array from (Ubuntu Natty or newer)
#
# # for LVM volumes, you'll need to add lvm2 to packages. Likewise,
# # xfsprogs is needed for xfs filesystem support
# packages: [xfsprogs, lvm2]
# lvm_volume_groups:
# - name: vg # The volume group name
# physical_volumes: [/dev/sdx, /dev/sdy] # Devices used for LVM group (you can use just one, but you can't stripe then)
# extent_size: 32 # Size of the volume extent in MB
# volumes:
# - name: lv # Name of the logical volume
# size: 999.9 # Size of volume in GB (slightly less than sum of all physical volumes because LVM reserves some space)
# stripes: 2 # Count of stripes for volume
# filesystem: xfs # The filesystem to create on the logical volume
# filesystem_opts: -f # the filesystem opts in mkfs
# mount: /mnt/large_work_dir # The directory to mount this LVM volume to
# OPTIONAL: You can also define your own variables here for use when
# transforming config files, and they will be available in your config
# templates as <%%= rubber_env.var_name %>
#
# var_name: var_value
# All variables can also be overridden on the role, environment and/or host level by creating
# a sub level to the config under roles, environments and hosts. The precedence is host, environment, role
# e.g. to install mysql only on db role, and awstats only on web01:
# OPTIONAL: Role specific overrides
# roles:
# somerole:
# packages: []
# somerole2:
# myconfig: someval
# OPTIONAL: Environment specific overrides
# environments:
# staging:
# myconfig: otherval
# production:
# myconfig: val
# OPTIONAL: Host specific overrides
# hosts:
# somehost:
# packages: []
And when running
cap rubber:create
I get the following error after:
* executing `rubber:setup_local_aliases'
/home/casekey/.rvm/gems/ruby-2.1.1/gems/rubber-2.16.0/lib/rubber/recipes/rubber/setup.rb:192:in `block (3 levels) in load': no implicit conversion of nil into String (TypeError)
setup.rb at line 192:
local_hosts << ic.external_ip << ' ' << hosts_data.compact.join(' ') << "\n"
After attempting to debug this with binding.pry, line 192 goes through without any error.
Any ideas are welcome.
I have also tried:
bundle exec rake rails:update:bin
as per Rails 4 Error with every command "`load': no implicit conversion of nil into String" (Mac OS X 10.9)
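For what it's worth, that TypeError is exactly what Ruby raises when nil is appended to a String, which suggests ic.external_ip (or one of the hosts_data entries) is nil at that point, e.g. because the instance has no external IP recorded yet. A one-liner reproduces the message:
ruby -e '"" << nil'
# TypeError: no implicit conversion of nil into String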

Request exceeded the limit of 10 internal redirects due to probable configuration error

I was trying to remove index.php from the URL of a Magento website:
- Turn on "Use Web Server Rewrites"
- Set permission 755 on the necessary files and folders
- Make sure mod_rewrite is on
- Configure the .htaccess file: comment/uncomment allow symlinks, change the rewrite base from /magento/ to / or /var/www/hosts/www.domainname.com/ or /hosts/www.domainname.com/ or /www.domainname.com/
- Reindex and flush the cache
But everything results in a 500 Internal Server Error.
In the log file I can see:
[Fri Apr 20 11:11:59 2012] [error] [client 88.87.40.140] client denied by server configuration: /var/www/hosts/www.nordocks.no/app/etc/local.xml
[Fri Apr 20 11:12:07 2012] [error] [client 117.5.178.168] Request exceeded the limit of 10 internal redirects due to probable configuration error.
And this is my .htaccess
############################################
## uncomment these lines for CGI mode
## make sure to specify the correct cgi php binary file name
## it might be /cgi-bin/php-cgi
# Action php5-cgi /cgi-bin/php5-cgi
# AddHandler php5-cgi .php
############################################
## GoDaddy specific options
# Options -MultiViews
## you might also need to add this line to php.ini
## cgi.fix_pathinfo = 1
## if it still doesn't work, rename php.ini to php5.ini
############################################
## this line is specific for 1and1 hosting
#AddType x-mapp-php5 .php
#AddHandler x-mapp-php5 .php
############################################
## default index file
DirectoryIndex index.php
<IfModule mod_php5.c>
############################################
## adjust memory limit
# php_value memory_limit 64M
php_value memory_limit 256M
php_value max_execution_time 18000
############################################
## disable magic quotes for php request vars
php_flag magic_quotes_gpc off
############################################
## disable automatic session start
## before autoload was initialized
php_flag session.auto_start off
############################################
## enable resulting html compression
#php_flag zlib.output_compression on
###########################################
# disable user agent verification to not break multiple image upload
php_flag suhosin.session.cryptua off
###########################################
# turn off compatibility with PHP4 when dealing with objects
php_flag zend.ze1_compatibility_mode Off
</IfModule>
<IfModule mod_security.c>
###########################################
# disable POST processing to not break multiple image upload
SecFilterEngine Off
SecFilterScanPOST Off
</IfModule>
<IfModule mod_deflate.c>
############################################
## enable apache served files compression
## http://developer.yahoo.com/performance/rules.html#gzip
# Insert filter on all content
###SetOutputFilter DEFLATE
# Insert filter on selected content types only
#AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript
# Netscape 4.x has some problems...
#BrowserMatch ^Mozilla/4 gzip-only-text/html
# Netscape 4.06-4.08 have some more problems
#BrowserMatch ^Mozilla/4\.0[678] no-gzip
# MSIE masquerades as Netscape, but it is fine
#BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Don't compress images
#SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
# Make sure proxies don't deliver the wrong content
#Header append Vary User-Agent env=!dont-vary
</IfModule>
<IfModule mod_ssl.c>
############################################
## make HTTPS env vars available for CGI mode
SSLOptions StdEnvVars
</IfModule>
<IfModule mod_rewrite.c>
############################################
## enable rewrites
Options +FollowSymLinks
RewriteEngine on
############################################
## you can put here your magento root folder
## path relative to web root
#RewriteBase /
############################################
## workaround for HTTP authorization
## in CGI environment
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
############################################
## always send 404 on missing files in these folders
RewriteCond %{REQUEST_URI} !^/(media|skin|js)/
############################################
## never rewrite for existing files, directories and links
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
############################################
## rewrite everything else to index.php
RewriteRule .* index.php [L]
</IfModule>
############################################
## Prevent character encoding issues from server overrides
## If you still have problems, use the second line instead
AddDefaultCharset Off
#AddDefaultCharset UTF-8
<IfModule mod_expires.c>
############################################
## Add default Expires header
## http://developer.yahoo.com/performance/rules.html#expires
ExpiresDefault "access plus 1 year"
</IfModule>
############################################
## By default allow all access
Order allow,deny
Allow from all
###########################################
## Deny access to release notes to prevent disclosure of the installed Magento version
<Files RELEASE_NOTES.txt>
order allow,deny
deny from all
</Files>
############################################
## If running in cluster environment, uncomment this
## http://developer.yahoo.com/performance/rules.html#etags
#FileETag none
Please give me instructions on how to deal with this error.
On my side it was not a permission problem; the cause was simply (and the same is true in your .htaccess) that the line
RewriteBase /
was commented out.
Uncommenting it solved the problem.
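Before digging further, it's also worth confirming mod_rewrite is actually loaded; standard commands on Debian/Ubuntu:
apachectl -M | grep rewrite
# if it's missing:
sudo a2enmod rewrite
sudo service apache2 restart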
"Set permission 755 to necessary files and folders"
You've hit a rock on this one. The permissions have been changed on items that need permissions other than 755.
Resetting File Permissions
The base things to watch out for are the files & directories that must be writeable:
- file: magento/var/.htaccess
- directory: magento/app/etc
- directory: magento/var
- all the directories under: magento/media
chmod o+w var var/.htaccess app/etc
chmod -R o+w media
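If you want a fuller reset so files and directories get distinct modes, a common variant uses find (run from the Magento root; the 775/664 split is the usual choice, adjust to your ownership setup):
find var media -type d -exec chmod 775 {} \;
find var media -type f -exec chmod 664 {} \;
chmod o+w app/etc var/.htaccess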
I had this same problem.
The directory for my Magento website on my Ubuntu server is: /var/www/magento
When running Magento initial installation I selected "No" for Use Web Server Rewrites. This setting is under Admin Panel - System - General - Web.
In the .htaccess of the Magento root folder, after finishing the initial installation, it had:
RewriteBase /magento/
As noted above, my folder was /var/www/magento/.
I changed Use Web Server Rewrites to Yes, and in .htaccess I changed the line to:
RewriteBase /
Works fine now.
I ran into this issue as well. Adding
RewriteBase /
to the .htaccess brought me a step further.
After this I ran into the next issue:
Could not determine temp directory, please specify a cache_dir manually";i:1;s:4307:"#0 /XXX/XXX/lib/Zend/Cache/Backend.php(217): Zend_Cache::throwException('Could not deter...')
The solution for this was to edit
lib/Zend/Cache/Backend/File.php
In File.php, search for
'cache_dir' => null,
and replace with
'cache_dir' => "var/tmp/",
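Note that var/tmp/ must exist and be writable by the web server for that replacement value to work; assuming you run this from the Magento root:
mkdir -p var/tmp
chmod o+w var/tmp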
I hope this saves somebody else some time.
