I'm trying to use React Native Firebase version 6 with React Native 0.61, and I get a "Failed to install CocoaPods dependencies" error when I try to create a new project using this tutorial. When I run pod install --repo-update or pod install, it doesn't install some pods such as RNFirebase, Firebase/Core, etc. This is the error I get when I make a new project:
apple@Apples-MacBook-Pro ~ % npx @react-native-community/cli init --template=@react-native-firebase/template Fbsix
[React Native ASCII-art logo]
Welcome to React Native!
Learn once, write anywhere
✔ Downloading template
✔ Copying template
✔ Processing template
⠙ Installing CocoaPods dependencies (this may take a few minutes)
internal/modules/cjs/loader.js:796
throw err;
^
Error: Cannot find module '/Users/apple/Fbsix/ios/undefined'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:793:17)
at Function.Module._load (internal/modules/cjs/loader.js:686:27)
at Function.Module.runMain (internal/modules/cjs/loader.js:1043:10)
at internal/main/run_main_module.js:17:11 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
✖ Installing CocoaPods dependencies (this may take a few minutes)
error Error: Failed to install CocoaPods dependencies for iOS project, which is required by this template.
Please try again manually: "cd ./Fbsix/ios && pod install".
CocoaPods documentation: https://cocoapods.org/
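For reference, the manual steps suggested by the error output, plus the repo update mentioned above, are what I already tried; they still leave pods like RNFirebase and Firebase/Core uninstalled:
cd Fbsix/ios
pod install --repo-update
pod install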
Any help is appreciated, thank you.
I made a custom message folder msg in my package; in the msg folder I defined a .msg file, then added the message-generation dependencies in the package.xml file, and I am trying to make the corresponding changes in the CMakeLists. The error is:
CMake Error at /opt/ros/noetic/share/genmsg/cmake/genmsg-extras.cmake:271 (message):
Could not find 'share/rospy/cmake/rospy-msg-paths.cmake' (searched in
'/home/vrushabh/v1_wiki_ws/devel;/opt/ros/noetic').
Call Stack (most recent call first):
robot_pkg/CMakeLists.txt:72 (generate_messages)
-- Configuring incomplete, errors occurred!
See also "/home/vrushabh/v1_wiki_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/vrushabh/v1_wiki_ws/build/CMakeFiles/CMakeError.log".
make: *** [Makefile:1440: cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed
My CMakeLists.txt looks like:
cmake_minimum_required(VERSION 3.0.2)
project(robot_pkg)
## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
rospy
std_msgs
message_generation
geometry_msgs
)
## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)
## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()
################################################
## Declare ROS messages, services and actions ##
################################################
## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
## * add a build_depend tag for "message_generation"
## * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
## but can be declared for certainty nonetheless:
## * add a exec_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
## * add "message_generation" and every package in MSG_DEP_SET to
## find_package(catkin REQUIRED COMPONENTS ...)
## * add "message_runtime" and every package in MSG_DEP_SET to
## catkin_package(CATKIN_DEPENDS ...)
## * uncomment the add_*_files sections below as needed
## and list every .msg/.srv/.action file to be processed
## * uncomment the generate_messages entry below
## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)
## Generate messages in the 'msg' folder
add_message_files(
FILES
otSensor.msg
)
## Generate services in the 'srv' folder
# add_service_files(
# FILES
# Service1.srv
# Service2.srv
# )
## Generate actions in the 'action' folder
# add_action_files(
# FILES
# Action1.action
# Action2.action
# )
## Generate added messages and services with any dependencies listed here
generate_messages(
DEPENDENCIES
rospy
std_msgs
geometry_msgs
)
################################################
## Declare ROS dynamic reconfigure parameters ##
################################################
## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
## * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
## * add "dynamic_reconfigure" to
## find_package(catkin REQUIRED COMPONENTS ...)
## * uncomment the "generate_dynamic_reconfigure_options" section below
## and list every .cfg file to be processed
## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
# cfg/DynReconf1.cfg
# cfg/DynReconf2.cfg
# )
###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
# catkin_package(
#INCLUDE_DIRS include
#LIBRARIES robot_pkg
#CATKIN_DEPENDS rospy std_msgs message_runtime
#DEPENDS system_lib
#)
###########
## Build ##
###########
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
include
${catkin_INCLUDE_DIRS}
)
## Declare a C++ library
# add_library(${PROJECT_NAME}
# src/${PROJECT_NAME}/robot_pkg.cpp
# )
## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
# add_executable(${PROJECT_NAME}_node src/robot_pkg_node.cpp)
## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")
## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Specify libraries to link a library or executable target against
# target_link_libraries(${PROJECT_NAME}_node
# ${catkin_LIBRARIES}
# )
#############
## Install ##
#############
# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# catkin_install_python(PROGRAMS
# scripts/my_python_script
# DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark executables for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_executables.html
# install(TARGETS ${PROJECT_NAME}_node
# RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark libraries for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_libraries.html
# install(TARGETS ${PROJECT_NAME}
# ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}
# )
## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
# DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
# FILES_MATCHING PATTERN "*.h"
# PATTERN ".svn" EXCLUDE
# )
## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
# # myfile1
# # myfile2
# DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )
#############
## Testing ##
#############
## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_robot_pkg.cpp)
# if(TARGET ${PROJECT_NAME}-test)
# target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()
## Add folders to be run by python nosetests
# catkin_add_nosetests(test)
What changes should I make in the CMakeLists to remove the error?
Remove rospy from generate_messages(DEPENDENCIES ...). The error you had was the build system trying to find a message package called rospy: rospy is a client library, not a message package, so genmsg searches for rospy-msg-paths.cmake and fails.
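With rospy removed, the generate_messages block from the question becomes:
generate_messages(
  DEPENDENCIES
  std_msgs
  geometry_msgs
)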
After installing ROS Noetic on Ubuntu 20.04 LTS, I ran the command catkin_make to create the environment, as shown in the screenshot, but I don't understand the error the console gives me.
I am new to ROS and quite unfamiliar with bash; could someone help me?
Thank you very much in advance for your availability.
Here is the file that was asked for, but I don't know how to provide more information about the catkin workspace config.
cmake_minimum_required(VERSION 3.0.2)
project(beginner_tutorials_)
## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
message_generation
roscpp
rospy
std_msgs
)
## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)
## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()
################################################
## Declare ROS messages, services and actions ##
################################################
## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
## * add a build_depend tag for "message_generation"
## * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
## but can be declared for certainty nonetheless:
## * add a exec_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
## * add "message_generation" and every package in MSG_DEP_SET to
## find_package(catkin REQUIRED COMPONENTS ...)
## * add "message_runtime" and every package in MSG_DEP_SET to
## catkin_package(CATKIN_DEPENDS ...)
catkin_package(
CATKIN_DEPENDS_runtime
)
## * uncomment the add_*_files sections below as needed
## and list every .msg/.srv/.action file to be processed
## * uncomment the generate_messages entry below
## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)
## Generate messages in the 'msg' folder
add_message_files(
my_message.msg
# Message1.msg
# Message2.msg
)
## Generate services in the 'srv' folder
# add_service_files(
# FILES
# Service1.srv
# Service2.srv
# )
## Generate actions in the 'action' folder
# add_action_files(
# FILES
# Action1.action
# Action2.action
# )
## Generate added messages and services with any dependencies listed here
generate_messages(
# DEPENDENCIES
std_msgs
)
################################################
## Declare ROS dynamic reconfigure parameters ##
################################################
## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
## * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
## * add "dynamic_reconfigure" to
## find_package(catkin REQUIRED COMPONENTS ...)
## * uncomment the "generate_dynamic_reconfigure_options" section below
## and list every .cfg file to be processed
## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
# cfg/DynReconf1.cfg
# cfg/DynReconf2.cfg
# )
###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES beginner_tutorials_
# CATKIN_DEPENDS roscpp rospy std_msgs
# DEPENDS system_lib
)
###########
## Build ##
###########
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
${catkin_INCLUDE_DIRS}
)
## Declare a C++ library
# add_library(${PROJECT_NAME}
# src/${PROJECT_NAME}/beginner_tutorials_.cpp
# )
## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
# add_executable(${PROJECT_NAME}_node src/beginner_tutorials__node.cpp)
## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")
## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Specify libraries to link a library or executable target against
# target_link_libraries(${PROJECT_NAME}_node
# ${catkin_LIBRARIES}
# )
#############
## Install ##
#############
# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# catkin_install_python(PROGRAMS
# scripts/my_python_script
# DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark executables for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_executables.html
# install(TARGETS ${PROJECT_NAME}_node
# RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark libraries for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_libraries.html
# install(TARGETS ${PROJECT_NAME}
# ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}
# )
## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
# DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
# FILES_MATCHING PATTERN "*.h"
# PATTERN ".svn" EXCLUDE
# )
## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
# # myfile1
# # myfile2
# DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )
#############
## Testing ##
#############
## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_beginner_tutorials_.cpp)
# if(TARGET ${PROJECT_NAME}-test)
# target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()
## Add folders to be run by python nosetests
# catkin_add_nosetests(test)
The line
catkin_package(
CATKIN_DEPENDS_runtime
)
does not seem correct to me. I'm not aware of a CATKIN_DEPENDS_runtime option that exists. Have a look at the CMake tutorial page; under 8.2 there is an example CMakeLists.txt.
Is this perhaps a copy-and-paste error, and you actually want to do:
catkin_package(
CATKIN_DEPENDS message_runtime
)
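As a side note that goes beyond the line you asked about (so treat it as a suggestion): add_message_files in your file is missing the FILES keyword, and generate_messages has its DEPENDENCIES keyword commented out. The canonical forms from the catkin tutorials would be:
add_message_files(
  FILES
  my_message.msg
)
generate_messages(
  DEPENDENCIES
  std_msgs
)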
I'm trying to send logs from Winlogbeat to my ELK stack.
I installed my ELK stack with Docker and configured TLS on it.
I did everything according to the official guide, and it worked on my host.
However, when I copied the same winlogbeat directory to my Event Collector (EVC) server, it did not work (all files are the same, including the .yml file).
When trying to run "winlogbeat.exe setup -e" I got the following error: 'error connecting to elasticsearch at "https://elastic-host:9200" Get "https://elastic-host:9200" Winlogbeat setup error: x509 certificate is valid for elastic-host ip, not elastic-host ip' (same IPs). The CA is already added to the trusted root certificates. Everything is configured the same as on the host: on the host it works, on the server it doesn't. (The ELK server and the EVC are in the same segment, so there shouldn't be any firewall drops.)
My .yml (same file on host and EVC server):
On the host it also works without the ssl settings, and the traffic is still encrypted thanks to the TLS I configured on the Docker cluster, so I'm not sure the ssl configuration is needed (but I wanted to include it in case it is).
# This file is an example configuration file highlighting only the most common
# options. The winlogbeat.reference.yml file from the same directory contains
# all the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/winlogbeat/index.html
# ======================== Winlogbeat specific options =========================
# event_logs specifies a list of event logs to monitor as well as any
# accompanying options. The YAML data type of event_logs is a list of
# dictionaries.
#
# The supported keys are name (required), tags, fields, fields_under_root,
# forwarded, ignore_older, level, event_id, provider, and include_xml. Please
# visit the documentation for the complete details of each option.
# https://go.es.io/WinlogbeatConfig
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h

  - name: System

  - name: Security
    processors:
      - script:
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js

  - name: Microsoft-Windows-Sysmon/Operational
    processors:
      - script:
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js

  - name: Windows PowerShell
    event_id: 400, 403, 600, 800
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: Microsoft-Windows-PowerShell/Operational
    event_id: 4103, 4104, 4105, 4106
    processors:
      - script:
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js

  - name: ForwardedEvents
    tags: [forwarded]
    processors:
      - script:
          when.equals.winlog.channel: Security
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-Sysmon/Operational
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js
      - script:
          when.equals.winlog.channel: Windows PowerShell
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js
      - script:
          when.equals.winlog.channel: Microsoft-Windows-PowerShell/Operational
          lang: javascript
          id: powershell
          file: ${path.home}/module/powershell/config/winlogbeat-powershell.js
# ====================== Elasticsearch template settings =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.101.129:5601"
  protocol: https
  username: "elastic"
  password: "passwd"

setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
setup.kibana.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
setup.kibana.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Winlogbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.101.129:9200"]
  username: "elastic"
  password: "passwd"
  # Protocol - either `http` (default) or `https`.
  protocol: "https"

output.elasticsearch.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
output.elasticsearch.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
output.elasticsearch.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"

# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Winlogbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Winlogbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the winlogbeat.
#instrumentation:
# Set to true to enable instrumentation of winlogbeat.
#enabled: false
# Environment in which winlogbeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
In your output, you need to specify ssl.verification_mode: certificate.
For your example, it looks like it is the Kibana setup section that has a certificate specified on it:
setup.kibana.ssl.enabled: true
setup.kibana.ssl.certificate_authorities: ["C:\\Program Files\\Winlogbeat\\ca.crt"]
setup.kibana.ssl.certificate: "C:\\Program Files\\Winlogbeat\\winlogbeat.crt"
setup.kibana.ssl.key: "C:\\Program Files\\Winlogbeat\\winlogbeat.key"
setup.kibana.ssl.verification_mode: certificate
Older versions of winlogbeat will need ssl.verification_mode: none instead.
See SSL/TLS configuration documentation at https://www.elastic.co/guide/en/beats/winlogbeat/current/configuration-ssl.html
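Since the failing connection in the question is the Elasticsearch one, the equivalent setting on the output, written in the same flattened-key style as the question's config, would be:
output.elasticsearch.ssl.verification_mode: certificate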
I am using Elasticsearch 2.3.4 with Searchkick and MongoDB on a production server. We are running our entire Rails app on a 2 GB RAM AWS instance. Elasticsearch is consuming so much RAM that the server goes down. How do I solve this issue?
This is my elasticsearch.yml file:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: ********
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
2 GB of RAM is not enough to run all of that. Either separate the components across multiple instances, or scale up to a larger instance type with much more RAM.
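If the components must share the 2 GB box for now, one stopgap (not a fix) for Elasticsearch 2.x is to cap the JVM heap via the ES_HEAP_SIZE variable that the comments in your own elasticsearch.yml mention; for the Debian/Ubuntu package that usually lives in /etc/default/elasticsearch (the path is an assumption about your install):
# /etc/default/elasticsearch -- cap the heap so Elasticsearch leaves RAM for Rails and MongoDB
ES_HEAP_SIZE=512m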
I have been having issues trying to deploy with rubber. In the terminal:
rubber vulcanize complete_passenger_postgresql
and in my rubber.yml
# REQUIRED: The name of your application
app_name: app-name
# REQUIRED: The system user to run your app servers as
app_user: app
# REQUIRED: Notification emails (e.g. monit) get sent to this address
#
admin_email: "root@#{full_host}"
# OPTIONAL: If not set, you won't be able to access web_tools
# server (graphite, graylog, monit status, haproxy status, etc)
# web_tools_user: admin
# web_tools_password: sekret
# REQUIRED: The timezone the server should be in
timezone: US/Eastern
# REQUIRED: the domain all the instances should be associated with
#
domain: foo.com
# OPTIONAL: See rubber-dns.yml for dns configuration
# This lets rubber update a dynamic dns service with the instance alias
# and ip when they are created. It also allows setting up arbitrary
# dns records (CNAME, MX, Round Robin DNS, etc)
# OPTIONAL: Additional rubber file to pull config from if it exists. This file will
# also be pushed to remote host at Rubber.root/config/rubber/rubber-secret.yml
#
rubber_secret: "#{File.expand_path('~') + '/.ec2' + (Rubber.env == 'production' ? '' : '_dev') + '/rubber-secret.yml' rescue 'rubber-secret.yml'}"
# OPTIONAL: Encryption key that was used to obfuscate the contents of rubber-secret.yml with "rubber util:obfuscation"
# Not that much better when stored in here, but you could use a ruby snippet in here to fetch it from a key server or something
#
# rubber_secret_key: "XXXyyy=="
# REQUIRED All known cloud providers with the settings needed to configure them
# There's only one working cloud provider right now - Amazon Web Services
# To implement another, clone lib/rubber/cloud/aws.rb or make the fog provider
# work in a generic fashion
#
cloud_providers:
  aws:
    # REQUIRED The AWS region that you want to use.
    #
    # Options include
    #   ap-northeast-1  # Asia Pacific (Tokyo) Region
    #   ap-southeast-1  # Asia Pacific (Singapore) Region
    #   ap-southeast-2  # Asia Pacific (Sydney) Region
    #   eu-west-1       # EU (Ireland) Region
    #   sa-east-1       # South America (Sao Paulo) Region
    #   us-east-1       # US East (Northern Virginia) Region
    #   us-west-1       # US West (Northern California) Region
    #   us-west-2       # US West (Oregon) Region
    #
    region: us-east-1

    # REQUIRED The amazon keys and account ID (digits only, no dashes) used to access the AWS API
    #
    access_key: XXX
    secret_access_key: YYY
    account: ZZZ # entered in

    # REQUIRED: The name of the amazon keypair and location of its private key
    #
    # NOTE: for some reason Capistrano requires you to have both the public and
    # the private key in the same folder, the public key should have the
    # extension ".pub". The easiest way to get your hand on this is to create the
    # public key from the private key: ssh-keygen -y -f gsg-keypair > gsg-keypair.pub
    #
    key_name: guy
    key_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/*' + cloud_providers.aws.key_name].first}"

    # OPTIONAL: Needed for bundling a running instance using rubber:bundle
    #
    # pk_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/pk-*'].first}"
    # cert_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/cert-*'].first}"
    # image_bucket: "#{app_name}-images"

    # OPTIONAL: Needed for backing up database to s3
    # backup_bucket: "#{app_name}-backups"

    # REQUIRED: the ami and instance type for creating instances
    # The Ubuntu images at http://alestic.com/ work well
    # Ubuntu 14.04.1 Trusty instance-store 64-bit: ami-92f569fa
    #
    # m1.small or m1.large or m1.xlarge
    image_type: m3.medium
    image_id: ami-1ecae776
# OPTIONAL: Provide fog-specific options directly. This should only be used if you need a special setting that
# Rubber does not directly expose. Since these settings will be passed directly through to fog, we can't make any
# guarantee about how they work (if fog renames an attribute, e.g., your config will break). Please see the fog
# source code for the option names.
# fog_options:
# EBS I/O optimized instance
# EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options
# between 500 Mbps and 1000 Mbps depending on the instance type used.
# Read more and make sure that your image_type supports ebs_optimized function at: http://aws.amazon.com/ec2/instance-types/
# ebs_optimized: false
# OPTIONAL: EC2 spot instance request support.
#
# Enables the creation of spot instance requests. Rubber will wait synchronously until the request is fulfilled,
# at which point it will begin initializing the instance, unless spot_instance_request_timeout is set.
# spot_instance: true
#
# The maximum price you would like to pay for your spot instance.
# spot_price: "0.085"
#
# If a spot instance request can't be fulfilled in 3 minutes, fallback to on-demand instance creation. If not set,
# the default is infinite.
# spot_instance_request_timeout: 180
# digital_ocean:
# REQUIRED: The Digital Ocean region that you want to use.
#
# Options include
# New York 1
# Amsterdam 1
# San Francisco 1
# New York 2
# Amsterdam 2
# Singapore 1
#
# These change often. Check https://www.digitalocean.com/droplets/new for the most up to date options.
# Default to New York 2 since this is the only region that currently supports private networking
# region: New York 2
# REQUIRED: The image name and type for creating instances.
# image_id: Ubuntu 14.04 x64
# image_type: 512MB
# Optionally enable private networking for your instances.
# This is currently only supported in New York 2.
# private_networking: true
# Use an alternate cloud provider supported by fog. This doesn't fully work
# yet due to differences in providers within fog, but gives you a starting
# point for contributing a new provider to rubber. See rubber/lib/rubber/cloud(.rb)
# fog:
# credentials:
# provider: rackspace
# rackspace_api_key: 'XXX'
# rackspace_username: 'YYY'
# image_type: 123
# image_id: 123
# REQUIRED the cloud provider to use
#
cloud_provider: aws
# OPTIONAL: Where to store instance data.
#
# Allowed forms are:
# filesystem: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"
# cloud storage (s3): "storage:#{cloud_providers.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}.yml"
# cloud table (simpledb): "table:RubberInstances_#{app_name}_#{Rubber.env}"
#
# If you need to port between forms, load the rails console then:
# Rubber.instances.save(location)
# where location is one of the allowed forms for this variable
#
# instance_storage: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"
# OPTIONAL: Where to store a backup of the instance data
#
# This is most useful when using a remote store in case you end up
# wiping the single copy of your instance data. When using the file
# store, the instance file is typically under version control with
# your project code, so that provides some safety.
#
# instance_storage_backup: "storage:#{cloud_providers.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}-#{Time.now.strftime('%Y%m%d-%H%M%S')}.yml"
# OPTIONAL: Default ports for security groups
web_port: 80
web_ssl_port: 443
web_tools_port: 8080
web_tools_ssl_port: 8443
# OPTIONAL: Define security groups
# Each security group is a name associated with a sequence of maps where the
# keys are the parameters to the ec2 AuthorizeSecurityGroupIngress API
# source_security_group_name, source_security_group_owner_id
# ip_protocol, from_port, to_port, cidr_ip
# If you want to use a source_group outside of this project, add "external_group: true"
# to prevent group_isolation from mangling its name, e.g. to give access to graphite
# server to other projects
#
# security_groups:
# graphite_server:
# description: The graphite_server security group to allow projects to send graphite data
# rules:
# - source_group_name: yourappname_production_collectd
# source_group_account: 123456
# external_group: true
# protocol: tcp
# from_port: "#{graphite_server_port}"
# to_port: "#{graphite_server_port}"
#
security_groups:
  default:
    description: The default security group
    rules:
      - source_group_name: default
        source_group_account: "#{cloud_providers.aws.account}"
      - protocol: tcp
        from_port: 22
        to_port: 22
        source_ips: [0.0.0.0/0]
  web:
    description: "To open up port #{web_port}/#{web_ssl_port} for http server on web role"
    rules:
      - protocol: tcp
        from_port: "#{web_port}"
        to_port: "#{web_port}"
        source_ips: [0.0.0.0/0]
      - protocol: tcp
        from_port: "#{web_ssl_port}"
        to_port: "#{web_ssl_port}"
        source_ips: [0.0.0.0/0]
  web_tools:
    description: "To open up port #{web_tools_port}/#{web_tools_ssl_port} for internal/tools http server"
    rules:
      - protocol: tcp
        from_port: "#{web_tools_port}"
        to_port: "#{web_tools_port}"
        source_ips: [0.0.0.0/0]
      - protocol: tcp
        from_port: "#{web_tools_ssl_port}"
        to_port: "#{web_tools_ssl_port}"
        source_ips: [0.0.0.0/0]

# OPTIONAL: The default security groups to create instances with
assigned_security_groups: [default]
roles:
  web:
    assigned_security_groups: [web]
  web_tools:
    assigned_security_groups: [web_tools]
# OPTIONAL: Automatically create security groups for each host and role
# EC2 Classic doesn't allow one to change what groups an instance belongs to after
# creation, so it's good to have some empty ones predefined. EC2 with VPC, however,
# does allow changing security groups after instance creation and allows far fewer
# security groups per instance, so you shouldn't enable this setting if using VPC.
auto_security_groups: false
# OPTIONAL: Automatically isolate security groups for each appname/environment
# by mangling their names to be appname_env_groupname
# This makes it safer to have staging and production coexist on the same EC2
# account, or even multiple apps. NB: due to the security group limits per instance
# in EC2 with VPCs, this option should only be enabled if you're using EC2 Classic.
isolate_security_groups: false
# OPTIONAL: Prompts one to sync security group rules when the ones in amazon
# differ from those in rubber
prompt_for_security_group_sync: false
# OPTIONAL: A list of CIDR address blocks that represent private networks for your cluster.
# Set this to open up wide access to hosts in your network. Consequently, setting the CIDR block
# to anything other than a private, unroutable block would be a massive security hole.
private_networks: [10.0.0.0/8]
# OPTIONAL: The packages to install on all instances
# You can install a specific version of a package by using a sub-array of pkg, version
# For example, packages: [[rake, 0.7.1], irb]
packages: [postfix, build-essential, git-core, libxslt-dev, ntp]
# OPTIONAL: The package manager mirror to use for installation of primary packages (i.e., those not explicitly
# sourced from a different repository). If not specified, whatever mirror configured by your server image
# will be used.
#
# Note that Ubuntu has a special URL that can be used to auto-select the mirror based upon geoip. To use
# it, specify 'mirror://mirrors.ubuntu.com/mirrors.txt' as the value.
# package_manager_mirror: 'mirror://mirrors.ubuntu.com/mirrors.txt'
# OPTIONAL: The command used to identify your particular OS version. This will be used for configurations
# in Rubber templates that are parameterized by OS version (e.g., package lists). If not specified, Ubuntu
# will be assumed.
os_version_cmd: 'lsb_release -sr'
# OPTIONAL: gem sources to setup for rubygems
# gemsources: ["https://rubygems.org"]
# OPTIONAL: The gems to install on all instances
# You can install a specific version of a gem by using a sub-array of gem, version
# For example, gem: [[rails, 2.2.2], open4, aws-s3]
gems: [open4, aws-s3, bundler, [rubber, "#{Rubber.version}"]]
# OPTIONAL: A string prepended to shell command strings that cause multi
# statement shell commands to fail fast. You may need to comment this out
# on some platforms, but it works for me on linux/osx with a bash shell
#
stop_on_error_cmd: "function error_exit { exit 99; }; trap error_exit ERR"
# OPTIONAL: The default set of roles to use when creating a staging instance
# with "cap rubber:create_staging". By default this uses all the known roles,
# excluding slave roles, but this is not always desired for staging, so you can
# specify a different set here
#
# staging_roles: "web,app,db:primary=true"
# Auto detect staging roles
staging_roles: "#{known_roles.reject {|r| r =~ /slave/ || r =~ /^db$/ }.join(',')}"
# OPTIONAL: Lets one assign amazon elastic IPs (static IPs) to your instances
# You should typically set this on the role/host level rather than
# globally, unless you really do want all instances to have a
# static IP
#
# use_static_ip: true
# OPTIONAL: Specifies an instance to be created in the given availability zone
# Availability zones are specified by amazon to be somewhat isolated
# from each other so that hardware failures in one zone shouldn't
# affect instances in another. As such, it is good to specify these
# for instances that need to be redundant to reduce your chance of
# downtime. You should typically set this on the role/host level
# rather than globally. Use cap rubber:describe_zones to see the list
# of zones
# availability_zone: us-east-1a
# OPTIONAL: If you want to use Elastic Block Store (EBS) persistent
# volumes, add them to host specific overrides and they will get created
# and assigned to the instance. On initial creation, the volume will get
# attached _and_ formatted, but if your host disappears and you recreate
# it, the volume will only get remounted thereby preserving your data
#
# hosts:
# production15:
# availability_zone: us-east-1b
# volumes:
# - size: 100 # size of vol in GBs
# zone: us-east-1b # zone to create volume in, needs to match host's zone
# device: /dev/sdh # OS device to attach volume to
# mount: /mnt/postgresql # The directory to mount this volume to
# filesystem: ext4 # the filesystem to create on volume
#
# # OPTIONAL: Provide fog-specific options directly. This should only be used if you need a special setting that
# # Rubber does not directly expose. Since these settings will be passed directly through to fog, we can't make any
# # guarantee about how they work (if fog renames an attribute, e.g., your config will break). Please see the fog
# # source code for the option names.
# fog_options:
# type: gp2 # type of volume, standard (EBS magnetic), io1 (provisioned IOPS - SSD), or gp2 (general purpose - SSD).
# iops: 500 # The number of I/O operations per second (IOPS) that the volume supports.
# # Required when the volume type is io1; not used with non-provisioned IOPS volumes.
# - size: 10
# zone: us-east-1a
# device: /dev/sdi
# mount: /mnt/logs
# filesystem: ext4
# fog_options:
# type: io1
# iops: 500
#
# # volumes without mount/filesystem can be used in raid arrays
#
# - size: 50
# zone: us-east-1a
# device: /dev/sdx
# fog_options:
# type: gp2
# iops: 500
# - size: 50
# zone: us-east-1a
# device: /dev/sdy
# fog_options:
# type: gp2
# iops: 500
#
# # Use some ephemeral volumes for raid array
# local_volumes:
# - partition_device: /dev/sdb
# zero: false # zeros out disk for improved performance
# - partition_device: /dev/sdc
# zero: false # zeros out disk for improved performance
#
# # for raid array, you'll need to add mdadm to packages. Likewise,
# # xfsprogs is needed for xfs filesystem support
# #
# packages: [xfsprogs, mdadm]
# raid_volumes:
# - device: /dev/md0 # OS device to to create raid array on
# mount: /mnt/fast # The directory to mount this array to
# mount_opts: 'nobootwait' # Recent Ubuntu versions require this flag or SSH will not start on reboot
# filesystem: xfs # the filesystem to create on array
# filesystem_opts: -f # the filesystem opts in mkfs
# raid_level: 0 # the raid level to use for the array
# # if you're using Ubuntu 11.x or later (Natty, Oneiric, Precise, etc)
# # you will want to specify the source devices in their /dev/xvd format
# # see https://bugs.launchpad.net/ubuntu/+source/linux/+bug/684875 for
# # more information.
# # NOTE: Only make this change for raid source_devices, NOT generic
# # volume commands above.
# source_devices: [/dev/sdx, /dev/sdy] # the source EBS devices we are creating raid array from (Ubuntu Lucid or older)
# source_devices: [/dev/xvdx, /dev/xvdy] # the source EBS devices we are creating raid array from (Ubuntu Natty or newer)
#
# # for LVM volumes, you'll need to add lvm2 to packages. Likewise,
# # xfsprogs is needed for xfs filesystem support
# packages: [xfsprogs, lvm2]
# lvm_volume_groups:
# - name: vg # The volume group name
# physical_volumes: [/dev/sdx, /dev/sdy] # Devices used for LVM group (you can use just one, but you can't stripe then)
# extent_size: 32 # Size of the volume extent in MB
# volumes:
# - name: lv # Name of the logical volume
# size: 999.9 # Size of volume in GB (slightly less than sum of all physical volumes because LVM reserves some space)
# stripes: 2 # Count of stripes for volume
# filesystem: xfs # The filesystem to create on the logical volume
# filesystem_opts: -f # the filesystem opts in mkfs
# mount: /mnt/large_work_dir # The directory to mount this LVM volume to
# OPTIONAL: You can also define your own variables here for use when
# transforming config files, and they will be available in your config
# templates as <%%= rubber_env.var_name %>
#
# var_name: var_value
# All variables can also be overridden on the role, environment and/or host level by creating
# a sub level to the config under roles, environments and hosts. The precedence is host, environment, role
# e.g. to install mysql only on db role, and awstats only on web01:
# OPTIONAL: Role specific overrides
# roles:
# somerole:
# packages: []
# somerole2:
# myconfig: someval
# OPTIONAL: Environment specific overrides
# environments:
# staging:
# myconfig: otherval
# production:
# myconfig: val
# OPTIONAL: Host specific overrides
# hosts:
# somehost:
# packages: []
And doing a
cap rubber:create
I get this error after:
* executing `rubber:setup_local_aliases'
/home/casekey/.rvm/gems/ruby-2.1.1/gems/rubber-2.16.0/lib/rubber/recipes/rubber/setup.rb:192:in `block (3 levels) in load': no implicit conversion of nil into String (TypeError)
setup.rb at line 192:
local_hosts << ic.external_ip << ' ' << hosts_data.compact.join(' ') << "\n"
After attempting to debug this with binding.pry, line 192 goes through without any error.
Any ideas are welcome.
I have also tried:
bundle exec rake rails:update:bin
as per Rails 4 Error with every command "`load': no implicit conversion of nil into String" (Mac OS X 10.9)
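For what it's worth, "no implicit conversion of nil into String" is the TypeError String#<< raises when handed nil, so on that setup.rb line ic.external_ip is presumably nil for one of the instances (hosts_data is compacted and joined, so it can't supply the nil). A quick hypothetical check from the Rails console, assuming the same Rubber.instances collection the rubber.yml comments reference, would be:
Rubber.instances.each { |ic| puts ic.external_ip.inspect }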