Rebalance topology throws java.lang.StringIndexOutOfBoundsException: String index out of range: -1 - twitter

I'm using the Taylor Goetz's storm version pointed out by article
http://ptgoetz.github.io/blog/2013/12/18/running-apache-storm-on-windows/
and located at:
https://github.com/ptgoetz/incubator-storm/tree/windows-test
I have succeeded in installing everything on my computer (running Windows 7, 64-bit). I have also run the indicated topology as well as my own topology without problems. But when I try to rebalance my topology by re-configuring the number of spouts or bolts with the command
storm rebalance WordCount -e spout=3
I'm getting the exception:
Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.String.substring(String.java:1911)
at backtype.storm.command.rebalance$parse_executor.invoke(rebalance.clj:24)
at clojure.tools.cli$apply_specs.invoke(cli.clj:80)
at clojure.tools.cli$cli.doInvoke(cli.clj:130)
at clojure.lang.RestFn.invoke(RestFn.java:460)
at backtype.storm.command.rebalance$_main.doInvoke(rebalance.clj:31)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at backtype.storm.command.rebalance.main(Unknown Source)
If I change only the number of workers, it works without any exceptions.
If any of you have tested this version, can you please help me get rid of this error?
I look forward to your answers.

In the mentioned Windows version, the rebalance command arguments should not be passed as indicated in
https://github.com/nathanmarz/storm/wiki/Understanding-the-parallelism-of-a-Storm-topology
storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
In order to get rid of the mentioned exception (java.lang.StringIndexOutOfBoundsException), you should use the command
storm rebalance WordCount -e "spout=3"
However, trying to rebalance more components (either spouts or bolts) will rebalance only the last component mentioned in the list. So, for example:
storm rebalance WordCount -e "spout=3" -e "count=5"
the rebalance will be applied only for the "count" component not for the "spout".
So, in my opinion, either the documentation should be updated or rebalance.clj should be changed to support rebalancing multiple components.
But this is a different issue.

How to get a program's std-out to fluentd (without docker)

Scenario:
You write a program in R or Python which needs to run on Linux or Windows, and you want to log (JSON-structured and unstructured) std-out and (mostly unstructured) std-error from this program to a Fluentd instance. Adding a new program or starting another instance should not require updating the Fluentd configuration, and the applications will not (yet) be running in a docker environment.
Question:
How can I send "logs" from a bunch of programs to a fluentd instance, without having to perform curl calls for every log entry that the application was originally writing to std-out?
When a UDP or TCP connection is necessary for the application to run, it seems to become harder to debug, and any dependency of your program that writes to std-out would have to be parsed just to get its logging passed through.
Thoughts:
Alternatively, the question could be: how to accept a 'connection' object which can point either to a file or to a TCP connection, so that switching between std-out and a TCP destination is a matter of changing a single value?
I like the 'tail' input plugin, which could be what I am looking for, but then:
the original log file never appears to stop growing (will the tail position value reset when the file is simply removed? I couldn't find this behaviour documented), and
it seems to require reconfiguring fluentd for every new program that you start on that server (if it logs to another file); I would highly prefer to keep that configuration on the program side...
I built an EFK stack with a docker log driver set to fluentd, which does not seem to have a solid solution either, but without docker I already get kind of stuck setting up a basic configuration (not referring to fluent.conf here).
TL;DR
std-out -> fluentd: Redirect the program output to a file when launching your program. On Linux, use logrotate; you will love it.
Windows: use fluent-bit.
App side config: use single (or predictable) log locations, and the fluentd/fluent-bit 'in_tail' plugin.
logging general:
It's recommended to always write application output to a file; if the std-out must be written to a file, pipe its output at program startup. For more flexibility in the fluentd configuration, pipe stdout and stderr to separate files (just like 'Apache' does):
My_program.exe Do some crazy stuff > my_out_file.txt 2> my_error_file.txt
This opens the option for fluentd to read from this/these file(s).
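Sketched on Linux (with a subshell of echo calls standing in for the real program), the separation into two files looks like this:

```shell
# stand-in "program" that writes to both streams; redirect each stream to its own file
( echo "normal output"; echo "error output" >&2 ) > my_out_file.txt 2> my_error_file.txt

cat my_out_file.txt    # contains only the stdout line
cat my_error_file.txt  # contains only the stderr line
```

Each file can then be picked up independently by a tail input, with its own parser for the structured and unstructured cases.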
Windows:
For Windows systems, use fluent-bit; it likely solves the issue of aggregating the logs of Windows OS programs. Windows support has only been implemented recently.
fluent-bit supports:
the 'tail' plugin, which records the 'inode' value (a unique, rename-insensitive file pointer) and the 'index' value (called 'pos' in the full-blown 'fluentd' application) in a sqlite3 database, and deals with un-processable data by allocating it to a certain key ('log' by default);
it works on Windows machines, but note that it cannot buffer to disk, so make sure a lost connection or any other output issue is re-established or fixed in time, or you will run into OOM issues.
Appl. side config:
The tail plugin can monitor a folder; this makes it practically possible to keep the configuration on the side of your program. Just make sure you write the logs of your different applications to a predictable directory.
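As a sketch, a fluent-bit configuration along those lines might look like the following (the directory and database path are assumptions; adjust them to wherever your programs write):

```ini
[INPUT]
    Name   tail
    # watch one predictable directory; any new program just writes its log here
    Path   C:\logs\apps\*.log
    # position database, so fluent-bit resumes where it left off after a restart
    DB     C:\logs\fluent-bit.db

[OUTPUT]
    Name   stdout
    Match  *
```

With a wildcard Path, adding a new program does not require touching this file, which keeps the configuration effectively on the program side.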
Fluent-bit setup/config:
For Linux, just use fluentd (unless you need more than 100,000 messages per second, which is where fluent-bit becomes your only choice).
For Windows, install fluent-bit and make it run as a daemon (an almost funny solution).
There are 2 execution methods:
Providing the configuration directly via the command line
Using a config file (example included in the zip), and referring to it with the -c flag.
Directly from the command line
Some example executions (without using a configuration file):
PS .\bin\fluent-bit.exe -i winlog -p "channels=Setup,Windows PowerShell" -p "db=./test.db" -o stdout -m '*'
-i declares the input method. Currently, only a few plugins have been implemented; see the help output below.
PS fluent-bit.exe --help
Available Options
-b --storage_path=PATH specify a storage buffering path
-c --config=FILE specify an optional configuration file
-f, --flush=SECONDS flush timeout in seconds (default: 5)
-F --filter=FILTER set a filter
-i, --input=INPUT set an input
-m, --match=MATCH set plugin match, same as '-p match=abc'
-o, --output=OUTPUT set an output
-p, --prop="A=B" set plugin configuration property
-R, --parser=FILE specify a parser configuration file
-e, --plugin=FILE load an external plugin (shared lib)
-l, --log_file=FILE write log info to a file
-t, --tag=TAG set plugin tag, same as '-p tag=abc'
-T, --sp-task=SQL define a stream processor task
-v, --verbose increase logging verbosity (default: info)
-s, --coro_stack_size Set coroutines stack size in bytes (default: 98302)
-q, --quiet quiet mode
-S, --sosreport support report for Enterprise customers
-V, --version show version number
-h, --help print this help
Inputs
tail Tail files
dummy Generate dummy data
statsd StatsD input plugin
winlog Windows Event Log
tcp TCP
forward Fluentd in-forward
random Random
Outputs
counter Records counter
datadog Send events to DataDog HTTP Event Collector
es Elasticsearch
file Generate log file
forward Forward (Fluentd protocol)
http HTTP Output
influxdb InfluxDB Time Series
null Throws away events
slack Send events to a Slack channel
splunk Send events to Splunk HTTP Event Collector
stackdriver Send events to Google Stackdriver Logging
stdout Prints events to STDOUT
tcp TCP Output
flowcounter FlowCounter
Filters
aws Add AWS Metadata
expect Validate expected keys and values
record_modifier modify record
rewrite_tag Rewrite records tags
throttle Throttle messages using sliding window algorithm
grep grep events by specified field values
kubernetes Filter to append Kubernetes metadata
parser Parse events
nest nest events by specified field values
modify modify records by applying rules
lua Lua Scripting Filter
stdout Filter events to STDOUT

GNU Parallel does not do anything using remote execution

I just need a hint. I am trying to run the following command from the GNU Parallel tutorial:
parallel -S $SERVER1,$SERVER2 echo ::: running on more hosts
I replaced $SERVERX with known hosts in my network. If I execute the command, I am asked for my password for each server, and after that nothing happens anymore. The cursor blinks all day long, and I do not get any error message.
I tried different servers with the same result.
The verbose mode shows:
ssh $SERVER1 -- exec perl -e #GNU_Parallel\\=split/_/,\\"use_IPC::Open3\\;_use_MIME::Base64\\"\\;eval\\"#GNU_Parallel\\"\\;\\$SIG\{CHLD\}\\=\\"IGNORE\\"\\;my\\$zip\\=\(grep\{-x\\$_\}\\"/usr/local/bin/bzip2\\"\)\[0\]\\|\\|\\"bzip2\\"\\;open3\(\\$in,\\$out,\\"\>\\&STDERR\\",\\$zip,\\"-dc\\"\)\\;if\(my\\$perlpid\\=fork\)\{close\\$in\\;\\$eval\\=join\\"\\",\\<\\$out\>\\;close\\$out\\;\}else\{close\\$out\\;print\\$in\(decode_base64\(join\\"\\",#ARGV\)\)\\;close\\$in\\;exit\\;\}wait\\;eval\\$eval\\;
followed by random characters.
Something similar appears four times, I guess for the four jobs I started. I'd be very happy for any help.
I think you are expected to set up passwordless ssh logins to all the remotes so GNU Parallel can get into them. – Mark Setchell
This was the right suggestion. Setting up key authentication using ssh-keygen and ssh-copy-id did the job! Thank you very much, now it works. A short hint in the tutorial would have been great.
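For reference, the key setup can be sketched like this (the key file name is just an example; $SERVER1 is a host from the question, so the remote steps are shown commented out):

```shell
# Generate a dedicated key pair without a passphrase (no passphrase = no prompt)
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519_parallel" -q

# These two steps need the actual remote hosts, so they are for reference only:
# ssh-copy-id -i "$HOME/.ssh/id_ed25519_parallel.pub" "$SERVER1"
# ssh -o BatchMode=yes "$SERVER1" true   # must succeed without a password prompt
```

Once the BatchMode login succeeds for every server, GNU Parallel can dispatch jobs without hanging on password prompts.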

Docker: `Repository name must match ...` error

I'm reading the book Docker in Action, which is a really great book so far, but I think I'm now stuck on a command which doesn't work:
$> docker run –it --rm --link cass1:cass cassandra:2.2 cqlsh cass
It should run an interactive shell (cqlsh) on the cassandra database, but when I run this I get the following error:
repository name component must match "[a-z0-9](?:-*[a-z0-9])*(?:[._][a-z0-9](?:-*[a-z0-9])*)*"
Any suggestions why this doesn't work?
The single-node Cassandra example mentions this docker run command after "Launch a server called cass1":
Make sure you have a cass1 container up and running before trying --link cass1:cass, or the last "cass" argument would reference nothing.
Regarding the command-line error, this is very similar to a minus vs. hyphen-minus error: both characters look the same in a monospaced font, but the minus (here an en dash in "–it") is not correctly interpreted by the shell.
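One way to spot the offending character is to dump the command's bytes: the en dash ("–", U+2013) that often sneaks in when copying from a book or PDF encodes as three UTF-8 bytes, while the ASCII hyphen the shell expects is a single byte:

```shell
# compare the byte encodings of an en dash and a plain hyphen
printf '–' | od -An -tx1   # e2 80 93  -> en dash, as pasted from the book
printf '-' | od -An -tx1   # 2d        -> the hyphen docker/the shell expects
```

Because of the en dash, docker treats "–it" as the image name, which fails the repository-name regex in the error message. Retyping the flag as -it fixes it.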

Docker - Handling multiple services in a single container

I would like to start two different services in my Docker container and exit the container as soon as one of them exits. I looked at supervisor, but I can't find out how to make it quit as soon as one of the managed applications exits. It tries to restart them up to three times, as is the standard setting, and then just sits there doing nothing. Is supervisor able to do this, or is there any other tool for this? A bonus would be if there were also a way to let both managed programs write to stdout, tagged with their application name, e.g.:
[Program 1] Some output
[Program 2] Some other output
[Program 1] Output again
Since you asked if there was another tool: we designed and wrote a powerful replacement for supervisord that is built specifically for Docker. It automatically terminates when all applications quit, has special service settings to control this behavior, and redirects stdout with tagged, syslog-compatible output lines. It's open source and being used in production.
Here is a quick start for Docker: http://garywiz.github.io/chaperone/guide/chap-docker-simple.html
There is also a complete set of tested base images which are a good example at: https://github.com/garywiz/chaperone-docker, but these might be overkill, and the earlier quick start may do the trick.
I found solutions to both of my requirements by reading through the docs some more.
Exit supervisord on application exit
This can be achieved by using a custom eventlistener. I had to add the following segment into my supervisord configuration file:
[eventlistener:shutdownevent]
command=/shutdownhandler.sh
events=PROCESS_STATE_EXITED
supervisord will start the referenced script and, upon the given event being triggered (PROCESS_STATE_EXITED is triggered after one of the managed programs exits without being restarted automatically), will send a line containing data about the event on the script's stdin.
The referenced shutdownhandler-script contains:
#!/bin/bash
while :
do
echo -en "READY\n"
read line
kill $(cat /supervisord.pid)
echo -en "RESULT 2\nOK"
done
The script has to indicate that it is ready by sending "READY\n" on its stdout, after which it may receive an event data line on its stdin. For my use case, upon receipt of a line (meaning one of the managed programs has exited), a SIGTERM is sent to the supervisord process, which is found via the pid it leaves in its pid file (located in the root directory by default). For technical completeness, I also included a positive answer for the eventlistener, though that one should never matter.
Tagged output on stdout
I did this by simply starting a tail process in the background before starting supervisord, tailing the program's output log and piping the lines through ts (from the moreutils package) to prepend a tag. This way the lines show up via docker logs with an easy way to see which program actually wrote them.
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &
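If ts from moreutils is not available in the image, the same tagging can be sketched with awk (the tag matches the example above):

```shell
# tag each line of a program's log stream with its name, like ts '[Program 1]' would
echo "Some output" | awk '{ print "[Program 1] " $0 }'
# -> [Program 1] Some output
```

The same pipeline works after `tail -fn0`, so one background tail per managed program gives the interleaved, tagged stdout shown in the question.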

Icinga check_jboss "NRPE: unable to read output"

I'm using Icinga to monitor some servers and services. Most of them run fine. But now I'd like to monitor a JBoss AS on one server via NRPE, using the check_jboss plugin from MonitoringExchange. Each time I try running a test command from my Icinga server via NRPE, I get an "NRPE: unable to read output" error. When I execute the command directly on the monitored server, it runs fine. Strangely, the execution on the monitored server takes around 5 seconds to return an acceptable result, while the NRPE execution returns the error immediately. Increasing the NRPE timeout didn't solve the problem. I also checked the permissions of the check_jboss plugin and set them to 777, so there should be no permission error.
I don't think there's a general issue with NRPE, because some other checks (e.g. check_load, check_disk, ...) also run via NRPE and they all work fine. The permissions of those plugins are analogous to my check_jboss plugin.
Here is one sample execution on the monitored server, which runs fine:
/usr/lib64/nagios/plugins/check_jboss.pl -T ServerInfo -J jboss.system -a MaxMemory -w 3000: -c 2000: -f
JBOSS OK - MaxMemory is 4049076224 | MaxMemory=4049076224
Here are two command executions via NRPE from my Icinga server. Both commands are configured correctly:
./check_nrpe -H xxx.xxx.xxx.xxx -c check_hda1
DISK OK - free space: / 47452 MB (76% inode=97%);| /=14505MB;52218;58745;0;65273
./check_nrpe -H xxx.xxx.xxx.xxx -c jboss_MaxMemory
NRPE: Unable to read output
Does anyone have a hint for me? If further config-information needed please ask :)
Try to rule out SELinux, either by disabling it globally or by changing the SELinux type of the plugin to nagios_unconfined_plugin_exec_t.
