Situation:
I'm importing data to Elasticsearch via Logstash at 12 pm manually every day.
I understand there is no "close" in Logstash because, ideally, you would want to send data to the server continuously.
I am using elk-docker as my ELK stack.
I wrote a shell script that runs the following command inside the Docker container:
dailyImport.sh
docker exec -it $DOCKER_CONTAINER_NAME opt/logstash/bin/logstash --path.data /tmp/logstash/data -e \
'input {
  file {
    path => "'$OUTPUT_PATH'"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    mode => "read"
    file_completed_action => "delete"
  }
}
filter {
  csv {
    separator => ","
    columns => ["foo", "bar", "foo2", "bar2"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "foo"
    document_type => "foo"
  }
  stdout {}
}'
What I have tried and understood:
I have read that setting the input to read mode and file_completed_action to delete would stop the operation. I tried it, but it didn't work.
I still need to send Ctrl + C manually to stop the pipeline, e.g.:
^C[2019-02-21T15:49:07,787][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2019-02-21T15:49:07,899][INFO ][filewatch.observingread ] QUIT - closing all files and shutting down.
[2019-02-21T15:49:09,764][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x6bdb92ea run>"}
Done
I have read that I could do the following, but I don't know how:
Monitor the sincedb file to check when Logstash has reached EOF, then kill Logstash.
Use the stdin input instead. Logstash will shut down by itself when stdin has been closed and all input has been processed. On the flip side, if Logstash dies for whatever reason, you don't know how much it has processed.
Reference: https://discuss.elastic.co/t/stop-logstash-after-processing-the-file/84959
What I want:
I don't need a fancy progress bar to tell me how much data I have imported (against the input file).
I only want the operation to end by itself when it's done, i.e. the equivalent of sending Ctrl + C once it reaches EOF or has finished importing.
For the file input in read mode, there is now a way to exit the process after all files have been read; just set:
input { file { exit_after_read => true } }
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-exit_after_read
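Applied to the pipeline in the question, only the input block changes; a sketch (everything else stays as above):
input {
  file {
    path => "'$OUTPUT_PATH'"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    mode => "read"
    file_completed_action => "delete"
    exit_after_read => true   # Logstash shuts down once every matched file has been fully read
  }
}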
Related
I am working on a set of build scripts which are called from an Ubuntu-hosted CI environment. The PowerShell build script calls Jest via react-scripts via npm. Unfortunately, Jest doesn't use stderr correctly and writes non-errors to that stream.
I have redirected the error stream using 3>&1 2>&1, and this works fine from plain PowerShell Core ($LASTEXITCODE is 0 after running, and no content from stderr is written in red).
However, when I introduce Docker via docker run, the build script misbehaves: the line that should have been redirected from the error stream is printed in red and the build crashes, i.e. something like: docker : PASS src/App.test.js. Error: Process completed with exit code 1.
Can anyone suggest what I am doing wrong? I'm a bit stumped. A sample of the PowerShell call is included below:
function Invoke-ShellExecutable
{
    param (
        [ScriptBlock]
        $Command
    )
    $Output = Invoke-Command $Command -NoNewScope | Out-String
    if ($LASTEXITCODE -ne 0) {
        $CmdString = $Command.ToString().Trim()
        throw "Process [$($CmdString)] returned a failure status code [$($LASTEXITCODE)]. The process may have outputted details about the error."
    }
    return $Output
}

Invoke-ShellExecutable {
    ($env:CI = "true") -and (npm run test:ci)
} 3>&1 2>&1
I'm new to this community as well as to programming. I'm currently working on a simple Expect script that reads a file with a list of DNS names, SSHes into each Cisco router, and runs a simple "show ip int brief".
The list contains some hosts that are not reachable at the moment, so I'm trying to get the script to time out on an unreachable device and continue with the rest of the devices.
When I run the script, it is able to SSH to the first device and execute the "show" command. However, when it reaches the second device (which is unreachable), it hangs for about 30 seconds and then the script terminates. I'm not sure what I'm doing wrong. Any assistance would be greatly appreciated.
#!/usr/bin/expect
#
#
set workingdir cisco/rtr
puts stdout "Enter TACACS Username:"
gets stdin tacuserid
system stty -echo
puts stdout "Enter TACACS password:"
gets stdin tacpswd
puts stdout "\nEnter enable password:"
gets stdin enabpswd
system stty echo
#
set RTR [open "$workingdir/IP-List.txt" r]
set timestamp [timestamp -format %Y-%m-%d_%H:%M]
#
while {[gets $RTR dnsname] != -1} {
    if {[string range $dnsname 0 0] != "#"} {
        send_user "The value of the router name is $dnsname\n"
        set timeout 10
        set count 0
        log_file -a -noappend $workingdir/session_$dnsname\_$timestamp.log
        send_log "### /START-SSH-SESSION/ IP: $dnsname # [exec date] ###\n"
        spawn ssh -o StrictHostKeyChecking=no -l $tacuserid $dnsname
        expect {
            "TACACS Password: " {send "$tacpswd\r"}
            timeout {puts "$dnsname - failed to login"; wait; close; exp_continue}
        }
        expect {
            {>} {send "enable\r"; send_user "on the second expect\n"}
        }
        expect {
            {assword: } {send "$enabpswd\r"}
        }
        #
        expect {
            "#" {send "show ip int brief\r"}
        }
        #expect "#"
        send "exit\r"
        send_log "\n"
        send_log "### /END-SSH-SESSION/ IP: $dnsname # [exec date] ###\n"
        log_file
    }
}
exit
Your first expect is doing
expect {...
timeout {puts "..."; wait; close; exp_continue}
}
This will match when the ssh takes over 10 seconds to connect to a host.
When this matches it inevitably exits with an error (spawn id ... not open). This is because you wait for the command to end, close the spawn connection, then restart the same expect command.
You probably meant to use continue rather than exp_continue, in order to continue with the enclosing while loop.
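A sketch of the corrected timeout branch (the rest of the loop stays as in the question; note it also closes the spawn connection before wait, so the reap doesn't block on the still-running ssh):
expect {
    "TACACS Password: " {send "$tacpswd\r"}
    timeout {
        puts "$dnsname - failed to login"
        close       ;# close the spawn connection to the unreachable host
        wait        ;# reap the spawned ssh process
        continue    ;# Tcl's continue: proceed with the next router in the while loop
    }
}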
I'm trying to run perf test in my CI environment, using the k6 docker, and a simple single script file works fine. However, I want to break down my tests into multiple JS files. In order to do this, I need to mount a volume on Docker so I can import local modules.
The volume seems to be mounting correctly with this command:
docker run --env-file ./test/performance/env/perf.list -v \
`pwd`/test/performance:/perf -i loadimpact/k6 run - /perf/index.js
k6 seems to start, but immediately errors with
time="2018-01-17T13:04:17Z" level=error msg="accepts 1 arg(s), received 2"
Locally, my file system looks something like
/toychicken
  /test
    /performance
      /env
        - perf.list
      - index.js
      - something.js
And the index.js looks like this
import { check, sleep } from 'k6'
import http from 'k6/http'
import something from '/perf/something'
export default () => {
  const r = http.get(`https://${__ENV.DOMAIN}`)
  check(r, {
    'status is 200': r => r.status === 200
  })
  sleep(2)
  something()
}
You need to remove the "-" after run in the Docker command. The "-" instructs k6 to read from stdin, but in this case you want to load the main JS file from the file system. That's why it complains that it received two args: one is the "-" and the second is the path to index.js (the error message could definitely be more descriptive).
You'll also need to add .js to the '/perf/something' import.
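Putting both fixes together, the invocation and the import would look roughly like this (a sketch based on the paths shown in the question):
docker run --env-file ./test/performance/env/perf.list -v \
  `pwd`/test/performance:/perf -i loadimpact/k6 run /perf/index.js
and, in index.js:
import something from '/perf/something.js'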
The -i flag is described as "Keep STDIN open even if not attached", but the Docker run reference also says:
If you do not specify -a then Docker will attach all standard streams.
So, by default, stdin is attached but not open? That doesn't seem to make sense: how can STDIN be attached but not open?
The exact code associated with that documentation is:
// If neither -d or -a are set, attach to everything by default
if len(flAttach) == 0 && !*flDetach {
    if !*flDetach {
        flAttach.Set("stdout")
        flAttach.Set("stderr")
        if *flStdin {
            flAttach.Set("stdin")
        }
    }
}
With:
flStdin := cmd.Bool("i", false, "Keep stdin open even if not attached")
In other words, stdin is attached only if -i is set.
if *flStdin {
    flAttach.Set("stdin")
}
In that sense, "all" standard streams isn't accurate.
As commented below, that code (referenced by the doc) has since changed to:
cmd.Var(&flAttach, []string{"a", "-attach"}, "Attach to STDIN, STDOUT or STDERR")
-a no longer means "attach all streams" but "specify which streams you want attached".
var (
    attachStdin  = flAttach.Get("stdin")
    attachStdout = flAttach.Get("stdout")
    attachStderr = flAttach.Get("stderr")
)
-i remains a valid option:
if *flStdin {
    attachStdin = true
}
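A quick way to see the difference from a shell (a sketch using the stock busybox image; any image with cat would do):
# stdin is not attached: cat sees end-of-file immediately and prints nothing
echo hello | docker run --rm busybox cat

# -i attaches stdin and keeps it open, so the piped data reaches the container
echo hello | docker run --rm -i busybox cat   # prints "hello"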
Hi, I am trying to store the output of a command run over a spawned ssh session into a file on my local host. I am new to Expect and am not able to figure out where I am going wrong.
My Code:
#!/bin/bash
while read line
do
/usr/bin/expect <<EOD
spawn ssh mininet@$line
expect "assword:"
send -- "mininet\r"
set output [open "outputfile.txt" "a+"]
expect "mininet#mininet-vm:*"
send -- "ls\r"
set outcome $expect_out(buffer)
send "\r"
puts $output "$outcome"
close $output
expect "mininet#mininet-vm:*"
send -- "exit\r"
interact
expect eof
EOD
done <read_ip.txt
I am getting the error
expect: spawn id exp6 not open
while executing
"expect "mininet#mininet-vm:*""
Please can any body help me on this code.
You have your expect program in a shell heredoc. The shell will expand variables in the heredoc before launching expect. You have to protect expect's variables from the shell.
One way is to use a 'quoted' heredoc, and pass the shell variable to expect through the environment:
#!/bin/bash
export host ## an environment variable
while read host
do
/usr/bin/expect <<'EOD' ## note the quotes here
spawn ssh mininet@$env(host)  ;# get the value from the environment
expect "assword:"
send -- "mininet\r"
set output [open "outputfile.txt" "a+"]
expect "mininet#mininet-vm:*"
send -- "ls\r"
set outcome $expect_out(buffer)
send "\r"
puts $output "$outcome"
close $output
expect "mininet#mininet-vm:*"
send -- "exit\r"
expect eof  ;# don't want both "interact" and "expect eof"
EOD
done <read_ip.txt
Putting single quotes around the heredoc terminator means the whole heredoc acts like a single quoted string, and expect's variables are left for expect to handle.
You might also investigate the expect log_file command: you can enable and disable logging at will, much as you are doing manually here.
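For example, a minimal sketch replacing the manual open/puts/close with log_file (the filename is only an illustration):
log_file outputfile.txt       ;# start logging the session to this file (appends by default)
expect "mininet@mininet-vm:*"
send -- "ls\r"
expect "mininet@mininet-vm:*"
log_file                      ;# stop logging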