Confused about Docker's -i "Keep STDIN open even if not attached"

The -i flag is described as "Keep STDIN open even if not attached", but the Docker run reference also says:
If you do not specify -a then Docker will attach all standard streams.
So, by default, stdin is attached but not open? That doesn't make sense to me: how can STDIN be attached but not open?

The exact code associated with that documentation is:
// If neither -d or -a are set, attach to everything by default
if len(flAttach) == 0 && !*flDetach {
    if !*flDetach {
        flAttach.Set("stdout")
        flAttach.Set("stderr")
        if *flStdin {
            flAttach.Set("stdin")
        }
    }
}
With:
flStdin := cmd.Bool("i", false, "Keep stdin open even if not attached")
In other words, stdin is attached only if -i is set.
if *flStdin {
    flAttach.Set("stdin")
}
In that sense, "all" standard streams isn't accurate.
As commented below, that code (referenced by the doc) has since changed to:
cmd.Var(&flAttach, []string{"a", "-attach"}, "Attach to STDIN, STDOUT or STDERR")
-a no longer means "attach all streams" but "specify which streams you want attached".
var (
    attachStdin  = flAttach.Get("stdin")
    attachStdout = flAttach.Get("stdout")
    attachStderr = flAttach.Get("stderr")
)
-i remains a valid option:
if *flStdin {
    attachStdin = true
}
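To see the practical effect of -i, compare (a small demo, assuming the alpine image is available):

echo hello | docker run --rm alpine cat        # stdin not attached: cat sees EOF, prints nothing
echo hello | docker run --rm -i alpine cat     # stdin attached and kept open: prints "hello"

Without -i the container's stdin is not connected to the pipe, so cat exits immediately; with -i Docker attaches stdin and the data flows through.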

How to define a rule to capture alerts when any manual command gets executed inside the container on Falco

I have installed the Falco drivers on the host.
I am able to capture alerts for specific conditions, such as when a process is spawned or a script is executed inside the container. But the requirement is to trigger an alert whenever any manual command is executed inside the container.
Is there any custom condition we can use to generate an alert whenever any command gets executed inside a container?
I expected the condition below to capture an alert whenever the command line contains a newline character (i.e. Enter was pressed inside a container) or the executed command ends with ".sh", but it didn't work.
- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    container.id != host and
    proc.cmdline contains "\n" or
    proc.cmdline endswith ".sh"
  output: >
    shell in a container
    (user=%user.name container_id=%container.id container_name=%container.name
    shell=%proc.name parent=%proc.pname source_ip=%fd.rip cmdline=%proc.cmdline)
  priority: WARNING
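As an aside, two details probably explain why this rule didn't behave as expected. First, proc.cmdline holds the command line of the executed process; pressing Enter does not insert a "\n" into it, so the contains "\n" clause will essentially never match. Second, assuming Falco follows the usual filter precedence where and binds more tightly than or, the condition parses as (container.id != host and proc.cmdline contains "\n") or proc.cmdline endswith ".sh", so the ".sh" clause is not restricted to containers. Grouping the clauses, e.g. container.id != host and (proc.cmdline contains "\n" or proc.cmdline endswith ".sh"), would express the intended logic.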
Your question made me go and read about Falco (I learned a new lesson today). After installing Falco and reading its documentation, I found a solution that seems to work.
- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    container.id != host and
    proc.cmdline != ""
  output: >
    shell in a container
    (user=%user.name container_id=%container.id container_name=%container.name
    shell=%proc.name parent=%proc.pname source_ip=%fd.rip cmdline=%proc.cmdline)
  priority: WARNING
The rule below generates alerts whenever a manual command is executed inside a container (via exec with bash or sh), with all the required fields in the output.
Support for the pod IP is expected in Falco version 0.35; work is in progress at https://github.com/falcosecurity/libs/pull/708. The field will be called container.ip (but effectively it is the Pod IP, since all containers share the network stack of the pod), with container.cni.json for a complete view in case you have dual-stack and multiple interfaces.
- rule: shell_in_container
  desc: notice shell activity within a container
  condition: >
    container.id != host and
    evt.type = execve and
    (proc.pname = bash or
    proc.pname = sh) and
    proc.cmdline != bash
  output: >
    (user=%user.name command=%proc.cmdline timestamp=%evt.datetime.s container_id=%container.id container_name=%container.name pod_name=%k8s.pod.name proc_name=%proc.name proc_pname=%proc.pname res=%evt.res)
  priority: informational
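A quick, hypothetical way to exercise this rule (assuming Falco is running on the host and a container is up; the container name is a placeholder):

# Exec into a running container, then type any command at the prompt.
docker exec -it <container_name> sh
ls -l    # this execve, with parent sh, should match the rule and emit an alert

The proc.cmdline != bash clause appears intended to keep the shell's own invocation from generating an alert.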

What does Jenkins use to capture stdout and stderr of a shell command?

In Jenkins, you can use the sh step to run Unix shell scripts.
I was experimenting, and I found that the stdout is not a tty, at least on a Docker image.
What does Jenkins use for capturing stdout and stderr of programs running via the sh step? Is the same thing used for running the sh step on a Jenkins node versus on a Docker container?
I ask for my own edification and for some possible practical applications of this knowledge.
To reproduce my experimentation
If you already know an answer, you don't need to read these details. I am just including them here for reproducibility.
I have the following Jenkins/Groovy code:
docker.image('gcc').inside {
    sh '''
gcc -O2 -Wall -Wextra -Wunused -Wpedantic \
    -Werror write_to_tty.c -o write_to_tty
./write_to_tty
'''
}
The Jenkins log snippet for the sh step code above is
+ gcc -O2 -Wall -Wextra -Wunused -Wpedantic -Werror write_to_tty.c -o write_to_tty
+ ./write_to_tty
stdout is not a tty.
This compiles and runs the following C code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    int stdout_fd = fileno(stdout);
    if (!isatty(stdout_fd)) {
        fprintf(stderr, "stdout is not a tty.\n");
        exit(1);
    }
    char* stdout_tty_name = ttyname(stdout_fd);
    if (stdout_tty_name == NULL) {
        fprintf(stderr, "Failed to get tty name of stdout.\n");
        exit(1);
    }
    FILE* tty = fopen(stdout_tty_name, "w");
    if (tty == NULL) {
        fprintf(stderr, "Failed to open tty %s.\n", stdout_tty_name);
        exit(1);
    }
    fprintf(tty, "Written directly to tty.\n");
    fclose(tty);
    printf("Written to stdout.\n");
    fprintf(stderr, "Written to stderr.\n");
    exit(0);
}
I briefly looked at the source here, and it seems stdout is written to a file and then read back from it; hence it's not a tty. Also, stderr, if any, will be written to the log.
Here is the Javadoc.
/**
 * Runs a durable task, such as a shell script, typically on an agent.
 * “Durable” in this context means that Jenkins makes an attempt to keep the external process running
 * even if either the Jenkins controller or an agent JVM is restarted.
 * Process standard output is directed to a file near the workspace, rather than holding a file handle open.
 * Whenever a Remoting connection between the two can be reëstablished,
 * Jenkins again looks for any output sent since the last time it checked.
 * When the process exits, the status code is also written to a file and ultimately results in the step passing or failing.
 * Tasks can also be run on the built-in node, which differs only in that there is no possibility of a network failure.
 */
I found the source for the sh step, which appears to be implemented using the BourneShellScript class.
When not capturing stdout, the command is generated like this:
cmdString = String.format("{ while [ -d '%s' -a \\! -f '%s' ]; do touch '%s'; sleep 3; done } & jsc=%s; %s=$jsc %s '%s' > '%s' 2>&1; echo $? > '%s.tmp'; mv '%s.tmp' '%s'; wait",
controlDir,
resultFile,
logFile,
cookieValue,
cookieVariable,
interpreter,
scriptPath,
logFile,
resultFile, resultFile, resultFile);
If I correctly matched the format specifiers to the variables, then the %s '%s' > '%s' 2>&1 part roughly corresponds to
${interpreter} ${scriptPath} > ${logFile} 2>&1
So it seems that the stdout and stderr of the script are written to a file.
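Substituting hypothetical values for the variables (all names below are made up for illustration, not taken from the plugin) makes the generated wrapper easier to read:

# Heartbeat: touch the log every 3s until the control dir vanishes or the result file appears.
{ while [ -d '/ws/.ctl' -a \! -f '/ws/.ctl/result.txt' ]; do touch '/ws/.ctl/log.txt'; sleep 3; done } &
jsc=COOKIE; JENKINS_SERVER_COOKIE=$jsc sh '/ws/.ctl/script.sh' > '/ws/.ctl/log.txt' 2>&1
echo $? > '/ws/.ctl/result.txt.tmp'                   # write the exit status...
mv '/ws/.ctl/result.txt.tmp' '/ws/.ctl/result.txt'    # ...and rename so it appears atomically
wait

The exit status is written to a temporary file and then renamed, so a watcher never sees a half-written result file.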
When capturing output, it is slightly different, and is instead
${interpreter} ${scriptPath} > ${c.getOutputFile(ws)} 2> ${logFile}
In this case, stdout and stderr are still written to files, just not to the same file.
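As a quick check, running something like this in a sh step (Linux agents; a sketch, and the exact log file name is an assumption) shows where the descriptors actually point:

sh 'readlink /proc/$$/fd/1 /proc/$$/fd/2'

Both should resolve to regular files under the control directory (e.g. a jenkins-log.txt) rather than to a /dev/pts/* device, which is consistent with the write_to_tty output above.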
Aside
For anyone interested in how I found the source code:
1. First I used my bookmark for the sh step docs, which can be found with a search engine.
2. I scrolled to the top and clicked the "View this plugin on the Plugins site" link.
3. On that page, I clicked the GitHub link.
4. On the GitHub repo, I navigated to ShellStep.java by drilling down from the src directory.
5. I saw that the class was implemented using the BourneShellScript class, and based on the import for this class, I knew it was part of the durable task plugin.
6. I searched for the durable task plugin, followed the same "View this plugin on the Plugins site" link, and then the GitHub link to view the GitHub repo for the durable task plugin.
7. Next, I used the "Go to file" button and searched for the class name to jump to its .java file.
8. Finally, I inspected this file to find what I was interested in.

Stop Logstash process automatically after imported all data

Situation:
I'm importing data to Elasticsearch via Logstash at 12 pm manually every day.
I understand there is no "close" on Logstash because ideally, you would want to continuously send data to the server.
I am using elk-docker as my ELK stack.
I wrote a shell script that sends a command to a docker container to execute the following:
dailyImport.sh
docker exec -it $DOCKER_CONTAINER_NAME opt/logstash/bin/logstash --path.data /tmp/logstash/data -e \
'input {
  file {
    path => "'$OUTPUT_PATH'"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    mode => "read"
    file_completed_action => "delete"
  }
}
filter {
  csv {
    separator => ","
    columns => ["foo", "bar", "foo2", "bar2"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "foo"
    document_type => "foo"
  }
  stdout {}
}'
What I have tried and understood:
I have read that setting mode to read and file_completed_action to delete would stop the operation. I tried it, but it didn't work.
I would still need to send Ctrl + C manually to stop the pipeline, e.g.:
^C[2019-02-21T15:49:07,787][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2019-02-21T15:49:07,899][INFO ][filewatch.observingread ] QUIT - closing all files and shutting down.
[2019-02-21T15:49:09,764][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x6bdb92ea run>"}
Done
I have read that I could do the following, but don't know how:
Monitor the sincedb file to check when Logstash has reached EOF, then kill Logstash.
Use the stdin input instead. Logstash will shut down by itself when stdin has been closed and all input has been processed. On the flip side, if Logstash dies for whatever reason, you don't know how much it has processed.
Reference: https://discuss.elastic.co/t/stop-logstash-after-processing-the-file/84959
What I want:
I don't need a fancy progress bar to tell me how much data I have imported (against the input file).
I only want the operation to end when "it's done", as if Ctrl + C were sent when it reaches EOF or finishes importing.
For the file input in read mode, there is now a way to exit the process after all files have been read; just set:
input { file { exit_after_read => true } }
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-exit_after_read
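A minimal sketch of how the asker's dailyImport.sh could use this (same variables as in the question; only the input block changes):

docker exec -it $DOCKER_CONTAINER_NAME opt/logstash/bin/logstash --path.data /tmp/logstash/data -e \
'input {
  file {
    path => "'$OUTPUT_PATH'"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    mode => "read"
    file_completed_action => "delete"
    exit_after_read => true
  }
}
# filter and output sections unchanged from the original script
'

With exit_after_read => true, Logstash terminates on its own once every file matched by path has been read, so no manual Ctrl + C is needed.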

grep show all lines, not just matches, set exit status

I'm piping the output of a command to egrep, which I'm using to make sure a particular failure string doesn't appear in it.
The command itself, unfortunately, won't return a proper non-zero exit status on failure, that's why I'm doing this.
command | egrep -i -v "badpattern"
This works as far as giving me the exit code I want (1 if badpattern appears in the output, 0 otherwise), BUT, it'll only output lines that don't match the pattern (as the -v switch was designed to do). For my needs, those lines are the most interesting lines.
Is there a way to have grep just blindly pass through all lines it gets as input, and just give me the exit code as appropriate?
If not, I was thinking I could just use perl -ne "print; exit 1 if /badpattern/". I use -n rather than -p because -p won't print the offending line (since it prints after running the one-liner). So I use -n and call print myself, which at least gives me the first offending line, but then output (and execution) stops there. So I'd have to do something like
perl -e '$code = 0; while (<>) { print; $code = 1 if /badpattern/; } exit $code'
which does the whole deal, but is a bit much. Is there a simple command-line switch for grep that will just do what I'm looking for?
Actually, your perl idea is not bad. Try:
perl -pe 'END { exit $status } $status=1 if /badpattern/;'
I bet this is at least as fast as the other options being suggested.
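For example (command and badpattern are placeholders):

command | perl -pe 'END { exit $status } $status=1 if /badpattern/;'
echo $?    # 1 if badpattern appeared anywhere in the output, 0 otherwise

Every line is passed through unchanged because -p prints each input line; the END block only converts the recorded match into the exit status.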
Another approach is to use tee to copy the stream to the terminal while grep -q consumes it and sets the exit status:
$ tee /dev/tty < ~/.bashrc | grep -q spam && echo spam || echo no spam
How about redirecting to /dev/null, thereby discarding all lines, while still getting the exit code?
$ grep spam .bashrc > /dev/null
$ echo $?
1
$ grep alias .bashrc > /dev/null
$ echo $?
0
Or you can simply use the -q switch:
-q, --quiet, --silent
        Quiet; do not write anything to standard output. Exit
        immediately with zero status if any match is found, even if an
        error was detected. Also see the -s or --no-messages option.
        (-q is specified by POSIX.)

How can I tell from a within a shell script if the shell that invoked it is an interactive shell?

I'm trying to set up a shell script that will start a screen session (or rejoin an existing one) only if it is invoked from an interactive shell. The solution I have seen is to check if $- contains the letter "i":
#!/bin/sh -e
echo "Testing interactivity..."
echo 'Current value of $- = '"$-"
if [ `echo \$- | grep -qs i` ]; then
    echo interactive;
else
    echo noninteractive;
fi
However, this fails, because the script is run by a new noninteractive shell, invoked as a result of the #!/bin/sh at the top. If I source the script instead of running it, it works as desired, but that's an ugly hack. I'd rather have it work when I run it.
So how can I test for interactivity within a script?
Give this a try and see if it does what you're looking for:
#!/bin/sh
if [ $_ != $0 ]
then
    echo interactive;
else
    echo noninteractive;
fi
The underscore ($_) expands to the absolute pathname used to invoke the script. The zero ($0) expands to the name of the script. If they're different then the script was invoked from an interactive shell. In Bash, subsequent expansion of $_ gives the expanded argument to the previous command (it might be a good idea to save the value of $_ in another variable in order to preserve it).
From man bash:
0      Expands to the name of the shell or shell script. This is set at
       shell initialization. If bash is invoked with a file of commands,
       $0 is set to the name of that file. If bash is started with the
       -c option, then $0 is set to the first argument after the string
       to be executed, if one is present. Otherwise, it is set to the
       file name used to invoke bash, as given by argument zero.
_      At shell startup, set to the absolute pathname used to invoke the
       shell or shell script being executed as passed in the environment
       or argument list. Subsequently, expands to the last argument to
       the previous command, after expansion. Also set to the full
       pathname used to invoke each command executed and placed in the
       environment exported to that command. When checking mail, this
       parameter holds the name of the mail file currently being checked.
$_ may not work in every POSIX-compatible sh, although it probably works in most.
$PS1 will only be set if the shell is interactive. So this should work:
if [ -z "$PS1" ]; then
echo noninteractive
else
echo interactive
fi
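A quick hypothetical check of that assumption (run from a terminal):

bash -c  'echo "PS1=${PS1:-unset}"'    # noninteractive: prints PS1=unset
bash -ic 'echo "PS1=${PS1:-unset}"'    # interactive: prints the default prompt string

Note that PS1 is usually not exported, so a script run via #!/bin/sh will see it unset even when launched from an interactive shell.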
Try tty:
if tty 2>&1 | grep not; then echo "Not a tty"; else echo "a tty"; fi
From man tty:
The tty utility writes the name of the terminal attached to standard
input to standard output. The name that is written is the string
returned by ttyname(3). If the standard input is not a terminal, the
message ``not a tty'' is written.
You could try using something like...
if [[ -t 0 ]]
then
    echo "Interactive...say something!"
    read line
    echo $line
else
    echo "Not Interactive"
fi
The "-t" switch in the test field checks if the file descriptor given matches a terminal (you could also do this to stop the program if the output was going to be printed to a terminal, for example). Here it checks if the standard in of the program matches a terminal.
Simple answer: don't run those commands inside ` ` or [ ].
There is no need for either of those constructs here.
Obviously I can't be sure what you expected
[ `echo \$- | grep -qs i` ]
to be testing, but I don't think it's testing what you think it's testing.
That code will do the following:
- Run echo \$- | grep -qs i inside a subshell (due to the ` `).
- Capture the subshell's standard output.
- Replace the original ` ` expression with a string containing that output.
- Pass that string as an argument to the [ command or built-in (depending on your shell).
- Produce a successful return code from [ only if that string was nonempty (assuming the string didn't look like an option to [).
Some possible problems:
- The -qs options to grep should cause it to produce no output, so I'd expect [ to be testing an empty string regardless of what $- looks like.
- It's also possible that the backslash is escaping the dollar sign and causing a literal "dollar minus" (rather than the contents of a variable) to be sent to grep.
On the other hand, if you removed the [ and backticks and instead said
if echo "$-" | grep -qs i ; then
then:
- your current shell would expand "$-" with the value you want to test,
- echo ... | would send that to grep on its standard input,
- grep would return a successful return code when that input contained the letter i,
- grep would print no output, due to the -qs flags, and
- the if statement would use grep's return code to decide which branch to take.
Also:
- no backticks would replace any commands with the output produced when they were run, and
- no [ command would try to replace the return code of grep with some return code that it had tried to reconstruct by itself from the output produced by grep.
For more on how to use the if command, see this section of the excellent BashGuide.
If you want to test the value of $- without forking an external process (e.g. grep) then you can use the following technique:
if [ "${-%i*}" != "$-" ]
then
echo Interactive shell
else
echo Not an interactive shell
fi
This deletes the shortest suffix matching i* from the value of $- and then checks whether that made any difference.
(The ${parameter/from/to} construct (e.g. [ "${-//[!i]/}" = "i" ] is true iff interactive) can be used in Bash scripts but is not present in Dash, which is /bin/sh on Debian and Ubuntu systems.)
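A quick way to convince yourself of the expansion itself, using an ordinary variable in place of $-:

flags=himBHs          # example value of $- in an interactive bash
echo "${flags%i*}"    # prints "h": the shortest suffix matching i* is removed
flags=hBs             # no i present
echo "${flags%i*}"    # prints "hBs": nothing is removed, so the comparison finds them equal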
