Knife ssh not loading environment variables

On my node node_name I have $JAVA_HOME and other environment variables set in /etc/profile. I've found out that knife ssh does not run a login shell, and so doesn't load those environment variables. Is there a way to load them without having to source the file?
Right now I'm forced to do
knife ssh 'name:node_name' 'source /etc/profile; echo $JAVA_HOME'
I'm chaining several commands together during knife ssh, some of which use these environment variables, and sourcing /etc/profile each time just makes the command longer. Is there a way to load /etc/profile during knife ssh?

This has nothing to do with knife ssh; it is just how SSH works for commands executed directly over a connection. You can alternatively run the command through a login shell, e.g. bash -l -c "something". In general you can't count on any specific way of setting env vars in non-interactive sessions being portable, so caveat emptor.
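For example, wrapping the remote command in a login shell should pull in /etc/profile (a sketch based on the node name and variable from the question; note the escaping so $JAVA_HOME is expanded by the remote login shell, not locally):
knife ssh 'name:node_name' "bash -l -c 'echo \$JAVA_HOME'"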

Not really an answer to the OP's question, but I was searching for a solution to a similar problem and thought I would share my situation and solution in case it helps someone. My issue was trying to run Chef remotely on our on-premise servers with awscli credentials in root's .bashrc. The -i switch was what I was missing in order to load root's environment variables from .bashrc.
Did not work:
knife ssh "name:$NODE" "sudo /etc/init.d/appserver stop; sleep 10;sudo chef-client -r role_appserver" -A -x user -P password
Worked:
knife ssh "name:$NODE" "sudo /etc/init.d/appserver stop; sleep 10;sudo -i chef-client -r role_appserver" -A -x user -P password

Related

Process is hanging all the time

Here is my script:
mogo()
{
sshpass -p 'abc123' ssh -tt -q -o StrictHostKeyChecking=no admin@192.168.10.145 <<'SSH_EOF'
sudo docker exec -it $(sudo docker ps --filter name=mongo --format "{{.Names}}") bash -c "mongodump -d saas -u abc -p abc123 -o md1/"
logout
SSH_EOF
touch /home/admin/11jul20
}
I am calling the above script from a cron job to take a backup.
Issue: the process created by the above script hangs forever, and the touch command after logout is not executed.
Manual workaround: if I terminate the process manually with the kill command, the touch command runs and the file 11jul20 gets created.
If I remove the single quotes around 'SSH_EOF', the sudo docker command does not take the backup, but the touch command runs.
Kindly help me to understand what is wrong.
I suspect that your issue is related to the password prompt of sudo going to the pseudo-terminal allocated by your ssh -tt (which is not a real terminal).
I avoid sshpass and never install it (it's a security risk) so I can't test your script.
However, the following will work.
1. Make your ssh login account part of the docker group:
sudo usermod -a -G docker admin
As an aside, instead of using an 'admin' account, it would be a lot more secure to create a special login account (maintenance) on your linux box that can perform only the admin tasks needed.
2. Use the following script:
mogo()
{
ssh -T -q -o StrictHostKeyChecking=no admin@192.168.10.145 <<SSH_EOF
docker exec \$(docker ps --filter name=mongo --format "{{.Names}}") bash -c "mongodump -d saas -u $MGO_USER -p $MGO_PWD -o md1/"
logout
SSH_EOF
touch /home/admin/11jul20
}
Note that there is no need for a pseudo-terminal, so we disable it (-T). There is also no terminal for docker exec, so drop -it there as well. I'd also look at not disabling StrictHostKeyChecking.
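For instance, rather than disabling StrictHostKeyChecking, the host key could be recorded once on the machine that runs the cron job (a sketch using the IP from the question):
ssh-keyscan -H 192.168.10.145 >> ~/.ssh/known_hosts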
3. Set your environment:
Keep your passwords and other secrets in environment variables (as per the twelve-factor app methodology), never in your scripts. That's a minimum.
For instance, the following will be injected into your heredoc script.
IMPORTANT: make sure you don't use single quotes around SSH_EOF, otherwise the variable replacement isn't performed.
export MGO_USER=abc
export MGO_PWD=abc123
Docker has also a secrets store and other open source vaults are available, but with more complexity.
4. Call your backup script:
mogo
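Since the backup is triggered from cron, the pieces can be wired together along these lines (a sketch; the env file, script path, and schedule are assumptions, with the env file holding the exports from step 3):
# crontab entry: load secrets, source the script to define mogo, then run it
0 2 * * * . /home/admin/.mongo_env; . /home/admin/backup.sh; mogo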

gitlab-runner exec docker - inject gpg key

I need to run gitlab-runner locally using the exec command and the docker executor.
The docker executor clones the project into the container, so I start with a blank slate. In order to run the tests, I need to decrypt certain credentials files. Normally this is done on the dev machine using the developer's private gpg key. But now we are in a container, and I can't find a way to inject the developer's gpg key into the testing container.
Normally it would make sense to pass the private key as an environment variable but the environment feature is not supported on the gitlab-runner exec command.
It would be much easier if gitlab-runner would just copy the project files into the container instead of doing a fresh clone of the project. That way the developer could decrypt the credentials on the host and everything would be fine.
What are my options here?
The only way to pass environment variables into the testing container is the --env parameter of gitlab-runner.
First we need to store the private key in an environment variable on our local machine. I used direnv for this but it also works manually:
export GPG_PRIVATE_KEY="$(gpg --export-secret-keys -a <KEY ID>)"
Then we can run gitlab-runner like this:
gitlab-runner exec docker test \
--env GPG_PRIVATE_KEY="$GPG_PRIVATE_KEY" \
--env GPG_PASSPHRASE="$GPG_PASSPHRASE"
Note that I also passed the passphrase in an environment variable because I need it inside the container to decrypt my data.
Now I can import the key in the docker container. The top of my .gitlab-ci.yml looks like this:
image: quay.io/mhart/alpine-node:8
before_script:
- apk add --no-cache gnupg
- echo "$GPG_PRIVATE_KEY" | gpg --batch --import --pinentry-mode loopback --no-tty
Done, now we can use that key inside the container to do what we want.
I also ran into some problems when I tried to decrypt my data. This guide was incredibly helpful and solved my issue.
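For reference, decrypting a file inside the container with the imported key can then look something like this (a sketch; the file paths are assumptions, not from the original answer):
gpg --batch --yes --pinentry-mode loopback --passphrase "$GPG_PASSPHRASE" --output config/credentials.yml --decrypt config/credentials.yml.gpg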
It is hard for me to imagine why you need to invoke gitlab-runner with exec, but why couldn't you do
exec gitlab-runner sh
export GPG_KEY=...
....

How can I directly log in to the Rails console through SSH?

From time to time I repeat the following commands:
ssh username@servername
cd /projects/rails_project
bundle exec rails c production
I want to create a shell script and make an alias for it, so that I can open the production console in one line. A simple script containing these 3 commands doesn't work.
How can I do it?
If you don't use ssh -t, the Rails console will not prompt you for input. What you actually want is:
ssh -t username@servername "cd /projects/rails_project && bundle exec rails c production"
Using -t " [...] can be used to execute arbitrary screen-based programs on a remote machine [...]" (according to the ssh man page.)
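For convenience, the one-liner the OP asked for can be wrapped in an alias (a sketch; the alias name is made up, while the user, host, and path are the placeholders from the question):
alias railsprod='ssh -t username@servername "cd /projects/rails_project && bundle exec rails c production"'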
Just send it as an argument:
ssh username@servername 'cd /projects/rails_project && bundle exec rails c production'
From man ssh:
SYNOPSIS
ssh [options] [user#]hostname [command]
...
If command is specified, it is executed on the remote host instead of a
login shell.
...
When the user's identity has been accepted by the server, the server
either executes the given command in a non-interactive session or, if no
command has been specified, logs into the machine and gives the user a
normal shell as an interactive session. All communication with the remote
command or shell will be automatically encrypted.
To be more precise, if you use bash and get a "bundle: command not found" error, you have to set up the environment before running bundle exec, like:
ssh -t <user>@<server> "cd /home/appfolder && /bin/bash --login -c 'bundle exec rails c -e production'"

Jenkins SSH shell closes before executing remote commands

I have a Jenkins job with the following commands under "Execute shell":
ssh jenkins#172.31.12.58
pwd
I want the Jenkins server to connect via SSH to the remote server then run a command on the remote server.
Instead, Jenkins connects to the remote server, disconnects immediately, then runs the pwd command locally as can be seen in the output:
Started by user Johanan Lieberman
Building in workspace /var/lib/jenkins/jobs/Test Github build/workspace
[workspace] $ /bin/sh -xe /tmp/hudson266272646442487328.sh
+ ssh jenkins#172.31.12.58
Pseudo-terminal will not be allocated because stdin is not a terminal.
+ pwd
/var/lib/jenkins/jobs/Test Github build/workspace
Finished: SUCCESS
Edit: any idea why the commands after the ssh command aren't run inside the SSH session, but locally instead?
If you're not running interactively, SSH does not allocate a pseudo-terminal (hence the "Pseudo-terminal will not be allocated" warning you see), so it's not quite the same as executing a sequence of commands in an interactive terminal. In a shell script, the pwd on the line after ssh is simply the next local command; it only runs once the ssh command has returned, and is never sent to the remote host.
To run a specific command through an SSH session, use:
ssh jenkins@YOUR_IP 'uname -a'
The remote command must be quoted properly as a single argument to the ssh command. Or use the bash here-doc syntax for a simple multi-line script:
ssh jenkins@YOUR_IP <<EOF
pwd
uname -a
EOF
I think you can use the Publish Over SSH plugin to execute commands on a slave with SSH:
If the Source files field is mandatory, maybe you can transfer a dummy file.
Update:
Another solution is to use the SSH plugin. Maybe it's a better solution compared to the other plugin :)

Is there a way to expand aliases in a non-interactive sh shell?

I have a collection of aliases defined in ~/.aliases which I would like to make available to sh even when it is run non-interactively. My system has been setup in the typical way so that sh is a symlink to bash.
When bash is run non-interactively as bash, this can be done by using shopt -s expand_aliases together with setting $ENV or $BASH_ENV to (directly or indirectly) source ~/.aliases.
But when bash is invoked non-interactively as sh, it seems to ignore $ENV and all startup files, so I can't see a way to do it. Any ideas? Or is this just not possible?
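To illustrate, the bash (non-sh) setup described above looks roughly like this (a sketch; the startup-file name is made up, ~/.aliases is the file from the question):
# ~/.bash_env: enable alias expansion and load the aliases
shopt -s expand_aliases
. ~/.aliases
# run a script non-interactively, pointing BASH_ENV at that file
BASH_ENV=~/.bash_env bash myscript.sh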
One way to force the shell to be interactive when running a script is to use -i, such as:
$ bash -i <script>
Also, note that if your script has execute permissions, you can replace:
#!/bin/bash
with:
#!/bin/bash -i
