Rsync over SSH from an ant script with a password - ant

I have a virtual machine running on my developer machine, and I need to rsync files to it over SSH via an ant build script to "deploy". In production, security is a concern, but I really don't care about secure SSH practices when communicating with a dev VM on my local machine.
I could have generated a key pair and installed the public key in the VM's authorized_keys, but that's a little annoying. I'd much rather just send my password to rsync via the ant script and call it a day.
(EDIT - If you reeeeally can't handle this question without an example, let's assume this server is outside my control, and their evil sysadmin refuses to allow me to sign in with an SSH key for whatever reason. Who knows? He's just crazy man!)
Is there any way to invoke SSH, or more specifically rsync in non-interactive mode, without editing your ssh config? In other words, just supply the password?

I happen to have already figured out a solution to this, but it wasn't very easy, so I wanted to share it.
Basically, I used a command-line program called "expect" to feed my password to rsync's interactive prompt. I also didn't want to have to write it up as a script, so I condensed it into a single command. This works for plain ssh as well as rsync, if you need that for some reason.
Maybe there's a better way, but this seems to work fine.
192.168.64.131 is obviously my local VM's IP in the following. Replace login_name and login_password with your SSH login and password.
expect -c 'spawn rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/; expect "*?assword:*" {send "login_password\r"; interact};'
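To call this from ant, it may be cleaner as a small expect script; a minimal sketch, using a hypothetical file name deploy.exp (expect eof replaces interact so the build doesn't need a terminal):
#!/usr/bin/expect -f
# hypothetical deploy.exp: the same rsync-with-password trick as above, as a file
spawn rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/
expect "*?assword:*"
send "login_password\r"
expect eof
From build.xml you could then invoke it with an <exec executable="expect"> task, passing deploy.exp as an <arg>.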

It's much easier and more secure to use an SSH key. An example is given in the following answer:
Ant, download fileset from remote machine
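For completeness, a minimal sketch of the key-based setup for a local VM like the one in the question (host and paths are the question's; the empty passphrase is an assumption for unattended builds):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519    # generate a passphrase-less key pair
ssh-copy-id login_name@192.168.64.131               # install the public key in the VM's authorized_keys
rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/   # no password prompt needed now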

Related

ssh-agent issues when running on heroku

I have a Rails app, using Docker, that makes some automated changes to another app and then git-pushes them up to GitHub. It took me a bit of time to be able to get my ssh keys onto the Docker container, in a sort of similar manner (not fully happy with it, but I will change it up after I sort this out). My issue now is that the git clones in the Dockerfile are all good, but then from my Rails code it fails, saying that I don't have access, so in the code I go to re-add the keys with ssh-add. However, it then says "Could not open a connection to your authentication agent.", so I try to re-initialise the ssh-agent (echo $(ssh-agent -s)), which seems to succeed, but ssh-add still fails.
If I SSH in and try those steps, it works fine, but if I go in with rails console and run the functions that make these calls, it fails with the same problem. It seems the environment variables that ssh-agent prints aren't actually being set. I have a feeling that Heroku containers don't allow changing env variables without going through their heroku config:set, but that isn't possible here, as each process will have a different SSH_AUTH_SOCK and SSH_AGENT_PID. Any suggestions on how to deal with this would be a massive help.
This error normally happens when you don't have an active SSH agent running.
Could not open a connection to your authentication agent.
This is quite common with Debian-based systems, whereas most Ubuntu installations have one running at all times.
To fix this, you just need to start a new agent.
eval $(ssh-agent)
This should be run before ssh-add.
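Put together, a minimal sketch (assuming the key lives at the default ~/.ssh/id_rsa):
eval "$(ssh-agent -s)"   # start the agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa    # now ssh-add can reach the agent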
In your current setup, you need to evaluate the risk/cost of using a passphrase-protected private SSH key.
As mentioned here, for an automated process, using a passphrase-less key would be the recommended option, provided you are sure there is no easy way to access said private key.
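A minimal sketch of that recommended option (the file name automation_key and user@host are placeholders of mine):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/automation_key   # empty passphrase, no agent needed
ssh -i ~/.ssh/automation_key user@host                 # reference the key explicitly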

Initial setup for ssh on docker-compose

I am using Docker for macOS / Windows.
I connect to external servers via ssh from a shell in a Docker container.
For now, I generate the ssh key in the Docker shell and manually send it to the servers.
However, with this method, every time I rebuild the container the ssh key is deleted.
So I want to set up an initial ssh key when I build the image.
I have 2 ideas:
1. Mount the .ssh folder from my macOS host into the Docker folder and persist it. (Permission control might be difficult and complex....)
2. Write a script that generates the ssh key and sends it to the servers, in docker-compose.yml or the Dockerfile. (Every time I build, a new key is sent...??)
Which is the best practice? Or do you have any ideas for setting the ssh key automatically?
Best practice is typically to not make outbound ssh connections from containers. If what you’re trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into an image. If it’s data your application needs to run, again fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh’s Unix permission requirements, is incredibly tricky. If I really didn’t have a choice but to do this I’d write an ENTRYPOINT script that copied the private key from a bind-mounted volume to my container user’s .ssh directory. But my first choice would be to redesign my application flow to not need this at all.
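If you do go the ENTRYPOINT route, a minimal sketch, assuming the key is bind-mounted read-only at /run/secrets/id_rsa (the path and image name below are mine):
#!/bin/sh
# entrypoint.sh: copy the bind-mounted key into the user's .ssh with ssh-friendly permissions
mkdir -p ~/.ssh
cp /run/secrets/id_rsa ~/.ssh/id_rsa
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
exec "$@"   # hand off to the container's main command
Run with something like: docker run -v "$PWD/id_rsa:/run/secrets/id_rsa:ro" myimage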
After reading the "I'm a windows user .." comment, I'm thinking you are solving the wrong problem. You are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel to your cmd :-)
How I install it.
Download and install it. Be careful to pick only the features beyond the base that you need (there is a LOT and most of it you will not need -- like the compilers and X). Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. run 'cygpath -wp $PATH'
b. Look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;..." -- simply all the paths that start with "C:\cygwin", provided you installed Cygwin into the "C:\cygwin" directory.
c. Add these paths to your system path.
d. Start a new instance of CMD and run 'ls'; it should now work directly under the Windows shell.
Extra credit.
a. Move all the ".xxx" files that were created during the first launch of the shell in your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>).
b. Exit any bash shells you have running.
c. Delete the C:\cygwin\home directory.
d. Use the Windows mklink utility to create a link named home under Cygwin pointing to C:\Users (from an Administrator shell): 'mklink /J C:\Cygwin\home C:\Users'
This will make your windows home directory the same as your cygwin home.
After that you follow the normal setup for ssh under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the propagation of credentials from Windows to your <home>/.ssh folder (in the folder's security settings, leave just your user ID), then set permissions on the folder and the various key files underneath appropriately for SSH using 'chmod', as sketched below.
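A minimal sketch of those chmod fixes from the Cygwin shell (file names assume a default RSA key):
chmod 700 ~/.ssh                                 # only your user can enter the folder
chmod 600 ~/.ssh/id_rsa ~/.ssh/config            # private key and config must not be readable by others
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts   # public material can stay world-readable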
Enjoy -- some days I have to squint to remember I'm on a windows box ...

Docker and SSH for development with phpStorm

I am trying to set up a small development environment using Docker. The phpStorm team is working hard on getting Docker integrated for the remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the one in this post, say it is not recommended. I have read others which say to have a dedicated SSH Docker container, which I don't get how to fit into this environment.
I am already creating a user docker-user (check the repo here) for certain tasks like running composer without root permissions. That user could easily be used for this SSH stuff by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the ssh server workaround when using JetBrains IDEs.
Usually what I do is add a public ssh key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
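A minimal sketch of that setup inside the container, assuming the docker-user from the question and a public key file id_rsa.pub at hand:
mkdir -p /home/docker-user/.ssh
cat id_rsa.pub >> /home/docker-user/.ssh/authorized_keys
chmod 700 /home/docker-user/.ssh
chmod 600 /home/docker-user/.ssh/authorized_keys
chown -R docker-user:docker-user /home/docker-user/.ssh
echo 'docker-user ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/docker-user   # passwordless sudo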
One solution that I've thought of, but not yet had the time to implement, would be to make some sort of SSH service that would be a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.

How to assign AWS credentials to SYSTEM user?

I have a PowerShell script that uploads audit logs to an S3 repository. The script works fine when I run it while logged in, but I need to define a scheduled task, and the task needs to run as the SYSTEM user. Can someone recommend a way to provide the SYSTEM user with the AWS credentials so that they are not stored on the machine in clear text?
I just found what I think is the answer: if I run the script through the Task Scheduler once with the 'Set-AWSCredentials' command, it creates the encrypted key info in C:\Windows\System32\config\systemprofile\AppData\Local\AWSToolkit\RegisteredAccounts.json. Then I was able to remove the 'Set-AWSCredentials' command, and the script seems to run successfully.
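For reference, a hedged sketch of that one-time seeding line in the script (the key values are placeholders, not real credentials; Set-AWSCredentials comes from the AWS Tools for PowerShell, as used in the question):
Set-AWSCredentials -AccessKey <access-key> -SecretKey <secret-key> -StoreAs default
# once this has run under the SYSTEM account, remove the line; the profile persists in RegisteredAccounts.json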
I don't believe this is possible. Your authentication info will have to be stored in the clear on the client machine one way or another, and Windows doesn't provide any convenient methods for protecting that information.
You might find it more convenient to manage access if you use ssh + encryption certificates as credentials (I believe there's an ssh client for PowerShell, though I haven't tried using it for AWS work).

Automate ssh from mac to iPhone

I am working on a tool for iOS. It's based on shell scripts that push various stuff to the device and then perform various operations based on what's pushed to the device.
In this case the device is going to be a jailbroken iPhone, and I would like to connect to it from a Mac. So I have used "usbmux" to ssh over USB, and it works great. (cheers :D)
Now, the problem is, I would like to completely automate the ssh-ing process, assuming the password is the default 'alpine', to avoid user interaction.
This is what I have tried, and it doesn't give me the expected outcome.
expect <<< 'spawn ssh root@localhost -p 2222; expect "*?password:*"; send "alpine\r";'
I read about ssh-keygen and a few other options, but they seem to require initial manual interaction. Please help me completely automate this.
Found the solution ..
Execute the following script
#!/usr/bin/expect
# spawn ssh quietly, wait for the password prompt, answer it, then hand control back to the user
spawn ssh -q user@hostname
expect "assword"
send "alpine\r"
interact
Works like a charm!! :)
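Adapted to the usbmux tunnel from the question (port 2222, root, the default 'alpine' password), the same script would look like:
#!/usr/bin/expect
# hedged adaptation for the usbmux port-forward described in the question
spawn ssh -q -p 2222 root@localhost
expect "assword"
send "alpine\r"
interact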
