Automate SSH from Mac to iPhone - iOS

I am working on a tool for iOS. It's based on shell scripts that push various stuff to the device and then perform various operations based on what's pushed to the device.
In this case the device is going to be a jailbroken iPhone, and I would like to connect to it from the Mac. So, I have used "usbmux" to SSH over USB and it works great. (cheers :D)
Now, the problem is, I would like to completely automate the SSH-ing process, assuming the password is the default 'alpine', to avoid user interaction.
This is what I have tried, and it doesn't give me the expected outcome:
expect <<< 'spawn ssh root@localhost -p 2222; expect "*?password:*"; send "alpine\r";'
I read about ssh-keygen and a few other options, but they seem to require initial manual interaction. Please help me completely automate this.

Found the solution!
Execute the following script:
#!/usr/bin/expect
# Spawn the SSH session, wait for the password prompt, answer it,
# then hand control of the session back to the user.
spawn ssh -q user@hostname
expect "assword"
send "alpine\r"
interact
Works like a charm!! :)
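
If you want to go a step further and drop the password entirely, the initial key install can itself be automated with the same expect trick; after that, every ssh/scp call is passwordless with no interaction at all. A rough sketch, assuming your usbmux tunnel is already forwarding localhost:2222 to the device's port 22 (e.g. iproxy 2222 22 from libimobiledevice) and that ssh-copy-id is available on your Mac:

# One-time setup: create a passphrase-less key pair non-interactively.
# (ssh-keygen will prompt if ~/.ssh/id_rsa_iphone already exists.)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa_iphone

# Install the public key on the device, answering both the host-key
# confirmation and the default 'alpine' password prompt automatically.
expect -c '
  spawn ssh-copy-id -i ~/.ssh/id_rsa_iphone -p 2222 root@localhost
  expect {
    "(yes/no*"    { send "yes\r"; exp_continue }
    "*?assword:*" { send "alpine\r" }
  }
  expect eof
'

# From now on, fully non-interactive:
ssh -i ~/.ssh/id_rsa_iphone -p 2222 root@localhost uname -a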

Related

How to produce an app bundle so that a user can double-click to launch a Docker application?

Suppose I have a Docker application (such as this one). The standard usage is to run it from the CLI with docker run; in this case, for macOS users, it would be:
docker run -it --rm bigdeddu/nyxt:2.2.1
Now, I would like to produce an app bundle or something similar so that users can double-click to launch this Docker application as a desktop application. It would be kind of a GUI shortcut to launch Docker.
How can I achieve that?
1 - Is there an existing solution for this? If so, which one?
2 - If there isn't, what would be a rough sketch of how to build one?
Thanks!
Docker was designed to encapsulate server processes. For servers, the CLI is a reasonable and often satisfactory interface.
If you want users to run their possibly interactive application, you may want to look at https://appimage.org/, although I am unsure whether that is available for macOS.
To get around these limitations, you could either create an end-user-facing GUI for Docker, or an implementation of AppImage for macOS.
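
For question 1, a tool like Platypus (which wraps a script in a double-clickable macOS app) may already get you most of the way there. For question 2, here is a rough sketch of a hand-rolled bundle, assuming Docker Desktop is installed and using the image from the question; since the container is interactive, the launcher script hands the docker run command to Terminal via osascript:

mkdir -p Nyxt.app/Contents/MacOS

cat > Nyxt.app/Contents/MacOS/Nyxt <<'EOF'
#!/bin/bash
# Open Terminal and run the container there, since -it needs a TTY.
osascript -e 'tell application "Terminal"
    do script "docker run -it --rm bigdeddu/nyxt:2.2.1"
    activate
end tell'
EOF

chmod +x Nyxt.app/Contents/MacOS/Nyxt

A minimal bundle like this usually launches from Finder as long as the executable name matches the bundle name; recent macOS versions may additionally want a minimal Contents/Info.plist (and code signing) to keep Gatekeeper happy.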

How can I open VS Code "in container" without it restarting itself and losing shell settings when "Reopen in container" is invoked?

I have a development use-case where I use a script to configure a shell with docker-machine or another environment, and then open a directory containing source and settings (/.vscode/, .devcontainer/) that I can edit/build/debug in the VS Code Remote - Containers extension.
In short, I'm looking to implement the following sequence when a "start-development.sh" script/hook runs:
Set up host-side env or remote resources (reverse sshfs to mount source to a remote docker-machine, modprobe, docker buildx, xhost for x-passthrough, etc.)
Run VS Code from that shell (so my settings aren't thrown away), opening a specified directory (which may be mounted via sshfs or other means) in-container, not just on the host
Run cleanup scripts to clean-up and/or destroy real resources (unmount, modprobe -r, etc.) when the development container is stopped (by either closing VS Code or rebuilding the container).
See this script for a simple example of auto-configuring a shell with an AWS instance via docker-machine. I'll be adding a few more examples to this repository over the coming day or so.
It's easy enough to open VS Code in that directory (code -w -n --folder-uri /path/here) and wait for it to quit (so I can perform cleanup steps like taking down the remote docker-machine, un-mounting reverse-sshfs-mounted code, or disabling kernel mods I use for development, etc.).
However, VS Code currently opens in "host mode", and when I choose "Reopen in container" or "Rebuild container" via the UI or command palette, it kills that process and opens another top-level(?) process, quitting the shell and throwing away my configuration, and/or prematurely running the cleanup portion of my script so it has the wrong env when it finally launches in-container. Sadness.
So finally, my question is:
Is there a way to tell VS Code to open a folder "in-container"? This would solve a ton of problems for me, instead of a janky dev cycle where I have to ensure that the code instance isn't restarting itself and messing things up, whenever I rebuild the container, for example.
Alternatively, it'd be great not to have the top-level code process I started quit at all, enabling me to wait on it, or perhaps monitor it in other ways I'm not aware of, to prevent erasure of my settings and a premature run of my cleanup script.
Thanks in advance!
PS: Please read the entire question before flagging it as "not related to development". If the idea of a zero-install development environment for a complex native project, live on-device development/debugging or deep learning using cloud instances with giant GPUs for Docker where you don't have to manually manage everything and write pages of readmes appeals to you - this is very much about programming.
After a whole weekend of trying different things, I finally figured it out! The key was this section in the awesome articles about advanced container configuration.
I put that into a bash script and used jq to merge docker.host and other docker env settings into .vscode/settings.json. See this example here.
After running a script that generates this file, the user only needs to reload/relaunch VS Code in that workspace folder (where the settings were created) and, yay, everything works as expected.
I plan to add some actual samples now that I have the basics working. Unfortunately, I had to separate my create and teardown steps into separate activate and deactivate hooks. Still not a bad workflow, IMO.
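
A minimal sketch of the merge step described above; the machine name and key path are assumptions (docker.host is the setting the Docker/Remote - Containers tooling reads for the daemon endpoint):

#!/bin/bash
# Point the workspace at the remote daemon created by docker-machine.
eval "$(docker-machine env my-aws-instance)"

SETTINGS=.vscode/settings.json
mkdir -p .vscode
[ -f "$SETTINGS" ] || echo '{}' > "$SETTINGS"

# Merge the endpoint into the existing settings instead of clobbering them.
jq --arg host "$DOCKER_HOST" '. + {"docker.host": $host}' \
   "$SETTINGS" > "$SETTINGS.tmp" && mv "$SETTINGS.tmp" "$SETTINGS"

After this runs, "Reopen in container" should build and attach against the remote daemon, so the container survives independently of the local code process.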

ssh-agent issues when running on Heroku

I have a Rails app, using Docker, that makes some automated changes to another app and then git pushes them up to GitHub. It took me a bit of time to get my SSH keys onto the Docker container in a somewhat similar manner (not fully happy with it, but I will change that up after I sort this out). My issue now is that the git clones in the Dockerfile work fine, but then from my Rails code it fails, saying that I don't have access, so in the code I go to re-run ssh-add for the keys. However, it then says Could not open a connection to your authentication agent., so I try to re-initialise the ssh-agent (echo $(ssh-agent -s)), which seems to succeed, but ssh-add still fails.
If I SSH in and try those steps, it works fine, but if I rails console in and run the functions that make these console calls, it fails with the same problem. It seems that the environment variables the ssh-agent call is supposed to set aren't being set. I have a feeling that Heroku containers don't allow changing env variables without going through their heroku config:set, but that isn't possible here, as each process will have a different SSH_AUTH_SOCK and SSH_AGENT_PID. Any suggestions on how to deal with this would be a massive help.
This error normally happens when you don't have an active SSH agent running:
Could not open a connection to your authentication agent.
This is quite common on Debian-based systems, whereas most Ubuntu installs have one running at all times.
To fix this, you just need to start a new agent.
eval $(ssh-agent)
This should be run before ssh-add.
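
Note that eval is the crucial part: the echo $(ssh-agent -s) from the question merely prints the variable assignments instead of applying them. A sketch in the asker's context, with an assumed key path; from Rails, all three commands would need to go through a single system()/backtick invocation, because each one spawns a fresh shell and the exported SSH_AUTH_SOCK/SSH_AGENT_PID would otherwise be lost:

# Start an agent and import SSH_AUTH_SOCK / SSH_AGENT_PID into THIS shell.
eval "$(ssh-agent -s)"
# Now ssh-add can find the agent, and git can use the loaded key.
ssh-add /app/.ssh/id_rsa
git push origin master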
In your current setup, you need to evaluate the risk/cost of using a passphrase-protected private SSH key.
As mentioned here, for an automated process, using a passphrase-less key would be the recommended option, provided you are sure there is no easy way to access said private key.

Docker and SSH for development with PhpStorm

I am trying to set up a small development environment using Docker. The PhpStorm team is working hard on getting Docker integrated for the remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the author of this post, say it is not recommended. I have read others who say to have a dedicated SSH Docker container, but I don't get how that would fit into this environment.
I am already creating a docker-user user (check the repo here) for certain tasks, like running composer without root permissions. That user could easily be used for this SSH stuff by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public SSH key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
One solution that I've thought of, but not yet had the time to implement, would be some sort of SSH service that acts as a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.
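
A rough sketch of that key-based workaround as a one-shot provisioning script run inside the dev container (e.g. via docker exec as root); it assumes a Debian-based image, the docker-user from the question, and that you've already copied your public key in with docker cp:

#!/bin/bash
# Dev-only: SSH daemon plus passwordless sudo for the dev user.
apt-get update && apt-get install -y openssh-server sudo
mkdir -p /var/run/sshd
echo 'docker-user ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/docker-user
chmod 0440 /etc/sudoers.d/docker-user

# Install your public key so PhpStorm can log in without any password.
mkdir -p /home/docker-user/.ssh
cat /tmp/id_rsa.pub >> /home/docker-user/.ssh/authorized_keys
chown -R docker-user:docker-user /home/docker-user/.ssh
chmod 700 /home/docker-user/.ssh
chmod 600 /home/docker-user/.ssh/authorized_keys

/usr/sbin/sshd -D &

Keeping this out of your production Dockerfile limits the "SSH in a container" objections to development images only.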

Rsync over SSH from an Ant script with a password

I have a virtual machine running on my developer machine, and I need to rsync files to it over SSH via an Ant build script to "deploy". In production, security is a concern, but I really don't care about secure SSH practices when communicating with a dev VM on my local machine.
I could have created a cert and installed it in my SSH keys, but that's a little annoying. I'd much rather just send my password to rsync via the ant script and call it a day.
(EDIT - If you reeeeally can't handle this question without an example, let's assume this server is outside my control, and their evil sysadmin refuses to allow me to sign in with an SSH key for whatever reason. Who knows? He's just crazy man!)
Is there any way to invoke SSH, or more specifically rsync in non-interactive mode, without editing your ssh config? In other words, just supply the password?
I happen to have already figured out a solution to this, but it wasn't very easy, so I wanted to share it.
Basically, I used a command line program called "expect" to fill my password into rsync's interactive mode. I also didn't want to have to write it up as a script, so I condensed it into a single command. This also works for ssh as well as rsync, if you need that for some reason.
Maybe there's a better way, but this seems to work fine.
192.168.64.131 is obviously my local VM's IP in the following. Replace login_name and login_password with your SSH login and password.
expect -c 'spawn rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/; expect "*?assword:*" {send "login_password\r"; interact};'
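
If you can install an extra tool, sshpass does the same job with less quoting; a sketch under the same assumptions (same host, login, and path; sshpass available from your package manager, though some, like Homebrew, refuse to ship it for exactly this reason):

# Same idea without expect: sshpass answers rsync's ssh password prompt.
sshpass -p 'login_password' rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/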
It's much easier and more secure to use an SSH key. An example is given in the following answer:
Ant, download fileset from remote machine
