ssh-agent issues when running on heroku - ruby-on-rails

I have a Rails app, using Docker, that makes some automated changes to another app and then git pushes them up to GitHub. It took me a bit of time to get my SSH keys onto the Docker container, in a somewhat similar manner (not fully happy with it, but I will change it up after I sort this out). My issue now is that the git clones run from the Dockerfile all work, but then from my Rails code it fails, saying that I don't have access, so in the code I go to re-run ssh-add with the keys. However, it then says Could not open a connection to your authentication agent., so I try to re-initialise the ssh-agent (echo $(ssh-agent -s)), which seems to succeed, but ssh-add still fails.
If I SSH in and try those steps, it works fine, but if I open a rails console and run the functions that make these shell calls, it fails with the same problem. It seems that the environment variables the ssh-agent call should export aren't actually being set. I have a feeling that Heroku containers don't allow changing env variables without going through their heroku config:set, but that isn't possible here, as each process will have a different SSH_AUTH_SOCK and SSH_AGENT_PID. Any suggestions on how to deal with this would be a massive help.

This error normally happens when you don't have an active SSH agent running:
Could not open a connection to your authentication agent.
This is quite common on Debian-based systems, whereas most Ubuntu installations have an agent running at all times.
To fix this, you just need to start a new agent:
eval $(ssh-agent)
This should be run before ssh-add.
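Note that echo $(ssh-agent -s) only prints the export commands, whereas eval actually executes them, which is why the echo variant appears to succeed but still leaves ssh-add without an agent. Also, each system/backtick call from Rails spawns a fresh shell, so variables the agent exports in one call are gone by the next. A minimal sketch, assuming the key lives at ~/.ssh/id_rsa, doing everything in a single invocation:

# Start the agent and load the key in the same shell invocation, so the
# SSH_AUTH_SOCK/SSH_AGENT_PID variables it exports are still in scope
# when ssh-add runs. (~/.ssh/id_rsa is an assumed key path.)
eval "$(ssh-agent -s)" && ssh-add ~/.ssh/id_rsa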

In your current setup, you need to evaluate the risk/cost of using a passphrase-protected private SSH key.
As mentioned here, for an automated process, using a passphrase-less key would be the recommended option, provided you are sure there is no easy way to access said private key.
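If you go that route, a minimal sketch of generating a dedicated passphrase-less key (the file name and comment are assumptions):

# -N "" sets an empty passphrase; -f picks a dedicated file so this
# automation-only key never replaces your personal key.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/deploy_key -C "deploy-automation"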

Related

Prebuilding a docker image *within* a Github Codespace when the image relies on the organization's other private repositories?

I'm exploring how best to use Github Codespaces for my organization. Our dev environment consists of a Docker dev environment that we run on local machines. It relies on pulling other private repos we maintain via the local machine's ssh-agent. I'd ideally like to keep things as consistent as possible and have our Codespaces solution use the same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create a new one... but I'd like to avoid this. Ideally, I keep the same dev experience and am able to get the codespace to prebuild by building the docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an SSH key and storing it as a user codespace secret, then setting up the ssh-agent with that key as part of the postCreateCommand. My understanding is that this would not work with the onCreateCommand because "it will not typically have access to user-scoped assets or secrets." To reiterate, this works for automated building, but not pre-building.
From this Github issue it looks like cloning via ssh is a complete no-go with prebuilds because ssh will need a user-defined ssh key, which isn't available from the onCreateCommand. The only potential workaround I can see for this is having an organization-wide read-only ssh-key... which seems potentially even sketchier than having user-created ssh keys as user secrets.
The other possibility I can think of is switching to https for the git clones. This would require adding access to the other repos, which is no big deal. But I can't quite see how to get access from within the Docker image. When I tried this, I got errors because I was asked for a username and password when I ran a git clone from within Docker, even though git clone worked fine in the base codespace. Is there a way to forward whatever tokens GitHub uses for access to other repos into the docker build process? Is there a way to have user-generated tokens passed into the docker build process and use those for access instead?
Thoughts and roasts welcome.
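One possible direction for that last idea, purely as a sketch and not from the original thread: Docker BuildKit can mount a token as a build secret so it never lands in an image layer, and a git insteadOf rewrite makes clones inside the build authenticate over https. The secret id, env variable, and repo name below are all assumptions:

# Host side: forward a token into the build as a BuildKit secret
# (GITHUB_TOKEN is assumed to hold a token with read access to the repos;
# older Docker versions may need src=FILE instead of env=...).
docker build --secret id=gh_token,env=GITHUB_TOKEN .

# Dockerfile side (requires the BuildKit syntax directive, e.g.
# "# syntax=docker/dockerfile:1", as the first line):
RUN --mount=type=secret,id=gh_token \
    git config --global \
      url."https://x-access-token:$(cat /run/secrets/gh_token)@github.com/".insteadOf \
      "https://github.com/" \
 && git clone https://github.com/my-org/private-repo.git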

How can I open VS code "in container" without it restarting itself and losing shell settings when "Reopen in container" is invoked?

I have a development use-case where I use a script to configure a shell with docker-machine or other environment and then open a directory containing source and settings (/.vscode/, .devcontainer/) that I can edit/build/debug in the VS code Remote Containers extension.
In short, I'm looking to implement the following sequence when a "start-development.sh" script/hook runs:
Set up host-side env or remote resources (reverse sshfs to mount source to a remote docker-machine, modprobe, docker buildx, xhost for x-passthrough, etc.)
Run VS Code from that shell (so my settings aren't thrown away), opening a specified directory (which may be mounted via sshfs or other means) in container, not just on the host
Run cleanup scripts to clean up and/or destroy real resources (unmount, modprobe -r, etc.) when the development container is stopped (by either closing VS Code or rebuilding the container).
See this script for a simple example of auto-configuring a shell with an AWS instance via docker-machine. I'll be adding a few more examples to this repository over the coming day or so.
It's easy enough to open VS Code in that directory (code -w -n --folder-uri /path/here) and wait for it to quit (so I can perform cleanup steps like taking down the remote docker-machine, un-mounting reverse-sshfs mounted code or disabling kernel mods I use for development, etc.).
However, VS Code currently opens in "host mode", and when I choose "Reopen in container" or "Rebuild container" via the UI or command palette, it kills that process and opens another top-level(?) process, quitting the shell and throwing away my configuration, and/or prematurely running the cleanup portion of my script, so it has the wrong environment when it finally launches in-container. Sadness.
So finally, my question is:
Is there a way to tell VS Code to open a folder "in-container"? This would solve a ton of problems for me, instead of a janky dev cycle where I have to ensure that the code instance isn't restarting itself and messing things up whenever I rebuild the container, for example.
Alternatively, it'd be great not to quit the top-level code process I started altogether, enabling me to wait on it, or perhaps monitor it in other ways I'm not aware of, to prevent erasure of my settings and a premature run of my cleanup script.
Thanks in advance!
PS: Please read the entire question before flagging it as "not related to development". If the idea of a zero-install development environment for a complex native project, live on-device development/debugging or deep learning using cloud instances with giant GPUs for Docker where you don't have to manually manage everything and write pages of readmes appeals to you - this is very much about programming.
After a whole weekend of trying different things, I finally figured it out! The key was this section in the awesome articles about advanced container configuration.
I put that into a bash script and used jq to merge docker.host and other docker env settings into .vscode/settings.json. See this example here.
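For illustration, a sketch of that kind of jq merge (the docker.host value and settings path are assumptions, not the actual script):

# Merge a docker.host entry into the workspace settings, creating the
# file first if needed; jq's "+" does a shallow object merge.
settings=.vscode/settings.json
[ -f "$settings" ] || echo '{}' > "$settings"
jq --arg host "tcp://192.168.99.100:2376" '. + {"docker.host": $host}' \
    "$settings" > "$settings.tmp" && mv "$settings.tmp" "$settings"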
After running a script that generates this file, the user will only need to reload/relaunch VS code in that workspace folder (where the settings were created) and yay, everything works as expected.
I plan to add some actual samples now that I have the basics working. Unfortunately, I had to separate my create and teardown as separate activate and deactivate hooks. Not a bad workflow still, IMO.

Docker and SSH for development with phpStorm

I am trying to set up a small development environment using Docker. The phpStorm team is working hard on getting Docker integrated for the remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the one in this post, say it is not recommended. I have read others that say to have a dedicated SSH Docker container, which I don't get how to fit into this environment.
I am already creating a user, docker-user (check the repo here), for certain tasks like running composer without root permissions. That could easily be used for this SSH stuff by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public ssh key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
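A rough Dockerfile sketch of that setup (the base image, user name, and key file are assumptions):

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y openssh-server sudo \
 && useradd -m docker-user \
 && echo 'docker-user ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/docker-user \
 && mkdir -p /home/docker-user/.ssh /var/run/sshd
# The public half of the key pair the IDE will authenticate with.
COPY id_rsa.pub /home/docker-user/.ssh/authorized_keys
RUN chown -R docker-user:docker-user /home/docker-user/.ssh \
 && chmod 700 /home/docker-user/.ssh \
 && chmod 600 /home/docker-user/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]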
One solution that I've thought of, but not yet had the time to implement, would be to make some sort of SSH service that would be a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.

Moving from Docker Containers to Cloud Foundry containers

Recently I started to practice Docker. Basically, I am running a C application in a Docker container. Now I want to try Cloud Foundry and am therefore trying to understand the difference between the two.
I'll describe the application as a novice because I am.
I start the application as a service (from /etc/init.d); during startup it reads a config file, which specifies which modules to load and the IPs of the other services and its own (0.0.0.0 does not work, so I have to give the actual IP).
I had to manually update the IP and some other details in the config file when the container starts, so I wrote a startup script that makes all the changes when the container starts and then runs the service start command.
Now, moving on to Cloud Foundry, the first thing I was not able to find is how to deploy a C application; then I found a C buildpack and a binary buildpack option. I still have to try those, but what I am not able to understand is how I can provide a startup script to a Cloud Foundry container, or in brief, how to achieve what I was doing with Docker.
The last option I have is to use Docker containers in Cloud Foundry, but I want to understand whether I can achieve what I described above.
I hope I was clear enough to explain my doubt.
Help appreciated.
An old question, but a lot has changed since this was posted:
Recently I started to practice Docker. Basically, I am running a C application in a Docker container. Now I want to try Cloud Foundry and am therefore trying to understand the difference between the two.
...
The last option I have is to use Docker containers in Cloud Foundry, but I want to understand whether I can achieve what I described above.
There's nothing wrong with using Docker containers on CF. If you've already got everything set up to run inside a Docker container, being able to run that on CF gives you yet another place you can easily deploy your workload.
While these are pretty minor, there are a couple of requirements for your Docker container, so it's worth checking them to make sure it's possible to run on CF.
https://docs.cloudfoundry.org/devguide/deploy-apps/push-docker.html#requirements
Anyways, I am not working on this now, as CF is not suitable for the project. It's a SIP application, and CF only accepts HTTP/S requests.
OK, the elephant in the room. This is no longer true. CF has support for TCP routes. These allow you to receive TCP traffic directly to your application. This means it's no longer just HTTP/S apps that are suitable for running on CF.
Instructions to set up your CF environment with TCP routing: https://docs.cloudfoundry.org/adminguide/enabling-tcp-routing.html
Instructions to use TCP routes as a developer: https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-route-with-port
Now, moving on to Cloud Foundry, the first thing I was not able to find is how to deploy a C application; then I found a C buildpack and a binary buildpack option.
Picking a buildpack is an important step. The buildpack takes your app and prepares it to run on CF. A C buildpack might sound nice, as it would take your source code and build and run it, but it's going to get tricky, because your C app likely depends on libraries that may or may not be installed in the container.
If you're going to go this route, you'll probably need to use CF's multi-buildpack support. This lets you run multiple buildpacks. If you pair this with the Apt buildpack, you can install the packages that you need so that any required libraries are available for your app as it's compiled.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
https://github.com/cloudfoundry/apt-buildpack
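A hypothetical manifest for that combination (the app name, package, and start command are assumptions, not from the docs above):

# manifest.yml: the apt buildpack runs first and installs packages; the
# final buildpack (here, binary) supplies the start behaviour.
---
applications:
- name: my-c-app
  buildpacks:
  - https://github.com/cloudfoundry/apt-buildpack
  - binary_buildpack
  command: ./my-app

# apt.yml, pushed alongside the app: the packages to install.
---
packages:
- libevent-dev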
Using the binary buildpack is another option. In this case, you'd build your app locally, perhaps in a Docker container or on an Ubuntu VM (it needs to match the stack being used by your CF provider, i.e. cf stacks, currently Ubuntu Trusty or Ubuntu Bionic). Once you have a binary, or a binary plus a set of libraries, you can simply cf push the compiled artifacts. The binary buildpack will "run" (it actually does nothing) and then your app will be started with the command you specify.
My $0.02 only, but the binary buildpack is probably the easier of the two options.
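As a sketch, with the image tag and app/binary names as assumptions, building against a matching root filesystem and pushing the result could look like this:

# Compile inside the CF stack's rootfs image so linked libraries match the
# target (assuming the toolchain is present in the image, or installed first),
# then push just the artifact with the binary buildpack.
docker run --rm -v "$PWD":/src -w /src cloudfoundry/cflinuxfs3 make my-app
cf push my-c-app -b binary_buildpack -c './my-app'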
what I am not able to understand is how I can provide a startup script to a Cloud Foundry container, or in brief, how to achieve what I was doing with Docker.
There are a few ways you can do this. The first is to specify a custom start command. You do this with cf push -c 'command'. This would normally be used just to start your app, like './my-app', but you could also use it to do other things.
Ex: cf push -c './prep-my-app.sh && ./my-app'
Or even just call your start script:
Ex: cf push -c './start-my-app.sh'.
CF also has support for a .profile script. This can be pushed with your app (at the root of the files you push), and it will be executed by the platform prior to your application starting up.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
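A minimal .profile sketch in the spirit of the asker's Docker startup script (the config path and placeholder token are assumptions; CF_INSTANCE_INTERNAL_IP is an environment variable the platform sets in the container):

# .profile is executed by the platform before the start command runs.
# Substitute the container's IP into the app's config file.
sed -i "s/__MY_IP__/${CF_INSTANCE_INTERNAL_IP}/" config/app.conf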
Normally, you'd want to use a .profile script, as you'd want to let the buildpack decide how to start your app (setting -c will override the buildpack), but in your case with the C or binary buildpacks, it's unlikely the buildpack will be able to do that, so you'll end up having to set a custom start command anyway.
For this specific case, I'd suggest using cf push -c as it's slightly easier, but for all other cases and apps deployed with other buildpacks, I'd suggest a .profile script.
Hope that helps!

Rsync over SSH from an ant script with a password

I have a virtual machine running on my developer machine, and I need to rsync files to it over SSH via an ant build script to "deploy". In production, security is a concern, but I really don't care about secure SSH practices when communicating with a dev VM on my local machine.
I could have created a cert and installed it in my SSH keys, but that's a little annoying. I'd much rather just send my password to rsync via the ant script and call it a day.
(EDIT - If you reeeeally can't handle this question without an example, let's assume this server is outside my control, and their evil sysadmin refuses to allow me to sign in with an SSH key for whatever reason. Who knows? He's just crazy man!)
Is there any way to invoke SSH, or more specifically rsync in non-interactive mode, without editing your ssh config? In other words, just supply the password?
I happen to have already figured out a solution to this, but it wasn't very easy, so I wanted to share it.
Basically, I used a command-line program called expect to feed my password into rsync's interactive prompt. I also didn't want to have to write it up as a script, so I condensed it into a single command. This also works for ssh as well as rsync, if you need that for some reason.
Maybe there's a better way, but this seems to work fine.
192.168.64.131 is obviously my local VM's IP in the following. Replace login_name and login_password with your SSH login and password.
expect -c 'spawn rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/; expect "*?assword:*" {send "login_password\r"; interact};'
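A simpler alternative, not part of the original answer, is sshpass, if installing an extra package is an option. The same caveat applies: the password is visible in the process list and shell history.

# sshpass feeds the password to ssh non-interactively.
sshpass -p 'login_password' rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/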
It is much easier and more secure to use an SSH key. An example is given in the following answer:
Ant, download fileset from remote machine
