I am trying to set up a small development environment using Docker. The PhpStorm team is working hard on getting Docker integrated for the remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the author of this post, say it is not recommended. I have read others who say to have a dedicated SSH Docker container, but I don't get how that would fit into this environment.
I am already creating a user docker-user (check the repo here) for certain tasks, like running Composer without root permissions. That user could easily be reused for this SSH access by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public SSH key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
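For what it's worth, a minimal sketch of that setup, assuming a Debian/Ubuntu-based container named app with an existing user docker-user (both names are placeholders for your own):

# Copy the public key in and fix ownership/permissions:
docker exec app mkdir -p /home/docker-user/.ssh
docker cp ~/.ssh/id_rsa.pub app:/home/docker-user/.ssh/authorized_keys
docker exec app chown -R docker-user:docker-user /home/docker-user/.ssh
docker exec app sh -c 'chmod 700 /home/docker-user/.ssh && chmod 600 /home/docker-user/.ssh/authorized_keys'
# Passwordless sudo (assumes sudo is installed in the image):
docker exec app sh -c 'echo "docker-user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/docker-user'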
One solution that I've thought of, but not yet had the time to implement, would be to make some sort of SSH service that would be a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.
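Roughly, that idea would be a dedicated account on the Docker host whose forced command drops the connecting user straight into the container. A sketch of the sshd_config piece (untested; user and container names are made up):

# In /etc/ssh/sshd_config on the Docker host:
Match User docker-gateway
    ForceCommand /usr/bin/docker exec -it app bash

The docker-gateway account would also need permission to run docker (e.g. membership in the docker group), which is itself a security trade-off worth weighing.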
Related
I'm exploring how best to use GitHub Codespaces for my organization. Our dev environment is a Docker setup that we run on local machines. It relies on pulling other private repos we maintain via the local machine's ssh-agent. Ideally, I'd like to keep things as consistent as possible and have our Codespaces solution use the same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create a new one... but I'd like to avoid this. Ideally, I keep the same dev experience and am able to get the codespace to prebuild by building the docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an SSH key and storing it as a user Codespaces secret, then setting up the ssh-agent with that key as part of the postCreateCommand. My understanding is that this would not work with the onCreateCommand because it "will not typically have access to user-scoped assets or secrets". To reiterate, this works for automated building, but not for prebuilding.
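For reference, the hacky version is only a couple of lines run as the postCreateCommand (MY_DEPLOY_KEY is a hypothetical user secret holding the private key):

eval "$(ssh-agent -s)"
echo "$MY_DEPLOY_KEY" | tr -d '\r' | ssh-add -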
From this GitHub issue it looks like cloning via SSH is a complete no-go with prebuilds, because SSH will need a user-defined SSH key, which isn't available from the onCreateCommand. The only potential workaround I can see for this is having an organization-wide read-only SSH key... which seems potentially even sketchier than having user-created SSH keys as user secrets.
The other possibility I can think of is switching to HTTPS for the git clones. This would require adding access to the other repos, which is no big deal. BUT I can't quite see how to get access from within the Docker image. When I tried this, I was getting errors because I was asked for a username and password when I ran a git clone from within Docker... even though git clone worked fine in the base codespace. Is there a way to forward whatever tokens GitHub uses for access to other repos into the docker build process? Is there a way to have user-generated tokens passed into the docker build process and used for access instead?
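One avenue I have not verified against Codespaces prebuilds, but which at least keeps the token out of image layers, is a BuildKit build secret (secret id and repo path are placeholders; the env= form needs a reasonably recent Docker/buildx):

# On the host, with a token in the GITHUB_TOKEN environment variable:
docker build --secret id=gh_token,env=GITHUB_TOKEN -t myimage .
# Inside the Dockerfile, mount the secret only for the clone step:
# RUN --mount=type=secret,id=gh_token \
#     git clone https://x-access-token:$(cat /run/secrets/gh_token)@github.com/my-org/private-repo.git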
Thoughts and roasts welcome.
I have a Rails app, using Docker, that makes some automated changes to another app and then git-pushes those changes up to GitHub. It took me a bit of time to be able to get my SSH keys onto the Docker container, in a somewhat similar manner (not fully happy with it, but I will change it up after I sort this out). My issue now is that the git clones run in the Dockerfile work fine, but when the same thing runs from my Rails code, it fails saying that I don't have access, so in the code I try to re-run ssh-add for the keys. However, it then says Could not open a connection to your authentication agent., so I try to re-initialise the ssh-agent (echo $(ssh-agent -s)), which seems to succeed, but ssh-add still fails.
If I SSH in and try those steps, it works fine, but if I go in via rails console and run the functions that make these shell calls, it fails with the same problem. It seems that the environment variables the ssh-agent call is supposed to set are not actually being set. I have a feeling that Heroku containers do not allow changing environment variables without going through their heroku config:set, but that isn't possible here, as each process will have a different SSH_AUTH_SOCK and SSH_AGENT_PID. Any suggestions on how to deal with this would be a massive help.
This error normally happens when you don't have an active SSH agent running.
Could not open a connection to your authentication agent.
This is quite common with Debian-based systems, whereas most Ubuntu installations have one running at all times.
To fix this, you just need to start a new agent.
eval $(ssh-agent)
This should be run before ssh-add.
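This likely also explains why the echo $(ssh-agent -s) attempt in the question did not help: echo merely prints the export statements, whereas eval actually executes them in the current shell. A minimal sequence (the key path is just the common default):

eval "$(ssh-agent -s)"   # exports SSH_AUTH_SOCK and SSH_AGENT_PID into this shell
ssh-add ~/.ssh/id_rsa    # now ssh-add can reach the agent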
In your current setup, you need to evaluate the risk/cost of using a passphrase-protected private SSH key.
As mentioned here, for an automated process, using a passphrase-less key would be the recommended option, provided you are sure there is no easy way to access said private key.
Docker is a wonderful tool for running/deploying your application in a well-defined, controlled environment, and is well supported by e.g. the GitLab CI or by MS Azure.
We would like to use it also in the development phase, so that all developers have the same environment available. Of course, we want to keep the image as light as possible and we do not want e.g. any IDE or other development tool inside of it.
So the actual development takes place outside of docker.
Running our (python) application inside of docker is no problem, but debugging it is not trivial: I do not know of a way to attach a debugger to an application running inside docker. In theory this should be possible, but how does one do it?
Additional info: we use Visual Studio Code, which does have a Docker plugin, but nothing of this sort is mentioned there.
Turns out that this is possible, following the same steps needed for remote debugging.
The IP address of the Docker container can be retrieved through:
docker inspect <container_id> | grep -i ip
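If you prefer to avoid grepping, the same information can be pulled out with docker inspect's built-in Go-template formatting:

docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>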
just be sure to add at the beginning of your application:
import ptvsd
# Allow other computers to attach to ptvsd at this IP address and port, using the secret
ptvsd.enable_attach(secret=None, address=('0.0.0.0', 3000))
ptvsd.wait_for_attach()
'0.0.0.0' means on all interfaces.
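Also make sure the debug port is actually reachable; if you rely on port mapping rather than the container IP, publish it when starting the container (the image name here is a placeholder):

docker run -p 3000:3000 my-python-app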
For VS Code, the last step consists of adapting the Python: Attach configuration, specifying the address and the remote and local roots for your script.
However, for some mysterious reason my breakpoints are ignored.
On my Windows Server 2016 machine, I am trying to figure out the run command syntax to run a Docker image as a user in my LDAP. I read this article, but I am not following it very well (different environments).
Perhaps I am misunderstanding the concept altogether, but in the end I need to run the container as a specific user in our Active Directory.
Any links to well-documented run --user examples would be appreciated...
One of the things that is confusing is trying to figure out the user ID and such...
The answer depends on the use case, but maybe gMSA (group Managed Service Account) authentication would help? Basically, with gMSA authentication, you can add the host OS to an AD domain, and containers running on it can share the privileges to use things like network drives. That way, you don't need to pass credentials every time you access them.
The MS team has a good write-up on it here:
Active Directory Service Accounts for Windows Containers
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
Also, artisticcheese has a fantastic walkthrough:
Enabling integrated Windows Authentication in windows docker container
https://artisticcheese.wordpress.com/2017/09/09/enabling-integrated-windows-authentication-in-windows-docker-container/
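In practice, once the host is domain-joined and a credential spec file has been generated, starting a container under the gMSA looks roughly like this (the spec file name and image tag are assumptions; the spec lives in Docker's CredentialSpecs directory on the host):

docker run --security-opt "credentialspec=file://webapp01.json" -d mcr.microsoft.com/windows/servercore:ltsc2016 ping -t localhost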
Hope this helps.
I have a virtual machine running on my developer machine, and I need to rsync files to it over SSH via an ant build script to "deploy". In production, security is a concern, but I really don't care about secure SSH practices when communicating with a dev VM on my local machine.
I could have created a key pair and installed it for SSH, but that's a little annoying. I'd much rather just send my password to rsync via the ant script and call it a day.
(EDIT - If you reeeeally can't handle this question without an example, let's assume this server is outside my control, and their evil sysadmin refuses to allow me to sign in with an SSH key for whatever reason. Who knows? He's just crazy man!)
Is there any way to invoke SSH, or more specifically rsync in non-interactive mode, without editing your ssh config? In other words, just supply the password?
I happen to have already figured out a solution to this, but it wasn't very easy, so I wanted to share it.
Basically, I used a command-line program called "expect" to feed my password into rsync's interactive prompt. I also didn't want to have to write it up as a script, so I condensed it into a single command. This also works for ssh as well as rsync, if you need that for some reason.
Maybe there's a better way, but this seems to work fine.
192.168.64.131 is obviously my local VM's ip in the following. Replace login_name and login_password with your ssh login & pass.
expect -c 'spawn rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/; expect "*?assword:*" {send "login_password\r"; interact};'
It's much easier and more secure to use an SSH key. An example is given in the following answer:
Ant, download fileset from remote machine
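For completeness, the key-based setup for the scenario in the question is only a few commands (host and paths taken from the question above; the key file name is arbitrary):

ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519              # passphrase-less key for the local dev VM
ssh-copy-id -i ~/.ssh/id_ed25519.pub login_name@192.168.64.131
rsync -avz -e "ssh -i ~/.ssh/id_ed25519" ./ login_name@192.168.64.131:/var/www/auth/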