I am using Docker for macOS / Windows.
I connect to external servers via ssh from a shell inside a Docker container.
For now, I generate an ssh key in the container shell and manually send the public key to the servers.
However, with this method, every time I rebuild the container the ssh key is deleted.
So I want to set up an initial ssh key when I build the image.
I have two ideas:
1. Mount my .ssh folder from macOS into the container and persist it.
(Permission control might be difficult and complex...)
2. Write a script that generates the ssh key and sends it to the servers, in docker-compose.yml or the Dockerfile.
(Every time I build, a new key is sent...??)
Which is the best practice? Or do you have any ideas for setting the ssh key automatically?
Best practice is typically to not make outbound ssh connections from containers. If what you’re trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into an image. If it’s data your application needs to run, again fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh’s Unix permission requirements, is incredibly tricky. If I really didn’t have a choice but to do this I’d write an ENTRYPOINT script that copied the private key from a bind-mounted volume to my container user’s .ssh directory. But my first choice would be to redesign my application flow to not need this at all.
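If you do go the ENTRYPOINT route, a minimal sketch might look like the following (the /run/secrets/id_rsa path is just an assumption for wherever you bind-mount the key; adjust to your setup):

#!/bin/sh
# entrypoint.sh -- copy an ssh key from a read-only bind mount into ~/.ssh,
# so the in-container copy gets the ownership and permissions ssh insists on.
set -e

mkdir -p "$HOME/.ssh"
cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"   # bind-mounted key (path is an assumption)
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/id_rsa"

exec "$@"   # hand off to the container's main command

In the Dockerfile you would COPY the script in, mark it executable, and set ENTRYPOINT ["/entrypoint.sh"]; at run time you would mount the key with something like docker run -v ~/.ssh/id_rsa:/run/secrets/id_rsa:ro ... so it never gets baked into the image.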
After reading the "I'm a windows user .." comment, I'm thinking you are solving the wrong problem. You are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel in your cmd :-)
How I install it.
Download and install it (be careful to only pick the features beyond the base that you need -- there is a LOT and most of it you will not need, like the compilers and X). Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. run 'cygpath -wp $PATH'
b. look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;..." -- simply all the paths that start with "C:\cygwin", provided you installed Cygwin into the "C:\Cygwin" directory.
c. Add these paths to your system path (see the sketch after this list for one way to do it)
d. Start a new instance of CMD. Run 'ls'; it should now work directly in the Windows shell.
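For step c, one way to append the folders to your user PATH from PowerShell (a sketch; paths assume Cygwin was installed to C:\cygwin):

# append the Cygwin folders to the user-level PATH
$current = [Environment]::GetEnvironmentVariable('Path', 'User')
[Environment]::SetEnvironmentVariable('Path', "$current;C:\cygwin\bin;C:\cygwin\usr\local\bin", 'User')

You can also do the same thing through the System Properties / Environment Variables dialog if you prefer clicking.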
Extra credit.
a. move all the ".xxx" files that were created during the first launch of the shell from your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>).
b. exit any bash shells you have running
c. delete c:\cygwin\home directory
d. use the Windows mklink utility (from an Administrator shell) to create a link named home under Cygwin pointing to C:\Users: 'mklink /J C:\Cygwin\home C:\Users'
This will make your windows home directory the same as your cygwin home.
After that you follow the normal setup for ssh under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the propagation of inherited permissions from Windows to your <home>/.ssh folder (in the folder's security settings), leaving just your user ID. Then set permissions on the folder and the various key files underneath appropriately for SSH using 'chmod', for example as sketched below.
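A minimal sketch of the usual key setup from the Cygwin bash prompt (the key type and server name are placeholders; if ssh-copy-id isn't installed, append the .pub file to the server's ~/.ssh/authorized_keys by hand):

ssh-keygen -t ed25519                       # generate a key pair under ~/.ssh
ssh-copy-id user@your-server.example.com    # placeholder host; installs the public key
chmod 700 ~/.ssh                            # ssh refuses to use keys with loose permissions
chmod 600 ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_ed25519.pub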
Enjoy -- some days I have to squint to remember I'm on a windows box ...
Related
It is certainly a basic question, but I don't know how to deal with this issue.
I am creating a simple Docker image that executes Python scripts and will be deployed on different users' Windows laptops. It needs a specific shared folder in order to write its outputs at the end of the process.
The users are not able to manage any technical tooling like Docker or even a simple terminal.
So they run it with a .bat file in which I put the docker command with the -v option (roughly as sketched below).
But obviously the users' paths are different on each laptop. How can I create a standard image that avoids this machine-specific mount path?
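For reference, the .bat file currently looks something like this (the image name and host path are placeholders; the host path is the part that differs per laptop):

@echo off
REM run the image and mount a host folder to receive the outputs
docker run --rm -v "C:\Users\alice\Documents\outputs:/app/outputs" my-python-image
pause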
thanks a lot
I have a development use case where I use a script to configure a shell with docker-machine or another environment and then open a directory containing source and settings (.vscode/, .devcontainer/) that I can edit/build/debug in the VS Code Remote Containers extension.
In short, I'm looking to implement the following sequence when a "start-development.sh" script/hook runs:
Set up host-side env or remote resources (reverse sshfs to mount source to a remote docker-machine, modprobe, docker buildx, xhost for x-passthrough, etc.)
Run VS Code from that shell, so its settings aren't thrown away, with a specified directory (which may be mounted via sshfs or other means) opened in the container, not just on the host
Run cleanup scripts to tear down and/or destroy the real resources (unmount, modprobe -r, etc.) when the development container is stopped (by either closing VS Code or rebuilding the container).
See this script for a simple example of auto-configuring a shell with an AWS instance via docker-machine. I'll be adding a few more examples to this repository over the coming day or so.
It's easy enough to open VS Code in that directory (code -w -n --folder-uri /path/here) and wait for it to quit (so I can perform cleanup steps like taking down the remote docker-machine, un-mounting reverse-sshfs mounted code or disabling kernel mods I use for development, etc.).
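For context, the wrapper flow I'm describing is roughly the following sketch (the machine name is a placeholder, and the sshfs/modprobe steps mentioned above are omitted for brevity):

#!/usr/bin/env bash
# start-development.sh -- configure the environment, run VS Code until it exits, then clean up
set -euo pipefail

docker-machine start dev-box                 # placeholder machine name
eval "$(docker-machine env dev-box)"         # point this shell's docker CLI at the remote daemon

code -w -n --folder-uri "$PWD"               # -w blocks until this VS Code window is closed

# teardown once VS Code quits
docker-machine stop dev-box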
However, VS Code currently opens in "host mode", and when I choose "Reopen in container" or "Rebuild container" via the UI or command palette, it kills that process and opens another top-level(?) process, quitting the shell and throwing away my configuration and/or prematurely running the cleanup portion of my script, so it has the wrong environment when it finally launches in-container. Sadness.
So finally, my question is:
Is there a way to tell VS Code to open a folder "in-container"? This would solve a ton of problems for me, instead of a janky dev cycle where I have to ensure that the code instance isn't restarting itself and messing things up - whenever I rebuild the container, for example.
Alternatively, it'd be great to not have the top-level code process I started quit at all, so I could wait on it, or perhaps monitor it in ways I'm not aware of, to prevent my settings being erased and my cleanup script running prematurely.
Thanks in advance!
PS: Please read the entire question before flagging it as "not related to development". If the idea of a zero-install development environment for a complex native project, live on-device development/debugging or deep learning using cloud instances with giant GPUs for Docker where you don't have to manually manage everything and write pages of readmes appeals to you - this is very much about programming.
After a whole weekend of trying different things, I finally figured it out! The key was this section in the awesome articles about advanced container configuration.
I put that into a bash script and used jq to merge docker.host and other docker env settings into .vscode/settings.json. See this example here.
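A minimal sketch of that merge step, assuming DOCKER_HOST has already been set by docker-machine env in the current shell (the other settings I merge are omitted here):

#!/usr/bin/env bash
# write the current DOCKER_HOST into the workspace's .vscode/settings.json
set -euo pipefail

SETTINGS=.vscode/settings.json
mkdir -p .vscode
[ -f "$SETTINGS" ] || echo '{}' > "$SETTINGS"

# merge docker.host (e.g. tcp://1.2.3.4:2376) without clobbering other keys
jq --arg host "$DOCKER_HOST" '. + {"docker.host": $host}' "$SETTINGS" > "$SETTINGS.tmp" \
  && mv "$SETTINGS.tmp" "$SETTINGS"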
After running a script that generates this file, the user only needs to reload/relaunch VS Code in that workspace folder (where the settings were created) and, yay, everything works as expected.
I plan to add some actual samples now that I have the basics working. Unfortunately, I had to split my create and teardown steps into separate activate and deactivate hooks. Still not a bad workflow, IMO.
Motivation
As of now, we are using five Docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows into the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it does seem to require us to have mirrored files. We also use WSL.
As @FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and Docker inside a Linux VM on Windows, and I get a 96% time saving in things like running up a server and watching code for changes, making this setup my preferred way now.
The standardisation of devcontainer.json and being able to use GitHub Codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your docker host and code on it, but in my view this is less good because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleague's environments and off the docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission on the docker daemon anyway, this is probably achievable. A rough sketch follows.
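For illustration, a minimal devcontainer.json along these lines (the image name and paths are placeholders, and a compose-based setup would use dockerComposeFile/service instead of image):

{
  "name": "myapp-dev",
  "image": "myapp-dev:latest",
  "workspaceMount": "source=/srv/myapp,target=/workspace,type=bind",
  "workspaceFolder": "/workspace",
  "mounts": [
    "source=/srv/data/sql,target=/var/lib/mysql,type=bind",
    "source=/srv/data/redis,target=/data,type=bind"
  ]
}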
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - GitHub is our "sync" mechanism. We previously used Vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. To be real, you probably can't work like this without touching bash. However, you can install PSv7 in the container, and/or a 'better' shell (opinion mine) such as zsh.
Summary
So I'm trying to figure out a way to use Docker to be able to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install to a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME) as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy, and I figured it out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser so testers can quickly/easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server on the Docker container, so I could just ssh in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually from the command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built-in docker commands". I want to use ssh because it's simpler from an end user's perspective, and I'd rather tell a tester to ssh to a particular IP than make them interact with Docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately unless I have generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done at image build time, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of the documentation seems to say this should be possible, but I can't get it to work. I keep getting errors when I try to start the container saying "the directory is not empty".
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
I'm running this on a Proxmox VM.
At this point, I'm running out of ideas, and something that I feel should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built-in docker cp command!" when that is honestly a pretty bad solution when you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/Notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the docker volume to initialize on an already populated folder, even though the documentation suggests it should be possible.
So instead, I decided to try starting the container with the volume linked to an empty folder, and then run the installation script for the program after the container is running, so that the folder is populated after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then restart the container, as it will overwrite most of the files; things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that. But it works just like I need it to, and I can then use Windows file sharing to access that volume folder from anywhere on the network.
Here's how I got it working, it's actually very simple.
So in my Dockerfile, I added a batch script that unzips the installation DVD that is copied into the container and runs the installer after extracting (a rough sketch of that script follows the Dockerfile below). I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
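For reference, config.bat is roughly along these lines (the installer name and its silent-install flag are assumptions specific to our software):

@echo off
REM extract the installation media and run the installer unattended
powershell -Command "Expand-Archive -Path C:\resources\DVD.zip -DestinationPath C:\resources\DVD -Force"
REM the /S silent flag is specific to our installer
C:\resources\DVD\setup.exe /S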
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster though, for what that's worth, as it isn't running the installer in the build), and the program isn't installed/running for another 5 minutes or so after the container does start, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm using a Mac and Windows containers are not a "supported platform" there (way to go, Windows). See if that works; if not, try a short-syntax volume line like this instead - ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from host machine to container
ADD ["<source>", "C:\\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\\tmp\\program.exe", "any-parameter"]
Docker Compose
Should ideally be in the parent folder.
version: "3.2"
services:
  windows:
    build: ./folder-of-Dockerfile
    volumes:
      - type: bind
        source: ./my_volume
        target: C:/tmp/
    ports:
      - "9999:9092"
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
|   |
|   |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode, but only once you know it's working properly.
Useful link: Manage Windows Dockerfile
I have a custom AMI that has my app directory and a Docker image. I'm setting up an Auto Scaling Group with a Launch Configuration to create new instances. I have a User Data script to boot up the application. This is the code:
#!/bin/bash
docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
The script runs, but the app doesn't start. I can SSH in and run the app manually, which works. Looking at the cloud-init-output.log file, I see the following:
/var/lib/cloud/instance/scripts/part-001: line 4: docker-compose: command not found
The docker-compose binary is available when I SSH in, as I installed it before creating my custom AMI.
Anything I'm missing?
Regarding your best-practice question: it doesn't matter; either way would suffice.
HakRou is right, however.
The boot strap is operating under a different security context / shell environment so you need to cater for that.
You could also just use the full path to the binary, such as:
/usr/local/bin/docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
and see how that goes.
docker-compose might have been available to the user you used to SSH into your instance (such as ec2-user, ubuntu or admin), but it might not be available to root, and root is the one used with user-data when Amazon spins up a new instance.
So you might want to add a symbolic link to docker-compose in one of the folders on root's $PATH, /usr/bin for example, as sketched below.
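A minimal sketch, assuming docker-compose was installed to /usr/local/bin:

# make docker-compose visible on root's PATH (adjust the source path if it lives elsewhere)
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose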