Start ssh automatically when restarting a docker container - docker

I am new to Nvidia-docker. I made a container and installed tons of software in it. I can SSH into the container, but every time I restart the container I need to restart the SSH service. I want to know how to start SSH automatically.
I saw similar problems in this question, with answers saying to change my Dockerfile, but I do not have a Dockerfile... I created the container from the command line and installed all the software in it, so just consider my container a normal Ubuntu machine.
Thank you for replying, codingwithmanny. I am using Ubuntu, operating only from the command prompt. I feel my question is really: if I did not set up SSH autostart when setting up the Docker container, what can I do now to make up for it?

It's better to use docker exec -it container_name /bin/bash (or /bin/sh) instead of an SSH server:
https://docs.docker.com/engine/reference/commandline/exec/
https://medium.com/better-programming/docker-best-practices-and-anti-patterns-e7cbccba4f19
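
That said, if you still want sshd to come up automatically and you have no Dockerfile, one hedged workaround is to snapshot the container with docker commit and re-run it with a command that starts the SSH service (the container and image names here are placeholders):

# snapshot the current container state as a new image
docker commit my_container my_container_img
# start a new container that launches sshd, then idles so the container stays up
docker run -d -p 2222:22 my_container_img /bin/sh -c "service ssh start && tail -f /dev/null"

Host port 2222 then forwards to the container's sshd on port 22.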

Related

Install a package with Docker in Ubuntu

I want to install a package using Docker, following the instructions at: https://dynamic-fba.readthedocs.io/en/latest/installation.html#installing-from-source
I installed Ubuntu and then Docker, but I don't understand what to do next. The instructions say to type (docker run -it -v ${PWD}:/opt/examples davidtourigny/dfba python3 examples/example1.py). I typed exactly that in Ubuntu, but I get this error:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
Using the alternative Dockerfile method, I also get an error. I don't know how to use make build, so I used build instead, following tutorials on the web.
It's my first time using Docker and I don't know what to do.
Any help is very appreciated.
The Docker application has two components: a back-end server and a front-end CLI. This split lets you do cool things like control Docker remotely, or run orchestration frameworks such as Kubernetes that manage multiple Docker nodes over the network.
For security, the Docker back-end server is not exposed on a normal TCP port; instead it uses a Unix domain socket (Linux magic that makes a file act like a port) at unix:///var/run/docker.sock.
When you execute docker run -it ..., the CLI application attempts to connect to the back-end server; your error means the daemon/server is probably not running.
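A quick hedged way to see both halves of this (the socket path is the one from your error message):

# the socket file the CLI talks to by default
ls -l /var/run/docker.sock
# point the CLI at it explicitly; the Server section will error out if the daemon is down
docker -H unix:///var/run/docker.sock version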
Check that the daemon is running. If you are using systemd, you can check with systemctl status docker, start it if it is stopped with systemctl start docker, and finally enable it with systemctl enable docker to make sure it starts automatically on reboot.
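Putting that together (assuming a non-root user, so sudo is needed for the state-changing commands):

# check the daemon's current state
systemctl status docker
# start it now and enable it for future boots
sudo systemctl start docker
sudo systemctl enable docker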
Make sure to start the Docker service (either systemctl start docker or reboot your computer).
Once this is done, your user probably still lacks permission to talk to Docker without sudo. Docker has privileged access to your hardware, so for security reasons only root and members of the docker group may use it.
Run:
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
This will add you to the docker group, apply the change immediately in your current shell (newgrp), and run a sample image from Docker.
If all went well, the last command should print "Hello from Docker!".

Pycharm use Docker Container Python as Remote Interpreter

I am trying to use the python in a docker container on a remote machine as the interpreter in Pycharm. Since that is a mouthful, here is a diagram:
There is a Jupyter Notebook running in the container, which I am able to connect to through my local browser (although this is just for testing the connection). The command I am using to launch the Docker container is
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 -p 7722:22 --ipc=host latest:latest
I can forward port 8888, which the Jupyter notebook is running on, with ssh -L 8888:0.0.0.0:8888 BBB.BBB.BBB.BBB and thus use it on the local machine. But I don't much like using Jupyter for development and would like to use the Python interpreter in the Docker container from PyCharm.
When I select "Add Python Interpreter" in Pycharm, I get the following options:
The documentation for Pycharm suggests using the "Add Python Interpreter/Docker" tool which looks like this:
However, the documentation doesn't say how to set up the Docker container and the connections when Docker is on a remote machine.
So my questions are: should I use a Unix or a TCP socket to connect to my remote docker? Or should I somehow forward all the relevant ports from the container and use the "SSH Interpreter" option? And if so, how do I set this all up? Am I setting up my Docker Container properly in the first place?
I think I have trawled through every forum and online resource over the last two days, but have not come any closer to getting this to work. I have also tried to get this working in Spyder, but to no avail. So any advice is much appreciated!
Many thanks!
Thank you for depicting the dilemma so poignantly and clearly in your cartoon :-). My colleague and I were trying to do something similar and what ultimately worked beautifully was creating an SSH config directly to the Docker container jumping from the remote machine, and then setting it as a remote SSH interpreter so that pycharm doesn't even realize it's a Docker container. It also works well for vscode.
set up the ssh service in the docker container (a subset of the steps in https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i; the port-22 forwarding steps weren't needed)
docker exec -it <container> bash: open an interactive admin prompt inside the container
apt-get install openssh-server
service ssh start
confirm with service ssh status -> * sshd is running
determine IP and test SSHing from remote machine into container (adapted from https://phoenixnap.com/kb/how-to-ssh-into-docker-container, steps 2 and 3)
from normal command prompt on remote machine (not within container): docker inspect -f "{{ .NetworkSettings.IPAddress }}" <container> to get container IP
test: ping -c 3 <container_ip>
ssh: ssh <container_ip> should drop you into the container as your user; however, this requires the container to be configured properly (the docker run cmd has -v /etc/passwd:/etc/passwd:ro etc.). It may ask for a password. Note: if you later do this for a different container that is assigned the same IP, you will get a warning and may need to delete the previous key from known_hosts; just follow the instructions in the warning.
test SSH from local machine
if you don't have it set up already, set up passwordless ssh key-based authentication to the remote machine with the docker container
make SSH command that uses your remote machine as a jump server to the container: ssh -J <remote_machine> <container_ip>, as described in https://wiki.gentoo.org/wiki/SSH_jump_host; if successful you should drop into the container just as you did from the remote machine
save this setup in your ~/.ssh/config, following the ProxyJump Example from https://wiki.gentoo.org/wiki/SSH_jump_host (see the sketch after this list)
test config with ssh <container_host_name_defined_in_ssh_config>; should also drop you into interactive container
configure pycharm (or vscode or any IDE that accepts remote SSH interpreter)
Preferences -> Project -> Python Interpreter -> Add -> SSH Interpreter -> New server configuration
host: <container_host_name_defined_in_ssh_config>
port: 22
username: <username_on_remote_server>
select the interpreter; you can navigate using the folder icon, which will walk you through paths within the Docker container, or you can enter the result of which python from inside the container
follow pycharm prompts
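
For reference, a hypothetical ~/.ssh/config implementing the jump setup above (the host aliases, user name, and container IP are placeholders; the container IP is the one you got from docker inspect):

Host remote_machine
    HostName BBB.BBB.BBB.BBB
    User timo

Host my_container
    HostName 172.17.0.2
    User timo
    Port 22
    ProxyJump remote_machine

With this in place, ssh my_container should drop you into the container, and my_container is the host name to enter in PyCharm's SSH interpreter dialog.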

How to access a path inside a container from docker-machine

How do I access a path inside a container from docker-machine? I have the docker-machine IP and I want to connect remotely to a Docker image, e.g.:
when I connect with ssh docker@5.5.5.5, all the files I see belong to the docker-machine VM, but I want to connect to a Docker image via SSH.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access them over SSH through docker-machine.
How can I do it?
This is tricky, as Docker is designed to run a single process in the foreground, and a container dies when that process completes. This means Docker containers don't run anything beyond what you define in the Dockerfile or docker-compose.yml.
What you can try is using a docker-compose.yml file to expose port 22 to the outside world (this can also be done on the docker run command line or in the Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs only its one main process.
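As a hedged illustration of that idea from the command line (this assumes the image already has openssh-server installed and host keys generated; the image and container names are placeholders):

# run sshd in the foreground as the container's single process
docker run -d --name test -p 2222:22 myimage /usr/sbin/sshd -D
# then, from the docker-machine VM:
ssh root@localhost -p 2222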
If you're looking to persist files used by containers, so that a re-deployed container starts where it left off, you can mount a folder from the host machine into the container as a volume.

Installing Active Directory or LDAP server in a Windows container

I'd like to test a product using Docker windows containers.
I need a container with Active directory or Ldap server.
Does anyone know how to accomplish that?
Thanks
Your question is not clear about whether you want the container to run on a Windows host, or you want Windows to run inside the container in which you want LDAP.
Regarding the former, you can start OpenLDAP with the following commands on a Windows host machine with Docker for Windows installed:
docker pull osixia/openldap
docker run --name my-openldap-container --detach osixia/openldap
The details of how to configure the OpenLDAP container are given here:
https://github.com/osixia/docker-openldap
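Based on that README, a hedged example of passing initial configuration and smoke-testing the directory (the environment variables and the ldapsearch call follow the project's documentation; the values are placeholders):

docker run --name my-openldap-container --detach \
  --env LDAP_ORGANISATION="My Company" \
  --env LDAP_DOMAIN="my-company.com" \
  --env LDAP_ADMIN_PASSWORD="JonSn0w" \
  -p 389:389 -p 636:636 \
  osixia/openldap
# query the admin entry to confirm the server answers
docker exec my-openldap-container ldapsearch -x -H ldap://localhost \
  -b dc=my-company,dc=com -D "cn=admin,dc=my-company,dc=com" -w JonSn0w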

Consul and Tomcat in the same docker container

This is a two-part question.
First part:
What is the best approach to run Consul and a Tomcat in the same docker container?
I've built my own image, installing both Tomcat and Consul correctly, but I am not sure how to start them. I tried putting both calls as CMDs in the Dockerfile, with no success. I tried putting Consul as the ENTRYPOINT (Dockerfile) and having Tomcat called in the docker run command. It could be vice versa, but I have a feeling neither is a good way.
Docker will run on one AWS instance. Each Docker container will run Consul as a server, to register itself with another AWS instance. Consul and consul-template will be integrated for proper load balancing. This way, my HAProxy instance will be able to correctly forward requests as I plug or unplug containers.
Second part:
In individual tests, the Docker container was able to reach my main Consul server (the leader), but it failed to register itself as an "alive" node.
Reading the logs at the Consul server, I think it is a matter of which ports I am exposing and publishing. In AWS, I have already allowed communication on all TCP and UDP ports between the instances in this particular security group.
Do you know which ports I should expose and publish to allow proper communication between a standalone Consul (AWS instance) and the Consul servers (running inside Docker containers inside another AWS instance)? What is the command to run the Docker container: docker run -p 8300:8300 .........
Thank you.
I would use ENTRYPOINT to kick off a script on docker run.
Something like
ENTRYPOINT ["/myamazingbashscript.sh"]
(the script must be copied into the image and made executable, but you get the idea).
The script should start both services and finally tail -f the Tomcat logs (or any log file).
tail -f prevents the container from exiting, since the tail -f command never exits, and it also helps you see what Tomcat is doing.
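A minimal sketch of such a script (the Consul flags and the Tomcat paths are assumptions; adjust them to your installation):

#!/bin/bash
# start the Consul agent in the background
consul agent -server -bootstrap -data-dir=/tmp/consul &
# start Tomcat (path assumes a standard Tomcat layout)
/usr/local/tomcat/bin/catalina.sh start
# stream the Tomcat log forever so the container never exits
tail -f /usr/local/tomcat/logs/catalina.out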
Do docker logs -f containerName to watch the logs after a docker run.
Note: because the container doesn't exit, you can exec into it with docker exec -it containerName bash.
This lets you have a sniff around inside the container
It is generally not the best approach to have two services in one container, because it destroys separation of concerns and reusability, but you may have valid reasons.
To build, use docker build, then run with docker run as you stated.
If you decide to go for a two-container solution, you will need to expose ports between the containers to allow them to talk to each other. You could share files between containers using volumes_from.
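For completeness, a hedged sketch of that two-container layout (image names and Consul flags are assumptions; 8300, 8301, and 8500 are Consul's server RPC, Serf LAN, and HTTP API ports):

# Consul server container, publishing its main ports (Serf LAN needs TCP and UDP)
docker run -d --name consul -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8500:8500 consul agent -server -bootstrap
# Tomcat container that can reach Consul by its container name
docker run -d --name tomcat --link consul:consul -p 8080:8080 tomcat:9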
