I am trying to use the Python installation inside a Docker container on a remote machine as the PyCharm interpreter. Since that is a mouthful, here is a diagram:
There is a Jupyter Notebook running in the container, which I am able to connect to through my local browser (although this is just for testing the connection). The command I am using to launch the Docker container is
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 -p 7722:22 --ipc=host latest:latest
I can forward port 8888, which the Jupyter notebook is running on, with ssh -L 8888:0.0.0.0:8888 BBB.BBB.BBB.BBB and thus use it on the local machine. But I don't much like developing in Jupyter and would rather use the Python interpreter inside the Docker container from PyCharm.
When I select "Add Python Interpreter" in PyCharm, I get the following options:
The PyCharm documentation suggests using the "Add Python Interpreter/Docker" tool, which looks like this:
However, the documentation doesn't say how to set up the Docker container and the connections when Docker is running on a remote machine.
So my questions are: should I use a Unix or a TCP socket to connect to the remote Docker daemon? Or should I somehow forward all the relevant ports from the container and use the "SSH Interpreter" option? And if so, how do I set this all up? Am I setting up my Docker container properly in the first place?
I think I have trawled through every forum and online resource over the last two days, but have not come any closer to getting this to work. I have also tried to get this working in Spyder, but to no avail. So any advice is very much appreciated!
Many thanks!
Thank you for depicting the dilemma so poignantly and clearly in your cartoon :-). My colleague and I were trying to do something similar, and what ultimately worked beautifully was creating an SSH config directly to the Docker container, jumping via the remote machine, and then setting it as a remote SSH interpreter so that PyCharm doesn't even realize it's a Docker container. It also works well for VS Code.
1. Set up the SSH service in the Docker container (a subset of the steps in https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i; the port 22 steps weren't needed):
   - docker exec -it <container> bash opens an interactive admin prompt in the container
   - apt-get install openssh-server
   - service ssh start
   - confirm with service ssh status -> * sshd is running
2. Determine the container's IP and test SSHing into it from the remote machine (adapted from steps 2 and 3 of https://phoenixnap.com/kb/how-to-ssh-into-docker-container):
   - from a normal command prompt on the remote machine (not within the container), run docker inspect -f "{{ .NetworkSettings.IPAddress }}" <container> to get the container IP
   - test: ping -c 3 <container_ip>
   - ssh <container_ip> should drop you into the container as your user; however, this requires the container to be configured properly (the docker run command includes -v /etc/passwd:/etc/passwd:ro etc.). It may ask for a password. Note: if you later do this for a different container that is assigned the same IP, you will get a warning and may need to delete the previous key from known_hosts; just follow the instructions in the warning.
3. Test SSH from the local machine:
   - if you don't have it set up already, set up passwordless key-based SSH authentication to the remote machine hosting the Docker container
   - build an SSH command that uses the remote machine as a jump server to the container: ssh -J <remote_machine> <container_ip>, as described in https://wiki.gentoo.org/wiki/SSH_jump_host; if successful, you should drop into the container just as you did from the remote machine
   - save this setup in your ~/.ssh/config, following the ProxyJump example from https://wiki.gentoo.org/wiki/SSH_jump_host (see the sketch after these steps)
   - test the config with ssh <container_host_name_defined_in_ssh_config>; it should also drop you into the interactive container
4. Configure PyCharm (or VS Code, or any IDE that accepts a remote SSH interpreter):
   - Preferences -> Project -> Python Interpreter -> Add -> SSH Interpreter -> New server configuration
   - host: <container_host_name_defined_in_ssh_config>
   - port: 22
   - username: <username_on_remote_server>
   - select the interpreter; you can navigate using the folder icon, which walks you through paths within the Docker container, or enter the result of which python run inside the container
   - follow the PyCharm prompts
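For reference, here is a minimal sketch of what the ~/.ssh/config entries from step 3 might look like. The host aliases (remote-box, my-container), the user name, and the addresses are placeholders; substitute your own values:

# the remote machine running Docker
Host remote-box
    HostName BBB.BBB.BBB.BBB
    User timo

# the container, reached by jumping through remote-box
Host my-container
    HostName <container_ip>
    User timo
    ProxyJump remote-box

With this in place, ssh my-container should drop you into the container, and my-container is the host name to give PyCharm in step 4.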
I have a question regarding the following instructions from https://jupyter-docker-stacks.readthedocs.io/en/latest/.
This command pulls the jupyter/datascience-notebook image tagged 6b49f3337709 from Docker Hub if it is not already present on the local host. It then starts an ephemeral container running a Jupyter Server and exposes the server on host port 10000.

docker run -it --rm -p 10000:8888 -v "${PWD}":/home/jovyan/work jupyter/datascience-notebook:6b49f3337709

The command mounts the current working directory on the host ({PWD} in the example command) as /home/jovyan/work in the container. The server logs appear in the terminal.

Visiting http://<hostname>:10000/?token=<token> in a browser loads JupyterLab.
What is the <hostname> supposed to be? I tried my computer's username and "jovyan" but neither worked in my browser.
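Since the docs leave <hostname> abstract: it is the network name of the machine the container runs on, not a user name, so localhost if the container runs on your own machine, otherwise that machine's IP or DNS name. A minimal check, assuming the container runs locally:

docker ps                 # confirm the container is up and the 10000->8888 mapping is in place
docker logs <container>   # the Jupyter startup log prints the full URL, token included
# the log shows the container-internal port 8888; swap it for the mapped
# host port, i.e. browse to http://localhost:10000/?token=<token>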
I used the command below to start the Splunk server using Docker.
docker run -d -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_USER=root" -p "8000:8000" splunk/splunk
But when I open localhost:8000 in a browser, I get a "Server can't be reached" message.
What am I missing here?
I followed this tutorial: https://medium.com/@caysever/docker-splunk-logging-driver-c70dd78ad56a
Depending on your Docker version and host OS, you may be missing a mapping of port 8000 from the VirtualBox VM to the host.
This should not be needed if you are using Hyper-V (Windows host) or xhyve (Mac host), but can still be needed with VirtualBox.
The Docker image is at https://hub.docker.com/r/splunk/splunk/, which documents how to pull and run it. According to that page, the right command is:
docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=<password>" --name splunk splunk/splunk:latest
This works correctly for me. The image uses Ansible to do the configuration once the container has been created. If you do not specify a password, the respective Ansible task will fail and your container will not be configured.
To follow the progress of the container configuration, run this after the command above (given that your container is named splunk):
docker logs -f splunk
Here you will be able to see Ansible's progress in configuring Splunk.
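Once the configuration finishes, a couple of generic checks (nothing Splunk-specific is assumed here) confirm that the container is up and the web port answers:

docker ps --filter name=splunk   # should show the container as Up (and healthy, if the image defines a health check)
curl -I http://localhost:8000    # any HTTP response means the web UI is reachable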
In case you are looking to create a clustered Splunk deployment then, you might want to have a look at this: https://github.com/splunk/docker-splunk
Hope this helps!
I am using Docker for Windows on Windows 10 for development. Before that I used Docker Toolbox on Windows 8, where I was used to "tuning" the host virtual machine, in this case the MobyLinuxVM.
When I try to connect in Hyper-V Manager I get a "cannot connect" error, and docker-machine ls lists no Docker machines. How can I access the underlying machine on Docker for Windows 10?
Problems I want to solve are (aka why I want to connect):
Ubuntu's apt-get doesn't work for me (I am behind a proxy); I get errors like E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial/universe/source/Sources Cannot initiate the connection to 3128:80 (0.0.12.56). - connect (22: Invalid argument). On the other hand, CentOS's yum, curl, etc. work. The http_proxy variables are set.
I want to turn off swap on the host.
Update
Solved the apt-get problem by changing the HTTP proxy configuration in the Docker settings from 1.2.3.4:1234 to http://1.2.3.4:1234/.
Update 2
Worked around the swap problem by modifying /etc/init.d/automount on the host and adding swapoff -a.
I was able to access the host MobyLinuxVM through a container run with various privileges.
First I ran a container like this (note the double slash when mounting the root filesystem; a single slash didn't work for me in PowerShell):
$ docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v //:/host alpine sh
After that, once inside the container, I just did
$ chroot /host
and then I could access everything I needed: /etc/fstab, swapoff -a, and so on (see the sketch below).
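A minimal sketch of the swap part inside that chroot, combining the commands above with the asker's update 2 (the exact contents of /etc/init.d/automount vary between Docker for Windows releases, so verify against your own VM before appending to it):

swapoff -a        # turn swap off for the running session
cat /proc/swaps   # an empty list confirms swap is off
# crude persistence hook, mirroring update 2 above:
echo 'swapoff -a' >> /etc/init.d/automount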
Using this Docker image from Docker Hub (williamyeh/ansible, as in the command below), I'm trying to run an Ansible playbook that would configure the machine on which the container is running.
As an example, I run this:
docker run --net="host" -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
With these options, I can ping localhost, and the inventory and playbook are both accessible.
The inventory is configured to use a local connection:
[executors]
127.0.0.1
[executors:vars]
ansible_connection=local
ansible_user=<my_user_in_docker_host>
ansible_become=True
The group executors is the one referenced from the playbook.
I see that the playbook is trying to connect as root (which is what I get by default when I attach to the container). Specifying -u when running the container doesn't seem to get along with Ansible.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
...followed by errors complaining that commands are not available, even after a successful local connection. That makes no sense to me, given that both root and non-root users can execute them.
Any idea?
This image is designed to serve as a base for other images, and to take advantage of Ansible as a way of provisioning the requirements of the image rather than using the Dockerfile only.
This is stated in the documentation of the Docker image:
Used mostly as a base image for configuring other software stack on some specified Linux distribution(s).
Think of it as a base image for performing CI tasks in a lighter way than other options (VMs, Vagrant...).
Take into account that the good thing about Docker is that it isolates the host from the containers, so you cannot reach the host's files from the containers (except through whatever volumes you bind); otherwise it would be a security problem.
Regards
I was able to use Ansible to configure the host from within a Docker container. However, I didn't use the Docker host network, but a Docker bridge network.
When you start an Ansible playbook in a container, localhost is the localhost of the container itself. That is just fine, because local_action(s) in Ansible run in the container itself and remote actions run on the host.
This is the modified version of your docker run example:
docker run -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
You shouldn't configure the inventory to use localhost or a local connection, but the host machine, connecting via ssh. This is an example:
[executors]
<my_host_ip>
[executors:vars]
ansible_connection=ssh
ansible_user=<my_host_user>
ansible_become=True
Assuming your Docker container is running on the default bridge network, you can find my_host_ip with the following command:
ip addr show docker0
The container will connect with ssh to the docker interface on the host.
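For orientation, on a stock installation the default bridge is usually 172.17.0.0/16, so the output of that command typically contains a line like this (illustrative; check your own output):

inet 172.17.0.1/16 scope global docker0

In that case <my_host_ip> would be 172.17.0.1.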
Some additional hints:
sshd needs to listen on the docker0 interface
iptables/nftables need to allow SSH access from the Ansible container to the docker0 interface
Ansible connects via SSH keys by default; with the -k and/or -K parameters of the ansible-playbook command you can be prompted for passwords instead (see the example run after these hints)
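Putting the pieces together, a minimal sketch of a full run under these assumptions (bridge network, password-based SSH, the same placeholder paths as in the question):

docker run -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml \
    williamyeh/ansible:ubuntu16.04 \
    ansible-playbook -vvvv -i /inventory /playbook.yml -k -K
# -k prompts for the SSH password of <my_host_user>, -K for the become password;
# note that -k requires sshpass to be installed inside the container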
I have an Ubuntu machine, which is a VM with Docker installed in it. I use this machine from my local Windows machine by opening an SSH terminal to it.
Now I am going to take a Docker image which contains all the necessary software, e.g. Apache, installed in it. Later I am going to deploy a sample application (a web application) onto it and save it.
Now I am confused about how to check that the deployed application is running properly, i.e. what would be the address of the container that holds the deployed application?
For example, if I type http://127.x.x.x, which is the address of the Ubuntu machine, I just get a timeout.
Can anyone tell me how to verify the deployed application? Printing the program's output to the console works seamlessly; the only thing I have doubts about is the web application.
There are some possibilities to check whether your app is running.
Remote API
As JimiDini said, one possibility is the Docker Remote API. You can use it to see all running containers (which would be your use case, right?), inspect a certain container, or start and stop containers. The API is a REST API with several bindings for programming languages (listed at https://docs.docker.io/reference/api/remote_api_client_libraries/); some of them are very outdated. To use the Docker Remote API from another machine, I needed to open it explicitly:
docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &
Note that the API is open to the world now! In a real scenario you would need to secure it in some way (e.g. see the example at http://java.dzone.com/articles/securing-docker%E2%80%99s-remote-api).
Docker PS
Run docker ps on your host to list all running containers. If you do not see your app there, it is not running. The listing also shows the ports your app is exposing. You can get the same information via the Remote API.
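For illustration, the listing has columns like these (the values are made-up placeholders; your IDs, images, and ports will differ):

CONTAINER ID   IMAGE            COMMAND           STATUS         PORTS                  NAMES
a1b2c3d4e5f6   my/application   "run-my-app.sh"   Up 5 minutes   0.0.0.0:80->8080/tcp   webapp

The PORTS column is the interesting one here: 0.0.0.0:80->8080/tcp means host port 80 is mapped to container port 8080.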
Logs
You can also check the logs. Run docker attach <container id> to attach to a certain container and see its stdout, or docker logs <container id> to get the Docker logs. What I prefer is to write the logs to a certain directory, e.g. everything to /var/log, and to mount this folder to the host machine. Then all your logs will end up in /home/ubuntu/docker-logs on your host:
docker run -p 80:8080 -v /home/ubuntu/docker-logs:/var/log:rw my/application
A word on ports and IPs
Every container gets its own IP address. You can check this IP address via the Remote API or via Docker on the host machine directly. You can also specify a certain hostname for the container (by passing --hostname="test42" to the run command), but you mostly won't need that.
To access the application in the container, you need to open the port in the container and bind to a port on the host.
In your Dockerfile you need to EXPOSE the port your app runs on:
FROM ubuntu
...
EXPOSE 8080
CMD run-my-app.sh
When you start your container, you need to bind this port to a port of the host:
docker run -p 80:8080 my/application
Now you can access your app on http://localhost:80 or http://127.0.0.1:80.
If your app does not respond, check whether the container is running with docker ps or via the Remote API. If it is not running, check the logs for the reason.
(Note: If you run your Ubuntu VM in something like VirtualBox and you try to access it from your Windows machine, make sure you opened the ports in VirtualBox too!).
A Docker container has a separate IP address. By default it is private (accessible only from the host machine).
Docker provides all metadata (including the IP address) via its API:
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#inspect-a-container
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#monitor-docker-s-events
You can also take a look at a little tool called docker-gen for inspiration. It monitors Docker events and creates configuration files on the host machine using templates.
To obtain the IP address of a Docker container, if you know its ID (a long hex string) or its name:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-or-name>
Docker runs its own network, and to get information about it you can use the following commands:
docker network ls
docker network inspect <network name>
docker inspect <container id>
In the output, you should be able to find the IP.
But there are also a couple of things you need to be aware of regarding the Dockerfile and the docker run command:
when you EXPOSE a port in the Dockerfile, the service in the container is not accessible from outside Docker, only from inside other Docker containers
when you EXPOSE a port and use the docker run -p ... flag, the service in the container is accessible from anywhere, even outside Docker
So for example, if your Apache is running on port 8080, you should expose it in the Dockerfile, and then you can run it as:
docker run -d -p 8080:8080 <image name>, and you should be able to access it from your host at http://localhost:8080.
It is an old question/answer but it might help somebody else ;)
Working as of 2020:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id