I am using Docker for Windows on Windows 10 for development. Before that I used Docker Toolbox on Windows 8, where I was used to "tuning" the host virtual machine, which in this case is the MobyLinuxVM.
When I try to connect to it in Hyper-V Manager I get a "cannot connect" error, and docker-machine ls lists no machines. How can I access the underlying machine in Docker for Windows on Windows 10?
The problems I want to solve (i.e. why I want to connect) are:
Ubuntu's apt-get does not work for me (I am behind a proxy); I get errors like E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial/universe/source/Sources Cannot initiate the connection to 3128:80 (0.0.12.56). - connect (22: Invalid argument). On the other hand, CentOS yum, curl, etc. work. The http_proxy variables are set.
I want to turn off swap on the host.
Update
Solved the apt-get problem by changing the HTTP proxy configuration in the Docker settings from 1.2.3.4:1234 to http://1.2.3.4:1234/.
Update 2
Worked around the swap problem by modifying /etc/init.d/automount on the host and adding swapoff -a.
I was able to access the host MobyLinuxVM through a container run with various privileges.
First I ran a container like this (note the double slash when mounting the root filesystem; a single slash didn't work for me in PowerShell):
$ docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v //:/host alpine sh
after that, once I got into the container, I just did
$ chroot /host
and then I could access everything I needed, e.g. /etc/fstab or swapoff -a.
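Putting the two updates together, the swap tweak from inside the chroot boils down to this (a sketch; the automount edit is the workaround from update 2 above):

swapoff -a                      # turn swap off for the running session
vi /etc/init.d/automount        # append "swapoff -a" so it persists across VM restarts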
Related
I want to build a Docker swarm cluster on Windows. For this I chose Windows Server 2019, version 1809. I work on my local machine, and using the Vagrant box vm.box=StefanScherer/windows_2019 I created an environment for development purposes:
Set Hostname.
Set private network (192.168.52.100)
Install Docker-EE
On this Windows machine I have installed Docker EE using the command Install-Package Docker -ProviderName DockerMsftProvider -RequiredVersion 19.03 -Force, and Docker works perfectly:
docker version returns everything OK
docker run -it --rm -p 8000:80 --name aspnetcore_sample mcr.microsoft.com/dotnet/core/samples:aspnetapp also works perfectly.
My first issue: when I run docker swarm init --advertise-addr=192.168.52.100, I notice my internet connection is lost for a while (the same happens for init/join/leave).
The second issue is that the routing mesh is not working.
Steps to reproduce:
docker service create --publish published=8050,target=80,mode=ingress --name aspnetcore_sample mcr.microsoft.com/dotnet/core/samples:aspnetapp
Open http://127.0.0.1:8050/ in a web browser (on the machine where I initialized the swarm)
Now I should have access to the sample app on port 8050, but http://127.0.0.1:8050/ is not working.
I know I can use mode=host, but I think mode=ingress should work.
I also checked the same commands on Linux, where everything works without any problem.
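For reference, the routing mesh depends on the standard swarm ports being reachable between nodes: 2377/tcp (cluster management), 7946/tcp and 7946/udp (node communication), and 4789/udp (overlay network traffic). A hedged PowerShell sketch for opening them on Windows (the rule names are my own, not anything Docker requires):

# open the swarm ports on the Windows firewall (display names are examples)
New-NetFirewallRule -DisplayName "Swarm management" -Direction Inbound -Protocol TCP -LocalPort 2377 -Action Allow
New-NetFirewallRule -DisplayName "Swarm gossip TCP" -Direction Inbound -Protocol TCP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm gossip UDP" -Direction Inbound -Protocol UDP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm overlay"    -Direction Inbound -Protocol UDP -LocalPort 4789 -Action Allow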
How can I resolve this issue?
I created a Docker image where VirtualBox runs inside a Docker container (the base image is a modified Ubuntu 20.04).
I fixed everything regarding creating the image, and now my virtual machines are running inside the container. Now I am trying to access them from the host (also Ubuntu 20.04) via RDP, using rdesktop.
The command running the image is:
docker run -d --privileged --name vbox --device /dev/vboxdrv:/dev/vboxdrv -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /tmp:/tmp -v ~/machines:/machines -p 3389:3389 -it vboxsystemd
The VM is started headless with vboxmanage startvm nameofvm --type headless; VRDE is on and the port is 3389.
I got the IP of the container with docker inspect vbox, then tried connecting to it with rdesktop ip:3389.
It just says it cannot connect, and I have no idea how to fix this.
Currently I am running only one VM, which I imported from the local VirtualBox installation, where it works without any problems. It also starts without problems inside the container.
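In case it helps with diagnosis, two hedged checks, assuming the run command above (which publishes 3389 to the host):

# inside the container: confirm VRDE is really enabled and on port 3389
vboxmanage showvminfo nameofvm --machinereadable | grep -i vrde
# from the host: since -p 3389:3389 publishes the port, connecting through
# the host mapping may work even when the container IP does not
rdesktop localhost:3389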
Thanks in advance for helping me.
I am trying to use the Python installation inside a Docker container on a remote machine as the interpreter in PyCharm. Since that is a mouthful, here is a diagram:
There is a Jupyter Notebook running in the container, which I am able to connect to through my local browser (although this is just for testing the connection). The command I am using to launch the Docker container is
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 -p 7722:22 --ipc=host latest:latest
I can forward port 8888, which the Jupyter notebook is running on, with ssh -L 8888:0.0.0.0:8888 BBB.BBB.BBB.BBB and thus use it on the local machine. But I don't much like using Jupyter for development and would like to use the Python interpreter in the Docker container in PyCharm.
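If the port-forwarding route is viable, the analogous forward for the SSH port mapped above would presumably be the following (an untested sketch, reusing the 7722:22 mapping from the run command):

# forward the container's sshd (published on remote port 7722) to the local machine
ssh -L 7722:localhost:7722 BBB.BBB.BBB.BBB
# an SSH interpreter could then be pointed at localhost:7722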
When I select "Add Python Interpreter" in PyCharm, I get the following options:
The PyCharm documentation suggests using the "Add Python Interpreter/Docker" tool, which looks like this:
However, the documentation doesn't say how to set up the Docker container and the connections when Docker is on a remote machine.
So my questions are: should I use a Unix or a TCP socket to connect to my remote Docker? Or should I somehow forward all the relevant ports from the container and use the "SSH Interpreter" option? And if so, how do I set this all up? Am I even setting up my Docker container properly in the first place?
I think I have trawled through every forum and online resource over the last two days, but have not come any closer to getting this to work. I have also tried to get this working in Spyder, but to no avail. So any advice is very much appreciated!
Many thanks!
Thank you for depicting the dilemma so poignantly and clearly in your cartoon :-). My colleague and I were trying to do something similar, and what ultimately worked beautifully was creating an SSH config directly to the Docker container, jumping through the remote machine, and then setting it as a remote SSH interpreter so that PyCharm doesn't even realize it's a Docker container. It also works well for VS Code.
set up the SSH service in the Docker container (a subset of the steps in https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i; the port 22 stuff wasn't needed)
docker exec -it <container> bash: get an interactive admin prompt inside the container
apt-get install openssh-server
service ssh start
confirm with service ssh status -> * sshd is running
determine the container's IP and test SSHing into it from the remote machine (adapted from https://phoenixnap.com/kb/how-to-ssh-into-docker-container, steps 2 and 3)
from a normal command prompt on the remote machine (not within the container): docker inspect -f "{{ .NetworkSettings.IPAddress }}" <container> to get the container IP
test: ping -c 3 <container_ip>
ssh: ssh <container_ip>; this should drop you into the container as your user, but it requires the container to be configured properly (the docker run command has -v /etc/passwd:/etc/passwd:ro, etc.). It may ask for a password. Note: if you later do this for a different container that is assigned the same IP, you will get a warning and may need to delete the previous key from known_hosts; just follow the instructions in the warning.
test SSH from the local machine
if you don't have it set up already, set up passwordless key-based SSH authentication to the remote machine that hosts the Docker container
build an SSH command that uses your remote machine as a jump server to the container: ssh -J <remote_machine> <container_ip>, as described in https://wiki.gentoo.org/wiki/SSH_jump_host; if successful, you should drop into the container just as you did from the remote machine
save this setup in your ~/.ssh/config, following the ProxyJump example from https://wiki.gentoo.org/wiki/SSH_jump_host (a minimal sketch follows this list)
test the config with ssh <container_host_name_defined_in_ssh_config>; it should also drop you into the interactive container
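A minimal sketch of such a config entry (the host aliases are my own placeholders; substitute your remote machine, user, and the container IP from docker inspect):

# ~/.ssh/config on the local machine
Host remote-box
    HostName <remote_machine_ip_or_name>
    User <username_on_remote_server>

# the container, reached by jumping through the remote machine
Host docker-dev
    HostName <container_ip>
    User <username_on_remote_server>
    ProxyJump remote-box

With this in place, ssh docker-dev should drop you straight into the container.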
configure PyCharm (or VS Code, or any IDE that accepts a remote SSH interpreter)
Preferences -> Project -> Python Interpreter -> Add -> SSH Interpreter -> New server configuration
host: <container_host_name_defined_in_ssh_config>
port: 22
username: <username_on_remote_server>
select the interpreter; you can navigate using the folder icon, which will walk you through paths within the Docker container, or you can enter the result of which python run inside the container
follow the PyCharm prompts
Say I have a VirtualBox virtual machine provisioned through Vagrant. I then provision it with docker-machine; so far so good: I can docker-machine ssh into the box and list it fine with docker-machine ls.
In the past, when not yet using docker-machine, my usual workflow would involve SSHing into the virtual box, installing Docker, and spinning up my containers.
As far as I understand, this is no longer needed, as I can control Docker containers within the virtual box through docker-machine (and docker itself) from outside the virtual box (essentially from my Windows dev machine).
Question: how can I mount directories from inside the VM into the container when I am running the docker command from outside the VM?
An example to clarify further:
1) Old approach: SSH into the vbox and run
docker run -i -t --net=try-net \
--name XXXX \
-v ${PWD}/xxxx/yyyy.py:/zzzzz/xxxx/yyyy.py \
-d me/image
2) docker-machine approach: I switch the docker-machine env to the box. Now how do I reference a folder in the vbox from outside the box? Is this even possible?
From my Windows host, in a Linux-like shell:
docker run -v /c/x/y/z:/home --name postgres3 -d postgres:9.5
gets me:
c:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Invalid bind mount spec "c:\x\y\z\;C:\Program Files (x86)\Git\home": invalid mode: \Program Files (x86)\Git\home.
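One hedged workaround for this kind of MSYS path mangling (not verified on every setup) is to double the leading slashes, or to disable the path conversion entirely, so the shell leaves both paths alone:

# double the slashes so Git Bash doesn't rewrite the paths...
docker run -v //c/x/y/z://home --name postgres3 -d postgres:9.5
# ...or switch off MSYS path conversion for this one invocation
MSYS_NO_PATHCONV=1 docker run -v /c/x/y/z:/home --name postgres3 -d postgres:9.5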
If you spin up containers using a docker-toolbox install, the VMs are pre-configured to share the /Users folder from the host into the VM, which can then be used by containers.
Since you're doing this manually with your own Vagrant install, you'll need to share the folders yourself. This question should walk you through the steps to share a folder from the parent OS into the VM, which can then be used by the Docker containers you spin up with docker-machine.
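A minimal sketch of the relevant Vagrantfile line (the paths are examples, not taken from your setup):

# inside the Vagrant.configure block: share a folder from the parent OS into the VM
config.vm.synced_folder "C:/x/y/z", "/data/xyz"

Once the folder is visible inside the VM, the -v flag below works against that VM path.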
Edit: with the parent OS synced to the VM, any containers you run inside the VM can just mount volumes there. Docker-machine isn't really a factor; it just points the docker CLI at the selected Docker host. The docker CLI call would look like:
docker run -v /path/on/vm:/path/in/container image
I have set up Docker for Windows (Hyper-V beta) on my laptop.
My intention is to experiment with some setups for containers that I intend to install on my real server later. I am fairly new to Docker (but know the basics), so I wanted to experiment with volumes and volume images a bit.
However, all anonymous volumes end up on the virtual Linux host. I would like to access the filesystem of that host directly, not from within a container.
I cannot easily access it from within a container due to (well-founded) security constraints, and neither can I find a way to access it from the Windows prompt.
(Using Docker for Windows version 1.12.0-beta21)
I know it is possible to mount volumes using the C share created by Docker for Windows, but that raises the complexity for me. My intent is to follow Docker tutorials unmodified and inspect the results in the host filesystem, preferably through a (bash) shell in the host VM or with Windows file access into the virtual machine.
Later on I would also like to copy volume contents into the VM's volumes, although that could be solved using a volume against the C drive.
After my own research I have deduced the following technique: a privileged container that works as if it were the Linux root host. This is the best I have been able to pinpoint so far:
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
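From inside that container, a hedged way to inspect the volume data (assuming Docker's default data root on the VM):

chroot /host
ls /var/lib/docker/volumes/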
docker-machine will let you SSH into the default machine by typing:
docker-machine ssh
You'll be logged into the VM that is running Docker.