I am trying to create a 32-bit Ubuntu 12.04 VM on my Ubuntu machine. I had success creating this VM from a Windows environment, but after migrating to Ubuntu 16.04 I can't replicate that success. The problem appears when I choose a host-only network for my VM. On my Windows computer, after setting the IP and subnet mask in VirtualBox's Host Network Manager and selecting a VirtualBox host-only adapter, Packer was able to SSH into my VM. However, when I tried the same thing on my Ubuntu machine, going to Preferences -> Network and setting the IP and subnet mask, then selecting vboxnet0: whenever Packer attempts to SSH, or I ping the VM from my Ubuntu machine, I get "host unreachable". Both netstat and ifconfig show that vboxnet0 is there. Based on what I have written, what might be the reasons I can't connect to my VM?
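One common culprit worth checking (a diagnostic sketch, assuming the VirtualBox default host-only subnet 192.168.56.0/24; substitute whatever you configured): on Linux, vboxnet0 can show up in ifconfig yet carry no IPv4 address until one is assigned, in which case the host has no route to the VM.

```shell
# Check whether vboxnet0 actually has an IPv4 address, not just a link
ip addr show vboxnet0

# If no "inet" line appears, assign the host-only address configured
# in VirtualBox (192.168.56.1/24 is the VirtualBox default)
sudo ip addr add 192.168.56.1/24 dev vboxnet0
sudo ip link set vboxnet0 up
```

If the address is present but pings still fail, the next things to rule out are the guest's own adapter settings and any host firewall rules.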
While running Minikube on a Mac, we need to specify vm-driver, since Minikube needs a hypervisor to run the virtual machine that hosts the K8s cluster.
Why can't Minikube use the host machine's hypervisor, given that a Mac already has a hypervisor by default?
Minikube has several drivers that can plug into different virtualization backends. That includes the ability to run the cluster inside a single container (the current default) or to use the Hyperkit hypervisor (which Docker Desktop also uses).
If you want to use a different hypervisor by default, you can configure minikube to do that:
minikube config set driver hyperkit
Minikube creates a simple local Kubernetes cluster consisting of one virtual machine. Minikube needs a hypervisor such as VirtualBox or KVM to create this VM. Minikube starts a virtual machine (based on your local environment), and a Kubernetes cluster runs inside that VM, i.e. all your nodes and services run inside the VM. This is only the case on Windows or macOS.
You can work on Minikube even without installing VirtualBox. Minikube also supports a --driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment but not a hypervisor.
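A sketch of that option (assuming a Linux host with Docker already installed; the none driver manages host services directly, so it must be run as root):

```shell
# Run the Kubernetes components directly on the host, with no VM
sudo minikube start --driver=none
```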
I have the following setup:
Windows 10 Host (Hyper-V enabled)
Docker Desktop installed on host
VMWare Workstation Pro (16)
Windows 10 VM - Docker CLI installed on vm
The Windows 10 VM is used as a dev environment, with project-specific stuff on there.
I also use the host as a development machine for other projects - so want to be able to use docker on both.
What I'd like to do is access the Docker engine running on the host from my VM.
By "access Docker", I mean use the docker CLI to run containers, build images, etc., by setting DOCKER_HOST or something like that.
Is this possible? Or any other way?
So far, I've set my VM to use NAT networking and tried:
docker -H tcp://192.168.126.2:2375 images
Which returns
error during connect: Get http://192.168.126.2:2375/v1.40/images/json: dial tcp 192.168.126.2:2375: connectex: No connection could be made because the target machine actively refused it.
192.168.126.2 is the IP of the default gateway as seen from within the VM (so, my host?)
On the host machine, if I do docker -H tcp://0.0.0.0:2375 images I get the expected result.
On the host machine, I've also set the following in the Docker engine config:
"hosts": ["tcp://0.0.0.0:2375"],
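For reference, a minimal sketch of what that setup might look like end to end (assumptions: the daemon config lives at the Windows default C:\ProgramData\docker\config\daemon.json, port 2375 is allowed through the host firewall, and the client side is shown in POSIX-shell form; on a Windows VM you would set $env:DOCKER_HOST instead of using export). Note that this exposes the daemon over TCP without TLS, which is insecure outside a trusted network.

```shell
# daemon.json on the host (keep the named pipe so local clients still work):
#   { "hosts": ["tcp://0.0.0.0:2375", "npipe://"] }
# After restarting the Docker service, point the VM's CLI at the host:
export DOCKER_HOST=tcp://192.168.126.2:2375
docker images
```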
What I would do, and usually am doing, is this: in VMware Workstation's Virtual Network Editor, I connect VMs to a bridge, select my main uplink that provides connectivity (whether an Ethernet port or Wi-Fi), and associate it with, say, VMnet0. Then, in the VM's settings, I assign that VM's NIC to VMnet0, and that is how my VM and my host end up on the same LAN.
I would not use NAT.
I have Windows 10 as the host operating system; VirtualBox is installed with an Ubuntu guest, and Docker with its containers is installed in Ubuntu.
I set the VirtualBox network to bridged, and via DHCP it was assigned an IP that I can easily reach from the Windows 10 Chrome browser (outside of VirtualBox). The problem is that I cannot access the Docker container where a webserver runs on localhost. I can access it without problems inside the VirtualBox guest, and externally I can access another webserver in the guest, but not the Docker webserver! How could I solve it?
Thanks for any replies!
It seems that I have solved it; I describe here the simple solution that I adopted.
The VM has an IP assigned via DHCP by the bridged network (this setting remained so that a second webserver keeps working). In the VirtualBox settings I simply enabled a second NAT adapter under "Network", and under "Advanced -> Port Forwarding" I only added a rule mapping host port 80 to guest port 80, because docker run binds its IP and port so that they are reachable only on localhost (in this case, only inside the VirtualBox guest).
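The same forwarding rule can be added from the command line, a sketch of which follows ("ubuntu-docker" is a hypothetical VM name, and --natpf2 targets the second adapter, the NAT one in this setup; modifyvm requires the VM to be powered off):

```shell
# Forward host port 80 to guest port 80 on NAT adapter 2
VBoxManage modifyvm "ubuntu-docker" --natpf2 "web,tcp,,80,,80"
```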
How can I use the DockerNAT virtual switch for a Hyper-V VM so it can 'talk' to other Docker containers and have internet access, as the MobyLinux VM does?
Long story:
I want to install Univention on my Windows Server host via Hyper-V. On my host, an nginx Docker container is also running as a proxy. If someone calls univention.domain.com, it should automatically redirect to my Hyper-V Univention VM. This works when I set the network adapter of the Hyper-V machine to the DockerNAT and give it the IP address 10.0.75.100, as the gateway address of the DockerNAT is 10.0.75.1 and the IP address of the MobyLinux VM is 10.0.75.2. When I now ping 10.0.75.100 from my nginx container, it works.
But as Univention needs an internet connection to install applications I'm not quite satisfied with this configuration as I am not able to connect to the internet when I use the DockerNAT network interface.
On the other hand, I am able to ping, e.g., 8.8.8.8 from the nginx container (running as a Linux container in the MobyLinux Hyper-V VM). So the MobyLinux VM created by Docker must have internet access, right? Although it also uses the DockerNAT interface. But that switch is set as an 'internal' virtual switch, and the connection of my main NIC isn't marked as 'shared'.
P.S.: I am aware that there is a Univention Docker image, but Univention itself started to use Docker for its apps. So I can't run most of the apps in their app store, as Docker container inside Docker container doesn't work well here (Univention can't enable Docker due to network problems).
Windows Server 2019 17623
Docker 18.04.0-ce-rc2
I've got the following setup:
OSX running MySQL listening on all network adaptors at port 3306
XDEBUG enabled IDE listening on port 9000 on the base OSX system.
docker-machine host running on the OSX system with the host ip 192.168.99.100
A Debian-based Docker container running on the docker-machine host, with a MySQL client, and HHVM running with Xdebug configured to connect to some lucky remote host on port 9000.
The IP addresses on the OSX system change frequently because they are assigned via DHCP, so I want the Docker container to somehow be able to hit the MySQL server regardless of what IP the native OSX network adaptors get assigned (without manually updating it). I also need a stable IP I can put in my HHVM server.ini file as the remote host for Xdebug.
With running a base system of linux this isn't an issue as the docker host and the actual native machine running docker are one-and-the-same. Also, there are several ways for a container to learn of the host's ip so the issue isn't hitting the docker host.
However, on OSX running docker-machine, the host isn't the native OSX system, but instead a VM running in VirtualBox (assuming you're using the VirtualBox driver, and who in the Sam Hill blazes isn't?).
The only thing I could think of was to forward requests on port 3306 from the docker-machine host (192.168.99.100, which never changes) to the OSX system's port 3306, then have the container hit the docker-machine host for MySQL requests. If this works, I could rinse and repeat for any port I need to link, like Xdebug on port 9000.
Does anyone know how to accomplish this or have another suggestion?
Figured out a way, without needing to change anything, that provides a consistent IP on the base OSX system to connect to. Docker Machine sets things up in a way that makes this possible.
Docker Machine creates a VirtualBox VM with two network adaptors, one set up as host-only, the other as NAT. I don't know why it creates two, but it does.
The host-only adaptor gives OSX the IP 192.168.99.1, and the various VMs using it get addresses starting at 192.168.99.100. However, from inside the VM network, you can't use 192.168.99.1 to reach ports on the parent OSX system (not sure why; I'm guessing host-only is intended only for communication between the VMs).
The NAT adaptor is set up so that OSX gets the IP 10.0.2.2 and the VM gets 10.0.2.15. With NAT, you can reach the OSX system at 10.0.2.2 from both the docker-machine host VM and containers running on it.
Since this 10.0.2.2 address for the OSX machine doesn't change (unless you mess with the VirtualBox networking settings), bingo, I've got what I need.
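A quick sketch of the resulting connection (assumptions: MySQL on the Mac accepts remote connections for the given user, and the standard mysql client image is used purely as an example):

```shell
# From a container on the docker-machine VM, the OSX host is reachable
# at the fixed NAT-side address 10.0.2.2
docker run -it --rm mysql:5.7 mysql -h 10.0.2.2 -P 3306 -u root -p
```

The same address works for the Xdebug case: put 10.0.2.2 as the remote host in server.ini and have the IDE listen on port 9000 on the Mac.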