I've been practicing Docker and Docker Swarm for quite some time. I had created docker-machines (manager, worker1 and worker2 nodes) using VirtualBox and was able to complete the orchestration.
Now I am trying to repeat the same using Hyper-V (with an internal vSwitch) at my office, but it hangs with the following error:
ERROR: Waiting for the host
My office desktop has only one NIC. If I create an external vSwitch and share it using network adapter sharing, I lose connectivity to all my office / client related applications.
Hence I chose to create the Hyper-V 'manager' node using an internal switch. I also tried setting up NAT and assigned an IP address to the internal switch, but nothing worked.
Do I need to create a Hyper-V external switch before creating the internal switch, or am I doing something wrong with the internal switch setup? My NAT setup is sketched below.
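For reference, here is roughly what I ran in an elevated PowerShell to set up NAT on the internal switch (the switch name and the 192.168.100.0/24 subnet are just my choices):

# Create the internal vSwitch; Hyper-V adds a host-side adapter "vEthernet (DockerNAT)"
New-VMSwitch -SwitchName "DockerNAT" -SwitchType Internal
# Give the host-side adapter an address that acts as the subnet's gateway
New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 -InterfaceAlias "vEthernet (DockerNAT)"
# NAT the subnet out through the physical NIC
New-NetNat -Name "DockerNATNetwork" -InternalIPInterfaceAddressPrefix 192.168.100.0/24
# Point docker-machine at the switch
docker-machine create -d hyperv --hyperv-virtual-switch "DockerNAT" manager

(One thing I am not sure about: an internal switch has no DHCP server, and as far as I know the boot2docker image that docker-machine provisions expects to get its IP via DHCP, which could be why it hangs at "Waiting for the host".)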
I am using Docker 19.03.5 on Ubuntu 18.04. Normally I can access all the containers, but the problem arises when I connect my machine to a VPN client (Cisco AnyConnect): as soon as I am connected to the VPN, I cannot access any containers. Is there any way to access Docker containers while connected to a VPN?
I have faced this problem and tried all the solutions available on the internet, but nothing worked. It looks like the Cisco AnyConnect VPN takes exclusive control over the system's routing, and any changes made have no effect.
The following worked for me. Instead of Cisco AnyConnect, use OpenConnect; both use the same protocol. To install:
sudo apt install openconnect network-manager-openconnect network-manager-openconnect-gnome
Reboot your PC, then go to VPN Settings -> Multi-protocol VPN client (openconnect) and fill in the settings according to your organisation's requirements.
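Alternatively, openconnect can be run straight from the terminal; the user name and gateway address below are placeholders for your organisation's values:

sudo openconnect --user=jdoe vpn.example.com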
That's because the VPN is configured to use full tunnelling. The network administrator should configure a split-tunnelling profile.
Full tunnelling:
[PC] ---> [VPN] ---> [all networks]
Split tunnelling:
[PC] ---+---> [VPN] ---> [configured networks]
        \---> [Internet / other networks]
Another thing you can try is editing the routes.
You can add a route pointing at your containers' IP range through the right network adapter, and give this route a higher priority than the VPN's default gateway; a more specific route takes precedence over the default route.
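For example, on a Linux host whose containers sit on Docker's default bridge subnet (adjust the subnet and interface names to your setup):

# A more specific route takes precedence over the VPN's catch-all default route
sudo ip route add 172.17.0.0/16 dev docker0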
This issue is not related to the Docker daemon or container settings. The VPN server configuration is to blame (disabled split-include or prohibited local network access).
Because of that, you have limited options for resolving this:
Ask the VPN server administrator to allow split-include (MikroTik terminology).
Check the AnyConnect client settings for something like "route all traffic through VPN" and disable it.
Create custom static routes on your machine to reach the specific IP range (the servers behind the VPN server) through the interface created by the AnyConnect client, as sketched below.
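For the last option, a sketch on Linux (the 10.10.0.0/16 range and the cscotun0 interface name are placeholders; check which tunnel interface the AnyConnect client actually created):

ip addr                                        # find the VPN tunnel interface, e.g. cscotun0
sudo ip route add 10.10.0.0/16 dev cscotun0    # reach the servers behind the VPN through it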
I am trying to run the Infinispan Docker image on a Windows 10 machine with Docker Desktop for Windows.
I wrote a small test Java program that connects to localhost:11222 using Hot Rod and accesses a cache.
The problem is that after the initial connect, the server sends the client a new address, 172.17.0.3:11222, and the client fails to connect to it because this is a Docker-internal address, and Docker Desktop for Windows cannot route traffic directly to an internal container address.
Is there any workaround available in Infinispan or on the Windows machine?
The simplest solution is to disable the handling of topology updates in your Hot Rod client:
infinispan.client.hotrod.client_intelligence=BASIC
More information about client intelligence can be found in the Infinispan client documentation.
Note that this is not recommended in production: the client will ignore new servers coming up and it will keep trying to contact the servers in the initial server list long after they stop.
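If the client is configured programmatically rather than through a properties file, the equivalent setting looks roughly like this (a sketch; the cache name "myCache" is a placeholder):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class BasicIntelligenceClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("localhost").port(11222)      // initial server list
               .clientIntelligence(ClientIntelligence.BASIC);  // ignore topology updates
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());
        RemoteCache<String, String> cache = manager.getCache("myCache"); // placeholder cache name
        cache.put("key", "value");
        System.out.println(cache.get("key"));
        manager.stop();
    }
}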
I am running a single-node standalone Service Fabric dev cluster. The node is installed on a Hyper-V virtual machine with two network adapters attached to the external host network. The first VM net adapter is configured with a static IP address and is used as the cluster endpoint. The second VM net adapter is configured through DHCP and has an IP from the same subnet as the first net adapter.
I have looked at the topic "Service Fabric container networking modes" (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-networking-modes). I have enabled DnsService, IPProviderEnabled and ContainerNetworkSetup. I skipped step 2, because it applies only to an Azure Resource Manager configuration. I have configured the network type to Open in the application manifest, roughly as sketched below. The service hosts a Docker container.
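For reference, the relevant part of my application manifest looks roughly like this (the manifest name and version are mine; NetworkType="Open" is the setting in question):

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ContainerPkg" ServiceManifestVersion="1.0" />
  <Policies>
    <NetworkConfig NetworkType="Open" />
  </Policies>
</ServiceManifestImport>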
When I publish the application to the cluster, I get Warning events in the Microsoft-Service-Fabric\Admin channel with the Hosting category.
Here is the text of some of the messages:
SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c:131816448810648848: End BeginAssignIpAddress. Error FABRIC_E_INVALID_OPERATION
Failed to remove enpoint resource file=C:\ProgramData\SF\vm0\Fabric\work\Applications\SFApplication1Type_App6\ebanking2016xg_ContainerPkg.d526398e-e01e-43fd-b1d4-9cba19bd608c.Endpoints.txt. Error=0x80070002. NodeVersion=6.3.176.9494:0:0.
SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c:131816448810648848: End(Setup->EndCleanupServicePackageEnvironment due to error FABRIC_E_INVALID_OPERATION): error 0x80070002
End(SetupPackageEnvironment): Id=SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c, Version=1.0:1.0:131816452585647419, ErrorCode=FABRIC_E_INVALID_OPERATION
...
Activate: Activate:SFApplication1Type_App6:ebanking2016xg_ContainerPkg#257e8304-637b-4e58-bc13-388542cf6d6c#d526398e-e01e-43fd-b1d4-9cba19bd608c:1.0:1.0:131816452585647419, ErrorCode=FABRIC_E_INVALID_OPERATION, RetryCount=0
This group of warning messages continues to appear at a 10-second interval, and the application stays in Activating status on the node.
When I do not set the network type to Open, the application activates successfully in nat mode.
So a couple of questions emerge:
Is network type Open supported on a Standalone Service Fabric installation?
What is the required configuration at the host, guest, cluster, and node level?
I've installed CrateDB on an Ubuntu (Xenial) virtual machine.
Since I want to connect to it from both my VM and my Windows host, I've tried to set the VM's IP in both of these parameters in crate.yml:
network.host
network.publish_host
The rest of the parameters I left as they are in crate.yml.
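For reference, this is roughly what the relevant lines in my crate.yml look like (192.168.56.101 stands in here for my VM's actual IP):

network.host: 192.168.56.101
network.publish_host: 192.168.56.101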
But that doesn't do the trick (I get an ERR_CONNECTION_TIMED_OUT error when I try to connect to "my_VMs_ip:4200" from my Windows host PC), and I can't find any way around it on crate.io or on Google.
Would any of you have an idea?
Thanks a lot
NB: I'm running Crate 2.0.7
I've developed a Grails application and I want my coworkers to be able to test it. They are on my network, so I figure they can access it using my IP address and the port number (8080). I've tried running it according to the steps laid out here and here, to no avail.
I noticed that whenever I run the program, even when I follow those instructions, it says:
Grails application running at http://localhost:8080 in environment: development
Basic networking stuff here.
When an app reports that it started on interface 127.0.0.1 and some port, that port is usually actually open on all the interfaces of the machine.
If you run netstat -plant you will see which ports are open, and on which addresses, on the machine.
Whatever ipconfig (or ifconfig under Linux) reports as your internal interface will be something like 192.168.1.x, and the app is then available at http://192.168.1.x:8080. A quick check is shown below.
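A quick way to check which addresses the app is actually bound to (8080 is the port from the question):

netstat -plant | grep 8080
# a local address of 0.0.0.0:8080 or :::8080 means it listens on all interfaces;
# 127.0.0.1:8080 means loopback only, unreachable from other machines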
If you can't access it from other machines on the network, start by trying to ping your machine's IP.
If the ping works but the app is still unreachable, it sounds like network security is stopping one machine from accessing another.
Or, better still, your good old MS firewall: try temporarily stopping the security software on your desktop.
It's not clear whether you can access the app yourself on your own machine. It should be available at:
http://localhost:8080/appname
Your co-workers should be able to access the app by changing localhost to your computer name:
http://mycomputername:8080/appname