I have Bonjour (mDNSResponder, the Linux version) up and running on an Ubuntu box (Host A). I have managed to port Avahi to a new platform. As I see it, ./mDNSNetMonitor is able to discover the service published by Avahi on a different host, say Host B (/etc/avahi/services/myservice.service). This means that Bonjour is able to discover the service published by Avahi.
My question is: why do I need the avahi-compat-libdns_sd library? In the context of this experiment, do I need to port avahi-compat-libdns_sd to the new platform (Host B) as well? Note that Avahi running on Host B is D-Bus enabled.
Basically, avahi-compat-libdns_sd provides a dns_sd.h header file and a backing implementation on top of Avahi, giving compatibility with the Bonjour SDK interface so that applications written against dns_sd.h can run unchanged with Avahi underneath. So no, you do not need to also port the avahi-compat-libdns_sd part for your experiment; you would only need it if you wanted to run dns_sd.h-based client code on Host B.
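For the discovery experiment itself, a command-line check of the path in both directions is enough. A rough sketch, assuming the type declared in myservice.service is _myservice._tcp (a placeholder; use whatever type your .service file actually declares):

    # On Host A (mDNSResponder) -- the dns-sd utility ships with Bonjour:
    dns-sd -B _myservice._tcp

    # On Host B (Avahi) -- browse and resolve the same type locally:
    avahi-browse -rt _myservice._tcp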
I've been trying to configure Node-RED, running locally at http://localhost:1880, to use a static IP address that I would configure via my router's "DHCP Static IP Configuration" so that Node-RED is accessible across the entire LAN.
How would I go about changing the IP address that Node-RED is hosted on? I haven't been able to find any resources on it.
I would love to know the exact approach to running Node-RED on a LAN via a router; for example, should the static IP address be assigned to a particular device with a specific MAC address, or can Node-RED reside on the router itself?
By default Node-RED binds to 0.0.0.0, which is shorthand for binding to all available interfaces (the log says to access it via http://localhost:1880 because that address is always available). If you know the IP address of the machine running Node-RED, entering http://ip-address:1880 from another machine on your LAN should connect you to the Node-RED editor.
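A quick way to confirm this, assuming a Linux machine running Node-RED (192.168.1.50 below is just a placeholder for whatever address your machine actually has):

    # On the machine running Node-RED: confirm the listener is on all interfaces
    ss -ltnp | grep 1880          # expect something like 0.0.0.0:1880 or *:1880
    hostname -I                   # shows the machine's LAN address(es)

    # From another machine on the LAN:
    curl -I http://192.168.1.50:1880/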
You can change this bind address in the settings.js file (found in the userDir, which is logged early on when Node-RED starts and is by default ~/.node-red on a Linux/Unix machine). Uncomment the uiHost line and change the IP address to whatever the static IP address of your host machine is. In 99.9% of circumstances you should not do this, though; just leave it as the default 0.0.0.0.
As for how you give the device hosting Node-RED a fixed IP address, that is entirely dependent on the type of router you have, but the usual approach is to tell the router's built-in DHCP server to always assign a specific IP address to that device, identified by its MAC address. This means you do not need to change anything on the device itself.
It is unlikely you will be able (or want) to run Node-RED on the router itself; most home (or enterprise) routers are specialist devices, and running a programming environment like Node-RED on them is really not a good idea from a security point of view unless you 110% know what you are doing.
Speaking of security, make sure you enable adminAuth in your settings.js before setting up any port forwarding on the router to expose Node-RED to the outside world. An unsecured Node-RED editor is likely to be quickly found by something like Shodan and promptly used to host crypto mining or much worse. Read the following carefully: https://nodered.org/docs/user-guide/runtime/securing-node-red
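For example, to generate the bcrypt hash that adminAuth expects in settings.js (the exact command depends on your Node-RED version; older installs ship it as node-red-admin from the separate node-red-admin package):

    # Newer Node-RED releases:
    node-red admin hash-pw
    # Older installs:
    npm install -g node-red-admin
    node-red-admin hash-pw
    # Paste the resulting $2a$... hash into the adminAuth users entry in
    # settings.js, then restart Node-RED.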
I've been practicing Docker and Docker Swarm for quite some time. I had created docker-machine nodes (manager, worker1 and worker2) using VirtualBox and was able to complete the orchestration.
Now I am trying to repeat the same using Hyper-V (with an internal v-switch) in my office, but it hangs with the following:
ERROR: Waiting for the host
My office desktop has only one NIC. If I create an 'external vswitch' and share it using 'network adapter sharing', I lose connectivity to all my office / client related applications.
Hence I chose to create the Hyper-V 'manager' node using an 'internal switch'. I also tried setting up NAT and gave the 'internal switch' an IP address, but nothing worked.
Do I need to create a Hyper-V external switch prior to creating the internal switch? Or am I doing something wrong with the internal switch setup?
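For reference, this is roughly the create command I am running (the switch name is just a placeholder for whatever I named my internal switch in Hyper-V Manager):

    docker-machine create -d hyperv --hyperv-virtual-switch "InternalSwitch" manager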
I've developed a Grails application and I want my coworkers to be able to test it. They are on my network so I figure they can access it by using my IP address and the port number (8080). I've tried running it according to the steps laid out here and here to no avail.
I noticed that whenever I run the program, even when I follow those instructions, it says:
Grails application running at http://localhost:8080 in environment: development
Basic networking stuff here.
When an app logs that it started on http://localhost:8080, that does not necessarily mean it is only listening on the loopback interface; usually the port ends up bound on all interfaces of the machine. If you run netstat -plant you will see which ports are open and on which addresses.
Whatever ipconfig (Windows) or ifconfig (Linux) reports as your internal interface, something like 192.168.1.x, is the address to use: the app is then available at http://192.168.1.x:8080.
If other machines on the network can't access it, start by trying to ping your machine's IP. If that fails, it sounds like network security stopping one machine from accessing another, or good old MS Firewall; try stopping the security software on your desktop.
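A rough command-line version of the above, assuming a Linux host and using 192.168.1.23 as a placeholder for the machine running Grails:

    # On the Grails machine: check what the port is actually bound to
    netstat -plant | grep 8080
    # e.g. tcp6  0  0 :::8080   :::*   LISTEN  12345/java   <- all interfaces

    # Find your LAN address (use ifconfig on older systems)
    ip addr show

    # From a co-worker's machine:
    ping 192.168.1.23
    curl -I http://192.168.1.23:8080/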
It's not clear whether you can access the app yourself on your own machine. It should be available at:
http://localhost:8080/appname
Your co-workers should be able to access the app by changing localhost to your computer name:
http://mycomputername:8080/appname
The new version of Docker (version 1.10) includes a DNS server to pass alias information from other hosts on the same network. There used to be hosts file entries for resolving linked containers (or containers on the same network). I am wondering if it is possible to use this embedded DNS server on an overlay network? I have looked in the documentation (and in issues) and cannot find information about this.
So the way the new embedded DNS "server" works is that it isn't a formal server at all; it's just an embedded listener for traffic to 127.0.0.11:53 (UDP, of course). When Docker sees that query traffic on the container's network interface, it steps in with its embedded DNS and replies with any answers it has for the query. The documentation lists some options you can set to affect how this DNS behaves, but since it only listens for query traffic on that localhost-style address inside each container, there is no way to expose it to an overlay network in the way you are thinking. However, this seems to be a moving target, and I have seen this question before in IRC, so it may one day be the case that this embedded DNS server becomes pluggable, or exposable in the way you would like.
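To see the part that does work (name resolution between containers on the same overlay network, answered through 127.0.0.11, even though you cannot point an outside resolver at it), here is a minimal sketch on a recent swarm-mode daemon; the network and container names are placeholders, and --attachable just lets a standalone container join the overlay:

    docker network create -d overlay --attachable mynet
    docker run -d --name web --network mynet nginx
    docker run --rm --network mynet busybox nslookup web
    # Inside the busybox container, /etc/resolv.conf points at 127.0.0.11 and
    # the lookup for "web" is answered by Docker's embedded DNS.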
I have created a VM which has a server running at localhost:8675/ which I wanted to connect to my host machine on the same port for ease of understanding. I was following these two documents for information:
https://www.virtualbox.org/manual/ch06.html
http://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/
In my VMware Workstation, I clicked on my VM and then went to Edit > Virtual Network Editor. After that, I enabled Change Settings, which relaunched the window in admin mode. I clicked on the row with Type NAT and External Connection NAT and, with the NAT radio button selected in the VMnet Information section, I clicked the NAT Settings button.
I said: Add... and then did:
Host: 8675
Type: TCP
VMIP: 127.0.0.1:8675
Description: Port Forward of 8675 from Host to VM.
Everything looked good, so I clicked OK and then Apply. It appeared to shut down NAT and restart some services.
I confirmed in the VM that 127.0.0.1:8675 is correct.
On the HOST, I tried to go to http://localhost:8675/ and it says ERR_CONNECTION_REFUSED.
I figured this was all I needed to do.
I was looking up some additional information and noticed that some people have had to configure firewalls. I wasn't sure if I needed to, though: since the HOST and VM are both on one physical machine, I was thinking it might be entirely self-contained.
Is there a critical task I am missing?
I saw this post: https://superuser.com/questions/571196/port-forwarding-to-a-vmware-workstation-virtual-machine
which told me to just switch the adapter to bridged and use it that way, and that does take care of connecting the HOST and the VM.
I don't want to say this is the correct answer, though, as the question itself is specifically about NAT, but it is a valid alternative that does work.
It solves the base issue at hand, but not the question as asked.
When you use NAT, the host system and the guest boxes have completely different IP addresses on their virtual subnet, so my guess is that when you try to connect to localhost:8675 from the host system, you are actually connecting to port 8675 of the host, not of the guest. So don't use the localhost / 127.0.0.1 syntax; discover the real IP address of the guest and use that instead.
If your guest is Windows, use the ipconfig command; if Linux, use ifconfig.
Probably you will also have to configure the firewall on the guest side.
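A quick sketch of that, assuming a Linux guest on the default VMware NAT network (the addresses are placeholders; use whatever the guest actually reports):

    # Inside the guest: find its address on the NAT subnet (VMnet8)
    ip addr show            # or ifconfig; on a Windows guest: ipconfig
    # ...suppose it reports 192.168.120.128

    # On the host: connect to the guest's address, not to localhost
    curl http://192.168.120.128:8675/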
EDIT:
Commenting on the sentence "NAT: Used to share the host's IP address": it probably refers to the IP address of the real Ethernet adapter on your host, which is shared by the host and guests to access the internet. That is not related to the way your host and guests communicate with each other. For example, I use VMware Workstation to run a virtual Linux box on Windows. With NAT selected, VMware creates a virtual subnet called VMnet8; in my case this subnet is 192.168.120.0, my Windows host is assigned a virtual Ethernet adapter with address 192.168.120.1, and my Linux guest gets address 192.168.120.128. So when I want to access a Samba shared folder from Windows I type "net use * \\192.168.120.128" in a Windows command prompt, and when I want to access a Windows shared folder from Linux I type "sudo mount.cifs //192.168.120.1/path_to_shared_folder target_folder".
I believe you actually answered your question correctly; I was following it and achieved the desired outcome.
IMHO, the error ERR_CONNECTION_REFUSED indicates that a firewall on your host OS or guest OS (your VM), or on both, doesn't allow communication through the given port.
The easiest thing would be to try disabling the firewalls on both your HOST and GUEST OS.
I'm not sure what your OSes are, but here is a good guide for setting up firewall rules on Ubuntu.
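For example, if the guest is Ubuntu, the ufw commands would be along these lines (disabling the firewall only as a temporary test):

    sudo ufw allow 8675/tcp      # open just the port the server uses
    sudo ufw status
    # or, purely as a quick test:
    sudo ufw disable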