Need mDNS response for service discovery with subtypes - avahi

I am trying to do service discovery based on subtypes.
For example, I am running avahi-publish -s --domain=local --subtype="_ann._sub._http._tcp" "serviceName" "_http._tcp" 5353 "text Record".
Now I am querying for the subtype, e.g. AT+MDNSSD=_ann,_sub,_http,_tcp,local.
But the response from avahi-publish does not contain the subtype. I am getting the response message with the name "serviceName._http._tcp.local".
Can anybody tell me how I can register the service with avahi-publish so that I get "serviceName._ann._sub._http._tcp.local" in the resource record?

avahi-browse doesn't explicitly list the sub-types. If you know what you're looking for, you can filter for it, though:
[localhost]$ avahi-publish -s --subtype=_ann._sub._http._tcp serviceName _http._tcp 5353 &
[1] 3012
[localhost]$ avahi-browse -t _http._tcp
+ eth0 IPv4 serviceName Web Site local
[localhost]$ avahi-browse -t _ann._sub._http._tcp
+ eth0 IPv4 serviceName Web Site local
[localhost]$ kill 3012
If you don't publish a sub-type, then filtering for it will return nothing:
[localhost]$ avahi-publish -s serviceName _http._tcp 5353 &
[1] 3026
[localhost]$ avahi-browse -t _http._tcp
+ eth0 IPv4 serviceName Web Site local
[localhost]$ avahi-browse -t _ann._sub._http._tcp
[localhost]$ kill 3026
If you watch your traffic with [wire|t]shark, filtering for port 5353, you'll see that your sub-type is being returned in the DNS responses as a PTR record.
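For example, something along these lines shows the mDNS traffic (the interface name is just an example; tcpdump works too if tshark is not installed):
tshark -i eth0 -f "udp port 5353" -Y mdns
tcpdump -i eth0 -vv udp port 5353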

You can register subtypes by using the <subtype></subtype> element in the service file (see the avahi.service manpage https://linux.die.net/man/5/avahi.service).
The following example works for me:
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name>MyService</name>
  <service>
    <type>_http._tcp</type>
    <subtype>_ann._sub._http._tcp</subtype>
    <port>12345</port>
  </service>
</service-group>
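To use a service file like this, drop it into avahi's static service directory and let the daemon pick it up (paths and file names below are the usual defaults; adjust for your distro):
sudo cp myservice.service /etc/avahi/services/
sudo avahi-daemon --reload    # avahi also watches the directory, so this is often not needed
avahi-browse -rt _ann._sub._http._tcp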

Related

Docker failed to load listeners, cannot assign requested address

I'm using this guide to try and run up Docker using WSL2. I've got everything starting, however there is an issue when I actually try to run up Docker. Once I run the command sudo dockerd -H `ifconfig eth0 | grep -E "([0-9]{1,3}.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d:` I get:
WARN[2022-02-01T11:07:40.033323500-06:00] Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network. host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:40.033991800-06:00] Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there! host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.036303800-06:00] Binding to an IP address without --tlsverify is deprecated. Startup is intentionally being slowed down to show this message host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.043536700-06:00] Please consider generating tls certificates with client validation to prevent exposing unauthenticated root access to your network host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.044564400-06:00] You can override this by explicitly specifying '--tls=false' or '--tlsverify=false' host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.045654100-06:00] Support for listening on TCP without authentication or explicit intent to run without authentication will be removed in the next release host="tcp://169.254.77.26:2375"
failed to load listeners: listen tcp 169.254.77.26:2375: bind: cannot assign requested address
I'm not too familiar with Docker, so I'm not sure what I can adjust to make it launch properly. Any suggestions are appreciated, thanks!
I'm doing exactly the same.
What worked for me was this comment: https://dev.to/nelsonpena/comment/1jmkb (but it was not very explicit).
I opened windows PowerShell and used the command
wsl --set-version Ubuntu 2
If you have another Linux distro, it would be
wsl --set-version <distroname> 2
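Before retrying dockerd, you can confirm the conversion took effect; wsl -l -v lists each distro together with the WSL version it runs on:
wsl -l -v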
I closed WSL, opened it again, and executed the command
echo `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2;exit }' | cut -f2 -d:`
and got API listen on [the IP]

Expose port using DockerOperator

I am using DockerOperator to run a container, but I do not see any related option to publish the required port. I need to publish a webserver port when the task is triggered. Any help or guide will be helpful. Thank you!
First, don't forget docker_operator is deprecated, replaced (now) with providers.docker.operators.docker.
Second, I don't know of a command to expose a port in a live (running) Docker container.
As described in this article from Sidhartha Mani:
Specifically, I needed access to the filled mysql database.
I could think of a few ways to do this:
Stop the container and start a new one with the added port exposure: docker run -p 3306:3306 -p 8080:8080 -d java/server.
Start another container that links to this one and knows how to port forward.
Set up iptables rules to forward a host port into the container.
So:
Following existing rules, I created my own rule to forward to the container
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 3306 -j DNAT \
--to-destination 172.17.0.2:3306
This just says that whenever a packet is destined to port 3306 on the host, forward it to the container with ip 172.17.0.2, and its port 3306.
Once I did this, I could connect to the container using host port 3306.
I wanted to make it easier for others to expose ports on live containers.
So, I created a small repository and a corresponding docker image (named wlan0/redirect).
The same effect as exposing host port 3306 to container 172.17.0.2:3306 can be achieved using this command.
This command saves the trouble of learning how to use iptables.
docker run --privileged -v /proc:/host/proc \
-e HOST_PORT=3306 -e DEST_IP=172.17.0.2 -e DEST_PORT=3306 \
wlan0/redirect:latest
In other words, this kind of solution would not be implemented from a command run in the container, through an Airflow Operator.
As per my understanding, DockerOperator will create a new container, so why is there no way of exposing ports while creating the new container?
First, the EXPOSE part is, as I mentioned here, just metadata added to the image. It is not mandatory.
The runtime (docker run) -p option is about publishing, not exposing: publishing a port and mapping it to a host port (see above) or another container port.
That might not be needed in an Airflow environment, where there is a default network, and even the possibility to set up a custom network or subnetwork.
Which means other (Airflow) containers attached to the same network should be able to access the ports of any container in said network, without needing any -p (publication) or EXPOSE directive.
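As a minimal sketch of that idea outside Airflow (network, container and image names here are made up):
docker network create airflow-net
docker run -d --name webserver --network airflow-net my-web-image
# any container attached to airflow-net can reach it as webserver:<app port> directly,
# with no -p publishing and no EXPOSE needed
docker run --rm --network airflow-net curlimages/curl -s http://webserver:8080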
In order to accomplish this, you will need to subclass the DockerOperator and override the initializer and _run_image_with_mounts method, which uses the API client to create a container with the specified host configuration.
from typing import List, Optional, Union

from airflow.exceptions import AirflowException
# `stringify` is the bytes-to-str helper used by DockerOperator; the exact import
# path may vary between versions of the docker provider.
from airflow.providers.docker.operators.docker import DockerOperator, stringify


class DockerOperatorWithExposedPorts(DockerOperator):
    def __init__(self, *args, **kwargs):
        # Accept a `port_bindings` dict mapping container ports to host ports.
        self.port_bindings = kwargs.pop("port_bindings", {})
        if self.port_bindings and kwargs.get("network_mode") == "host":
            self.log.warning("`port_bindings` is not supported in `host` network mode.")
            self.port_bindings = {}
        super().__init__(*args, **kwargs)

    def _run_image_with_mounts(
        self, target_mounts, add_tmp_variable: bool
    ) -> Optional[Union[List[str], str]]:
        """
        NOTE: This method was copied entirely from the base class `DockerOperator`,
        for the capability of performing port publishing.
        """
        if add_tmp_variable:
            self.environment['AIRFLOW_TMP_DIR'] = self.tmp_dir
        else:
            self.environment.pop('AIRFLOW_TMP_DIR', None)
        if not self.cli:
            raise Exception("The 'cli' should be initialized before!")
        self.container = self.cli.create_container(
            command=self.format_command(self.command),
            name=self.container_name,
            environment={**self.environment, **self._private_environment},
            ports=list(self.port_bindings.keys()) if self.port_bindings else None,
            host_config=self.cli.create_host_config(
                auto_remove=False,
                mounts=target_mounts,
                network_mode=self.network_mode,
                shm_size=self.shm_size,
                dns=self.dns,
                dns_search=self.dns_search,
                cpu_shares=int(round(self.cpus * 1024)),
                port_bindings=self.port_bindings if self.port_bindings else None,
                mem_limit=self.mem_limit,
                cap_add=self.cap_add,
                extra_hosts=self.extra_hosts,
                privileged=self.privileged,
                device_requests=self.device_requests,
            ),
            image=self.image,
            user=self.user,
            entrypoint=self.format_command(self.entrypoint),
            working_dir=self.working_dir,
            tty=self.tty,
        )
        logstream = self.cli.attach(
            container=self.container['Id'], stdout=True, stderr=True, stream=True
        )
        try:
            self.cli.start(self.container['Id'])

            log_lines = []
            for log_chunk in logstream:
                log_chunk = stringify(log_chunk).strip()
                log_lines.append(log_chunk)
                self.log.info("%s", log_chunk)

            result = self.cli.wait(self.container['Id'])
            if result['StatusCode'] != 0:
                joined_log_lines = "\n".join(log_lines)
                raise AirflowException(f'Docker container failed: {repr(result)} lines {joined_log_lines}')

            if self.retrieve_output:
                return self._attempt_to_retrieve_result()
            elif self.do_xcom_push:
                if len(log_lines) == 0:
                    return None
                try:
                    if self.xcom_all:
                        return log_lines
                    else:
                        return log_lines[-1]
                except StopIteration:
                    # handle the case when there is not a single line to iterate on
                    return None
            return None
        finally:
            if self.auto_remove == "success":
                self.cli.remove_container(self.container['Id'])
            elif self.auto_remove == "force":
                self.cli.remove_container(self.container['Id'], force=True)
Explanation: The create_host_config method of the APIClient has an optional port_bindings keyword argument, and the create_container method has an optional ports argument. These aren't exposed by DockerOperator, so you have to override _run_image_with_mounts with a copy that passes those arguments, using the port_bindings field set in the initializer. You can then supply the ports to publish as a keyword argument. Note that in this implementation the argument is expected to be a dictionary:
t1 = DockerOperatorWithExposedPorts(image=..., task_id=..., port_bindings={5000: 5000, 8080:8080, ...})
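If the task's container stays up long enough, you can verify from the host that the bindings were applied (a generic docker check, not specific to Airflow); the PORTS column should show something like 0.0.0.0:5000->5000/tcp:
docker ps --format '{{.Names}}\t{{.Ports}}'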

How to know if load balancing works in Docker Swarm?

I created a service called accountservice and then replicated it 3 times. In my service I get the IP address of the producing service instance and populate it in the JSON response. The question is: every time I run curl $manager-ip:6767/accounts/10000, the returned IP is the same as before (I tried 100 times).
manager-ip environment variable:
set -x manager-ip (docker-machine ip swarm-manager-1)
Here's my Dockerfile:
FROM iron/base
EXPOSE 6767
ADD accountservice-linux-amd64 /
ADD healthchecker-linux-amd64 /
HEALTHCHECK --interval=3s --timeout=3s CMD ["./healthchecker-linux-amd64", "-port=6767"] || exit 1
ENTRYPOINT ["./accountservice-linux-amd64"]
And here's my automation script to build and run service:
#!/usr/bin/env fish
set -x GOOS linux
set -x CGO_ENABLED 0
set -x GOBIN ""
eval (docker-machine env swarm-manager-1)
go get
go build -o accountservice-linux-amd64 .
pushd ./healthchecker
go get
go build -o ../healthchecker-linux-amd64 .
popd
docker build -t azbshiri/accountservice .
docker service rm accountservice
docker service create \
--name accountservice \
--network my_network \
--replicas=1 \
-p 6767:6767 \
-p 6767:6767/udp \
azbshiri/accountservice
And here's the function I call to get the IP:
package common

import "net"

func GetIP() string {
    addrs, err := net.InterfaceAddrs()
    if err != nil {
        return "error"
    }
    for _, addr := range addrs {
        ipnet, ok := addr.(*net.IPNet)
        if ok && !ipnet.IP.IsLoopback() {
            if ipnet.IP.To4() != nil {
                return ipnet.IP.String()
            }
        }
    }
    panic("Unable to determine local IP address (non loopback). Exiting.")
}
And I scale the service using the command below:
docker service scale accountservice=3
A few things:
Your results are normal. By default, a Swarm service has a VIP (virtual IP) in front of the service tasks to act as a load balancer. Trying to reach that service from inside the virtual network will only show that IP.
If you want to use a round-robin approach and skip the VIP, you could create a service with --endpoint-mode=dnsrr that would then return a different service task for each DNS request (but your client might be caching DNS names, causing that to show the same IP, which is why VIP is usually better).
If you wanted to get a list of IPs for the task replicas, do a dig tasks.<servicename> inside the service's network (see the sketch after this list).
If you wanted to test something easy, have your service create a random string, or use the hostname, on startup and return that so you can tell the different replicas apart when accessing. An easy example is to run one service using the image elasticsearch:2, which will return JSON on port 9200 with a different random name per container.
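As a sketch of the dnsrr and dig points above (service, network and image names are taken from the question; the overlay network must be attachable for a standalone container to join it):
docker service create --name accountservice --network my_network --endpoint-mode dnsrr azbshiri/accountservice
docker run --rm --network my_network alpine sh -c "apk add --no-cache bind-tools && dig +short tasks.accountservice"
The second command should print one IP per running task, which is what the VIP normally hides from you.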

How can I specify canonical server name in composer connection profile?

We need to run the "composer" command outside of the docker containers' network.
When I specify the orderer and peer host names (e.g. peer0.org1.example.com) in the /etc/hosts file, the "composer" command seems to work.
However, if I specify the server's IP address, it does not work. Here is a sample.
$ composer network list -p hlfv1 -n info-share-bc -i PeerAdmin -s secret
✖ List business network info-share-bc
Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
Command succeeded
This is a command example when I specify host name in /etc/hosts.
$ composer network list -p hlfv1 -n info-share-bc -i PeerAdmin -s secret
✔ List business network info-share-bc
name: info-share-bc
models:
- org.hyperledger.composer.system
- bc.share.info
<snip>
I believe that when the server name cannot be resolved, we should specify the option called "ssl-target-name-override" in the Hyperledger Fabric Node.js SDK, as described here:
https://jimthematrix.github.io/Remote.html
- ssl-target-name-override {string} Used in test environment only,
when the server certificate's hostname (in the 'CN' field) does not
match the actual host endpoint that the server process runs at,
the application can work around the client TLS verify failure by
setting this property to the value of the server certificate's hostname
Is there any option to specify host name in connection profile (connection.json) ?
Found a workaround: the hostnameOverride option in the connection profile resolved the connection issue.
"eventURL": "grpcs://<target-host>:17053",
"hostnameOverride": "peer0.org1.example.com",

Distributed RabbitMQ Nodes don't recognize each other

I'm working on a RabbitMQ distributed POC and I'm stuck at the basics of clustering the nodes.
I'm trying to follow the rabbit's tutorial on clustering so this is my reference.
After installing erlang (R14B04) and rabbit (2.8.2-1) I've copied the .erlang.cookie file contents from one node to the other two.
I wasn't sure about how to get erlang to notice this change, so I had to restart the machines themselves (pretty brute force, but I don't know erlang at all).
In addition, I opened port 4369 and 5 additional ports for communication in iptables, and placed the following config under
/usr/lib64/erlang/bin/sys.config:
[{kernel, [{inet_dist_listen_min, XX00}, {inet_dist_listen_max, XX05}]}].
Then another restart (dumb I know) to verify erlang takes these into consideration but still when I run:
rabbitmqctl cluster rabbit@HostName1
I get:
Clustering node rabbit@HostName2 with [rabbit@HostName1] ...
Error: {no_running_cluster_nodes,[rabbit@HostName1],
[rabbit@HostName1]}
There is a chance my fiddling with the erlang.cookie or with the ports did not succeed but I don't know how to check them. I tried typing erl in the cmd and then erl_epmd:names() or other commands to get more information but I'm probably way off in erlang land.
Would truly appreciate any help
Update:
I tried pinging two erlang nodes manually and got pang back.
I did the following:
Connected to two nodes, stopped rabbitmq (wasn't sure if needed, but just to be sure), and started erlang like so (erl -sname dilbert and erl -sname dilbert2). When the erlang command line started, I ran node(). on each of them and got dilbert@HostName1 and dilbert2@HostName2 respectively. I then tried to run net_adm:ping('dilbert'). and net_adm:ping('dilbert@HostName1'). with the single quotes and without them, from both nodes (changed names of course), and got pang in all 8 cases.
When I ran nodes(). on one of the machines I got back an empty array.
I've also tried to allow all traffic in the firewall (script) and then try to run the above commands (don't worry they're back on now) and still got back pang.
Update2:
For some reason I had a cookie mismatch which I needed to resolve (thanks @kjw0188 for the suggestion [I ran erlang:get_cookie(). in the erlang command line]).
This did not help and I needed to stop iptables completely (not sure why, but I'll figure it out soon) and load the erlang node with -name dilbert@my-ip, because my Rackspace servers have no DNS name. This finally enabled me to get a pong and see the nodes see each other (nodes(). returns a non-empty array after the ping).
The problem I'm facing now is how to instruct RabbitMQ to use -name instead of -sname when starting erlang.
So I had multiple issues with connecting my two RabbitMQ nodes:
I'll add that my nodes are hosted on Rackspace, and so don't have a default exposable hostname, and require iptables since there is no DMZ or built-in security group concept like Amazon's.
Problems:
1. Cookie - Not sure how or why, but I had multiple instances of .erlang.cookie (in /root, in my home directory and in /var/lib/rabbitmq/). I kept only the one in /var/lib/rabbitmq/ and verified all nodes have the same cookie.
2. IPTables - In order for the nodes to communicate, I needed to open the epmd port and the range of ports for the actual communication (inet_dist_listen_min to inet_dist_listen_max):
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${epmd} -s ${otherNode} -j ACCEPT
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${inet_dist_listen_min}:${inet_dist_listen_max} -s ${otherNode} -j ACCEPT
epmd is the usual port 4369; for the other range, use whatever range you want.
${otherNode} is the IP of my other node.
I also needed to configure erlang through rabbitmq to use these ports (see config file at end)
3. HostName - Seeing as I don't have a hostname, I needed to edit the rabbit scripts to use -name and not -sname (the first tells erlang to take the whole name, the latter stands for short name and thus appends an @ symbol and the short hostname).
This was accomplished by editing:
/usr/lib/rabbitmq/bin/rabbitmqctl
Added at the beginning the definition of the RABBITMQ_NODE_IP_ADDRESS property
DEFAULT_NODE_IP_ADDRESS=auto
DEFAULT_NODE_PORT=5672
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && RABBITMQ_NODE_IP_ADDRESS=${NODE_IP_ADDRESS}
[ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${NODE_PORT}
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" != "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_IP_ADDRESS=${DEFAULT_NODE_IP_ADDRESS}
[ "x" != "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${DEFAULT_NODE_PORT}
and in the actual erl command I changed
-sname ${RABBITMQ_NODENAME} \ to
-name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS} \
This made rabbitmq listen only on the specified IP address (specified in the config file at the end) and load with that IP instead of the usual hostname.
Edited /usr/lib/rabbitmq/bin/rabbitmq-server
Changed the actual erl command from -sname ${RABBITMQ_NODENAME} \ to -name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS} \
Added a rabbit conf (/etc/rabbitmq/rabbitmq-env.conf) file with:
#the ip address which rabbit should use, this is to limit rabbit to only use internal rackspace communication and not publicly accessible ports
NODE_IP_ADDRESS=myIpAdress
#had to change the nodename because otherwise rabbitmq used rabbit@Hostname and not only rabbit
NODENAME=myCompany
#This instructed rabbit to instruct erlang which ports it should use for its communications with other nodes
export SERVER_ERL_ARGS="$SERVER_ERL_ARGS -kernel inet_dist_listen_min somePort -kernel inet_dist_listen_max someOtherBiggerPort"
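To double-check which distribution ports Erlang actually ended up listening on (generic tools, not part of the original setup):
epmd -names                      # node names and ports registered with epmd
netstat -tlnp | grep beam        # the listener created from inet_dist_listen_min/max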
Some resources which helped me along the way:
RabbitMQ Clustering Guide
Clustering RabbitMQ servers for High Availability
rabbitmq-env.conf(5) manual page
Node communication by public IP address erlang mailing list (The middle post)
Configuring RabbitMQ Cluster on Cloud
Hope this will help anyone else.
EDIT:
Not sure how I was mistaken but it seemed my erlang-rabbit port instructions were not taken into consideration or were not enough. Ended up having to allow all communications between the two nodes...
One thing to really watch out for is whitespace of any kind in the erlang cookie file, especially line breaks AFTER the contents of the cookie. As long as both are identical, things are okay, but when one has a line break and the other doesn't, things won't work.
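A quick way to spot such invisible differences is to dump the file byte by byte and compare a hash across nodes (standard tools, nothing RabbitMQ-specific):
od -c /var/lib/rabbitmq/.erlang.cookie     # reveals trailing \n or spaces
md5sum /var/lib/rabbitmq/.erlang.cookie    # must match on every node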
Background: I was facing the same issue while setting up a RabbitMQ cluster. I was using 2 docker containers running on my host machine, which is equivalent to 2 separate nodes, and I could not create a cluster of the two.
Solution: 1. Make sure you have the same erlang cookie on all your cluster nodes; the default location is /var/lib/rabbitmq/.erlang.cookie. This file is used for authentication, so make sure it is the same on all the nodes. After changing the .erlang.cookie, restart your rabbitmq service.
2. Make sure that the nodes are accessible from one another; use ping or telnet to check the connection.
3. Check that /etc/hosts has the correct entries; for example, if rabbit2 wants to join the cluster on rabbit1, /etc/hosts of rabbit2 should contain:
172.68.1.6 rabbit1
172.68.1.7 rabbit2
4. Now stop the service using rabbitmqctl stop_app, followed by rabbitmqctl join_cluster rabbit@rabbit1, then start your service with rabbitmqctl start_app and check rabbitmqctl cluster_status to see whether you have joined the cluster or not.
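Run on the joining node (rabbit2 in this example), that sequence is:
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app
rabbitmqctl cluster_status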
I followed the rabbitmq official documentation to setup the cluster.
To change RabbitMQ sname/name behaviour, you can edit the scripts:
rabbitmq-multi
rabbitmq-server
rabbitmqctl
Example
In script rabbitmqctl there is the following piece of code:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-sname rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$#"
You have to change it in:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-name rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$#"
http://pearlin.info/?p=1672
You need to copy the cookie from the node you are trying to connect to.
Example: rabbit@node1 and rabbit@node2.
Go to rabbit@node1 and copy the cookie from /var/lib/rabbitmq/.erlang.cookie (cat /var/lib/rabbitmq/.erlang.cookie).
Go to rabbit@node2, remove the current cookie and paste the new one.
Then, on the same node, run:
/usr/sbin/rabbitmqctl stop_app
/usr/sbin/rabbitmqctl reset
/usr/sbin/rabbitmqctl cluster rabbit@node1
should do it.
The same is documented here: http://pearlin.info/?p=1672
