Given a known TCP port and name for a remote beam.smp service, as well as a known cookie, is it possible to short-circuit the Erlang Port Mapper Daemon phase of the Erlang distribution protocol handshake and establish an Erlang shell directly to the target beam.smp service?
The protocol is documented here:
http://erlang.org/doc/apps/erts/erl_dist_protocol.html
And here:
https://github.com/blackberry/Erlang-OTP/blob/master/lib/kernel/internal_doc/distribution_handshake.txt
But it is not clear to me whether the recv_challenge/send_challenge authentication happens with the Erlang Port Mapper Daemon or with the beam.smp service bound to a specific port.
Thank you for your time.
Authentication occurs between Erlang VMs (beam or beam.smp); epmd only handles port registration. That said, short-circuiting epmd is not entirely straightforward, and other approaches might better fit your actual need.
Unfortunately, bypassing epmd is not an option with the default distribution protocol (inet_tcp_dist) or its SSL counterpart. There are two undocumented options that look as if they let you disable epmd (-no_epmd) or provide an alternative implementation (epmd_module), but the dependency of these distribution protocols on epmd is hard-coded and does not honor those options.
So you could:
override the erl_epmd module at the code server level (probably the dirtiest approach);
provide an alternative distribution protocol which would copy (or call) inet_tcp_dist except for the parts where erl_epmd is called. Essentially, you need to provide your own implementation of setup/5.
If you don't want the shell node to contact epmd to register its name, you will also need to override listen/1. In this case, you can pass -no_epmd on the command line.
Alternatively, you can connect to epmd to register the listening node in order to create a shell connection using the default protocol.
This approach is particularly useful if epmd lost track of a node (e.g. it was killed; unfortunately, epmd is a single point of failure). To do so:
Create a TCP connection to epmd and send a packet to register the lost node with its known port and name (a sketch of such a registration packet follows these steps). Keep the TCP connection open or epmd will unregister the node.
Connect a new shell to the lost node using the name used in previous step.
You can then close the connection established in (1) and eventually re-register the lost node with epmd by calling erl_epmd:register_node/2 (and sending a well-crafted tcp_closed message if required).
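For step (1), the packet in question is the ALIVE2_REQ message from the EPMD protocol section of the distribution documentation. Below is a minimal Python sketch of such a registration (Python rather than Erlang, purely for illustration); the node name, node port and epmd host are placeholders you would replace with the known values of the lost node.

    # Sketch: re-register a "lost" node with epmd over a raw TCP socket.
    # Assumes epmd listens on the default port 4369; name/port values are placeholders.
    import socket
    import struct

    EPMD_PORT = 4369
    ALIVE2_REQ = 120                 # "Register a Node in EPMD" in erl_dist_protocol.html
    ALIVE2_RESPONSES = (121, 118)    # ALIVE2_RESP, ALIVE2_X_RESP (OTP 23+)

    def register_node(name: bytes, node_port: int, epmd_host: str = "127.0.0.1") -> socket.socket:
        """Send ALIVE2_REQ for `name` listening on `node_port`. The returned socket
        must stay open: epmd unregisters the node as soon as this connection closes."""
        req = struct.pack(">BHBBHH", ALIVE2_REQ, node_port,
                          77,        # node type: 77 = normal (visible) node
                          0,         # protocol: 0 = TCP/IPv4
                          6, 5)      # highest/lowest distribution protocol version
        req += struct.pack(">H", len(name)) + name      # the alive name, without the @host part
        req += struct.pack(">H", 0)                     # empty "extra" field
        sock = socket.create_connection((epmd_host, EPMD_PORT))
        sock.sendall(struct.pack(">H", len(req)) + req)  # every epmd request is length-prefixed
        resp = sock.recv(16)
        if len(resp) < 2 or resp[0] not in ALIVE2_RESPONSES or resp[1] != 0:
            sock.close()
            raise RuntimeError("epmd refused the registration: %r" % (resp,))
        return sock

    # Keep a reference to the socket for as long as the registration must survive, e.g.:
    # conn = register_node(b"lostnode", 45123)

Once the shell is connected to the lost node (step 2), this socket can be closed as described in (3).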
I have a server running Ubuntu 20.04 with nginx, mosquitto, Node-RED and Docker; let's call the website http://mywebsite.com. The problem I am facing is that I have created a client in Docker, let's call it client1, so the URL is http://mywebsite.com/client1,
and I want to establish an MQTT connection via mosquitto, sending the data on the topic test.
The problem is that in the Node-RED MQTT node, when I enter the IP address of my mosquitto container, it works,
but if I replace the IP address 192.144.0.5 with mywebsite.com/client1, I can't connect to mosquitto and I can't send or receive any data.
Any idea how to solve this problem?
OK, you are going to have several problems here.
You cannot do path-based proxying with MQTT. If you want multiple MQTT brokers (one per client) bound to a single public-facing domain/IP address, then they are all going to have to run on separate ports (other than the default 1883).
Nginx can do MQTT protocol proxying (e.g. like this), so you can use it to expose the different ports and forward them to the separate mosquitto instances. But even if each client had a different hostname (all pointing at the same IP address), nginx has no way of knowing which hostname was used, because plain MQTT has no equivalent of the HTTP Host header to route on. If you use MQTT with TLS, you may be able to get it to work with SNI (possible docs for SNI-based routing here; it does work, with an explanation of how to do it here), though I've rarely seen anybody do that.
If you use MQTT over Websockets, then you should be able to use hostname-based routing.
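To make the websocket route concrete, here is a hedged Python sketch using the Paho client. It assumes mosquitto has a websockets listener and that nginx proxies a path such as /client1/mqtt to it; the hostname, path and port are assumptions, not values taken from the question.

    # Sketch: MQTT over websockets through an HTTP-aware reverse proxy.
    # Assumes a mosquitto "listener ... protocol websockets" block behind nginx,
    # reachable at wss://mywebsite.com/client1/mqtt (placeholder names).
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        print("connected, rc =", rc)
        client.subscribe("test")                   # the topic used in the question

    client = mqtt.Client(transport="websockets")   # paho-mqtt 1.x call; 2.x also wants a CallbackAPIVersion
    client.on_connect = on_connect
    client.ws_set_options(path="/client1/mqtt")    # the path nginx routes to mosquitto's websockets listener
    client.tls_set()                               # only if nginx terminates TLS (wss on 443)
    client.connect("mywebsite.com", 443, keepalive=60)
    client.loop_forever()

Because the connection starts as an HTTP upgrade, nginx can route it by hostname or path just like any other HTTP request.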
Path-based proxying for Node-RED currently doesn't work properly if you enable admin authentication, because the admin auth tokens are currently stored in browser local storage and are scoped to the hostname only, not the hostname + path. This means a client will only ever be able to log in to one instance at a time.
You can work around this by using host-based proxying, e.g. http://client1.mywebsite.com.
A fix for this is on the Node-RED backlog, probably (no promises) to be looked at after version 1.2.0 ships.
I have created a mobile application which uses secure MQTT (port 8883) for communication; however, it looks like port 8883 is blocked by many ISPs and networks.
I have read some blogs which recommend using 443 in such cases, but I am not sure whether that would really solve the issue. What are the disadvantages of changing the default secure MQTT port (8883) to 443? Can someone share their experience of using port 443 for MQTT?
Note: I am using EMQ MQTT (emqtt) broker with Paho MQTT client.
The list of recognised ports is there to help ensure that you can run multiple services in their default configuration on a machine without them clashing; as a rule, the port number does not actually affect how a service runs.
With some very widely used protocols (e.g. HTTP and HTTPS), however, network administrators may make assumptions about what traffic to expect on the default ports.
Just moving native MQTT (with TLS) from 8883 to 443 to get around port blocking by networks probably won't actually solve the problem, because the types of network that deploy this kind of firewall rule also tend to run transparent proxies, which expect to see HTTP(S)-shaped traffic on port 443.
If you want a solution that will work even in the worst of cases, then running MQTT over Secure Websockets (which is bootstrapped from HTTPS) is probably your best bet. Most of the Paho client library implementations (you don't say which one you are using, so I can't say for sure) support both native MQTT and MQTT over Websockets these days and can be given a list of broker URIs, so once the broker is set up to support both you can try to connect via native MQTT first and fall back to MQTT over Websockets if the connection fails.
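As a rough illustration of that fallback (the Python Paho client does not accept a list of broker URIs the way some other Paho implementations do, so it is done by hand here; the hostname and websocket path are placeholders):

    # Sketch: try native MQTT over TLS on 8883 first, then fall back to MQTT over
    # secure websockets on 443 if the port is blocked. Hostname/path are assumptions.
    import paho.mqtt.client as mqtt

    def connect_with_fallback(host="broker.example.com"):
        try:
            c = mqtt.Client()                      # native MQTT + TLS on the registered port
            c.tls_set()
            c.connect(host, 8883, keepalive=60)
            return c
        except OSError as err:                     # a filtered/blocked port usually surfaces here
            print("8883 failed, falling back to websockets:", err)
        c = mqtt.Client(transport="websockets")    # MQTT over secure websockets, bootstrapped from HTTPS
        c.tls_set()
        c.ws_set_options(path="/mqtt")             # whatever path the broker (e.g. EMQ) exposes
        c.connect(host, 443, keepalive=60)
        return c

    client = connect_with_fallback()
    client.loop_start()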
I have 6 containers running in Docker swarm: Kafka+Zookeeper, MongoDB, A, B, C and Interface. Interface is the main access point from the public - only this container publishes its port, 5683. The Interface container connects to A, B and C during startup. I am using a docker-compose file + docker stack deploy, and each service has a name which the Interface uses as its host. Everything starts successfully and works fine. After some time (20 mins, 1 h, ...) I am no longer able to make requests to the Interface. The Interface receives my requests, but the application has lost its connection to service A, B, C or all of them. If I restart the Interface, it is able to reconnect to services A, B and C.
I first thought it was a problem with the application, so I exposed 2 new ports on each service (Interface, A, B, C) and connected a profiler and debugger to them. The applications are running properly: no leaks, no blocked threads, working normally and waiting for connections. The debugger showed me that when I make a request to the Interface and the Interface tries to call service A, a 'Connection reset by peer' exception is thrown.
During this debugging I found something interesting. I had attached the debugger to the Interface when the services started, and that debugger connection was also dropped after some time; I was not able to reconnect it until I made a request to the container -> application (the reconnect failed with a handshake error).
Another interesting thing I found was that at that point I was not able to reach the Interface either. So I used Wireshark to see what was going on: SYN - ACK was fine, then the application POSTs some data and the Interface responds with FIN, ACK. I assume the same thing happens when the Interface tries to call service A and the connection is FINed. The codebase of Interface, A, B and C is the same as far as the netty server is concerned.
Finally, I don't think it's an application issue. Why? I tried to deploy the containers not as services: I ran each container separately, published the ports of each, and set the service endpoints to localhost (no overlay network). And it works - the containers run without problems. I also didn't mention at the beginning that the Java applications (Interface, A, B, C) run without problems when they run as standalone applications - not in Docker.
Could you please help me figure out what the issue could be? Why does Docker close the sockets when an overlay network is used?
I am using the newest Docker; I have also tried older versions.
Finally, I was able to solve the problem.
What was happening, one more time: the Interface opens a permanent TCP connection to A, B and C. When you run these services A, B, C as standalone Java applications, everything works. When we dockerize them and run them in swarm, it works for only a few minutes. The strange thing was that the connection between the Interface and another service was interrupted at the moment a request was made from the client to the Interface.
After many unsuccessful tests and debugging of each container, I tried to run each Docker container separately, with mapped ports, and specified localhost as the endpoint (each container exposed its ports and the Interface connected to localhost). A funny thing happened: it worked. When you run containers like this, a different network driver is used - the bridge driver. If you run them in swarm, the overlay network driver is used.
So it had to be something in the Docker network, not in the application itself. The next step was a tcpdump from each container after a couple of minutes, when it should have stopped working. It was very interesting:
Client -> Interface (OK, request accepted)
Interface -> A (the request is forwarded because it belongs to A)
Interface -> A [POST]
A -> Interface [RESET]
A was resetting the open TCP connection after a couple of minutes without traffic. Why?
Docker uses IP Virtual Server and IPVS maintains its own connection table. The default timeout for CLOSE_WAIT connections in IPVS table is 60 seconds. Hence when the server sends something after 60 seconds, the IPVS connection is no longer available and the packet looks invalid for a new TCP session and gets RST. On the client side, the connection remains forever in FIN_WAIT2 state because the app still has the socket open; kernel's fin_wait timer kicks in only for orphaned TCP sockets.
This is what I read about it and how I understand it. I am not sure my explanation of the problem is correct, but based on these assumptions I implemented a ping-pong between the Interface and the A, B, C services so that no connection stays idle for anywhere near 60 seconds. And it's working.
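The same idea can also be expressed at the socket level: as long as something crosses the connection more often than the IPVS idle window, the entry stays fresh. A minimal Python sketch (Linux-specific socket options; the real services here are Java/netty, so this only illustrates the principle behind the ping-pong):

    # Sketch: keep an otherwise idle connection alive by probing well inside the
    # ~60 s IPVS window. Linux-only TCP_KEEP* options; service name/port are placeholders.
    import socket

    def open_keepalive_connection(host: str, port: int) -> socket.socket:
        s = socket.create_connection((host, port))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)    # first probe after 30 s idle
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)   # then every 10 s
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # give up after 3 unanswered probes
        return s

    # e.g. conn = open_keepalive_connection("service-a", 8080)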
Got the same issue. Specifying
endpoint_mode: dnsrr
in the properties of the service which plays the "server" role works just fine (see the compose fragment below).
https://forums.docker.com/t/tcp-timeout-that-occurs-only-in-docker-swarm-not-simple-docker-run/58179
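For reference, a minimal compose fragment (service and image names are placeholders) showing where the property goes; with dnsrr the service name resolves directly to the container IPs instead of an IPVS-managed virtual IP:

    # Sketch: bypass the IPVS virtual IP for the "server" service by using
    # DNS round-robin endpoints. Service/image names are placeholders.
    services:
      service-a:
        image: my-org/service-a:latest
        deploy:
          endpoint_mode: dnsrr   # clients get container IPs from DNS, no IPVS connection table involved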
Can someone explain why using a port is necessary when running things locally?
I assume the reason is because the same software could be run remotely and in that case specifying a port would be necessary.
When a database or server is running locally, do requests from a locally running web browser really "go through the port" specified?
Good question. In fact, there are local-only communication protocols, such as pipes and UNIX domain sockets that do not actually require port numbers to operate. This is because they refer to files or other identifiers that are only valid on the computer itself.
However, most servers are designed for TCP/IP connections, and TCP/IP itself specifies a port number in the protocol. It is normally intended for remote use, but when a server built for TCP/IP runs on localhost, it must still supply a port number to satisfy the TCP protocol.
Port numbers also enable multiple servers to coexist on a single computer, all running on different ports. For a protocol without port numbers, the same thing is achieved by giving each server a different identifier (e.g. a filesystem path).
Some servers can operate on both TCP/IP and local sockets. For example, MySQL can accept connections both through the usual TCP port, and also through a local socket (mysql.sock). Connecting through the local socket is reserved for local users only, and may be faster on some systems.
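A small Python sketch of the same contrast: the TCP listener must name a port even though everything stays on the local machine, while the UNIX-socket listener is identified by a filesystem path instead (the port and path below are arbitrary).

    # Sketch: two local-only listeners - one addressed by (host, port), one by a path.
    import os
    import socket

    # TCP: even on localhost, the listener is identified by a port number.
    tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp_srv.bind(("127.0.0.1", 8080))        # another server on the same host needs a different port
    tcp_srv.listen()

    # UNIX domain socket (Unix-like systems only): identified by a path, no port involved.
    sock_path = "/tmp/example.sock"
    if os.path.exists(sock_path):
        os.unlink(sock_path)                 # a stale socket file would make bind() fail
    unix_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    unix_srv.bind(sock_path)                 # comparable to MySQL's mysql.sock
    unix_srv.listen()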
Sometimes you may have other software installed on your computer that uses the same port. For instance Apache and IIS: imagine you set IIS to use port 8080 as its default - what happens if you had previously installed Apache on port 8080?
Another example: if you install MySQL Workbench and days later install XAMPP, you may have trouble with the ports unless you change one instance's port to something other than 3306.
This is why it is necessary to specify ports even when everything is running locally.
We use Jenkins 1.504 on Windows.
We need to have Master and Slave in different sub-networks with firewall in between.
We can't have ANY to ANY port firewall rules, we must specify exact port numbers.
I know the port Master is listening on.
I also see that the Slave opens a connection to the Master from an arbitrary port dynamically assigned on every run, and the port on the Master side is also arbitrary.
I can fix the Master's port by specifying it in Manage Jenkins > Configure Global Security > TCP port for JNLP slave agents.
How do I fix the Slave port?
UPDATE: Found Connection Mechanism described here: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI#JenkinsCLI-Connectionmechanism
I think it might work for us, but it would still be better to have a fixed-to-fixed port connection.
We had a similar situation, but in our case Infosec agreed to allow ANY -> 1 (any source port to a single fixed destination port), so we didn't have to fix the slave port; instead, fixing the master to the high JNLP port 49187 worked ("Configure Global Security" -> "TCP port for JNLP slave agents").
TCP:
49187 - fixed JNLP port
8080 - Jenkins HTTP port

Other ports needed to launch the slave as a Windows service:
TCP: 135, 139, 445
UDP: 137, 138
A slave isn't a server, it's a client-type application. Network clients (almost) never use a specific port; instead, they ask the OS for a random free port. This works much better, since you usually run clients on many machines where the current configuration isn't known in advance, and it prevents thousands of "client wouldn't start because the port is already in use" bug reports every day.
You need to tell the security department that the slave isn't a server but a client which connects to the server, and that you absolutely need a rule which says client:ANY -> server:FIXED. The client port number will be >= 1024 (ports 1 to 1023 need special permissions), but I'm not sure you actually gain anything by adding a rule for this - if an attacker can open privileged ports, they basically already own the machine.
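A quick way to see this behaviour (Python shown here, with a placeholder master host and port):

    # Sketch: a client socket gets an ephemeral local port from the OS at connect time.
    import socket

    s = socket.create_connection(("jenkins.example.com", 8080))   # placeholder master host/port
    print("local side :", s.getsockname())   # e.g. ('10.0.0.5', 52731) - a different high port every run
    print("remote side:", s.getpeername())   # always the server's fixed port
    s.close()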
If they argue, then ask them why they don't require the same rule for all the web browsers which people use in your company.
I have a similar scenario, and had no problem connecting after setting the JNLP port as you describe, and adding a single firewall rule allowing a connection on the server using that port. Granted it is a randomly selected client port going to a known server port (a host:ANY -> server:1 rule is needed).
From my reading of the source code, I don't see a way to set the local port to use when making the request from the slave. It's unfortunate, it would be a nice feature to have.
Alternatives:
Use a simple proxy on your client that listens on port N and forwards all data to the actual Jenkins server on the remote host, using a constant local port for the outgoing connection; connect your slave to this local proxy instead of the real Jenkins server (a sketch of such a forwarder follows this list).
Create a custom Jenkins slave build that allows an option to specify the local port to use.
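A minimal sketch of the first alternative, assuming the master's fixed JNLP port is 49187 and that the firewall allows one fixed client port (all hostnames and port numbers below are placeholders):

    # Sketch: local proxy that accepts the slave's connection on 127.0.0.1:10000 and
    # forwards it to the Jenkins master from a fixed local source port (40000), so the
    # firewall rule can be FIXED -> FIXED.
    import socket
    import threading

    LISTEN_ADDR = ("127.0.0.1", 10000)             # point the slave's JNLP connection here
    MASTER_ADDR = ("jenkins.example.com", 49187)   # the master's fixed JNLP port
    FIXED_SRC   = ("0.0.0.0", 40000)               # the constant local port the firewall allows

    def pump(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes one way until either side closes."""
        try:
            while (data := src.recv(4096)):
                dst.sendall(data)
        finally:
            dst.close()

    def serve() -> None:
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(LISTEN_ADDR)
        listener.listen(1)
        while True:
            client, _ = listener.accept()
            upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            upstream.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            upstream.bind(FIXED_SRC)               # the "constant local port"; note this allows
            upstream.connect(MASTER_ADDR)          # only one upstream connection at a time
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    if __name__ == "__main__":
        serve()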
Remember also that if you are using HTTPS via a self-signed certificate, you must alter the jenkins-slave.xml configuration file on the slave to specify the -noCertificateCheck option on the command line.