MINA server: respond on a port different from the one receiving incoming messages

I set up a server that receives messages over port xxx, but I want to respond to port yyy.
Is there a simple way to achieve this?
My server:
IoAcceptor acceptor = new NioSocketAcceptor();
acceptor.setHandler(new MessageHandler());
acceptor.getFilterChain().addLast("logger", new LoggingFilter());
acceptor.getFilterChain().addLast("codec", new ProtocolCodecFilter(codecFactory));
acceptor.getSessionConfig().setReadBufferSize(bufferSize);
acceptor.bind(new InetSocketAddress(port));
The encode method of my encoder:
public void encode(IoSession session, Object message, ProtocolEncoderOutput out) throws Exception {
byte[] writeBytes = (byte[]) message;
IoBuffer buffer = IoBuffer.allocate(writeBytes.length).setAutoExpand(false);
buffer.put(writeBytes);
buffer.flip();
out.write(buffer);
writeMessage(session,writeBytes);
}
The message should be written to a different port. How do I achieve this?

If you want to send the response over a different TCP port, you must first make another TCP connection, which means you effectively have two servers and two clients:
request
client1---------->server1
response
server2---------->client2
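A minimal sketch of that second, outgoing connection on the server side, assuming the peer is listening for responses on port yyy; responseHost, responsePort and responseBytes are hypothetical placeholders, and the same codecFactory and MessageHandler from the acceptor setup are reused so both directions speak the same protocol:
IoConnector connector = new NioSocketConnector();
connector.getFilterChain().addLast("logger", new LoggingFilter());
connector.getFilterChain().addLast("codec", new ProtocolCodecFilter(codecFactory));
connector.setHandler(new MessageHandler());
// connect out to the peer's response port (yyy) instead of answering on the accepted session
ConnectFuture future = connector.connect(new InetSocketAddress(responseHost, responsePort));
future.awaitUninterruptibly();
IoSession responseSession = future.getSession();
// write the response over the second connection
responseSession.write(responseBytes);
The original acceptor stays bound to port xxx for incoming messages; the response simply travels over this second session, which is the two-servers/two-clients picture above.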

Related

Finagle connect to a service and not the host

I have a server that hosts many services.
In a Scala application, I need to query one of its services: service1/api/endpoint1
The problem I'm facing is that Http.client.newService expects a host with a port, so in my case
val client: Service[Request, Response] = Http.client
.withRetryBudget(budget)
.withMaxResponseSize(StorageUnit.fromMegabytes(1900))
.withRetryBackoff(Backoff.const(10.seconds))
.newService("myHost:9999")
When posting a request
val req = RequestBuilder
.post("/service/api/endpoint1")
.body(jsonReq)
.request
val request: Future[Response] = client(req)
val response = Await.result[Response](request, Duration.fromMinutes(4))
I got the error
An existing connection was forcibly closed by the remote host at remote address myhost:9999
because myhost:9999 doesn't allow direct connections; one can only connect to one of its services, myhost:9999/service1 or so.
Is there a way to achieve this? I.e. to create the HTTP client with the whole URL, myHost:9999/service1?

How to get the remote host from an RSocket

I now receive an RSocket connection in my Spring project, and I want to get its remote address and port. How should I get it? Something similar to using socket.getRemoteSocketAddress() to get the remote address of a plain socket.
@ConnectMapping
public void connectMapping(RSocketRequester requester) {
// there is an RSocket connection here; how can I get the remote host from it?
RSocket rSocket = requester.rsocket();
// TODO
logger.info("host port");
}
Unfortunately, I think even if you grab the RSocketRequester in a @ConnectMapping or @MessageMapping method, it is an internal detail. io.rsocket.core.RSocketRequester, via RequesterResponderSupport, holds the DuplexConnection, which represents a connection over TCP, WebSocket or in-process. It is not exposed via a public API.
This is a worthy request but you will need to file a feature request to get this added unless I'm missing something obvious.
It isn't clear that there is a hook in https://docs.spring.io/spring-boot/docs/2.3.0.RELEASE/api/org/springframework/boot/rsocket/server/RSocketServerCustomizer.html that lets you see the DuplexConnection (TCP, WebSocket, etc.) as it is established.

Does `select` handle multiple endpoints or multiple sockets?

I am new to network programming, and I have some confusion about the select function.
For a server program, we first create an fd for the socket endpoint (the server's IP and port, without any client's IP and port) with socket, bind and listen; then, when there is a TCP connection to this endpoint, accept returns an fd for the connected socket (the server's IP and port plus the client's IP and port). We then call recv on this socket's fd, and if there is no data to receive, recv blocks (for a blocking socket).
I learned that select is used to handle multiple non-blocking connections. But at which level does it operate? Does it handle multiple socket endpoints, or multiple sockets of one single socket endpoint?
For a normal server program, I think there is always a single socket endpoint, with maybe hundreds of sockets connected to it, so handling multiple socket endpoints seems less useful than handling multiple sockets. But when reading about IO multiplexing, I find that many articles seem to talk about handling multiple socket endpoints. And for handling multiple sockets of a single endpoint, I can't find a way to get all the sockets and put them into select's set of fds, since accept only accepts one socket at a time.
"Does it handle multiple socket endpoints, or multiple sockets of one single socket endpoint?" -- There is no such thing as multiple sockets of a single socket endpoint. Every socket is an endpoint of a network communication. It just happens that the piece of code which deals with one socket may differ from the code dealing with another. Consider the following socket descriptors:
int sock_acpt = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
listen(sock_acpt, 5);
int sock_cli = accept(sock_acpt, ....);
Both socket descriptors sock_acpt and sock_cli are endpoints of different communications. After putting sock_acpt into passive mode by calling listen(), the socket just listens for TCP connections, and the server's TCP stack manages any data that appears on that socket (mostly TCP handshakes). sock_cli, on the other hand, is the endpoint of an already established connection, and in general the data on that socket is managed by the user-level application.
Now coming to select(): it is an IO multiplexer, not a network IO multiplexer. So any descriptor which can be viewed as an IO endpoint can be used with select(). Referring to our socket descriptors sock_acpt and sock_cli, both are IO endpoints of different communications, so you can use both of them with select(). You generally do something like below:
for ( ; ; ) {
fd_set rd_set;
FD_ZERO(&rd_set);
FD_SET(sock_acpt, &rd_set);
FD_SET(sock_cli, &rd_set);
if (select(sock_acpt > sock_cli ? sock_acpt + 1 : sock_cli + 1, \
&rd_set, NULL, NULL, NULL) <= 0) {
continue;
}
if (FD_ISSET(sock_acpt, &rd_set)) {
// Accept new connection
// accept(sock_acpt, ...);
}
if (FD_ISSET(sock_cli, &rd_set)) {
// Read from the socket
// read(sock_cli, ...);
}
}
But using select() is not limited to sockets; you can use it with file IO (fileno(stdin)), signal IO (signalfd()) and anything else that can be viewed as an IO endpoint.
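Since accept returns only one connected socket at a time, the usual approach is simply to keep every accepted descriptor around and add each of them to the fd set on every loop iteration. For comparison only, here is the same multiplexing idea sketched in Java NIO (a hypothetical server on port 8888 that just drains incoming data); it is not part of the original answer, but it shows how each accepted socket is simply registered with the same selector:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8888));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT); // the listening "endpoint" socket

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until any registered channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept(); // one new connected socket
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ); // now watched too
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) == -1) {
                        key.cancel();
                        client.close(); // peer closed the connection
                    }
                }
            }
        }
    }
}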

TCP connection issue using the BlueSocket framework on iOS

I get the error "Error code: -9989(0x-2705), Connection refused" when using the BlueSocket framework to connect between a Mac and iOS.
Here is the logic:
I am treating the Mac as the server:
// making TCP IPV4 socket
try self.listenSocket = Socket.create(family: .inet, type: .stream, proto: .tcp)
// start listening on port 8888
try socket.listen(on: self.port)
// accept a client connection when there is one
let newSocket = try socket.acceptClientConnection()
// keep opening and reading data ....
iPhone as a client:
self.socket = try Socket.create(family: .inet)
try self.socket?.connect(to: ip, port: 8888)
try self.socket?.setReadTimeout(value: readWriteTimeOut)
try self.socket?.setWriteTimeout(value: readWriteTimeOut)
self.socket?.readBufferSize = Socket.SOCKET_MAXIMUM_SSL_READ_BUFFER_SIZE
The first time the client connects to the server, it works fine.
After the server receives data, the client side automatically closes the socket.
The client then tries to connect to the server again, using the same code above, to send data back.
Then the error appears!
I think that by default, when the server side calls socket.listen, SO_REUSEADDR is set to true.
Need suggestions on how to resolve this issue. Thanks!
You just need to make sure the while loop around socket.acceptClientConnection() keeps running the whole time, so the server is always ready to accept the client's next connection.

HttpURLConnection implementation

I have read that HttpURLConnection supports persistent connections, so that a connection can be reused for multiple requests. I tried it, and the only way to send a second POST was by calling openConnection a second time. Otherwise I got an IllegalStateException("Already connected");
I used the following:
URL url = null;
try{
url = new URL("http://someconection.com");
}
catch(Exception e){}
HttpURLConnection con = (HttpURLConnection) url.openConnection();
//set output, input etc
//send POST
//Receive response
//Read whole response
//close input stream
con.disconnect();//have also tested commenting this out
con = (HttpURLConnection) url.openConnection();
//Send new POST
The second request is sent over the same TCP connection (I verified it with Wireshark), but I cannot understand why (although this is what I want), since I have called disconnect.
I checked the source code of HttpURLConnection, and the implementation does keep a keep-alive cache of connections to the same destinations. My problem is that I cannot see how the connection is placed back in the cache after I have sent the first request. The disconnect closes the connection, and without the disconnect I still cannot see how the connection is placed back in the cache. I saw that the cache has a run method to go over all idle connections (I am not sure how it is called), but I cannot find where the connection is placed back in the cache. The only place where that seems to happen is in the finished method of HttpClient, but this is not called for a POST with a response.
Can anyone help me on this?
EDIT
My interest is: what is the proper handling of an HttpURLConnection object for TCP connection reuse? Should the input/output stream be closed, followed by a url.openConnection() each time to send the new request (avoiding disconnect())? If yes, I cannot see how the connection is being reused when I call url.openConnection() for the second time, since the connection has been removed from the cache for the first request, and I cannot find how it is returned.
Is it possible that the connection is not returned to the keep-alive cache (a bug?), but the OS has not released the TCP connection yet, and on a new connection the OS returns the buffered (not yet released) connection, or something similar?
EDIT2
The only related information I found was from JDK_KeepAlive:
...when the application calls close() on the InputStream returned by URLConnection.getInputStream(), the JDK's HTTP protocol handler will try to clean up the connection and if successful, put the connection into a connection cache for reuse by future HTTP requests.
But I am not sure which handler this is; sun.net.www.protocol.http.Handler does not do any caching, as far as I saw.
Thanks!
Should input/output stream be closed followed by a url.openConnection(); each time to send the new request (avoiding disconnect())?
Yes.
If yes, I can not see how the connection is being reused when I call url.openConnection() for the second time, since the connection has been removed from the cache for the first request and can not find how it is returned back.
You are confusing the HttpURLConnection with the underlying Socket and its underlying TCP connection. They aren't the same. The HttpURLConnection instances are GC'd; the underlying Socket is pooled, unless you call disconnect().
From the javadoc for HttpURLConnection (my emphasis):
Each HttpURLConnection instance is used to make a single request but the underlying network connection to the HTTP server may be transparently shared by other instances. Calling the close() methods on the InputStream or OutputStream of an HttpURLConnection after a request may free network resources associated with this instance but has no effect on any shared persistent connection. Calling the disconnect() method may close the underlying socket if a persistent connection is otherwise idle at that time.
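As a rough snippet of the pattern this describes (the URL comes from the question; payload is a hypothetical byte[] with the POST body), fully reading and then closing the stream is what lets the JDK return the socket to its keep-alive cache, so the second openConnection() can transparently reuse it:
URL url = new URL("http://someconection.com");
byte[] payload = "some request body".getBytes(StandardCharsets.UTF_8); // hypothetical POST body
for (int i = 0; i < 2; i++) {
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    con.setRequestMethod("POST");
    con.setDoOutput(true);
    OutputStream out = con.getOutputStream();
    out.write(payload); // send the POST
    out.close();
    InputStream in = con.getInputStream();
    while (in.read() != -1) { // read the whole response...
    }
    in.close(); // ...so the socket can go back to the keep-alive cache
    // no con.disconnect() here: that may close the underlying socket
}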
I found that the connection is indeed cached when the InputStream is closed. Once the InputStream has been closed, the underlying connection is placed in the cache. The HttpURLConnection object is unusable for further requests though, since the object is still considered "connected", i.e. its boolean connected is set to true and is not cleared once the connection is placed back in the cache. So a new HttpURLConnection should be instantiated for each new POST, but the underlying TCP connection will be reused if it has not timed out.
So EJP's answer was the correct description. Maybe the behavior I saw (reuse of the TCP connection despite explicitly calling disconnect()) was due to caching done by the OS? I do not know. I hope someone who knows can explain.
Thanks.
How do you "force use of HTTP 1.0" using the HttpURLConnection of the JDK?
According to the section "Persistent Connections" of the Java 1.5 guide, support for HTTP 1.1 connections can be turned off or on using the Java property http.keepAlive (default is true). Furthermore, the Java property http.maxConnections indicates the maximum number of (concurrent) connections per destination to be kept alive at any given time.
Therefore, a "force use of HTTP 1.0" could be applied to the whole application at once by setting the Java property http.keepAlive to false.
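A minimal sketch of that whole-application switch (the values are just illustrative, and they must be set before the first connection is opened):
// disable HTTP keep-alive for the whole JVM, so each request uses its own TCP connection
System.setProperty("http.keepAlive", "false");
// cap on idle connections kept per destination (only relevant while keep-alive is enabled)
System.setProperty("http.maxConnections", "5");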
Hmmh. I may be missing something here (since this is an old question), but as far as I know, there are 2 well-known ways to force closing of the underlying TCP connection:
Force use of HTTP 1.0 (HTTP 1.1 introduced persistent connections) -- this is indicated by the HTTP request line
Send a 'Connection' header with the value 'close'; this will force closing as well (a short sketch follows below).
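A per-request sketch of the second option; the header has to be set before the connection is actually made:
HttpURLConnection con = (HttpURLConnection) url.openConnection();
// ask that this connection be closed after the response instead of being kept alive
con.setRequestProperty("Connection", "close");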
Abandoning streams will cause idle TCP connections. The response stream should be read completely. Another thing I overlooked initially, and have seen overlooked in most answers on this topic, is forgetting to deal with the error stream in case of exceptions. Code similar to this fixed one of my apps that wasn't releasing resources properly:
HttpURLConnection connection = (HttpURLConnection)new URL(uri).openConnection();
InputStream stream = null;
BufferedReader reader = null;
try {
stream = connection.getInputStream();
reader = new BufferedReader(new InputStreamReader(stream, Charset.forName("UTF-8")));
// do work on part of the input stream
} catch (IOException e) {
// read the error stream
InputStream es = connection.getErrorStream();
if (es != null) {
BufferedReader esReader = new BufferedReader(new InputStreamReader(es, Charset.forName("UTF-8")));
// drain the error stream so the underlying connection can be reused
while (esReader.ready() && esReader.readLine() != null) {
}
esReader.close();
}
// do something with the IOException
} finally {
// finish reading the input stream if it was not read completely in the try block, then close
if (reader != null) {
while (reader.readLine() != null) {
}
reader.close();
}
// Not sure if this is necessary, closing the buffered reader may close the input stream?
if (stream != null) {
stream.close();
}
// disconnect
if (connection != null) {
connection.disconnect();
}
}
The buffered reader isn't strictly necessary; I chose it because my use case required reading one line at a time.
See also: http://docs.oracle.com/javase/1.5.0/docs/guide/net/http-keepalive.html
