Socket Connection with Authentication - dart

We connect over TCP with authentication in Java, and the connection is established successfully.
We found the ServerSocket and Socket classes in Dart's dart:io package, but we are not sure how to connect to a server with authentication parameters or credentials in Dart. Connecting with just a host address and port works fine for anonymous users, but we don't know how to pass credentials, or which method accepts them.
Thanks.
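For context, a plain TCP socket carries no credential fields in either Java or Dart; authentication is done either by the application protocol after the socket is connected, or by TLS (SecureSocket in dart:io, for example with a client certificate). Below is a minimal Dart sketch of the first approach; the host, port, and the "user password" message format are assumptions, not a known server protocol.
import 'dart:convert';
import 'dart:io';

Future<void> main() async {
  // Connect with host and port, exactly as in the anonymous case.
  final socket = await Socket.connect('example.com', 4444);

  // TCP itself has no credential parameters; send the credentials as the
  // first application-level message in whatever format the server expects.
  // The "user password" line used here is a placeholder assumption.
  socket.write('myUser myPassword\n');

  // Read the server's response (e.g. an authentication acknowledgement).
  socket.listen((data) => print(utf8.decode(data)));
}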

Related

Unable to login to Azure IoT Hub with cellular MQTT AT command

I'm using a u-blox SARA-R422M8S cellular module and trying to connect to Azure IoT Hub with MQTT AT commands. The module supports MQTT 3.1.1. The login request fails with "Broker connection refused, not authorized." Using the same credentials in the Python example from the Microsoft Azure documentation, the login succeeds and I can publish. I've uploaded the Baltimore root certificate and activated TLS for the socket, so that part seems fine, as I get a different error code otherwise.
Has anyone experienced something similar?
PS, here are the AT commands used:
AT+USECPRF=0
AT+USECPRF=0,0,1
AT+USECPRF=0,3,"root_ca"
AT+UPSD=0,0,0
AT+UPSD=0,100,1
AT+UMQTT=11,1,0
AT+UMQTT=2,".azure-devices.net",8883
AT+UMQTT=4,"myhub.azure-devices.net/mydev/?api-version=2018-06-30","mysas"
As per the docs:
For the ClientId field, use the deviceId.
So you need to set the Client ID with something like:
AT+UMQTT=0,"mydev"

Signalr connection forcefully close when sending request to aws elastic beanstalk

At the beginning of the project I used HTTP to connect to EC2 directly by IP address (not domain name), and both my C# client and my web client connected to EC2 through the IP address and worked fine.
Recently I added HTTPS to my load balancer and configured all EC2 instances with HTTPS security groups, and that is where the trouble started.
The SignalR web client connects fine to EC2 over HTTPS with the IP address, but the C# client using HTTPS and the IP address does not connect; it fires the connection-closed handler continuously.
To work around this I changed the C# client's connection URL from the IP address to the Elastic Beanstalk domain name, and SignalR connected, but then the following happened:
1) The first time I connect with the Beanstalk domain name, the connection attempt responds with a 400 error header, yet the server still replies with data from the database, so the first connection is established.
2) After the server's reply, when I invoke another server method, an error occurs stating that the connection is disconnected and that I should start the connection before making a request to the server.
3) SignalR has a connection-closed handler that should be invoked when the connection closes, and it is not being invoked.
4) I searched the Internet and found that people with the same issue on nginx had to configure socket support on Beanstalk. I am using IIS, and there is no specific answer for that.
5) I tried connecting directly to the EC2 instance's domain name, but SignalR did not establish a connection and immediately fired the connection-closed handler without any error or warning.
6) In my network configuration I have enabled inbound ports 443 and 80. If I make a request from my browser to the Beanstalk or EC2 domain URL, it works fine.
Any idea on how to configure sockets on AWS EC2 or Elastic Beanstalk might help solve this problem.

Using data flow with https on cloud foundry

I am trying to deploy a Data Flow server on Cloud Foundry and create a simple app.
Only an HTTPS endpoint can be exposed, and I cannot enable HTTPS using this:
http://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#configuration-security-enabling-https
because SSL is managed by Cloud Foundry. How do I make the Data Flow server use HTTPS?
I get this error:
dataflow:>app list
Command failed org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://dataflow-server.run.aws-usw02-pr.ice.predix.io/apps": Connect to dataflow-server.run.aws-usw02-pr.ice.predix.io:80 [dataflow-server.run.aws-usw02-pr.ice.predix.io/54.201.89.124, dataflow-server.run.aws-usw02-pr.ice.predix.io/52.88.128.224] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to dataflow-server.run.aws-usw02-pr.ice.predix.io:80 [dataflow-server.run.aws-usw02-pr.ice.predix.io/54.201.89.124, dataflow-server.run.aws-usw02-pr.ice.predix.io/52.88.128.224] failed: Connection refused (Connection refused)
Thanks in advance.
Best Regards
As you already mentioned, you cannot enable HTTPS at the container level inside Cloud Foundry today. The traffic between the router and the Diego cell is not encrypted (unless you are using IPsec).
So your Data Flow server should not be configured with HTTPS; just deploy the server as it is. Rely on your Cloud Foundry installation to have port 443 open on its load balancer, which forwards traffic to the router. Later CF releases support certificate placement at the router level.
Now, on the client side (the dataflow shell): if you are using a valid certificate you don't need to do anything, but if you have a self-signed certificate, you need to tell the shell to accept self-signed certificates, or to skip validation altogether, as in the sketch below.
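For example, a shell session that targets the HTTPS endpoint from the error above might look like the following; the --skip-ssl-validation option is only needed for self-signed certificates, and the exact option name can vary between shell versions, so treat this as a sketch rather than definitive syntax.
dataflow:>dataflow config server --uri https://dataflow-server.run.aws-usw02-pr.ice.predix.io --skip-ssl-validation true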

Does SFTP need Bi-Directional access

I have the following script to get a given file from a given remote directory. It accepts the following parameters:
Host name you are connecting to in order to get the file
User name on that host
Local directory to which you want to transfer the file
Remote directory from which you want to get the file
Name of the file you want to get from the remote server
FSERVER=$1          # host to connect to
FUSER=$2            # user name on that host
SRC_DIR=$3          # local directory to download into
REMOTE_SRC_DIR=$4   # remote directory to fetch from
FILE_NAME=$5        # file to fetch
cd "$SRC_DIR"
sftp "$FUSER@$FSERVER" <<GOTO
cd $REMOTE_SRC_DIR
ascii
get $FILE_NAME
bye
GOTO
To transfer files from $REMOTE_SRC_DIR to $SRC_DIR, do I need a port open on both sides (i.e. bi-directional access), or just one port open on the remote server, with the session initiated from the source side? And what is the reason?
My understanding is that we connect to the remote server path and then issue the get command for the file name, so we would need bi-directional access.
SFTP uses a single TCP connection. In general, a TCP connection is stateful; once opened, both sides can send data to each other. Only the passive side of the connection needs a well-known port (22 for SSH/SFTP in this case) to be open initially. The active side opens a random port that the passive side learns from the TCP connection initiation packet. This random active-side port is released when the TCP connection closes, while the passive side's well-known port stays open for future TCP connections.
The SFTP protocol uses a strictly request-response model: although TCP allows both sides to send data at any time, with SFTP the server never sends data on its own, only in response to a client request. Note that this does not mean that no unsolicited data flows from the server to the client at the network level; in both protocols underlying SFTP (TCP and SSH), either side of the connection can send (and does send) packets at any time.
A simplified flow is:
The SFTP client initiates a TCP connection to remote port 22 (this implicitly opens a random local port on the client side, which is done by the operating system).
SSH protocol initialization and authentication occur.
The SFTP client asks the SSH server to start the SFTP server. Note that the SFTP server is not a continuously running process; it is a sub-process/sub-service of the SSH server, which is the continuously running part (i.e. the part listening on port 22).
SFTP protocol initialization occurs.
SFTP (contrary to the FTP protocol) is stateless, so it does not have a concept of a working directory. Changing the remote working directory with the cd command is therefore simulated on the client side; the SFTP server is not aware of the client's remote working directory at all. The SFTP client typically only verifies the existence of the new working directory with the SFTP server.
The ascii command: the OpenSSH sftp client does not have an ascii command, so you should get "Invalid command." unless you are using a client other than OpenSSH.
The get command: for file transfers, the SFTP protocol offers a block-level API similar to that of most operating systems (contrary to the stream API of the FTP protocol). So the SFTP client sends an "open file" request over the existing connection, followed by repeated "read block" requests and a "close file" request. As with any SFTP requests, the responses come back over the same TCP connection.
At the end, the TCP connection is terminated and the connection-specific random local port is closed.

Connection refused on client and server, both are on MAC

I am using a MacBook and have the client and server running on the same machine. The server opens a socket whenever it has to send a command to the client. The problem is that the client opens its socket at startup, but whenever the server opens its socket, the IP address is different. I would like to know whether I should create a tunnel between these two sockets. Right now the server is getting a "Connection refused" error.
Any help is appreciated.
Thanks
Just use the 127.0.0.1 loopback address for testing your initial socket code. The server bind()s and listen()s on some known port, and the client connect()s to it. Once you have that setup working, you can move on to routing between real addresses.
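For instance, here is a minimal loopback test in Dart (the language of the main question above); the port 4040 is an arbitrary choice.
import 'dart:io';

Future<void> main() async {
  // Server side: bind and listen on the loopback address and a known port.
  final server = await ServerSocket.bind(InternetAddress.loopbackIPv4, 4040);
  server.listen((client) {
    client.write('hello from server\n');
    client.close();
  });

  // Client side: connect to the same loopback address and port.
  final socket = await Socket.connect('127.0.0.1', 4040);
  socket.listen((data) => print(String.fromCharCodes(data)),
      onDone: () => server.close());
}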
