I'm writing a speed test, but I'm having trouble on the client side for uploading.
I have the following setup, which basically keeps writing data into the socket while a condition is true, and then closes the socket:
var ws = await createWebSocket(sb.serverAddress, sb.authToken);
while (condition) {
  var bytes = generateRandomBytes(_BUFFER_SIZE_BYTES);
  ws.add(bytes);
  print('added');
  var megabits = (bytes.length * 8) / 1000000;
  channel.sink.add(megabits);
}
await ws.close();
My problem is that I can't work out how to wait for the bytes to be accepted by the underlying buffer. Even if I set _BUFFER_SIZE_BYTES to a huge size, it still loops at breakneck speed printing 'added', where I really want to wait until all the bytes are accepted by the send buffer (having been accepted by the server) before adding a new list of bytes.
With an HTTP POST request you can do await postReq.flush();, but I don't see any such method for web sockets.
OK, so I think I have a reasonable solution to this problem.
The client side has to wait for a response from the server before sending more bytes:
var bytes = generateRandomBytes(_CHUNK_SIZE_BYTES);
ws.listen((data) async {
  // The server only sends a message once it has fully read the previous
  // chunk, so it is safe to send another one now.
  ws.add(bytes);
  var megabits = (bytes.length * 8) / 1000000;
  channel.sink.add(megabits);
});
The server (Go) sends a message to the client signalling that it can send a chunk, then reads the entire chunk from the client before signalling that it is ready to accept another one:
for start := time.Now(); time.Since(start) < time.Second*maxDuration; {
    err := conn.WriteMessage(websocket.TextMessage, []byte("next"))
    // writing to a closed socket returns an error, which ends the loop
    if err != nil {
        break
    }
    _, bytes, err := conn.ReadMessage()
    if err != nil {
        fmt.Println(err)
        break
    }
    fmt.Println(len(bytes))
}
I think this solution is OK. I've set the chunk size to 10 MB, which seems to work fine. Let me know if anyone has a better idea.
Related
I'm trying to send multiple packets at once to a server, but the socket keeps "merging" all synchronous calls to write into a single call. Here is a minimal reproducible example:
import 'dart:io';

void main() async {
  // <Server-side> Create server in the local network at port <any available port>.
  final ServerSocket server =
      await ServerSocket.bind(InternetAddress.anyIPv4, 0);

  server.listen((Socket client) {
    int i = 1;
    client.map(String.fromCharCodes).listen((String message) {
      print('Got a new message (${i++}): $message');
    });
  });

  // <Client-side> Connects to the server.
  final Socket socket = await Socket.connect('localhost', server.port);
  socket.write('Hi World');
  socket.write('Hello World');
}
The result is:
> dart example.dart
> Got a new message (1): Hi WorldHello World
What I expect is:
> dart example.dart
> Got a new message (1): Hi World
> Got a new message (2): Hello World
Unfortunately dart.dev doesn't support the dart:io library, so you need to run it on your machine to see it working.
But in summary:
It creates a new TCP server at a random port.
Then creates a socket that connects to the previously created server.
The socket makes 2 synchronous calls to the write method.
The server only receives 1 call, which is the 2 messages concatenated.
Do we have some way to receive each synchronous write call on the server as a separate packet, instead of buffering all sync calls into a single packet?
What I've already tried:
Using socket.setOption(SocketOption.tcpNoDelay, true); right after the Socket.connect instantiation; this does not modify the result:
final Socket socket = await Socket.connect('localhost', server.port);
socket.setOption(SocketOption.tcpNoDelay, true);
// ...
Using socket.add('Hi World'.codeUnits); instead of socket.write(...) also does not modify the result, as expected, because write(...) seems to be just a shorthand for add(...):
socket.add('Hi World'.codeUnits);
socket.add('Hello World'.codeUnits);
Side note:
Adding an async delay to avoid calling write synchronously:
socket.add('Hi World'.codeUnits);
await Future<void>.delayed(const Duration(milliseconds: 100));
socket.add('Hello World'.codeUnits);
makes it work, but I am pretty sure this is not the right solution, and it isn't what I wanted.
Environment:
Dart SDK version: 2.18.4 (stable) (Tue Nov 1 15:15:07 2022 +0000) on "windows_x64"
This is a Dart-only environment, there is no Flutter attached to the workspace.
As Jeremy said:
Programmers coding directly to the TCP API have to implement this logic themselves (e.g. by prepending a fixed-length message-byte-count field to each of their application-level messages, and adding logic to the receiving program to parse these byte-count fields, read in that many additional bytes, and then present those bytes together to the next level of logic).
So I chose to:
Prefix each message with a - character and suffix it with a . character.
Use base64 to encode the real message, to avoid conflicts between the message content and the previously defined separators.
And using this approach, I got this implementation:
// Send packets:
socket.write('-${base64Encode("Hi World".codeUnits)}.');
socket.write('-${base64Encode("Hello World".codeUnits)}.');
And to parse the packets:
// Requires import 'dart:convert' for base64Encode/base64Decode.

// Cache the trailing, still-incomplete message left over from the previous packet.
String parsed = '';

void _handleCompletePacket(String rawPacket) {
  // Decode the original message from base64 using [base64Decode]
  // and convert the resulting [List<int>] back to a [String].
  final String message = String.fromCharCodes(base64Decode(rawPacket));
  print(message);
}

void _handleServerPacket(List<int> rawPacket) {
  final String packet = String.fromCharCodes(rawPacket);
  // Prepend whatever was left incomplete by the previous packet.
  final String next = parsed + packet;
  parsed = '';
  final List<String> items = <String>[];
  final List<String> tokens = next.split('');
  for (int i = 0; i < tokens.length; i++) {
    final String char = tokens[i];
    if (char == '-') {
      if (items.isNotEmpty) {
        // Malformed packet: a new start marker before the previous message ended.
        items.clear();
        continue;
      }
      // Start collecting a new message.
      items.add('');
    } else if (char == '.') {
      if (items.isEmpty) {
        // Malformed packet: an end marker without a start marker.
        continue;
      }
      _handleCompletePacket(items.removeLast());
    } else {
      if (items.isEmpty) {
        // Malformed packet: payload outside the - ... . markers.
        continue;
      }
      items.last = items.last + char;
    }
  }
  if (items.isNotEmpty) {
    // The last message of this packet was left incomplete.
    // Cache it (keeping its start marker) so it can be completed by the next packet.
    parsed = '-${items.last}';
  }
}

// On the server side, inside server.listen((Socket client) { ... }):
client.listen(_handleServerPacket);
There are certainly more optimized solutions/approaches, but I only need this for chat messages in the 100-500 character range, so it's fine for now.
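For reference, here is a minimal sketch of the fixed-length-prefix framing Jeremy describes, assuming a 4-byte big-endian length header; sendFramed and FrameParser are made-up names for illustration:

import 'dart:io';
import 'dart:typed_data';

// Prefix each payload with a 4-byte big-endian length header.
void sendFramed(Socket socket, List<int> payload) {
  final ByteData header = ByteData(4)..setUint32(0, payload.length);
  socket.add(header.buffer.asUint8List());
  socket.add(payload);
}

// Buffer incoming bytes and emit exactly one payload per frame,
// no matter how TCP split or merged the chunks.
class FrameParser {
  FrameParser(this.onFrame);

  final void Function(List<int> payload) onFrame;
  final List<int> _pending = <int>[];

  void addChunk(List<int> chunk) {
    _pending.addAll(chunk);
    while (_pending.length >= 4) {
      final int length = (_pending[0] << 24) |
          (_pending[1] << 16) |
          (_pending[2] << 8) |
          _pending[3];
      if (_pending.length - 4 < length) break; // frame not complete yet
      onFrame(_pending.sublist(4, 4 + length));
      _pending.removeRange(0, 4 + length);
    }
  }
}

// Usage:
//   client.listen(FrameParser((p) => print(String.fromCharCodes(p))).addChunk);
//   sendFramed(socket, 'Hi World'.codeUnits);
//   sendFramed(socket, 'Hello World'.codeUnits);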
Grpc.Net client:
a gRPC client sends a large amount of data to a gRPC server
after the gRPC server receives the data from the client, the HTTP/2 channel becomes idle (but stays open) until the server returns the response to the client
the gRPC server receives the data and starts processing it. If the data processing takes longer than 2 minutes (the default idle timeout for HTTP calls), the response never reaches the client, because the channel has actually been disconnected; the client does not know this, because the connection was shut down by other hardware in between due to the long idle time.
Solution:
when the channel is created on the gRPC client side, it must have an HttpClient set on it
the HttpClient must be instantiated from a SocketsHttpHandler with the following properties set (PooledConnectionIdleTimeout, PooledConnectionLifetime, KeepAlivePingPolicy, KeepAlivePingTimeout, KeepAlivePingDelay)
Code snippet:
SocketsHttpHandler socketsHttpHandler = new SocketsHttpHandler()
{
    PooledConnectionIdleTimeout = TimeSpan.FromMinutes(180),
    PooledConnectionLifetime = TimeSpan.FromMinutes(180),
    KeepAlivePingPolicy = HttpKeepAlivePingPolicy.Always,
    KeepAlivePingTimeout = TimeSpan.FromSeconds(90),
    KeepAlivePingDelay = TimeSpan.FromSeconds(90)
};

// Note: this accepts any server certificate, i.e. it disables TLS certificate validation.
socketsHttpHandler.SslOptions.RemoteCertificateValidationCallback = (sender, cert, chain, sslPolicyErrors) => { return true; };

HttpClient httpClient = new HttpClient(socketsHttpHandler);
httpClient.Timeout = TimeSpan.FromMinutes(180);

var channel = GrpcChannel.ForAddress(_agentServerURL, new GrpcChannelOptions
{
    Credentials = ChannelCredentials.Create(new SslCredentials(), credentials),
    MaxReceiveMessageSize = null,
    MaxSendMessageSize = null,
    MaxRetryAttempts = null,
    MaxRetryBufferPerCallSize = null,
    MaxRetryBufferSize = null,
    HttpClient = httpClient
});
A workaround is to package your message in a oneof and then send a KeepAlive from a separate thread every x seconds, for the duration of the calculations.
For example:
message YourData {
  …
}

message KeepAlive {}

message DataStreamPacket {
  oneof kind {
    YourData data = 1;
    KeepAlive ka = 2;
  }
}
Then in your code:
stream <- StartThread() {
    each 5 seconds:
        Send KeepAlive
}
doCalculations()
StopThread()
SendData()
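For a server-streaming call in Grpc.AspNetCore, a rough sketch of that pattern might look like the following. GetData, YourRequest, DataStreamer and DoCalculationsAsync are made-up placeholder names; the message types are assumed to come from the proto above:

using System;
using System.Threading;
using System.Threading.Tasks;
using Grpc.Core;

// "DataStreamer" is an assumed service name; DataStreamPacket, KeepAlive,
// YourData and YourRequest are assumed to be generated from the proto above.
public class DataService : DataStreamer.DataStreamerBase
{
    public override async Task GetData(YourRequest request,
        IServerStreamWriter<DataStreamPacket> responseStream,
        ServerCallContext context)
    {
        using var cts = CancellationTokenSource.CreateLinkedTokenSource(context.CancellationToken);

        // Background task: push a KeepAlive packet every 5 seconds while the real
        // work runs, so the stream never looks idle to intermediate hardware.
        var keepAliveTask = Task.Run(async () =>
        {
            while (!cts.Token.IsCancellationRequested)
            {
                await responseStream.WriteAsync(new DataStreamPacket { Ka = new KeepAlive() });
                await Task.Delay(TimeSpan.FromSeconds(5), cts.Token);
            }
        });

        YourData result = await DoCalculationsAsync(request); // the long-running processing

        cts.Cancel();                                          // stop the keep-alive loop
        try { await keepAliveTask; } catch (OperationCanceledException) { }

        // Only now write the real payload (no concurrent writers on the stream).
        await responseStream.WriteAsync(new DataStreamPacket { Data = result });
    }

    private static Task<YourData> DoCalculationsAsync(YourRequest request)
    {
        // Placeholder for the actual long-running work.
        return Task.FromResult(new YourData());
    }
}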
This is what I needed. I've had this problem for months now, and my only solution so far was to decrease the volume of data.
I am aiming to make a POST request to trigger an IFTTT webhook action. I am using the MKR1010 board. I am able to connect to the network and turn the connected LED on and off using the cloud integration.
The code is as follows, but it doesn't trigger the webhook. I can manually paste the web address into a browser and that does trigger the webhook. When the code makes the request it returns a 400 Bad Request error.
The key has been replaced in the code below with a dummy value.
Does anybody know why this is not triggering the webhook? / Can you explain why the POST request is being rejected by the server? I don't even really need to read the response from the server, as long as it is sent.
Thank you
// ArduinoHttpClient - Version: Latest
#include <ArduinoHttpClient.h>
#include "thingProperties.h"

#define LED_PIN 13
#define BTN1 6

char serverAddress[] = "maker.ifttt.com";  // server address
int port = 443;

WiFiClient wifi;
HttpClient client = HttpClient(wifi, serverAddress, port);

// variables will change:
int btnState = 0;  // variable for reading the pushbutton status
int btnPrevState = 0;

void setup() {
  // Initialize serial and wait for port to open:
  Serial.begin(9600);
  // This delay gives the chance to wait for a Serial Monitor without blocking if none is found
  delay(1500);

  // Defined in thingProperties.h
  initProperties();

  // Connect to Arduino IoT Cloud
  ArduinoCloud.begin(ArduinoIoTPreferredConnection);

  /*
    The following function allows you to obtain more information
    related to the state of network and IoT Cloud connection and errors
    the higher number the more granular information you'll get.
    The default is 0 (only errors).
    Maximum is 4
  */
  setDebugMessageLevel(2);
  ArduinoCloud.printDebugInfo();

  // setup the board devices
  pinMode(LED_PIN, OUTPUT);
  pinMode(BTN1, INPUT);
}

void loop() {
  ArduinoCloud.update();
  // Your code here

  // read the state of the pushbutton value:
  btnState = digitalRead(BTN1);

  if (btnPrevState == 0 && btnState == 1) {
    led2 = !led2;
    postrequest();
  }

  digitalWrite(LED_PIN, led2);
  btnPrevState = btnState;
}

void onLed1Change() {
  // Do something
  digitalWrite(LED_PIN, led1);
  //Serial.print("The light is ");
  if (led1) {
    Serial.println("The light is ON");
  } else {
    // Serial.println("OFF");
  }
}

void onLed2Change() {
  // Do something
  digitalWrite(LED_PIN, led2);
}

void postrequest() {
  // String("POST /trigger/btn1press/with/key/mykeyhere")
  Serial.println("making POST request");
  String contentType = "/trigger/btn1press/with/key";
  String postData = "mykeyhere";

  client.post("/", contentType, postData);

  // read the status code and body of the response
  int statusCode = client.responseStatusCode();
  String response = client.responseBody();

  Serial.print("Status code: ");
  Serial.println(statusCode);
  Serial.print("Response: ");
  Serial.println(response);

  Serial.println("Wait five seconds");
  delay(5000);
}
Why do you want to make a POST request and send the key in the POST body? The browser sends a GET request. It would be:
client.get("/trigger/btn1press/with/key/mykeyhere");
In HttpClient's post() the first parameter is the path, the second parameter is the contentType (for example "text/plain"), and the third parameter is the body of the HTTP POST request.
So your post should look like:
client.post("/trigger/btn1press/with/key/mykeyhere", contentType, postData);
I've built an HTTP server using Netty. Everything is fine when it's running on my Mac, but when I run it in a Docker image, the HTTP response always gets truncated when it is greater than 460 KB.
What could the problem be? Please help.
Do you use an aggregator to aggregate the HTTP response or not? Take a look at the source code of HttpObjectDecoder. It will chunk a bigger HTTP response regardless of whether the HTTP message itself uses chunked transfer coding or not.
The default maxChunkSize is 8 KB, and even if enough bytes are readable, it will still split the content into chunks. See the code below:
case READ_FIXED_LENGTH_CONTENT: {
    int readLimit = actualReadableBytes();

    // Check if the buffer is readable first as we use the readable byte count
    // to create the HttpChunk. This is needed as otherwise we may end up with
    // create a HttpChunk instance that contains an empty buffer and so is
    // handled like it is the last HttpChunk.
    //
    // See https://github.com/netty/netty/issues/433
    if (readLimit == 0) {
        return;
    }

    int toRead = Math.min(readLimit, maxChunkSize);
    if (toRead > chunkSize) {
        toRead = (int) chunkSize;
    }
    ByteBuf content = readBytes(ctx.alloc(), buffer, toRead);
    chunkSize -= toRead;
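If the pipeline does not aggregate those chunks, a minimal sketch of adding an aggregator looks like this (assuming a plain HTTP server pipeline; MyHttpHandler is a placeholder for your own handler):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

public class HttpServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          // codec encodes/decodes HTTP; its decoder emits 8 KB chunks by default
          .addLast(new HttpServerCodec())
          // aggregates HttpContent chunks back into a single full message (here up to 1 MB)
          .addLast(new HttpObjectAggregator(1024 * 1024))
          // your own handler (placeholder name)
          .addLast(new MyHttpHandler());
    }
}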
I will try to explain the problem in the shortest possible words. I am using C++Builder 2010.
I am using TIdTCPServer and sending voice packets to a list of connected clients. Everything works OK until a client is disconnected abnormally, for example by a power failure etc. I can reproduce a similar disconnect by cutting the Ethernet connection of a connected client.
So now we have a disconnected socket, but as you know it is not yet detected on the server side, so the server will continue to try to send data to that client too.
But when the server tries to write data to that disconnected client, Write() or WriteLn() HANGS there trying to write, as if it is waiting for some kind of write timeout. This hangs the whole packet distribution process, creating a lag in data transmission to all the other clients. After a few seconds a "Socket Connection Closed" exception is raised and data flow continues.
Here is the code
try
{
    EnterCriticalSection(&SlotListenersCriticalSection);
    for(int i = 0; i < SlotListeners->Count; i++)
    {
        try
        {
            // Here the process will HANG for several seconds on a disconnected socket
            ((TIdContext*) SlotListeners->Objects[i])->Connection->IOHandler->WriteLn("Some DATA");
        }
        catch(Exception &e)
        {
            SlotListeners->Delete(i);
        }
    }
}
__finally
{
    LeaveCriticalSection(&SlotListenersCriticalSection);
}
OK, I already have a keep-alive mechanism which disconnects the socket after n seconds of inactivity. But as you can imagine, this mechanism can't sync exactly with this broadcasting loop, because the broadcasting loop is running almost all the time.
So are there any write timeouts I can specify, maybe through the IOHandler or something? I have seen many threads about "detecting a disconnected TCP socket", but my problem is a little different: I need to avoid that hang-up of a few seconds during the write attempt.
So is there any solution?
Or should I consider using some different mechanism for such data broadcasting, for example the broadcasting loop puts the data packet in some kind of FIFO buffer, and client threads continuously check for available data and pick it up and deliver it themselves? This way, if one thread hangs it will not stop/delay the overall distribution thread.
Any ideas please? Thanks for your time and help.
Regards
Jams
There are no write timeouts implemented in Indy. For that, you will have to use the TIdSocketHandle.SetSockOpt() method to set the socket-level timeouts directly.
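For illustration, a send timeout might be set in the server's OnConnect event along these lines (a rough, untested sketch; Id_SOL_SOCKET and Id_SO_SNDTIMEO are assumed to come from IdStackConsts, and the value is assumed to be milliseconds on Windows):

// sketch: 5-second send timeout, so a blocked send fails instead of hanging
AContext->Binding->SetSockOpt(Id_SOL_SOCKET, Id_SO_SNDTIMEO, 5000);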
The FIFO buffer is a better option (and a better design in general). For example:
void __fastcall TForm1::IdTCPServer1Connect(TIdContext *AContext)
{
    ...
    AContext->Data = new TIdThreadSafeStringList;
    ...
}

void __fastcall TForm1::IdTCPServer1Disconnect(TIdContext *AContext)
{
    ...
    delete AContext->Data;
    AContext->Data = NULL;
    ...
}

void __fastcall TForm1::IdTCPServer1Execute(TIdContext *AContext)
{
    TIdThreadSafeStringList *Queue = (TIdThreadSafeStringList*) AContext->Data;
    TStringList *Outbound = NULL;

    TStringList *List = Queue->Lock();
    try
    {
        if( List->Count > 0 )
        {
            Outbound = new TStringList;
            Outbound->Assign(List);
            List->Clear();
        }
    }
    __finally
    {
        Queue->Unlock();
    }

    if( Outbound )
    {
        try
        {
            AContext->Connection->IOHandler->Write(Outbound);
        }
        __finally
        {
            delete Outbound;
        }
    }
    ...
}
...
try
{
    EnterCriticalSection(&SlotListenersCriticalSection);
    int i = 0;
    while( i < SlotListeners->Count )
    {
        try
        {
            TIdContext *Ctx = (TIdContext*) SlotListeners->Objects[i];
            TIdThreadSafeStringList *Queue = (TIdThreadSafeStringList*) Ctx->Data;
            Queue->Add("Some DATA");
            ++i;
        }
        catch(const Exception &e)
        {
            SlotListeners->Delete(i);
        }
    }
}
__finally
{
    LeaveCriticalSection(&SlotListenersCriticalSection);
}