How to stop accepting new TcpStreams if the number of connected clients is over a certain value - join

Hello, I want to limit the number of clients connected to my tokio TCP server. I have the following function to listen:
async fn listen(connected_clients: Arc<Mutex<u8>>, tried_numbers: Arc<Mutex<Vec<u32>>>) {
    let listener = tokio::net::TcpListener::bind("localhost:8881").await.unwrap();
    *connected_clients.lock().unwrap() = 0;
    loop {
        //if *connected_clients.lock().unwrap() <= 6 {}
        let (mut socket, _) = listener.accept().await.unwrap();
        let connected_clients = connected_clients.clone();
        let tried_numbers = tried_numbers.clone();
        *connected_clients.lock().unwrap() += 1;
        tokio::spawn(async move {
            let (sread, _swrite) = socket.split();
            let reader = BufReader::new(sread);
            process_socket(connected_clients, tried_numbers, reader).await;
        });
    }
}
The problem is that if I wrap everything inside the condition requiring fewer than 6 clients, it stalls everything done in data_printing, because this is what I have in main:
tokio::join!(
    listen(connected_clients, tried_numbers),
    data_printing(connected_clients2, tried_numbers2, total_unique_numbers, output_file)
);
So how can I do this?
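One common way to cap the client count without stalling the rest of the program is to gate the accept loop with a tokio::sync::Semaphore: each connected client holds a permit, and acquiring a permit only suspends the listen task, so the data_printing future running under tokio::join! keeps making progress. Below is a minimal sketch of that idea rather than your actual code: MAX_CLIENTS, listen_limited and the handler body are placeholders, and process_socket is left out.
use std::sync::Arc;
use tokio::io::BufReader;
use tokio::net::TcpListener;
use tokio::sync::Semaphore;

const MAX_CLIENTS: usize = 6; // placeholder cap

async fn listen_limited() {
    let listener = TcpListener::bind("localhost:8881").await.unwrap();
    // One permit per allowed client.
    let permits = Arc::new(Semaphore::new(MAX_CLIENTS));
    loop {
        // Suspends only this task once all permits are taken; other join!-ed futures keep running.
        let permit = permits.clone().acquire_owned().await.unwrap();
        let (socket, _) = listener.accept().await.unwrap();
        tokio::spawn(async move {
            let (read_half, _write_half) = socket.into_split();
            let _reader = BufReader::new(read_half);
            // ... read from the client here ...
            drop(permit); // dropping the permit frees a slot for the next client
        });
    }
}
Because each permit is moved into its spawned task and released when that task finishes, the available-slot count tracks the number of live clients automatically, without needing the shared connected_clients counter for the limiting itself.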

Related

NetMQ TryReceiveMultipartMessage() works abnormally

I used NetMQ (Visual Studio 2022, installed via NuGet) as described at https://www.nuget.org/packages/NetMQ/.
One SubscriberSocket in one thread connects to and receives messages from one publisher. The source code is like below:
public void ZMQReceiveThread(string serverIP, int port)
{
    // Create SubscriberSocket
    SubscriberSocket subSocket = new SubscriberSocket();
    // Connect to publisher
    subSocket.Connect("tcp://" + serverIP + ":" + port.ToString());
    // Subscribe to all topics
    subSocket.Subscribe("");
    // Set timeout value
    int timeout = 10000 * 300; // 300 ms, expressed in 100 ns TimeSpan ticks
    TimeSpan ts = new TimeSpan(timeout);
    while (!_isStopEnabled)
    {
        NetMQMessage recvMessage = null;
        bool bSuccess = subSocket.TryReceiveMultipartMessage(ts, ref recvMessage, 1);
        if (bSuccess == true) // Received data successfully
        {
            // Handle the recvMessage
        }
        else // Timeout
        {
            // Output log message
            Loger.Error($"TryReceiveMultipartMessage({ts.TotalMilliseconds} ms) timeout...");
            continue;
        }
    }
}
Sometimes subSocket.TryReceiveMultipartMessage() times out even though the publisher sends messages continuously (another test app, written in C and linked against libzmq (https://github.com/zeromq/libzmq), can receive the continuous messages).
Any comments on this topic?
Thanks a lot in advance.
I looked through the NetMQ source code (https://github.com/zeromq/netmq) but cannot find any clues about TryReceiveMultipartMessage().

TCP input data packets are combined in Dart

I have a simple TCP client in Dart:
import 'dart:io';

void main() {
  const sendData = "\$I,Z,0.5,5,0*\r\n";
  final socket = Socket.connect("192.168.1.100", int.parse("8008"))
      .timeout(Duration(seconds: 5))
      .whenComplete(() {
    print("Connected!");
  }).catchError((_) {
    print("Error!");
  });
  socket.then((soc) {
    soc.write(sendData);
    soc.listen((data) {
      print(String.fromCharCodes(data).trim());
    });
  });
}
This program sends a special message to the server, and after that the server sends back a bunch of data every 10 ms. The output is as follows:
$I,1,250,0,206*$I,1,248,0,192*$I,1,246,0,178*$I,1,245,0,165*
$I,1,244,0,153*$I,1,244,0,141*$I,1,244,0,131*$I,1,245,0,121*
$I,1,246,0,113*$I,1,248,0,105*$I,1,250,0,98*
$I,1,253,0,92*$I,2,0,0,86*$I,2,4,0,82*$I,2,8,0,79*$I,2,12,0,76*
$I,2,18,0,74*$I,2,23,0,74*$I,2,29,0,74*$I,2,36,0,75*$I,2,42,0,77*$I,2,50,0,80*$I,2,58,0,84*$I,2,66,0,89*$I,2,74,0,94*
$I,2,83,0,101*$I,2,93,0,109*$I,2,103,0,117*$I,2,113,0,126*$I,2,124,0,136*$I,2,134,0,147*
$I,2,146,0,159*$I,2,157,0,171*$I,2,169,0,185*$I,2,182,0,199*$I,2,194,0,214*$I,2,207,0,230*$I,2,220,0,246*$I,2,233,1,8*$I,2,247,1,26*$I,3,5,1,44*$I,3,19,1,64*$I,3,33,1,84*
$I,3,48,1,105*$I,3,63,1,126*$I,3,77,1,148*$I,3,93,1,171*$I,3,108,1,194*$I,3,123,1,217*
$I,3,138,1,242*$I,3,154,2,10*$I,3,169,2,35*$I,3,185,2,61*$I,3,201,2,87*$I,3,216,2,113*$I,3,232,2,140*$I,3,248,2,167*$I,4,7,2,195*
$I,4,23,2,223*$I,4,39,2,251*$I,4,54,3,23*$I,4,70,3,51*
The server sends data in the $I,1,250,0,206* format, i.e. each packet starts with $ and ends with *. As one may note, several consecutive data packets are concatenated incorrectly.
Whenever I increase the interval, for example to 200 ms, everything is OK.
What should I do?
UPDATE
Besides Brett Sutton's answer, which is correct, the contributors on the Dart GitHub gave a more complete answer here.
I decided to parse the packet in the Socket listen handler, split it, and append it to a list. As I want to show the data on a chart, I reset the list after 100 samples; however, the data could be logged as well.
soc.listen((data) {
  var av = data.length;
  if (av != 0) {
    var stList = String.fromCharCodes(data).trim().split("\$");
    stList.forEach((str) {
      if (str.isNotEmpty) {
        var strS = str.split(",");
        if (strS != null) y = parseData(strS);
        sampleList.add(y);
      }
    });
    print(sampleList);
    if (sampleList.length > 100) {
      sampleList.clear();
    }
    print("==========");
  }
});

WebRTC(iOS): local video is not getting stream on remote side

I am trying to make an app with audio/video calling using WebRTC.
Remote video and audio are working properly in my app, but my local stream is not appearing on the client side.
Here is what I have written to add a video track:
let videoSource = self.rtcPeerFactory.videoSource()
let videoCapturer = RTCCameraVideoCapturer(delegate: videoSource)
guard let frontCamera = (RTCCameraVideoCapturer.captureDevices().first { $0.position == .front }),
    // choose highest res
    let format = (RTCCameraVideoCapturer.supportedFormats(for: frontCamera).sorted { (f1, f2) -> Bool in
        let width1 = CMVideoFormatDescriptionGetDimensions(f1.formatDescription).width
        let width2 = CMVideoFormatDescriptionGetDimensions(f2.formatDescription).width
        return width1 < width2
    }).last,
    // choose highest fps
    let fps = (format.videoSupportedFrameRateRanges.sorted { return $0.maxFrameRate < $1.maxFrameRate }.last) else {
    print(.error, "Error in createLocalVideoTrack")
    return nil
}
videoCapturer.startCapture(with: frontCamera,
                           format: format,
                           fps: Int(fps.maxFrameRate))
self.callManagerDelegate?.didAddLocalVideoTrack(videoTrack: videoCapturer)
let videoTrack = self.rtcPeerFactory.videoTrack(with: videoSource, trackId: K.CONSTANT.VIDEO_TRACK_ID)
and this is to add an audio track:
let constraints: RTCMediaConstraints = RTCMediaConstraints.init(mandatoryConstraints: [:], optionalConstraints: nil)
let audioSource: RTCAudioSource = self.rtcPeerFactory.audioSource(with: constraints)
let audioTrack: RTCAudioTrack = self.rtcPeerFactory.audioTrack(with: audioSource, trackId: K.CONSTANT.AUDIO_TRACK_ID)
My full WebRTC log is attached here.
Some logs I am getting (I think something is wrong here):
(thread.cc:303): Waiting for the thread to join, but blocking calls have been disallowed
(basic_port_allocator.cc:1035): Port[31aba00:0:1:0:relay:Net[ipsec4:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=2]]: Port encountered error while gathering candidates.
...
(basic_port_allocator.cc:1017): Port[38d7400:audio:1:0:local:Net[en0:192.168.1.x/24:Wifi:id=1]]: Port completed gathering candidates.
(basic_port_allocator.cc:1035): Port[3902c00:video:1:0:relay:Net[ipsec5:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=3]]: Port encountered error while gathering candidates.
Finally, I found the solution: it was due to the TCP protocol in the TURN server.

How is a CoreMIDI Thru Connection made in Swift 4.2?

The following builds and runs, and prints the non-error console message at the end when passed two valid MIDIEndpointRefs. But MIDI events are not passed thru from source to dest as expected. Is something missing?
func createThru2(source: MIDIEndpointRef?, dest: MIDIEndpointRef?) {
    var connectionRef = MIDIThruConnectionRef()
    var params = MIDIThruConnectionParams()
    MIDIThruConnectionParamsInitialize(&params)
    if let s = source {
        let thruEnd = MIDIThruConnectionEndpoint(endpointRef: s, uniqueID: MIDIUniqueID(1))
        params.sources.0 = thruEnd
        params.numSources = 1
        print("thru source is \(s)")
    }
    if let d = dest {
        let thruEnd = MIDIThruConnectionEndpoint(endpointRef: d, uniqueID: MIDIUniqueID(2))
        params.destinations.0 = thruEnd
        params.numDestinations = 1
        print("thru dest is \(d)")
    }
    var localParams = params
    let nsdata = withUnsafePointer(to: &params) { p in
        NSData(bytes: p, length: MIDIThruConnectionParamsSize(&localParams))
    }
    let status = MIDIThruConnectionCreate(nil, nsdata, &connectionRef)
    if status == noErr {
        print("created thru")
    } else {
        print("error creating thru \(status)")
    }
}
Your code works fine in Swift 5 on macOS 10.13.6. A thru connection is established and MIDI events are passed from source to destination. So the problem does not seem to be the function you posted, but in the endpoints you provided or in using Swift 4.2.
I used the following code to call your function:
var source:MIDIEndpointRef = MIDIGetSource(5)
var dest:MIDIEndpointRef = MIDIGetDestination(9)
createThru2(source:source, dest:dest)
5 is a MIDI keyboard and 9 is a MIDI port on my audio interface.
Hmmm. I just tested this same code on Swift 5 and OS X 12.4, and it doesn't seem to work for me. MIDIThruConnectionCreate returns noErr, but no MIDI packets seem to flow.
I'm using the MIDIKeys virtual source, and MIDI Monitor virtual destination.
I'll try it with some hardware and see what happens.

Delaying a Tokio Stream

Given a Stream, I want to create a new Stream where elements are yielded with a time delay between them.
I tried to write code that does that using tokio_core::reactor::Timeout and the and_then combinator for Streams, but the delay doesn't work: I get all the elements immediately, without a delay.
Here is a self-contained example (playground):
extern crate tokio_core;
extern crate futures;

use std::time::Duration;
use futures::{Future, Stream, stream, Sink};
use self::futures::sync::{mpsc};
use tokio_core::reactor;

const NUM_ITEMS: u32 = 8;

fn main() {
    let mut core = reactor::Core::new().unwrap();
    let handle = core.handle();
    let chandle = handle.clone();

    let (sink, stream) = mpsc::channel::<u32>(0);

    let send_stream = stream::iter_ok(0 .. NUM_ITEMS)
        .and_then(move |i: u32| {
            let cchandle = chandle.clone();
            println!("Creating a timeout object...");
            reactor::Timeout::new(Duration::new(1, 0), &cchandle)
                .map_err(|_| ())
                .and_then(|_| Ok(i))
        });

    let sink = sink.sink_map_err(|_| ());
    handle.spawn(sink.send_all(send_stream).and_then(|_| Ok(())));

    let mut incoming_items = Vec::new();
    {
        let keep_messages = stream.for_each(|item| {
            incoming_items.push(item);
            println!("item = {}", item);
            Ok(())
        });
        core.run(keep_messages).unwrap();
    }
    assert_eq!(incoming_items, (0 .. NUM_ITEMS).collect::<Vec<u32>>());
}
For completeness, this is the output I get:
Creating a timeout object...
Creating a timeout object...
item = 0
Creating a timeout object...
item = 1
Creating a timeout object...
item = 2
Creating a timeout object...
item = 3
Creating a timeout object...
item = 4
Creating a timeout object...
item = 5
Creating a timeout object...
item = 6
item = 7
I suspect that the problem is somewhere in these lines:
reactor::Timeout::new(Duration::new(1, 0), &cchandle)
    .map_err(|_| ())
    .and_then(|_| Ok(i))
It is possible that I don't really wait on the returned Timeout object, though I'm not sure how to solve it.
As I suspected, the problem was the manipulation (using and_then) of the newly created Timeout. We either need to first unwrap the Result from the call to reactor::Timeout::new, which could become messy if done manually, or use into_future to convert the Result into a Future and then work with it using Future combinators.
Code for solving the problem:
extern crate tokio_core;
extern crate futures;

use std::time::Duration;
use futures::{Future, Stream, stream, Sink, IntoFuture};
use self::futures::sync::{mpsc};
use tokio_core::reactor;

const NUM_ITEMS: u32 = 8;

fn main() {
    let mut core = reactor::Core::new().unwrap();
    let handle = core.handle();
    let chandle = handle.clone();

    let (sink, stream) = mpsc::channel::<u32>(0);

    let send_stream = stream::iter_ok(0 .. NUM_ITEMS)
        .and_then(move |i: u32| {
            let cchandle = chandle.clone();
            println!("Creating a timeout object...");
            reactor::Timeout::new(Duration::new(1, 0), &cchandle)
                .into_future()
                .and_then(move |timeout| timeout.and_then(move |_| Ok(i)))
                .map_err(|_| ())
        });

    let sink = sink.sink_map_err(|_| ());
    handle.spawn(sink.send_all(send_stream).and_then(|_| Ok(())));

    let mut incoming_items = Vec::new();
    {
        let keep_messages = stream.for_each(|item| {
            incoming_items.push(item);
            println!("item = {}", item);
            Ok(())
        });
        core.run(keep_messages).unwrap();
    }
    assert_eq!(incoming_items, (0 .. NUM_ITEMS).collect::<Vec<u32>>());
}
Note that two and_then are being used. The first one unwraps the Result obtained from calling reactor::Timeout::new. The second one actually waits for the Timeout to fire.
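As a side note, on the current tokio stack the same delaying behaviour is usually obtained with the throttle adapter from the tokio-stream crate rather than hand-rolled Timeout handling. A minimal sketch, assuming tokio 1.x and tokio-stream with its "time" feature enabled (these crate versions are an assumption, not part of the original answer):
use std::time::Duration;
use tokio_stream::StreamExt;

#[tokio::main]
async fn main() {
    // Yield 0..8 spaced roughly one second apart.
    let delayed = tokio_stream::iter(0u32..8).throttle(Duration::from_secs(1));
    tokio::pin!(delayed); // Throttle is not Unpin, so pin it before calling next()

    while let Some(item) = delayed.next().await {
        println!("item = {}", item);
    }
}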
