Using WaitForMultipleObjects() with ACE_SOCK_Stream - get event only when there's data - ace

Is it possible to use WaitForMultipleObjects() with ACE_SOCK_Stream, and make it return only when there's data to read from it?
I tried the following:
// set some params
DWORD handlesCount = 1;
DWORD timeoutMs = 5 * 1000;
HANDLE* handles = new HANDLE[handlesCount];
handles[0] = sock_stream.get_handle();
while (true) {
    int ret = WaitForMultipleObjects(handlesCount, handles, false, timeoutMs);
    std::cout << "Result: " << ret << std::endl;
}
But WaitForMultipleObjects() immediately returns the socket stream's index, indicating that it is ready (it prints 0 in an endless loop).
The socket is accepted via an ACE_SOCK_Acceptor (ACE_SOCK_Acceptor->accept()).
How do I make WaitForMultipleObjects() wait until the socket has data to read?

The socket handle is not suitable for use in WFMO. You should use WSAEventSelect to associate the desired event(s) with an event handle that's registered with WFMO.
Since you are also using ACE, you can check the source of ace/WFMO_Reactor.cpp (the register_handler() method) to see a use case of how this works with WFMO.
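For illustration, here is a minimal sketch of that approach (untested, error handling omitted; the wait_for_data wrapper, buffer size, and timeout are my own assumptions, not anything from ACE):

#include <winsock2.h>
#include <ace/SOCK_Stream.h>
#include <iostream>

void wait_for_data(ACE_SOCK_Stream &sock_stream)   // hypothetical helper
{
    SOCKET s = (SOCKET)sock_stream.get_handle();

    // Tie a Win32 event to the socket's "readable" and "closed" notifications.
    WSAEVENT readEvent = WSACreateEvent();
    WSAEventSelect(s, readEvent, FD_READ | FD_CLOSE);

    HANDLE handles[1] = { readEvent };
    const DWORD timeoutMs = 5 * 1000;

    while (true) {
        DWORD ret = WaitForMultipleObjects(1, handles, FALSE, timeoutMs);
        if (ret == WAIT_OBJECT_0) {
            // Find out which network event fired; this also resets the event.
            WSANETWORKEVENTS events = {};
            WSAEnumNetworkEvents(s, readEvent, &events);
            if (events.lNetworkEvents & FD_READ) {
                char buf[1024];
                ssize_t n = sock_stream.recv(buf, sizeof(buf));
                std::cout << "Read " << n << " bytes" << std::endl;
            }
            if (events.lNetworkEvents & FD_CLOSE)
                break;                              // peer closed the connection
        } else if (ret == WAIT_TIMEOUT) {
            std::cout << "No data within " << timeoutMs << " ms" << std::endl;
        }
    }
    WSACloseEvent(readEvent);
}

Note that WSAEventSelect also puts the socket into non-blocking mode, so a recv() call may return -1 with EWOULDBLOCK once the readable data has been drained.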

Related

Task does not update testbench sclk

I'm trying to understand why my signal is not updating when it is driven by the task.
As you can see below, the signals change correctly inside the task, but even when driven through a hierarchical reference they do not change outside the task.
//-------------------------------
timeunit 1ps;
timeprecision 1ps;

`define CLK_HALF_PERIOD 10
`define SCK_HALF_PERIOD 30

module tbench ();
  logic clk;
  logic sclk;
  logic RST;

  hwpe_stream_intf_stream MOSI();
  hwpe_stream_intf_stream MISO();

  logic try;

  initial begin
    spi_send (.addr({1'b1, 3'b111, 12'd1, 16'd0}),
              .data(1),
              .MISO(try),
              .MOSI(MOSI.data),
              .SCK(sclk));
  end

  always begin
    # `CLK_HALF_PERIOD clk = 1;
    # `CLK_HALF_PERIOD clk = 0;
  end

  task automatic spi_send (
    input logic [31:0] addr,
    input logic [31:0] data,
    input logic MISO, // not used
    ref logic MOSI,
    ref logic SCK
  );
    integer i = 0;
    $display("add=%-32d", addr);
    for (i = 0; i < 32; i = i + 1) begin
      //$display("add", 31-i, " MOSI ", MOSI);
      // MOSI = ;
      MOSI = addr[31-i];
      tbench.try = MOSI;
      #`SCK_HALF_PERIOD
      tbench.sclk = 1'b1;
      #`SCK_HALF_PERIOD;
      tbench.sclk = 1'b0;
      $display("add", addr[30-i], " MOSI ", MOSI);
    end
  endtask
endmodule
tbench.sclk and MOSI are not changing globally, but only locally.
Here is the interface:
interface hwpe_stream_intf_stream ();
  logic valid;
  logic ready;
  logic data;
  logic [8/8-1:0] strb;

  modport source (
    output valid, data, strb,
    input  ready
  );

  modport sink (
    input  valid, data, strb,
    output ready
  );
endinterface
You need to zoom in to the beginning of your waveforms to see sclk toggling. It toggles between 0 and 2000ps, then stops toggling.
You can add this to your testbench to stop the simulation much sooner to make it more obvious:
initial #3ns $finish;

CAN message signals, CAPL

I am trying to save the signal data of each byte of a CAN message in separate variables.
For example, I have a CAN message 'msg1' with dlc = 4 and signal bytes {8, 5, 7, 21} in CANalyzer's CAPL,
I would like to save them in variables like:
int var1 = msg1.byte(0);
but I keep getting zero (0) as the final value of the variable after the operation.
Any help is much appreciated.
Thanks
If you are not doing this already, implement an on message event using the keyword this:
on message msg1 {
var1 = this.byte(0);
...
}
The event will always be triggered when CANalyzer receives the message specified in the on message event. This way you can also make sure that the value stored by var1 is up to date.
You can also use a more general approach using arrays.
on message msg1 {
  int i;
  int var[8]; // CAPL array sizes must be constants; 8 covers the maximum classic CAN DLC
  for (i = 0; i < this.dlc; i++) {
    var[i] = this.byte(i);
  }
}

NSURLSession request body passed by slow NSInputStream (bandwidth management)

Hi, based on this answer I wrote a subclass of NSInputStream and it works pretty well.
Now it turns out that I have a scenario where I'm feeding a large amount of data to the server, and to prevent starvation of other services I need to control the speed at which data is fed. So I extended the functionality of my subclass with the following conditions:
when data should be postponed, hasBytesAvailable returns NO and read attempts end with zero bytes read
when data can be sent, -read:maxLength: allows reading at most some maximum amount of data at once (2048 bytes by default).
when -read:maxLength: returns zero bytes read, the needed delay is calculated and after that delay an NSStreamEventHasBytesAvailable event is posted.
Here are the interesting parts of the code (it is mixed with C++):
- (NSInteger)read:(uint8_t *)buffer maxLength:(NSUInteger)len {
if (![self isOpen]) {
return kOperationFailedReturnCode;
}
int delay = 0;
NSInteger readCount = (NSInteger)self.cppInputStream->Read(buffer, len, delay);
if (readCount<0) {
return kOperationFailedReturnCode;
}
LOGD("Stream") << __PRETTY_FUNCTION__
<< " len: " << len
<< " readCount: "<< readCount
<< " time: " << (int)(-[openDate timeIntervalSinceNow]*1000)
<< " delay: " << delay;
if (!self.cppInputStream->IsEOF()) {
if (delay==0)
{
[self enqueueEvent: NSStreamEventHasBytesAvailable];
} else {
NSTimer *timer = [NSTimer timerWithTimeInterval: delay*0.001
target: self
selector: #selector(notifyBytesAvailable:)
userInfo: nil
repeats: NO];
[self enumerateRunLoopsUsingBlock:^(CFRunLoopRef runLoop) {
CFRunLoopAddTimer(runLoop, (CFRunLoopTimerRef)timer, kCFRunLoopCommonModes);
}];
}
} else {
[self setStatus: NSStreamStatusAtEnd];
[self enqueueEvent: NSStreamEventEndEncountered];
}
return readCount;
}
- (void)notifyBytesAvailable: (NSTimer *)timer {
LOGD("Stream") << __PRETTY_FUNCTION__ << "notifyBytesAvailable time: " << (int)(-[openDate timeIntervalSinceNow]*1000);
[self enqueueEvent: NSStreamEventHasBytesAvailable];
}
- (BOOL)hasBytesAvailable {
bool result = self.cppInputStream->HasBytesAvaible();
LOGD("Stream") << __PRETTY_FUNCTION__ << ": " << result << " time: " << (int)(-[openDate timeIntervalSinceNow]*1000);
return result;
}
I wrote some test for that and it worked.
The problem appeared when I used this stream with NSURLSession as the source of an HTTP request body. From the logs I can see that NSURLSession tries to read everything at once. On the first read I return a limited portion of data. Immediately after that, NSURLSession asks whether there are bytes available (I return NO).
After some time (for example 170 ms), I send a notification that bytes are now available, but NSURLSession doesn't respond to it and does not invoke any method of my stream class.
Here is what I see in logs (when running some test):
09:32:14990[0x7000002a0000] D/Stream: -[CSCoreFoundationCppInputStreamWrapper open]
09:32:14990[0x7000002a0000] D/Stream: -[CSCoreFoundationCppInputStreamWrapper hasBytesAvailable]: 1 time: 0
09:32:14990[0x7000002a0000] D/Stream: -[CSCoreFoundationCppInputStreamWrapper read:maxLength:] len: 32768 readCount: 2048 time: 0 delay: 170
09:32:14990[0x7000002a0000] D/Stream: -[CSCoreFoundationCppInputStreamWrapper hasBytesAvailable]: 0 time: 0
09:32:14990[0x7000002a0000] D/Stream: -[CSCoreFoundationCppInputStreamWrapper hasBytesAvailable]: 0 time: 0
09:32:14990[0x7000002a0000] D/Stream: -[CSCoreFoundationCppInputStreamWrapper hasBytesAvailable]: 0 time: 0
09:32:15161[0x7000002a0000] D/Stream: -[CSCoreFoundationCppInputStreamWrapper notifyBytesAvailable:]notifyBytesAvailable time: 171
Here time is the number of milliseconds since the stream was opened.
It looks like NSURLSession is unable to handle input streams with a limited data rate.
Has anyone else had a similar problem?
Or does anyone have an alternative idea for how to achieve bandwidth management with NSURLSession?
Solutions that I can support are:
using NSURLSessionStreamTask, available from iOS 9 and OS X 10.11;
using ASIHTTPRequest instead.
Unfortunately, NSInputStream is a class cluster. That makes subclassing hard. And in the case of NSInputStream, any subclasses are completely unsupported and are likely to fail in fascinating ways. (See http://blog.bjhomer.com/2011/04/subclassing-nsinputstream.html for details.)
Instead of subclassing NSInputStream, you should use a bound pair of streams and create your own data provider class to feed data into it. To do this:
Call CFStreamCreateBoundPair.
Cast the resulting CFReadStream object to an NSInputStream pointer.
Cast the CFWriteStream object to an NSOutputStream pointer.
Pass the input stream when you create the upload task or request object.
Create a class that uses a timer to periodically pass the next chunk of data to the output stream.
If you do this, the data your data provider class passes to the NSOutputStream will become available for reading from the NSInputStream on the other end.
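As a rough illustration, here is a minimal sketch of creating the bound pair with plain Core Foundation (it compiles as C++ on macOS with -framework CoreFoundation; the 32 KB buffer, chunk size, and sleep-based loop are my simplifications of the timer-driven provider described above, not part of the original answer):

#include <CoreFoundation/CoreFoundation.h>
#include <unistd.h>

int main()
{
    CFReadStreamRef  readStream  = NULL;
    CFWriteStreamRef writeStream = NULL;

    // 1. Create the bound pair; the read side is what the request body uses.
    //    (In Objective-C, cast it: (__bridge NSInputStream *)readStream.)
    CFStreamCreateBoundPair(kCFAllocatorDefault, &readStream, &writeStream, 32 * 1024);

    // 2. The data provider opens the write side and feeds it in paced chunks.
    CFWriteStreamOpen(writeStream);
    const UInt8 chunk[2048] = { 0 };   // stand-in for the real payload
    for (int i = 0; i < 10; ++i) {
        if (CFWriteStreamCanAcceptBytes(writeStream))
            CFWriteStreamWrite(writeStream, chunk, sizeof(chunk));
        usleep(170 * 1000);            // throttle: roughly 170 ms between chunks
    }
    CFWriteStreamClose(writeStream);

    // Everything written above becomes available for reading on readStream,
    // i.e. on the NSInputStream end that NSURLSession consumes.
    CFRelease(writeStream);
    CFRelease(readStream);
    return 0;
}

In a real data provider the writes would be driven by a run-loop timer rather than usleep(), as in the last step above.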

LuaSocket - attempt to call field 'try' (a nil value)

Platform: (where Lua and LuaSocket are ported)
An embedded system using ARM 7 development board running 3rd party RTOS with TCP/IP stack.
What works:
Using the Lua standard library, such as "io" calls, print, assert, etc.
Sending UDP packets by using udp = assert(socket.udp()) and assert(udp:send(something))
Problem:
When executing an example smtp lua script:
local smtp = require("socket.smtp")
from = "myEmail"
rcpt = {"<someOne's Email>"}
mesgt = { headers = {someHeader}, body = "Hello World" }
r, e = smtp.send {
from = from,
rcpt = rcpt,
source = smtp.message(mesgt),
server = "someServer",
port = 25,
}
-- an error returns after execution:
-- lua\socket\smtp.lua:115: attempt to call field 'try' (a nil value)
-- Corresponding code in smtp.lua around line 115:
function open(server, port, create)
local tp = socket.try(tp.connect(server or SERVER, port or PORT,
TIMEOUT, create))
local s = base.setmetatable({tp = tp}, metat)
-- make sure tp is closed if we get an exception
s.try = socket.newtry(function()
s:close()
end)
return s
end
// Where try = newtry() in socket.lua, and the corresponding C code is the same as
// the one provided with the library for UNIX:
static int global_newtry(lua_State *L) {
lua_settop(L, 1);
if (lua_isnil(L, 1)) lua_pushcfunction(L, do_nothing);
lua_pushcclosure(L, finalize, 1);
return 1;
}
Well, since the error says that 'try' is nil, my best guess is that the C library is not correctly, or not completely, linked to your Lua. This could be the result of a faulty installation, a missing lib, or something of that sort.

Passing a pointer to a linked list in C++

I have a fairly basic program that is intended to sort a list of numbers via a Linked List.
Where I am getting hung up is when the element needs to be inserted at the beginning of the list. Here is the chunk of code in question
Assume that root->x = 15 and assume that the user inputs 12 when prompted:
void addNode(node *root)
{
    int check = 0;          // To break the loop
    node *current = root;   // Starts at the head of the linked list
    node *temp = new node;

    cout << "Enter a value for x" << endl;
    cin >> temp->x;
    cin.ignore(100,'\n');

    if (temp->x < root->x)
    {
        cout << "first" << endl;
        temp->next = root;
        root = temp;
        cout << root->x << " " << root->next->x; // Displays 12 15, the correct response
    }
But if, after running this function, I try
cout << root->x;
Back in main(), it displays 15 again. So the code
root=temp;
is being lost once I leave the function. Now other changes to *root, such as adding another element to the LL and pointing root->next to it, are being carried over.
Suggestions?
This is because you are setting the local node *root variable; you are not modifying the original root but just the parameter passed on the stack.
To fix it you need to use a reference to a pointer, e.g.:
void addNode(node*& root)
or a pointer to pointer:
void addNode(node **root)
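Here is a minimal sketch of the reference-to-pointer version (the node definition and the insertion logic for the non-front case are my own assumptions, based on the fragment in the question):

#include <iostream>
using namespace std;

struct node { int x; node *next; };

void addNode(node *&root)               // root now aliases the caller's pointer
{
    node *temp = new node;
    cout << "Enter a value for x" << endl;
    cin >> temp->x;
    cin.ignore(100, '\n');

    if (root == nullptr || temp->x < root->x) {
        temp->next = root;              // insert at the front
        root = temp;                    // this assignment is now visible in main()
    } else {
        node *current = root;           // otherwise walk to the insertion point
        while (current->next != nullptr && current->next->x < temp->x)
            current = current->next;
        temp->next = current->next;
        current->next = temp;
    }
}

int main()
{
    node *root = nullptr;
    addNode(root);                      // e.g. enter 15
    addNode(root);                      // e.g. enter 12 -- becomes the new head
    cout << root->x << endl;            // now prints 12
    return 0;
}

With the pointer-to-pointer variant (node **root), the body would assign *root = temp; instead, and the caller would pass &root.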
