TRichEdit suspend/resume undo function - c++builder

Is there a way to suspend/resume the Undo recording in a TRichEdit control? Is there a message to send or a mode to set?
EDIT
I have solved it by using the ITextDocument interface. See my post below.

Okay, I solved it.
You have to use the ITextDocument interface to set the various undo modes. In this example, Script_Edit is a TRichEdit control.
#include <Richole.h>
#include <Tom.h>
// Define the ITextDocument interface GUID
#define DEFINE_GUIDXXX(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8) \
EXTERN_C const GUID CDECL name \
= { l, w1, w2, { b1, b2, b3, b4, b5, b6, b7, b8 } }
DEFINE_GUIDXXX(IID_ITextDocument,0x8CC497C0,0xA1DF,0x11CE,0x80,0x98,
0x00,0xAA,0x00,0x47,0xBE,0x5D);
IRichEditOle *IRich;
ITextDocument *IDoc;
// Get the IRichEditOle interface object
SendMessage(Script_Edit->Handle,EM_GETOLEINTERFACE,0,(LPARAM)&IRich);
// Get the ITextDocument interface
IRich->QueryInterface(IID_ITextDocument,(void**)&IDoc);
// Suspend the Undo recording
IDoc->Undo(tomSuspend,NULL);
... Do your stuff ...
// Resume the Undo recording
IDoc->Undo(tomResume,NULL);
// Release the interfaces
IDoc->Release();
IRich->Release();
ITextDocument::Undo() can be called with:
IDoc->Undo(tomFalse, NULL);   // Disables Undo and empties the buffer.
IDoc->Undo(tomTrue, NULL);    // Turns Undo back on.
IDoc->Undo(tomSuspend, NULL); // Suspends Undo recording.
IDoc->Undo(tomResume, NULL);  // Resumes Undo recording.
I hope this can be useful to others too...
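Since the suspend and resume calls must always be paired, it can help to wrap them in a small RAII guard so recording is resumed even if an exception is thrown in between. This is only a sketch built on the code above; the guard class name is mine, and the IDoc pointer is assumed to have been obtained as shown:
class TUndoSuspender
{
    ITextDocument *FDoc;
public:
    explicit TUndoSuspender(ITextDocument *Doc) : FDoc(Doc)
    {
        // Stop recording Undo actions for the lifetime of this object
        if (FDoc) FDoc->Undo(tomSuspend, NULL);
    }
    ~TUndoSuspender()
    {
        // Resume recording when the guard goes out of scope
        if (FDoc) FDoc->Undo(tomResume, NULL);
    }
};
// Usage, given the IDoc pointer obtained above:
// {
//     TUndoSuspender guard(IDoc);
//     ... Do your stuff ...
// }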

See the EM_SETUNDOLIMIT message:
Sets the maximum number of actions that can be stored in the undo queue of a rich edit control.
Parameters
wParam
Specifies the maximum number of actions that can be stored in the undo queue.
lParam
This parameter is not used; it must be zero.
Return value
The return value is the new maximum number of undo actions for the rich edit control. This value may be less than wParam if memory is limited.
Remarks
By default, the maximum number of actions in the undo queue is 100. If you increase this number, there must be enough available memory to accommodate the new number. For better performance, set the limit to the smallest possible value.
Setting the limit to zero disables the Undo feature.
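If suspending is not required and simply preventing Undo is enough, the message above can be used directly. A minimal sketch, assuming the same Script_Edit control as in the first answer (note that, unlike tomSuspend/tomResume, this discards any actions already queued):
#include <Richedit.h>

// Disable Undo entirely while making programmatic changes
SendMessage(Script_Edit->Handle, EM_SETUNDOLIMIT, 0, 0);
// ... Do your stuff ...
// Restore the documented default limit of 100 undo actions
SendMessage(Script_Edit->Handle, EM_SETUNDOLIMIT, 100, 0);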

Related

Why is this basic MQL4-code taking so much time to load up on my MT4?

I am learning the MQL4 language and am using this code to plot a simple moving average. The code works fine, but when I load it up on my MT4 it takes a lot of time. Am I missing something?
int start()                                  // Special function start()
  {
   int i,                                    // Bar index
       n,                                    // Formal parameter (unused here)
       Counted_bars;                         // Number of counted bars
// --------------------------------------------------------------------
   Counted_bars=IndicatorCounted();          // Number of counted bars
   i=Bars-Counted_bars-1;                    // Index of the first uncounted bar
   while(i>=0)                               // Loop over the uncounted bars
     {
      Buf_0[i]=iMA(Symbol(),PERIOD_M5,200,i,MODE_EMA,PRICE_HIGH,0);
      i--;                                   // Calculating index of the next bar
     }
// --------------------------------------------------------------------
   return(0);                                // Exit the special function start()
  }
// --------------------------------------------------------------------
Q : am I missing something?
No, this is standard behaviour: on the first run the indicator processes all the Bars back, towards the earliest part of the history.
If your intentions require a minimum setup time, it is possible to "shorten" the re-painted part of the history to just, say, the last week, instead of going all the way back over all Bars bars a few years deep, as all that data has been stored in the OHLCV-history database.
That "shortened" part of the history will this way become exactly as long as your needs require and not a single bar "longer".
Hooray, the problem is solved.
BONUS PART:
Given your code works with an EMA, not an SMA, there is one more vector of attack on the processing time.
For an EMA, any next Bar's value becomes alfa * High[next] plus ( 1 - alfa ) * EMA[next+1], the previously known value,
where the constant alfa = 2. / ( N_period + 1 ) is known up front and stays constant across the whole run of the history processed.
This approach gained me about ~20-30 [us] FASTER processing for a 20-cell price vector, using this algorithmic shortcut on an array of float32 values compared to cell-by-cell processing. Be sure to benchmark the code for your use case, and you may polish further tricks, such as using the different call signatures of iHigh() instead of accessing an array of High[]-s, for any potential speedups, if in utmost need to shave off every further [us] possible.
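A minimal sketch of that recursion in C++, for illustration only (the function name, the oldest-first indexing, and the seeding of the first value are assumptions of the sketch, not MT4 conventions):
#include <cstddef>
#include <vector>

// Incremental EMA: each value reuses the previous EMA instead of re-scanning
// the whole period. Index 0 is the oldest sample here, unlike MT4's bar
// indexing, which counts backwards from the newest bar.
std::vector<double> ema_series(const std::vector<double>& high, int n_period)
{
    std::vector<double> ema(high.size());
    if (high.empty()) return ema;
    const double alfa = 2.0 / (n_period + 1);
    ema[0] = high[0];                              // seed with the first raw value
    for (std::size_t i = 1; i < high.size(); ++i)
        ema[i] = alfa * high[i] + (1.0 - alfa) * ema[i - 1];
    return ema;
}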

What does dispatch_atomic_maximally_synchronizing_barrier(); mean?

Recently I read the blog post from mikeash which explains the implementation of dispatch_once in detail. I also got its source code from macosforge.
I understand most of the code except this line:
dispatch_atomic_maximally_synchronizing_barrier();
It is a macro, defined as:
#define dispatch_atomic_maximally_synchronizing_barrier() \
do { unsigned long _clbr; __asm__ __volatile__( \
"cpuid" \
: "=a" (_clbr) : "0" (0) : "rbx", "rcx", "rdx", "cc", "memory" \
); } while(0)
I know it is used to "defeat the speculative read-ahead of peer CPUs", but I don't understand the cpuid instruction and the words that follow it. I know very little about assembly language.
Could anyone elaborate on it for me? Thanks a lot.
The libdispatch source code pretty much explains it.
http://opensource.apple.com/source/libdispatch/libdispatch-442.1.4/src/shims/atomic.h
// see comment in dispatch_once.c
#define dispatch_atomic_maximally_synchronizing_barrier() \
http://opensource.apple.com/source/libdispatch/libdispatch-442.1.4/src/once.c
// The next barrier must be long and strong.
//
// The scenario: SMP systems with weakly ordered memory models
// and aggressive out-of-order instruction execution.
//
// The problem:
//
// The dispatch_once*() wrapper macro causes the callee's
// instruction stream to look like this (pseudo-RISC):
//
// load r5, pred-addr
// cmpi r5, -1
// beq 1f
// call dispatch_once*()
// 1f:
// load r6, data-addr
//
// May be re-ordered like so:
//
// load r6, data-addr
// load r5, pred-addr
// cmpi r5, -1
// beq 1f
// call dispatch_once*()
// 1f:
//
// Normally, a barrier on the read side is used to workaround
// the weakly ordered memory model. But barriers are expensive
// and we only need to synchronize once! After func(ctxt)
// completes, the predicate will be marked as "done" and the
// branch predictor will correctly skip the call to
// dispatch_once*().
//
// A far faster alternative solution: Defeat the speculative
// read-ahead of peer CPUs.
//
// Modern architectures will throw away speculative results
// once a branch mis-prediction occurs. Therefore, if we can
// ensure that the predicate is not marked as being complete
// until long after the last store by func(ctxt), then we have
// defeated the read-ahead of peer CPUs.
//
// In other words, the last "store" by func(ctxt) must complete
// and then N cycles must elapse before ~0l is stored to *val.
// The value of N is whatever is sufficient to defeat the
// read-ahead mechanism of peer CPUs.
//
// On some CPUs, the most fully synchronizing instruction might
// need to be issued.
dispatch_atomic_maximally_synchronizing_barrier();
For the x86_64 and i386 architectures it uses the cpuid instruction to flush the instruction pipeline, as @Michael mentioned; cpuid is a serializing instruction, which prevents memory reordering. For the other architectures it uses __sync_synchronize.
https://gcc.gnu.org/onlinedocs/gcc-4.6.2/gcc/Atomic-Builtins.html
__sync_synchronize (...)
This builtin issues a full memory barrier.
These builtins are considered a full barrier. That is, no memory operand will be moved across the operation, either forward or backward. Further, instructions will be issued as necessary to prevent the processor from speculating loads across the operation and from queuing stores after the operation.
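To put the pieces together, here is a hypothetical, heavily simplified sketch (not the real libdispatch code) of where that barrier sits in a dispatch_once-like publish: the initializer's stores happen first, then the cpuid / __sync_synchronize barrier burns enough cycles, and only then is the predicate marked as done:
#include <atomic>

#if defined(__x86_64__)
// same cpuid trick as the original macro, reduced to x86_64 only
#define maximally_synchronizing_barrier() \
    do { unsigned long _clbr; __asm__ __volatile__( \
        "cpuid" \
        : "=a" (_clbr) : "0" (0) : "rbx", "rcx", "rdx", "cc", "memory" \
    ); } while (0)
#else
#define maximally_synchronizing_barrier() __sync_synchronize()
#endif

static std::atomic<long> pred(0);

void my_once(void (*init)(void))
{
    if (pred.load(std::memory_order_relaxed) == ~0l)
        return;                                  // fast path: already initialized
    // (a real implementation must also serialize concurrent callers; omitted here)
    init();                                      // the initializer's last stores happen here
    maximally_synchronizing_barrier();           // wait out peer CPUs' speculative read-ahead
    pred.store(~0l, std::memory_order_relaxed);  // only now publish "done"
}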

serial data flow: How to ensure completion

I have a device that sends serial data over a USB-to-COM port to my program at various speeds and lengths.
Within the data there is a chunk of several thousand bytes that starts and ends with distinct special codes ('FDDD' for start, 'FEEE' for end).
Due to the stream's length, occasionally not all the data is received in one piece.
What is the recommended way to combine all the bytes into one message BEFORE parsing it?
(I took care of the buffer size, but have no control over the serial line quality, and cannot use hardware flow control with USB.)
Thanks
One possible way to accomplish this is to have something along these lines:
# variables
# buffer: byte buffer
# buffer_length: maximum number of bytes in the buffer
# new_char: char last read from the UART
# prev_char: second last char read from the UART
# n: index to the buffer
new_char := 0
new_char := 0
loop forever:
    prev_char := new_char
    new_char := receive_from_uart()
    # start marker
    if prev_char = 0xfd and new_char = 0xdd
        # set the index to the beginning of the buffer
        n := 0
    # end marker
    else if prev_char = 0xfe and new_char = 0xee
        # the frame is ready, do whatever you need to do with a complete message
        # the length of the payload is n-1 bytes
        handle_complete_message(buffer, n-1)
    # otherwise
    else
        if n < buffer_length - 1
            n := n + 1
            buffer[n] := new_char
A few tips/comments:
you do not necessarily need separate start and end markers (you can use the same one for both purposes)
if you want two-byte markers, it is easier to have them share the same first byte
you need to make sure the marker combinations do not occur in your data stream
if you use escape codes to avoid the markers in your payload, it is convenient to take care of them in the same code (see the sketch after this list)
see HDLC asynchronous framing (simple to encode, simple to decode, takes care of the escaping)
handle_complete_message usually either copies the contents of buffer elsewhere or, if in a hurry, swaps in another buffer in place of buffer
if your data frames do not have integrity checking, you should check whether the payload length equals buffer_length - 1, because in that case the buffer may have overflowed
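For the escaping mentioned above, here is a small HDLC-style receive-side sketch in C++ (the 0x7E flag and 0x7D escape values, the class name, and the callback are illustrative assumptions, not values from the question):
#include <cstdint>
#include <vector>

void handle_complete_message(const std::vector<std::uint8_t>& frame); // user-supplied

class HdlcDecoder
{
    std::vector<std::uint8_t> buffer;
    bool escaped = false;
public:
    void feed(std::uint8_t byte)
    {
        if (byte == 0x7E) {                       // flag byte: frame boundary
            if (!buffer.empty())
                handle_complete_message(buffer);  // a complete frame is ready
            buffer.clear();
            escaped = false;
        } else if (byte == 0x7D) {                // escape byte: transform the next byte
            escaped = true;
        } else {
            buffer.push_back(escaped ? (byte ^ 0x20) : byte);
            escaped = false;
        }
    }
};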
After several tests, I came up with the following simple solution to my own question (for C#).
Shown is a minimal, simplified solution; you can add length checking, etc.
"start" and "end" are string markers of any length.
public void comPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    SerialPort port = (SerialPort)sender;
    inData = port.ReadExisting();
    if (inData.Contains("start"))
    {
        //Loop to collect all message parts
        while (!inData.Contains("end"))
            inData += port.ReadExisting();
        //Complete by adding the last data chunk
        inData += port.ReadExisting();
    }
    //Use your collected message
    diaplaydata(inData);
}

The CPU and Memory (value, register)

When a value is copied from one register to another, what happens to the value
in the source register? What happens to the value in the destination register?
I'll show how it works in simple processors, like DLX or RISC, which are used to study CPU architecture.
When an instruction like (AT&T syntax: copy $R1 to $R2)
mov $R1, $R2
or even (in a RISC-styled architecture)
add $R1, 0, $R2
executes, the CPU reads the source operands: R1 from the register file and zero from an immediate operand or the zero generator. It passes both inputs into the Arithmetic Logic Unit (ALU). The ALU performs an operation that simply passes the first source operand through to the destination (because A + 0 = A), and after the ALU the result is written back to the register file, into the R2 slot.
So, the data in the source register is only read and not changed by this operation; the data in the destination register is overwritten with a copy of the source register's data. (The old state of the destination register is lost, dissipated as heat.)
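A tiny C++ illustration of that first point, assuming x86-64 and a GCC-compatible compiler (the variable names are mine):
#include <cstdio>

int main()
{
    unsigned long src = 42, dst = 0;
    // Register-to-register copy: the source operand is only read
    __asm__("movq %1, %0" : "=r"(dst) : "r"(src));
    std::printf("src=%lu dst=%lu\n", src, dst);   // prints src=42 dst=42
    return 0;
}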
At the physical level, each register in the register file is a set of SRAM cells, each of which consists of two cross-coupled inverters (a bistable flip-flop, built from transistors M1, M2, M3, M4 in the usual 6T cell schematic) plus additional access gates for writing and reading:
When we want to overwrite the value stored in an SRAM cell, we set BL and -BL according to our data (to store bit 0, set BL and clear -BL; to store bit 1, set -BL and clear BL); then the write is enabled for the current row of cells (WL goes high, which opens M5 and M6). Once M5 and M6 are open, BL and -BL flip the state of the bistable flip-flop (as in an SR latch). So the new value is written and the old value is discarded (its charge leaks away into BL and -BL).

Correct Media Type settings for a DirectShow filter that delivers Wav audio data?

I am using Delphi 6 Pro with the DSPACK DirectShow component library to create a DirectShow filter that delivers data in Wav format from a custom audio source. Just to be very clear, I am delivering the raw PCM audio samples as Byte data. There are no Wave files involved, but other Filters downstream in my Filter Graph expect the output pin to deliver standard WAV format sample data in Byte form.
Note: When I get the data from the custom audio source, I format it to the desired number of channels, sample rate, and bits per sample and store it in a TWaveFile object I created. This object has a properly formatted TWaveFormatEx data member that is set correctly to reflect the underlying format of the data I stored.
I don't know how to properly set up the MediaType parameter during a GetMediaType() call:
function TBCPushPinPlayAudio.GetMediaType(MediaType: PAMMediaType): HResult;
.......
with FWaveFile.WaveFormatEx do
begin
MediaType.majortype := (1)
MediaType.subtype := (2)
MediaType.formattype := (3)
MediaType.bTemporalCompression := False;
MediaType.bFixedSizeSamples := True;
MediaType.pbFormat := (4)
// Number of bytes per sample is the number of channels in the
// Wave audio data times the number of bytes per sample
// (wBitsPerSample div 8);
MediaType.lSampleSize := nChannels * (wBitsPerSample div 8);
end;
What are the correct values for (1), (2), and (3)? I know about the MEDIATYPE_Audio, MEDIATYPE_Stream, and MEDIASUBTYPE_WAVE GUID constants, but I am not sure what goes where.
Also, I assume that I need to copy the WaveFormatEx structure/record from my FWaveFile object over to the pbFormat pointer (4). I have two questions about that:
1) I assume that I should use CoTaskMemAlloc() to allocate a new TWaveFormatEx record and copy my FWaveFile object's TWaveFormatEx record onto it, before assigning the pbFormat pointer to it, correct?
2) Is TWaveFormatEx the correct structure to pass along? Here is how TWaveFormatEx is defined:
tWAVEFORMATEX = packed record
wFormatTag: Word; { format type }
nChannels: Word; { number of channels (i.e. mono, stereo, etc.) }
nSamplesPerSec: DWORD; { sample rate }
nAvgBytesPerSec: DWORD; { for buffer estimation }
nBlockAlign: Word; { block size of data }
wBitsPerSample: Word; { number of bits per sample of mono data }
cbSize: Word; { the count in bytes of the size of extra information (after cbSize) }
end;
UPDATE: 11-12-2011
I want to highlight one of the comments by @Roman R attached to his accepted reply, where he tells me to use MEDIASUBTYPE_PCM for the sub-type, since it is so important. I lost a significant amount of time chasing down a DirectShow "no intermediate filter combination" error because I had forgotten to use that value for the sub-type and was (incorrectly) using MEDIASUBTYPE_WAVE instead. MEDIASUBTYPE_WAVE is incompatible with many other filters, such as the system capture filters, and that was the root cause of the failure. The bigger lesson here: if you are debugging an inter-filter media format negotiation error, make sure the formats of the pins being connected are completely equal. During initial debugging I made the mistake of only comparing the WAV format parameters (format tag, number of channels, bits per sample, sample rate), which were identical between the pins. However, the difference in sub-type, due to my improper use of MEDIASUBTYPE_WAVE, caused the pin connection to fail. As soon as I changed the sub-type to MEDIASUBTYPE_PCM, as Roman suggested, the problem went away.
(1) is MEDIATYPE_Audio.
(2) is typically a mapping from FOURCC code into GUID, see Media Types, Audio Media Types section.
(3) is FORMAT_WaveFormatEx.
(4) is a pointer (typically allocated by COM task memory allocator API) to WAVEFORMATEX structure.
1) - yes, you should allocate the memory, put valid data there (by copying or initializing it directly), put that pointer into pbFormat, and put the structure size into cbFormat.
2) - yes, it looks good; it is defined like this in the first place: WAVEFORMATEX structure.
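For illustration, here is the same answer sketched in C++ (a Delphi version would be analogous); the FillPcmMediaType name and the pwfx source pointer are assumptions of the sketch:
#include <dshow.h>    // AM_MEDIA_TYPE, MEDIATYPE_Audio, MEDIASUBTYPE_PCM, FORMAT_WaveFormatEx (link with strmiids.lib)
#include <cstring>    // memcpy

// pmt points at the media type being filled in, pwfx at the source WAVEFORMATEX.
HRESULT FillPcmMediaType(AM_MEDIA_TYPE *pmt, const WAVEFORMATEX *pwfx)
{
    pmt->majortype            = MEDIATYPE_Audio;      // (1)
    pmt->subtype              = MEDIASUBTYPE_PCM;     // (2)
    pmt->formattype           = FORMAT_WaveFormatEx;  // (3)
    pmt->bFixedSizeSamples    = TRUE;
    pmt->bTemporalCompression = FALSE;
    pmt->lSampleSize          = pwfx->nChannels * (pwfx->wBitsPerSample / 8);

    // (4): allocate the format block with the COM task allocator and copy it
    pmt->cbFormat = sizeof(WAVEFORMATEX) + pwfx->cbSize;
    pmt->pbFormat = static_cast<BYTE*>(CoTaskMemAlloc(pmt->cbFormat));
    if (pmt->pbFormat == NULL)
        return E_OUTOFMEMORY;
    std::memcpy(pmt->pbFormat, pwfx, pmt->cbFormat);
    return S_OK;
}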
