NS_OPTIONS Bitmask Autogeneration - iOS

I have a large enum (for the sake of transparency, 63 values), and I am now creating an NS_OPTIONS bitmask based on that enum. Is there a way I can write this so that it stays flexible?
The main concerns I have with hardcoding it are:
If I add or remove an enum value, I have to manually add or remove it in my bitmask.
There is a lot of typing to generate these.
My .h file is getting intensely long (because I like to use whitespace and adequate comments).
The only solution I've come up with thus far is:
#define FlagForEnum(value) (1ULL << (value)) // parenthesised; 1ULL keeps shifts past bit 31 well-defined
typedef NS_ENUM(NSInteger, ExampleEnum)
{
    Value1,
    Value2,
    ...
    ValueN
};
typedef NS_OPTIONS(NSUInteger, ExampleEnumFlags)
{
    Value1Flag = FlagForEnum(Value1),
    Value2Flag = FlagForEnum(Value2),
    ...
    ValueNFlag = FlagForEnum(ValueN)
};
This is a barely adequate solution: when I remove an enum value I at least get a compile error, and if the enum ordering gets changed, the flags' bit-shifted positions change too (not that it truly matters, but it seems comforting). But it doesn't solve the 'this-is-a-lot-of-typing' problem, or the 'what-if-I-forget-to-add-a-flag' problem.

You can use a technique called X macros:
#define VALUES \
VALUE_LINE(Value1) \
VALUE_LINE(Value2) \
VALUE_LINE(Value3)
typedef NS_ENUM(NSUInteger, ExampleEnum)
{
#define VALUE_LINE(x) x,
    VALUES
#undef VALUE_LINE
};
typedef NS_OPTIONS(NSUInteger, ExampleEnumFlags)
{
#define VALUE_LINE(x) x##Flag = 1ULL << x, // 1ULL keeps shifts past bit 31 well-defined
    VALUES
#undef VALUE_LINE
};
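With this in place, adding or removing a value is a one-line edit to the VALUES list; for example (Value4 is purely illustrative), after extending the list like this, both the enum and the flag set pick the new value up automatically at the next build:
#define VALUES \
VALUE_LINE(Value1) \
VALUE_LINE(Value2) \
VALUE_LINE(Value3) \
VALUE_LINE(Value4)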

Here is a slightly better (in terms of less typing) preprocessor #define solution, although it still isn't as elegant as I'd like.
#define BitShift(ENUM_ATTRIBUTE) (1ULL << (ENUM_ATTRIBUTE)) // 1ULL keeps shifts past bit 31 well-defined
#define CreateEnumFlag(ENUM_ATTRIBUTE) ENUM_ATTRIBUTE##Flag = BitShift(ENUM_ATTRIBUTE)
typedef NS_ENUM(NSUInteger, ExampleEnum)
{
    Value1,
    Value2,
    ...
    ValueN
};
typedef NS_OPTIONS(NSUInteger, ExampleEnumFlags)
{
    CreateEnumFlag(Value1),
    CreateEnumFlag(Value2),
    ...
    CreateEnumFlag(ValueN)
};
This will create flags of the form Value1Flag, Value2Flag, ..., ValueNFlag.
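A minimal usage sketch (reusing the names above), just to show that the generated flags combine and test like any hand-written NS_OPTIONS mask:
ExampleEnumFlags flags = Value1Flag | Value2Flag;
if (flags & Value2Flag)
{
    // Value2 is represented in the mask.
}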

Related

Clang-format struct initialization - indent with two spaces?

I'm trying to format a C file with clang-format. I want indentations to be two space characters, which mostly works except for in global variable struct initialization. For this, it continues to produce lines which are indented with four spaces.
Here is my .clang-format file
BasedOnStyle: LLVM
ColumnLimit: '80'
IndentWidth: '2'
clang-format produces this surprising output
typedef struct {
  int x;
} foo;
static foo bar{
    1, // This line is being indented with 4 spaces!
};
And this is what I'd expect the file to look like:
typedef struct {
  int x;
} foo;
static foo bar{
  1, // This line is being indented with 2 spaces!
};
I've tried using a few different values for ConstructorInitializerIndentWidth, but that field doesn't appear to affect this pattern.
Is there a setting that I could provide to get this behavior?
Try ContinuationIndentWidth: 2. That worked for me when I had the same problem.
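For reference, the resulting .clang-format would then look something like this (only the ContinuationIndentWidth line is new relative to the file above):
BasedOnStyle: LLVM
ColumnLimit: '80'
IndentWidth: '2'
ContinuationIndentWidth: '2'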

How do I find the SourceLocation of the commas between function arguments using libtooling?

My main goal is trying to get macros (or even just the text) before function parameters. For example:
void Foo(_In_ void* p, _Out_ int* x, _Out_cap_(2) int* y);
I need to gracefully handle things like macros that declare parameters (by ignoring them).
#define Example _In_ int x
void Foo(Example);
I've looked at Preprocessor record objects and used Lexer::getSourceText to get the macro names _In_, _Out_, etc., but I don't see a clean way to map them back to the function parameters.
My current solution is to record all the macro expansions in the file and then compare their SourceLocation to the ParamVarDecl SourceLocation. This mostly works except I don't know how to skip over things after the parameter.
void Foo(_In_ void* p _Other_, _In_ int y);
Getting the SourceLocation of the comma would work, but I can't find that anywhere.
The title of the question asks for libclang, but as you use Lexer::getSourceText I assume that it's actually libTooling. The rest of my answer applies only to libTooling.
Solution 1
Lexer works at the level of tokens. A comma is also a token, so you can take the end location of a parameter and fetch the next token using Lexer::findNextToken.
Here are ParmVarDecl (for function parameters) and CallExpr (for function arguments) visit functions that show how to use it:
template <class T> void printNextTokenLocation(T *Node) {
  auto NodeEndLocation = Node->getSourceRange().getEnd();
  auto &SM = Context->getSourceManager();
  auto &LO = Context->getLangOpts();
  auto NextToken = Lexer::findNextToken(NodeEndLocation, SM, LO);
  if (!NextToken) {
    return;
  }
  auto NextTokenLocation = NextToken->getLocation();
  llvm::errs() << NextTokenLocation.printToString(SM) << "\n";
}

bool VisitParmVarDecl(ParmVarDecl *Param) {
  printNextTokenLocation(Param);
  return true;
}

bool VisitCallExpr(CallExpr *Call) {
  for (auto *Arg : Call->arguments()) {
    printNextTokenLocation(Arg);
  }
  return true;
}
For the following code snippet:
#define FOO(x) int x
#define BAR float d
#define MINUS -
#define BLANK
void foo(int a, double b ,
FOO(c) , BAR) {}
int main() {
foo( 42 ,
36.6 , MINUS 10 , BLANK 0.0 );
return 0;
}
it produces the following output (six locations for commas and two for parentheses):
test.cpp:6:15
test.cpp:6:30
test.cpp:7:19
test.cpp:7:24
test.cpp:10:17
test.cpp:11:12
test.cpp:11:28
test.cpp:11:43
This is quite a low-level and error-prone approach, though; you can instead change the way you solve the original problem.
Solution 2
Clang stores information about expanded macros in its source locations. You can find the related methods in SourceManager (for example, isMacroArgExpansion or isMacroBodyExpansion). As a result, you can visit ParmVarDecl nodes and check their locations for macro expansions.
I would strongly advise moving in the second direction.
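A minimal sketch of that second direction, reusing the Context member from the snippet above (getBeginLoc is spelled getLocStart in older Clang releases):
bool VisitParmVarDecl(ParmVarDecl *Param) {
  auto &SM = Context->getSourceManager();
  auto Loc = Param->getBeginLoc();
  if (SM.isMacroBodyExpansion(Loc)) {
    // The parameter declaration comes from a macro body (like FOO(c) above),
    // a candidate to ignore for the original problem.
    llvm::errs() << "macro body expansion at "
                 << SM.getExpansionLoc(Loc).printToString(SM) << "\n";
  } else if (SM.isMacroArgExpansion(Loc)) {
    // The parameter text was passed into a macro as an argument.
    llvm::errs() << "macro argument expansion\n";
  }
  return true;
}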
I hope this information will be helpful. Happy hacking with Clang!
UPD: speaking of attributes, unfortunately you won't have a lot of choices. Clang ignores any unknown attribute, and this behaviour is not tweakable. If you don't want to patch Clang itself and add your attributes to Attrs.td, then you are indeed limited to tokens and the first approach.

MQL4 multi-timeframe indicator

I would like to write an indicator that takes as input the int shift of a given timeframe and returns a value computed on another timeframe.
As an example, I would like to write an MACD indicator over 100 periods of M15 that can return its value 1, 2, 3, 4, 5, 6, 7... minutes before the current candle.
Since this indicator "changes" its value tick by tick within the current candle, I think it should be possible to write such an indicator, but I cannot figure out how to do it.
The MQL4 language does, in principle, have the tools for this.
However, as noted above, your experimentation will need thorough quant validation, as the earlier Builds did not support this in the [MT4-Strategy Tester] code-execution environment (and the more recent shifts into New-MQL4.56789 have devastated performance constraints for all [CustomIndicator]s, for all the [MT4-graph]-GUIs, plus for all the [Expert Advisor]s in use, since all of these suddenly share one (yes, ONE and THE ONLY) computing thread).
Ok, you have been warned :o)
So, if you are indeed keen to equip your [CustomIndicator] so as to be independent of the GUI-native-TimeFrame, all your calculations inside such [CustomIndicator]-code must use indirect access-tools to source the PriceDOMAIN data: never use any of the { Open[] | High[] | Low[] | Close[] }-TimeSeries data directly, only the { iOpen() | iHigh() | iLow() | iClose() } access functions.
All these access tools conceptually have a common signature:
double iLow( string symbol,     // symbol
             int    timeframe,  // timeframe
             int    shift       // shift
             );
And if your code obeys this duty, your [CustomIndicator] ( iff the StrategyTester will not finally spoil the game -- due quant testing will show this ) will be working with data from the timeframe & shift of your wish.
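As a minimal sketch of that duty (illustrative only, not part of the template below): an indicator attached to a chart of any timeframe can source M15 data like this, instead of ever touching Close[] directly:
double CloseM15( int aShiftM15 )                             // an M15-shift, not the GUI-native shift
{
   return( iClose( _Symbol, PERIOD_M15, aShiftM15 ) );       // indirect access-tool, never Close[...]
}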
Implementation remarks:
Your [CustomIndicator]-code has to implement a "non-GUI-shift" independently of the GUI-native-TimeFrame shift-counting. See an iCustom() signature template for inspiration. The GUI-TimeFrame-shift is like moving the line-graph on the GUI-screen, i.e. in GUI-native-TimeFrame steps, not taking into account your [CustomIndicator]'s "internal" "non-GUI-shift" values, so your code has to be smarter and process this "internal" "non-GUI-shift" during value generation. If in doubt during prototyping, validate the proper "mechanics" on Time[aShiftINTENDED] vs iTime( _Symbol, PERIOD_INTENDED, aShiftINTENDED ).
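If helpful, a tiny prototype-level sketch of that validation idea (the names aShiftGUI and MapShiftGUI2M15 are illustrative only): take the open time of the GUI-native bar and ask the intended timeframe which of its bars covers that moment:
int MapShiftGUI2M15( int aShiftGUI )
{
   datetime aBarOpenTIME = Time[aShiftGUI];                  // GUI-native-TimeFrame bar open time
   return( iBarShift( _Symbol, PERIOD_M15, aBarOpenTIME ) ); // shift of the M15 bar covering it
}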
Because there are quite a few points where an iCustom() call-interface may be a bit misleading, or error-prone under revision-change-management, we got used to using a formal template for each [CustomIndicator] code, helping to maintain referential integrity with the iCustom() use in the actual [ExpertAdvisor] code. It might seem a bit dumb, but those who have spent man*hours in search of a bug in { un- | ill- }-propagated call-interface changes may find this a life-saver.
We formalise the call-interface in such a way that this section, maintained in the [CustomIndicator]-code, can always be copied into the [ExpertAdviser]-code, so that the iCustom() signature-match can be inspected.
//vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
//!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
//!!!!
//---- indicator parameters -------------------------------------------------
// POSITIONAL ORDINAL-NUMBERED CALLING INTERFACE
// all iCustom() calls MUST BE REVISED ON REVISION
//!!!!
//!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#define XEMA_CUSTOM_INDICATOR_NAME "EMA_DEMA_TEMA_XEMA_wShift" // this.
//--- input parameters ------------------------------------------------------ iCustom( ) CALL INTERFACE
input int nBARs_period = 8;
extern double MUL_SIGMA = 2.5;
sinput ENUM_APPLIED_PRICE aPriceTYPE = PRICE_CLOSE;
extern int ShiftBARs = 0;
//!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
//!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
//!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
/* = iCustom( _Symbol,
PERIOD_CURRENT, XEMA_CUSTOM_INDICATOR_NAME, // |-> iCustom INDICATOR NAME
XEMA_nBARs_period, // |-> input nBARs_period
XEMA_MUL_SIGMA, // |-> input MUL_SIGMA
XEMA_PRICE_TYPE, // |-> input aPriceTYPE from: ENUM_APPLIED_PRICE
XEMA_ShiftBARs, // |-> input ShiftBARs
XEMA_<_VALUE_>_BUFFER_ID, // |-> line# --------------------------------------------from: { #define'd (e)nums ... }
0 // |-> [0]-aTimeDOMAIN-offset
); //
*/
#define XEMA_Main_AXIS_BUFFER_ID 0 // <----xEMA<maxEMAtoCOMPUTE>[]
#define XEMA_UpperBAND_BUFFER_ID 1
#define XEMA_LowerBAND_BUFFER_ID 2
#define XEMA_StdDEV____BUFFER_ID 3
#define XEMA_SimpleEMA_BUFFER_ID 4 // sEMA
#define XEMA_DoubleEMA_BUFFER_ID 10 // dEMA
#define XEMA_TripleEMA_BUFFER_ID 11 // tEMA
//!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
//!!!!
//---- indicator parameters -------------------------------------------------
// POSITIONAL ORDINAL-NUMBERED CALLING INTERFACE
// all iCustom() calls MUST BE REVISED ON REVISION
//!!!!
//!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
//^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I found a way to write it in a very simple way:
double M1 (int shift) { double val = iCustom( NULL, PERIOD_M1,  "my_indicator", 100, 2.0, 30.0, 0, shift ); return( val ); }
double M15(int shift) { double val = iCustom( NULL, PERIOD_M15, "my_indicator", 100, 2.0, 30.0, 0, shift ); return( val ); }

int s1_15;

double B_M1_M15(int i) {
    if      (i >=  0 && i < 15) s1_15 = 0;
    else if (i >= 15 && i < 30) s1_15 = 1;
    else if (i >= 30 && i < 45) s1_15 = 2;
    else if (i >= 45 && i < 60) s1_15 = 3;
    else if (i >= 60 && i < 75) s1_15 = 4;
    return NormalizeDouble(MathAbs(M1(i) - M15(s1_15)), Digits);
}
and so on for every other couple of timeframes.

Case Insensitive String Comparison of Boost::Spirit Token Text in Semantic Action

I've got a tokeniser and a parser. The parser has a special token type, KEYWORD, for keywords (there are ~50). In my parser I want to ensure that the tokens are what I'd expect, so I've got rules for each, like so:
KW_A = tok.KEYWORDS[_pass = (_1 == "A")];
KW_B = tok.KEYWORDS[_pass = (_1 == "B")];
KW_C = tok.KEYWORDS[_pass = (_1 == "C")];
This works well enough, but it's not case insensitive (and the grammar I'm trying to handle is!). I'd like to use boost::iequals, but attempts to convert _1 to an std::string result in the following error:
error: no viable conversion from 'const _1_type' (aka 'const actor<argument<0> >') to 'std::string' (aka 'basic_string<char>')
How can I treat these keywords as strings and ensure they're the expected text irrespective of case?
A little learning went a long way. I added the following to my lexer:
struct normalise_keyword_impl
{
    template <typename Value>
    struct result
    {
        typedef void type;
    };

    template <typename Value>
    void operator()(Value const& val) const
    {
        // This modifies the original input string.
        typedef boost::iterator_range<std::string::iterator> iterpair_type;
        iterpair_type const& ip = boost::get<iterpair_type>(val);

        std::for_each(ip.begin(), ip.end(),
            [](char& in)
            {
                in = std::toupper(in);
            });
    }
};

boost::phoenix::function<normalise_keyword_impl> normalise_keyword;

// The rest...
};
And then used phoenix to bind the action to the keyword token in my constructor, like so:
this->self =
    KEYWORD [normalise_keyword(_val)]
    // The rest...
    ;
Although this accomplishes what I was after, it modifies the original input sequence. Is there some modification I could make so that I could use const_iterator instead of iterator and avoid modifying my input sequence?
I tried returning an std::string copied from ip.begin() to ip.end() and uppercased using boost::toupper(...), assigning that to _val. Although it compiled and ran, there were clearly some problems with what it was producing:
Enter a sequence to be tokenised: select a from b
Input is 'select a from b'.
result is SELECT
Token: 0: KEYWORD ('KEYWOR')
Token: 1: REGULAR_IDENTIFIER ('a')
result is FROM
Token: 0: KEYWORD ('KEYW')
Token: 1: REGULAR_IDENTIFIER ('b')
Very peculiar, it appears I have some more learning to do.
Final Solution
Okay, I ended up using this function:
struct normalise_keyword_impl
{
    template <typename Value>
    struct result
    {
        typedef std::string type;
    };

    template <typename Value>
    std::string operator()(Value const& val) const
    {
        // Copy the token and update the attribute value.
        typedef boost::iterator_range<std::string::const_iterator> iterpair_type;
        iterpair_type const& ip = boost::get<iterpair_type>(val);

        auto result = std::string(ip.begin(), ip.end());
        result = boost::to_upper_copy(result);
        return result;
    }
};
And this semantic action:
KEYWORD [_val = normalise_keyword(_val)]
With a modified token_type (and this is what sorted things out):
typedef std::string::const_iterator base_iterator;
typedef boost::spirit::lex::lexertl::token<base_iterator, boost::mpl::vector<std::string> > token_type;
typedef boost::spirit::lex::lexertl::actor_lexer<token_type> lexer_type;
typedef type_system::Tokens<lexer_type> tokens_type;
typedef tokens_type::iterator_type iterator_type;
typedef type_system::Grammar<iterator_type> grammar_type;
// Establish our lexer and our parser.
tokens_type lexer;
grammar_type parser(lexer);
// ...
The important addition is boost::mpl::vector<std::string> in the token_type. The result:
Enter a sequence to be tokenised: select a from b
Input is 'select a from b'.
Token: 0: KEYWORD ('SELECT')
Token: 1: REGULAR_IDENTIFIER ('a')
Token: 0: KEYWORD ('FROM')
Token: 1: REGULAR_IDENTIFIER ('b')
I have no idea why this corrected the problem, so if someone could chime in with their expertise, I'm a willing student.

What is this cast and assignment all about?

I am reading Richard Stevens' Advanced Programming in the UNIX Environment.
There is some code in the thread synchronization chapter (Chapter 11).
The code shows how to avoid race conditions when there are many shared structures of the same type.
It uses two mutexes for synchronization: one (hashlock) protecting the list fh (which keeps track of all the foo structures) and the f_next field, and another protecting the foo structure itself.
The code is:
#include <stdlib.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NHASH 29
#define HASH(fp) (((unsigned long)fp)%NHASH)

struct foo *fh[NHASH];

pthread_mutex_t hashlock = PTHREAD_MUTEX_INITIALIZER;

struct foo {
    int             f_count;
    pthread_mutex_t f_lock;
    struct foo     *f_next;     /* protected by hashlock */
    int             f_id;
    /* ... more stuff here ... */
};

struct foo *foo_alloc(void)     /* allocate the object */
{
    struct foo  *fp;
    int         idx;

    if ((fp = malloc(sizeof(struct foo))) != NULL) {
        fp->f_count = 1;
        if (pthread_mutex_init(&fp->f_lock, NULL) != 0) {
            free(fp);
            return(NULL);
        }
        idx = HASH(fp);
        pthread_mutex_lock(&hashlock);
        ///////////////////// HERE -----------------
        fp->f_next = fh[idx];
        fh[idx] = fp->f_next;
        //////////////////// UPTO HERE -------------
        pthread_mutex_lock(&fp->f_lock);
        pthread_mutex_unlock(&hashlock);
        /* ... continue initialization ... */
        pthread_mutex_unlock(&fp->f_lock);
    }
    return(fp);
}
void foo_hold(struct foo *fp) /* add a reference to the object */
.......
My doubts are:
1) What is the HASH(fp) preprocessor macro doing?
I know that it casts what fp stores and then takes its modulo. But in the function foo_alloc we are just passing the address of the newly allocated foo structure.
Why are we doing this? I know that this will give me an integer between 0 and 28 to store into the array fh, but why are we taking the modulo of an address? Why is there so much randomization?
2) Suppose I accept that; what are these two lines doing after that (also highlighted in the code)?
fp->f_next = fh[idx];
fh[idx] = fp->f_next;
I suppose that initially fh[idx] holds some garbage value, which is assigned to the f_next field of foo, and then in the next line the same assignment happens again, just in the opposite order.
struct foo *fh[NHASH] is a hash table that uses the HASH macro as its hash function.
1) HASH(fp) calculates the index that decides where in fh to store fp. It uses the address of fp as the key: the address is simply cast to unsigned long and taken modulo NHASH to get an index into the table.
2) A linked list is used to resolve hash collisions; this is called separate chaining. I think the following code is what is intended, and you can check it in the book:
fp->f_next = fh[idx];
fh[idx] = fp;
This inserts the fp element at the head of the linked list fh[idx]; the initial value of fh[idx] is NULL.
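To make the chaining concrete, here is a minimal sketch (a hypothetical helper, not from the book) showing why the modulo of the address is useful: a later lookup only has to walk the one short chain selected by HASH(fp), instead of scanning every allocated object.
/* Hypothetical helper (illustrative sketch only): check whether fp is present
 * in the table by walking the single chain its address hashes to. */
static int foo_in_table(struct foo *fp)
{
    struct foo *tfp;
    int found = 0;

    pthread_mutex_lock(&hashlock);          /* hashlock protects fh[] and f_next */
    for (tfp = fh[HASH(fp)]; tfp != NULL; tfp = tfp->f_next) {
        if (tfp == fp) {
            found = 1;
            break;
        }
    }
    pthread_mutex_unlock(&hashlock);
    return(found);
}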
