Simple question, but I cannot seem to find the answer. Is it okay to use #define to define a negative number, as in:
#define kGravity -9.8
Xcode is coloring the 9.8 with the color I have set for numbers (purple), but the - is shown in the same color as the #define statement (orange).
Is this legal? Will it compile?
Did you try it? It should work fine.
However, I would encourage you to not use pre-processor macros, but instead use real constants.
static const float kGravity = -9.8f;
Preprocessor directives are a bit frowned upon, in general. Here's some more info on the subject: #define vs const in Objective-C
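If the constant needs to be shared across several files, one common pattern (the file names here are just illustrative) is to declare it in a header and define it once in an implementation file:

// Constants.h
extern const float kGravity;

// Constants.m
const float kGravity = -9.8f;

Any file that imports Constants.h can then use kGravity, and there is still only a single definition in the binary.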
It is absolutely legal to define negative constants with #define. What you discovered is most likely a bug in Xcode's code coloring, which will probably be fixed in one of the future revisions.
clang-format is good for keeping everyone honest regarding the company's coding standards, but it does not cover every case and (IMO) makes bad choices instead of leaving certain constructs alone. For example (from another post with similar concerns):
z1 = sqrt(x*x + y*y);
gets "mangled" by clang-format into
z2 = sqrt(x * x + y * y);
Sure, that follows the company standard, but the z1 expression is easier to recognize at a glance. I want clang-format to ignore (neither add nor remove) spaces around binary operators, but I don't see any setting for that; it just reformats them whether I want it or not.
So, can I add the capability to handle a new parameter like
SpaceAroundBinaryOperator: true|false|ignore?
I.e., is the clang-format code accessible to an experienced C++ programmer without having to spend a week or more just figuring out the code? Any tips?
So, can I add the capability to handle a new parameter...
is the clang-format code accessible to an experienced C++ programmer without having to spend a week or more just figuring out the code?
There is this: https://clang.llvm.org/docs/ClangFormatStyleOptions.html#adding-additional-style-options, but there isn't much information there.
Also this: https://clang.llvm.org/docs/LibFormat.html
And maybe this: https://clang.llvm.org/docs/#design-documents
But I think you really would have to dive into the source code. There is a lot of it (clang-format lives inside the larger Clang/LLVM project, alongside the C++ compiler and related tools), so I think you'll need a week or more to figure things out. Just my guess, though...
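In the meantime, note that clang-format can already be told to leave a region of code completely alone with the special comments // clang-format off and // clang-format on, which may be enough as a workaround for individual expressions:

// clang-format off
z1 = sqrt(x*x + y*y);   // this line is left exactly as written
// clang-format on

That doesn't give you a SpaceAroundBinaryOperator option, but it does stop the tool from touching the expressions you care about.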
C functions like memcpy and memset are available both as plain C functions and as #define macros in iOS:
For example, under the hood the memcpy #define is:
#define memcpy(dest, src, len) \
((__darwin_obsz0 (dest) != (size_t) -1) \
? __builtin___memcpy_chk (dest, src, len, __darwin_obsz0 (dest)) \
: __inline_memcpy_chk (dest, src, len))
I gather there is some memory checking here but can someone shed some additional details on why it is better than a memcpy alone (where is the value added)?
More importantly, when to use which?
Those names, such as __inline_memcpy_chk, are used by the compiler to help it optimize uses of memcpy. They are special names that correspond to built-in features of the compiler. They assist it in converting certain uses of memcpy into code that is faster than calling the memcpy library routine. The result might be simple move instructions or, even more efficiently, simple changes of information inside the compiler, so that it knows a copy of a value is available in a register.
If you undefine the macro memcpy so that these built-in features are not used, which is permitted by the C standard, the memcpy routine will still work, but it may be less efficient than if you left it alone.
Generally, you should not try to call these internal names yourself. They have been designed and defined to make the normal use of memcpy efficient.
Unless you #undef the macro, or call it like this (memcpy)(args...), it will always use the macro variant.
I would personally just use the macro - it's intended to be fast and efficient, and will work as you expect.
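For what it's worth, here is a minimal sketch (plain C, nothing iOS-specific assumed) of the two call forms mentioned above:

#include <string.h>
#include <stdio.h>

int main(void) {
    char src[] = "hello";
    char dst[sizeof src];

    /* Normal spelling: if memcpy is a function-like macro, the macro
       (with its object-size check) is what gets used here. */
    memcpy(dst, src, sizeof src);

    /* Parenthesising the name suppresses function-like macro expansion,
       so this calls the plain library function directly. */
    (memcpy)(dst, src, sizeof src);

    puts(dst);
    return 0;
}

Both calls copy the same bytes; in everyday code the first form is the one you want.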
To answer your questions,
1) I have no additional details, but peeking under the hood like that violates the abstraction the authors have provided for you. You want memcpy, you've got memcpy as they've provided it there, implemented with the snippet you're showing. If you're curious how it works, you can dig into it, but because you asked "when to use which" I suspect you're trying to figure out something that works in practice. Which gets to the answer to your second question...
2) You should use memcpy(dest, src, len). Don't hack around the #define and use the underlying code in a way that was not intended. You're provided with memcpy() as it is there; for you, that is memcpy.
What would be the difference between, say, doing this?
#define NUMBER 10
and
float number = 10;
In what circumstances should I use one over the other?
#define NUMBER 10
Will create a text substitution that is performed by the preprocessor (i.e. at compile time, before the code itself is compiled).
float number = 10;
Will create a float in the data-segment of your binary and initialize it to 10. I.e. it will have an address and be mutable.
So writing
float a = NUMBER;
will be the same as writing
float a = 10;
whereas writing
float a = number;
will create a memory-access.
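Putting that together, here is a small self-contained sketch (made-up variable names) to illustrate the difference:

#include <stdio.h>

#define NUMBER 10        /* textual substitution: no storage, no address      */
float number = 10;       /* a real object: has an address and can be modified */

int main(void) {
    float a = NUMBER;    /* the preprocessor turns this into: float a = 10;   */
    float b = number;    /* this reads the variable from memory at run time   */

    number = 42.0f;      /* fine: number is mutable                           */
    /* NUMBER = 42;         would not compile: it expands to 10 = 42          */

    printf("%p\n", (void *)&number);      /* a variable has an address        */
    /* printf("%p\n", (void *)&NUMBER);      a macro does not: &10 is an error */

    printf("%f %f\n", a, b);
    return 0;
}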
As Philipp says, the #define form creates a replacement in your code at the preprocessing stage, before compilation. Because the #define isn't a variable like number, your definition is hard-baked into your executable at compile time. This is desirable if the thing you are representing is truly a constant that doesn't need to be calculated or read from somewhere at runtime, and which doesn't change during runtime.
#defines are very useful for making your code more readable. Suppose you were doing physics calculations -- rather than just plonking 9.8f into your code everywhere you need the gravitational acceleration constant, you can define it in just one place and it increases your code's readability:
#define GRAV_CONSTANT 9.8f
...
float finalVelocity = beginVelocity + GRAV_CONSTANT * time;
EDIT
Surprised to come back to my answer and see that I didn't mention why you generally shouldn't use #define.
Generally, you want to avoid #define and use constants that are actual types, because #defines don't have scope, and types are beneficial to both IDEs and compilers.
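To illustrate the scope point, here is a small hypothetical sketch: a #define made inside a function does not stay inside it, while a typed constant follows the normal scoping rules:

void setup(void) {
    #define TIMEOUT 30                 /* visible from here to the end of the file */
    static const int kRetries = 3;     /* visible only inside setup()              */
    (void)kRetries;
}

int elsewhere(void) {
    return TIMEOUT;                    /* still expands to 30: macros ignore scope */
    /* return kRetries;                   would not compile: out of scope          */
}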
See also this question and accepted answer: What is the best way to create constants in Objective-C
#define is actually a preprocessor macro: it is substituted into the source before compilation and is visible for the rest of the file.
A float is a variable of a real data type, defined inside a program / block and valid only within that program / block.
I use m4 for a little text preprocessing here, and it behaves in a way I don't understand.
This is the portion in question:
ifdef(`TEST',
define(`O_EXT', `.obj'),
define(`O_EXT', `.o'))
This macro will always be expanded to .o, regardless whether TEST is defined (m4 -DTEST) or not.
What am I doing wrong?
You're not quoting the other arguments to ifdef. Try this:
ifdef(`TEST', `define(`O_EXT', `.obj')', `define(`O_EXT', `.o')')
I am coding an Opera recovery tool in Delphi.
I am working from C++ code that already exists:
http://pastebin.com/ViPf0yn6
But I don't understand what DES_KEY_SZ is in that code.
I think it is defined in des.h, but I couldn't find an equivalent des.pas. :(
Can anyone help me, please?
Regards
Here we go: http://freebsd.active-venture.com/FreeBSD-srctree/newsrc/crypto/des/des.h.html
Apparently,
#define DES_KEY_SZ (sizeof(des_cblock))
where
typedef unsigned char des_cblock[8];
I am not a C programmer, but I think that this means that DES_KEY_SZ has the value 8.
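If it helps, a tiny C sketch using the definitions quoted above confirms the value:

#include <stdio.h>

typedef unsigned char des_cblock[8];
#define DES_KEY_SZ (sizeof(des_cblock))

int main(void) {
    /* sizeof an array of 8 unsigned chars is 8, so DES_KEY_SZ is 8 */
    printf("%zu\n", (size_t)DES_KEY_SZ);   /* prints 8 */
    return 0;
}

So in the Delphi port you can simply declare a constant DES_KEY_SZ = 8.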
Google Code Search finds many copies of des.h, where the DES_KEY_SZ macro is defined. It's the size of a des_cblock, which happens to be an array of eight unsigned chars.
In other words, DES_KEY_SZ = 8.
You're going to run into other problems beyond just that missing identifier, though. The code you showed calls a handful of DES functions, too. To decrypt the data, try using DCPCrypt.