Why does print-circle default to nil?

CLHS says
An attempt to print a circular structure with *print-circle* set
to nil may lead to looping behavior and failure to terminate.
And then there's this:
Why does this Lisp macro as a whole work, even though each piece doesn't work?
Apparently, having *print-circle* set to nil leads to surprises. Why is *print-circle* set to nil by default on many systems? What can go wrong if I set it to t globally right from the beginning of my code?

If you set *print-circle* to true, then all your output functions have to do cycle checking. That means they may slow down and take more memory.
If you don't actually use circular structures (and I'm not a Lisp pro, but I tend to avoid them like the plague), I wouldn't turn cycle checking on in production code.
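For concreteness, here is a minimal sketch of what that cycle checking buys you (any conforming Common Lisp; *ring* is just an illustrative name):

;; Build a circular list: the cdr of the last cons points back to the head.
(defparameter *ring* (list 1 2 3))
(setf (cdr (last *ring*)) *ring*)

;; With *print-circle* nil (the default), the printer loops forever trying
;; to write 1 2 3 1 2 3 ... -- don't evaluate this form:
;; (print *ring*)

;; With *print-circle* t, the printer does the extra cycle checking and
;; emits reader labels instead:
(let ((*print-circle* t))
  (print *ring*))   ; prints #1=(1 2 3 . #1#)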

Related

IfThen(Assigned(Widget), Widget.Description, 'No Widget') doesn't crash. Should it?

In code that I help maintain, I have found multiple examples of code that looks like this:
Description := IfThen(Assigned(Widget), Widget.Description, 'No Widget');
I expected this to crash when Widget was nil, but when I tested it, it worked perfectly.
If I recompile it with "Code inlining control" turned off in Project - Options - Compiler, I do get an Access Violation.
It seems that, because IfThen is marked as inline, the compiler is normally not evaluating Widget.Description if Widget is nil.
Is there any reason the code should be "fixed", given that it doesn't seem to be broken? The team doesn't want the code changed unnecessarily.
Is it likely to bite them?
I have tested it with Delphi XE2 and XE6.
Personally, I hate to rely on a behavior that isn't contractual.
The inline directive is a suggestion to the compiler.
If I understand what I read correctly, your code would also crash if you build using runtime packages: "inlining never occurs across package boundaries".
Like Uli Gerhardt commented, it could be considered a bug that it works in the first place. Since the behavior isn't contractual, it can change at any time.
If I was to make any recommendation, I would flag that as a low priority "fix". I'm pretty sure some would argue that if the code works, it doesn't need fixing, there is no bug. At that point, it becomes more of a philosophical question (If a tree falls in a forest and no one is around to hear it, does it make a sound?)
Is there any reason that the code should be "fixed", as it doesn't seem to be broken?
That's really a question that only you can answer. However, to answer it you need to fully understand the implications of relying on this behaviour. There are two main issues, as I see it:
Inlining of functions is not guaranteed. The compiler may choose not to inline, and in the case of runtime packages or DLLs, a function in another package cannot be inlined.
Skipping evaluation of an argument only occurs when the compiler is sure that there are no side effects associated with evaluation of the argument. For instance, if the argument involved a function call, the compiler will ensure that it is always evaluated.
To expand on point 2, consider the statement in your question:
Description := IfThen(Assigned(Widget), Widget.Description, 'No Widget');
Now, if Widget.Description is a field, or is a property with a getter that reads a field, then the compiler determines that evaluating it has no side effects, so the evaluation can safely be skipped.
On the other hand, if Widget.Description is a function, or property with a getter function, then the compiler determines that there may be side effects. And so it ensures that Widget.Description is evaluated exactly once.
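A sketch of the two cases (TFieldWidget and TFunctionWidget are hypothetical types, not from the question; only the read clause of the property differs):

type
  // Case 1: the property reads a plain field. Evaluating Widget.Description
  // has no side effects, so the inlined IfThen may skip it when Widget is nil.
  TFieldWidget = class
  private
    FDescription: string;
  public
    property Description: string read FDescription;
  end;

  // Case 2: the property reads via a getter function. A call may have side
  // effects, so the compiler always evaluates Widget.Description -- and
  // dereferences a nil Widget.
  TFunctionWidget = class
  private
    function GetDescription: string;
  public
    property Description: string read GetDescription;
  end;

function TFunctionWidget.GetDescription: string;
begin
  Result := 'some description';   // any getter body will do for the illustration
end;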
So, armed with this knowledge, here are a couple of ways for your code to fail:
You move to runtime packages, or the compiler decides not to inline the function.
You change the Description property getter from a field getter to a function getter.
If it were me, I would not like to rely on this behaviour. But as I said right at the top, ultimately it is your decision.
Finally, the behaviour has been changed from XE7. All arguments to inline functions are evaluated exactly once. This is in keeping with other languages and means that observable behaviour is no longer affected by inlining decisions. I would regard the change in XE7 as a bug fix.
It has already been fixed in XE7, and it was confirmed that the old behaviour was indeed incorrect.
See https://quality.embarcadero.com/browse/RSP-11531

Why does VkAccessFlagBits include both read bits and write bits?

In vulkan.h, every instance of VkAccessFlagBits appears in a pair that contains a srcAccessMask and a dstAccessMask:
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
In every case, according to my understanding, the purpose of these masks is to help designate two sets of operations, such that results of operations in the first set will be visible to operations in the second set. For instance, write operations occurring prior to a barrier should not get hung up in caches but should instead propagate all the way to locations from which they can be read after the barrier. Or something like that.
The access flags come in both READ and WRITE forms:
/* ... */
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
/* ... */
But it seems to me that srcAccessMask should probably always be some sort of VK_ACCESS_*_WRITE_BIT combination, while dstAccessMask should always be a combination of VK_ACCESS_*_READ_BIT values. If that is true, then the READ/WRITE distinction is identical to and implicit in the src/dst distinction, and so it should be good enough to just have VK_ACCESS_SHADER_BIT etc., without READ_ or WRITE_ variants.
Why are there READ_ and WRITE_ variants, then? Is it ever useful to specify that some read operations must fully complete before some other operations have begun? Note that all operations using VkAccessFlagBits produce (I think) execution dependencies as well as memory dependencies. It seems to me that the execution dependencies should be good enough to prevent earlier reads from receiving values written by later writes.
While writing this question I encountered a statement in the Vulkan specification that provides at least part of an answer:
Memory dependencies are used to solve data hazards, e.g. to ensure that write operations are visible to subsequent read operations (read-after-write hazard), as well as write-after-write hazards. Write-after-read and read-after-read hazards only require execution dependencies to synchronize.
This is from the section 6.4. Execution And Memory Dependencies. Also, from earlier in that section:
The application must use memory dependencies to make writes visible before subsequent reads can rely on them, and before subsequent writes can overwrite them. Failure to do so causes the result of the reads to be undefined, and the order of writes to be undefined.
From this I surmise that, yes, the execution dependencies produced by the Vulkan commands that involve these access flags probably do free you from ever having to put a VK_ACCESS_*_READ_BIT into a srcAccessMask field--but that you might in fact want to have READ_ flags, WRITE_ flags, or both in some of your dstAccessMask fields, because apparently it's possible to use an explicit dependency to prevent read-after-write hazards in such a way that write-after-write hazards are NOT prevented. (And maybe vice-versa?)
Like, maybe your Vulkan will sometimes decide that a write does not actually need to be propagated all the way through a particular cache to its final specified destination for the sake of a subsequent read operation, IF Vulkan happens to know that that read operation will simply read from that same cache, saving some time? But then a second write might happen, and write to a different cache, and there'll be two caches left in a race (with the choice of winner undefined) to send their two values to the same spot. Or something? Maybe my mental model of these caches is entirely wrong.
It is fairly solidly established, at least, that memory barriers are confusing.
Let's go over all the possibilities:
read–read — well, yeah, that one is pretty useless. Khronos seems to agree (#131): it is a pointless value in src (basically equivalent to 0).
read–write — an execution dependency should be sufficient to synchronize this without any access flags. Khronos seems to agree (#131): it is a pointless value in src (basically equivalent to 0).
write–read — that's the obvious and most common one.
write–write — similar reasoning to write–read above: without it, the order of the writes would be undefined. In most situations it is a bit pointless to overwrite something you haven't even read in between, but now you have a way to synchronize it.
You can provide a bitmask combining several of these accesses in both src and dst, in which case it makes sense to have both masks so the driver can sort the dependencies out for you. (I don't expect any performance overhead from this at the API level, so it is allowed as a convenience.)
From an API design perspective, the alternative would mean adding a separate enum for srcAccess. But perhaps the _READ variants could simply be forbidden in srcAccess through "Valid Usage" rules, which makes this argument weak. The READ variants might have been kept in src simply because they are benign.
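For reference, the common write–read case looks something like this in C (a sketch only; the compute-write/fragment-read stage pairing is an assumed example):

#include <vulkan/vulkan.h>

/* Record a global memory barrier that makes compute-shader writes available
   to, and visible for, subsequent fragment-shader reads. */
static void recordWriteToReadBarrier(VkCommandBuffer commandBuffer)
{
    VkMemoryBarrier barrier = {
        .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,   /* writes before the barrier */
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,    /* reads after the barrier  */
    };

    vkCmdPipelineBarrier(commandBuffer,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,    /* srcStageMask */
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,   /* dstStageMask */
                         0,                                       /* dependencyFlags */
                         1, &barrier,                             /* global memory barriers */
                         0, NULL,                                 /* buffer memory barriers */
                         0, NULL);                                /* image memory barriers */
}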

HLSL: static-value optimization and dynamic-value optimization

I use an if-statement without an else block. As the images below show, the compiler seems to have performed some optimization that leaves gDiffuseMap_NormalMapping null. However, when I replace the constant TRUE condition with a boolean variable whose value is set by the application, the shader works. So I think the difference between optimizing on a static value and optimizing on a dynamic value might be the reason the shader doesn't work correctly.
I would like an explanation of why this happens.
if-statement with else block: [screenshot]
if-statement without else block: [screenshot]
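The screenshots are not reproduced here, but the setup described reads roughly like this (a hedged reconstruction; gUseNormalMapping, the sampler, and the shader body are assumed, only gDiffuseMap_NormalMapping comes from the question):

Texture2D    gDiffuseMap_NormalMapping;
SamplerState gLinearSampler;

cbuffer cbPerFrame
{
    bool gUseNormalMapping;   // set by the application at run time
};

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    float4 color = float4(1.0f, 1.0f, 1.0f, 1.0f);

    // Static condition: the compiler can evaluate the branch at compile time
    // and optimize the shader (and its reflected resource bindings) accordingly.
    if (true)
        color = gDiffuseMap_NormalMapping.Sample(gLinearSampler, uv);

    // Dynamic condition: gUseNormalMapping is only known at run time, so the
    // branch and the texture access survive compilation.
    // if (gUseNormalMapping)
    //     color = gDiffuseMap_NormalMapping.Sample(gLinearSampler, uv);

    return color;
}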

Delphi exceptions not letting me see local variables

When debugging in Delphi, an exception will correctly tell me the line of code causing the fault, but I cannot get access to any local variables. Is this a limitation in the debugger? Or am I missing something simple? At present, I have to mirror all local variables to a global on the line before the fault, recompile the program and hope to be able to repeat the same exception.
For example
MyArray[I]:=Foo(...);
If I is out of bounds (with bounds checking turned on), I cannot see what the variable I is, unless I mirrored it to a globally scoped debug variable on the previous line.
Or if I have
MyInteger:=Trunc(MyFloat),
and MyFloat is 6.1E+17, I have no idea what its value is.
You can see the values of local variables when you select the proper line in the call stack window; it is usually one or two lines before the point where the exception is raised.
I don't have at hand the exact version in which this was implemented, but it is definitely one of the newer versions.
The "problem" is caused by the compiler as far as I know. The optimization feature of the compiler acts like a garbage collector, it frees the variables declared within a function when not used any more.
To overcome the problem, write a exception handler and make a fake use of the variable within the exception catch block.
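A minimal sketch of that workaround, using the out-of-bounds example from the question (range checking on; GetIndex is a hypothetical function that can return an out-of-range value; Format and OutputDebugString come from the SysUtils and Windows units):

procedure DoWork;
var
  MyArray: array[0..9] of Integer;
  I: Integer;
begin
  I := GetIndex;
  try
    MyArray[I] := 42;   // raises ERangeError when I is out of bounds
  except
    on E: Exception do
    begin
      // The "fake use" of I: referencing it here keeps the optimizer from
      // discarding it, so the debugger can show its value at the fault.
      OutputDebugString(PChar(Format('I = %d', [I])));
      raise;
    end;
  end;
end;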

How to make Crash resistant ios apps

I've been programming iOS apps for a while now, but my apps still crash regularly and it takes a lot of time to make them stable. I find this very annoying.
So, are there any programming patterns for writing crash-resistant iOS apps?
Turn up the compiler warnings. Remove all warnings.
Run the static analyzer. Remove all warnings.
Run with GuardMalloc and/or Scribbling.
Remove all Leaks
Remove all Zombies
Write Unit Tests
Write a high level of live error detection (e.g. assertions)
Fix bugs before adding features.
Use source control and continuous integration.
Read. Improve.
1) Use ARC.
ARC has a small learning curve, and the big problem I read about on SO is that people instantiate objects but forget to assign them to a strong ivar (or property), so the object mysteriously goes away. In any case, the benefit is so compelling that you should master it. When you do, most of the memory management bookkeeping you have to keep straight now goes away.
2) Build clean
You should never ever have warnings when you build. If you have a bunch, then when a real problem comes up it gets buried in the lines of cruft you got used to ignoring. Professional programmers build clean.
3) Use Analyze
Xcode/llvm has a phenomenal analyzer - use it - it's under the Product menu. Then clean up every single warning it gives you.
4) Use the right Configuration
I always have a Release (or Distribution) configuration, a ReleaseWithAsserts, and Debug. I only use Debug when working in lldb, as the code is much bigger and executes differently from an optimized build. ReleaseWithAsserts is similar to Debug, but the Debug=1 preprocessor flag is removed and the optimizer is set to -Os (the default for Release).
Finally, the Release/Distribution configuration has the following in the Preprocessing Macros:
NS_BLOCK_ASSERTIONS=1 NDEBUG
The first turns off all 'NSAssert()' lines, and the second all 'assert()' statements. This way no asserts are active in your shipped code.
5) Use copious amounts of asserts.
Asserts are one of the best friends you can have as a programmer. I see sooooo many questions here that would never have been written if programmers only used them. The only overhead is typing them, since (in my usage) they are compiled away in the Release configuration.
Whenever you get an object from another source, assert its existence, i.e.:
MyObject *foo = [self someMethod];
assert(foo);
This is sooo easy to type, but will save you hours of debugging when a nil object causes problems thousands of instructions later. You can assert on class too:
MyObject *foo = [self someMethod];
assert([foo isMemberOfClass:[MyObject class]]);
This is particularly useful when you are pulling objects out of dictionaries.
You can also put asserts at the start of methods to verify that the objects you received are not nil.
You can assert that a variable holds a value in the expected range:
assert(i>1 && i<5);
Again, the asserts are "live" in every configuration except Release/Distribution, where they are compiled out.
NSAssert is similar - but you have to use different macros depending on the number of arguments.
NSAssert(foo, @"Foo was nil");
NSAssert1(foo, @"Foo was nil, and i=%d", i);
...
6) Use selective logs to ensure that things that should happen actually do
You can define your own Log macro, which gets compiled out for Release/Distribution. Add this to your pch file:
#ifndef NDEBUG
#define MYLog NSLog
#else
#define MYLog(format, ...)
#endif
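Calls then read like ordinary NSLog calls in Debug builds and compile to nothing in Release/Distribution (items is just an illustrative variable):
MYLog(@"fetched %lu items", (unsigned long)items.count);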
This brings up another point - you can use:
#ifndef NDEBUG
...
#endif
to block out a bit of code that does more complex checks - it's only active in your development builds, not in Release/Distribution.
You may already do these, but here's what I do:
Go to every screen in the app using the debugger/simulator, and select "Simulate Memory Warning". Go back to the previous screen. This causes existing UIView objects and variable assignments in viewDidLoad to be reassigned (which could result in bad pointer references).
Make sure to invalidate any running timers in the dealloc.
If an object A has made itself a delegate of object B, make sure to clear it in the dealloc for object A.
Make sure to clear any notification observers in the dealloc (a dealloc sketch covering these last three points follows below).
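Here is the promised dealloc sketch under ARC (_pollTimer, _service, and the delegate wiring are assumed examples; ARC frees memory but does not unregister you from anything):

- (void)dealloc
{
    [_pollTimer invalidate];                // stop any running timer
    _pollTimer = nil;

    if (_service.delegate == self)          // we made ourselves object B's delegate...
        _service.delegate = nil;            // ...so clear it before we go away

    [[NSNotificationCenter defaultCenter] removeObserver:self];   // drop all observers
}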
