I am converting a low-level C library to Delphi.
I find a lot of casts, which I gather is normal in the C world, and I think I am safe in throwing them out; Integer is 32-bit only here. What do you think?
What could be the overhead of OOP if I convert it to objects etc?
Similarly I want to know the cost of try .. finally and exceptions.
Give me any tip that you think would be useful.
Similarly I want to know the cost of try .. finally and exceptions.
The cost of a try ... finally is negligible except in tight loops (put them around the loop, not inside it). Use them liberally to protect all your resources by balancing every instantiation / open / allocation with a free / close / deallocation.
var
  FileStream: TFileStream;
begin
  FileStream := TFileStream.Create('data.bin', fmOpenRead);
  try
    // ... work with the file ...
  finally
    FileStream.Free;
  end;
end;
The cost of a try ... except is noticeably higher. Use them to respond to exceptions occurring, but only when you can actually take some meaningful action, like counteracting the cause of the exception, logging specific information that would be lost at a higher level in your app, etc. Otherwise let the exception propagate to the caller of your code so it can (eventually) be caught at a more general level.
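For example, a meaningful except block might log context that would otherwise be lost and then re-raise (ProcessOrder, Log and Order are hypothetical stand-ins):
try
  ProcessOrder(Order);
except
  on E: Exception do
  begin
    // Record context that is only known here, then let the caller decide
    Log(Format('ProcessOrder failed for order %d: %s', [Order.ID, E.Message]));
    raise;
  end;
end;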
Never let exceptions escape your application or library or any thread within it.
Never "eat" exceptions by having an empty except block:
try
...
except
end;
There really is only one situation where this makes sense: catching exceptions in the code that logs exceptions... And even then, always add a comment saying why you are eating the exception.
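A minimal sketch of that one legitimate case (WriteToLog is a hypothetical logging routine):
try
  WriteToLog(Msg);
except
  // Deliberately swallowed: a failure inside the logger must not take
  // the application down, and there is nowhere left to report it.
end;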
You may find some helpful suggestions in the answers to this question on SO:
Best resources for converting C/C++ dll headers to Delphi?
You may also want to take a look at the C-To-Pas project which aims to automate much of the conversion from C to Delphi.
Moving from a procedural language to OOP is a big leap, but it brings real advantages: object-oriented code is generally easier to write and maintain. Delphi is a good choice for this kind of port because it still lets you work at a low level, including inline assembly, when you need to. As for try .. except: it exists to handle run-time exceptions gracefully instead of letting the program crash.
Related
I have inherited a large code base that is full of constructs like this:
try
DoWhatever;
finally
end;
Sometimes "DoWhatever" involves some fiddling with controls, and very often it's a post to a database, and there are many cases in the code where there IS something in the finally block.
But my understanding is, if there's nothing in the finally block, then the whole try...finally thing is pointless. The code is pretty noisy overall so I'm assuming it was just a placeholder for future code or a holdover from previous code. (There are many placeholders in the code.)
Is my understanding correct or is there some secret double-bluff reverso in Delphi try...finally blocks I've overlooked? (This is Delphi 2010.)
There is no functional purpose to it; it has no effect on the run-time behavior of the program. When control leaves the try section, the program will jump to the finally section, do nothing, and then continue to whatever the next instruction should be (according to how control reached the finally statement), the same as if the try-finally block hadn't been there at all. The only difference is that the program spends an extra moment before and after the try-finally block as it sets up and clears the control structure from the stack.
It might be that the author had a template that included a try-finally block, but that some instances of that template ended up not needing any cleanup code. Having the code fit a consistent pattern might have made certain automated tasks easier.
The try-finally block might also provide a convenient place to put a breakpoint that triggers after the protected code runs, even in the event of an exception or other premature termination.
To find out why the code has such apparently useless constructs, ask the previous author, or check the commit history.
In code that I help maintain, I have found multiple examples of code that looks like this:
Description := IfThen(Assigned(Widget), Widget.Description, 'No Widget');
I expected this to crash when Widget was nil, but when I tested it, it worked perfectly.
If I recompile it with "Code inlining control" turned off in Project - Options - Compiler, I do get an Access Violation.
It seems that, because IfThen is marked as inline, the compiler is normally not evaluating Widget.Description if Widget is nil.
Is there any reason that the code should be "fixed", as it doesn't seem to be broken? They don't want the code changed unnecessarily.
Is it likely to bite them?
I have tested it with Delphi XE2 and XE6.
Personally, I hate to rely on a behavior that isn't contractual.
The inline directive is a suggestion to the compiler.
If I understand correctly what I read, your code would also crash if you build using runtime packages.
inlining never occurs across package boundaries
Like Uli Gerhardt commented, it could be considered a bug that it works in the first place. Since the behavior isn't contractual, it can change at any time.
If I was to make any recommendation, I would flag that as a low priority "fix". I'm pretty sure some would argue that if the code works, it doesn't need fixing, there is no bug. At that point, it becomes more of a philosophical question (If a tree falls in a forest and no one is around to hear it, does it make a sound?)
Is there any reason that the code should be "fixed", as it doesn't seem to be broken?
That's really a question that only you can answer. However, to answer it you need to fully understand the implications of relying on this behaviour. There are two main issues that I perceive:
Inlining of functions is not guaranteed. The compiler may choose not to inline, and in the case of runtime packages or DLLs, a function in another package cannot be inlined.
Skipping evaluation of an argument only occurs when the compiler is sure that there are no side effects associated with evaluation of the argument. For instance, if the argument involved a function call, the compiler will ensure that it is always evaluated.
To expand on point 2, consider the statement in your question:
Description := IfThen(Assigned(Widget), Widget.Description, 'No Widget');
Now, if Widget.Description is a field, or is a property with a getter that reads a field, then the compiler decides that evaluation has no side effects. This evaluation can safely be skipped.
On the other hand, if Widget.Description is a function, or property with a getter function, then the compiler determines that there may be side effects. And so it ensures that Widget.Description is evaluated exactly once.
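To make the distinction concrete, here is a hypothetical TWidget showing both styles; only the field-backed form allows the inlined IfThen to skip the argument:
type
  TWidget = class
  private
    FDescription: string;
  public
    // Field-backed: evaluation has no side effects, so an inlined
    // IfThen may skip Widget.Description when Widget is nil.
    property Description: string read FDescription;
    // A function getter instead, e.g.
    //   property Description: string read GetDescription;
    // would force Widget.Description to be evaluated exactly once.
  end;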
So, armed with this knowledge, here are a couple of ways for your code to fail:
You move to runtime packages, or the compiler decides not to inline the function.
You change the Description property getter from a field getter to a function getter.
If it were me, I would not like to rely on this behaviour. But as I said right at the top, ultimately it is your decision.
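If you were to remove the reliance, the straightforward rewrite that does not depend on inlining at all is an explicit test:
if Assigned(Widget) then
  Description := Widget.Description
else
  Description := 'No Widget';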
Finally, the behaviour has been changed from XE7. All arguments to inline functions are evaluated exactly once. This is in keeping with other languages and means that observable behaviour is no longer affected by inlining decisions. I would regard the change in XE7 as a bug fix.
It has already been fixed in XE7, and it was confirmed that the old behaviour was indeed wrong.
See https://quality.embarcadero.com/browse/RSP-11531
In vulkan.h, every instance of VkAccessFlagBits appears in a pair that contains a srcAccessMask and a dstAccessMask:
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
In every case, according to my understanding, the purpose of these masks is to help designate two sets of operations, such that results of operations in the first set will be visible to operations in the second set. For instance, write operations occurring prior to a barrier should not get hung up in caches but should instead propagate all the way to locations from which they can be read after the barrier. Or something like that.
The access flags come in both READ and WRITE forms:
/* ... */
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
/* ... */
But it seems to me that srcAccessMask should probably always be some sort of VK_ACCESS_*_WRITE_BIT combination, while dstAccessMask should always be a combination of VK_ACCESS_*_READ_BIT values. If that is true, then the READ/WRITE distinction is identical to and implicit in the src/dst distinction, and so it should be good enough to just have VK_ACCESS_SHADER_BIT etc., without READ_ or WRITE_ variants.
Why are there READ_ and WRITE_ variants, then? Is it ever useful to specify that some read operations must fully complete before some other operations have begun? Note that all operations using VkAccessFlagBits produce (I think) execution dependencies as well as memory dependencies. It seems to me that the execution dependencies should be good enough to prevent earlier reads from receiving values written by later writes.
While writing this question I encountered a statement in the Vulkan specification that provides at least part of an answer:
Memory dependencies are used to solve data hazards, e.g. to ensure that write operations are visible to subsequent read operations (read-after-write hazard), as well as write-after-write hazards. Write-after-read and read-after-read hazards only require execution dependencies to synchronize.
This is from the section 6.4. Execution And Memory Dependencies. Also, from earlier in that section:
The application must use memory dependencies to make writes visible before subsequent reads can rely on them, and before subsequent writes can overwrite them. Failure to do so causes the result of the reads to be undefined, and the order of writes to be undefined.
From this I surmise that, yes, the execution dependencies produced by the Vulkan commands that involve these access flags probably do free you from ever having to put a VK_ACCESS_*_READ_BIT into a srcAccessMask field--but that you might in fact want to have READ_ flags, WRITE_ flags, or both in some of your dstAccessMask fields, because apparently it's possible to use an explicit dependency to prevent read-after-write hazards in such a way that write-after-write hazards are NOT prevented. (And maybe vice-versa?)
Like, maybe your Vulkan implementation will sometimes decide that a write does not actually need to be propagated all the way through a particular cache to its final destination for the sake of a subsequent read, if it happens to know that the read will simply hit that same cache, saving some time? But then a second write might go to a different cache, leaving two caches in a race (with the winner undefined) to send their values to the same spot. Or something? Maybe my mental model of these caches is entirely wrong.
It is fairly solidly established, at least, that memory barriers are confusing.
Let's go over all the possibilities:
read–read: well, yeah, that one is pretty useless. Khronos seems to agree (issue #131) that it is a pointless value in src (basically equivalent to 0).
read–write: an execution dependency should be sufficient to synchronize this without any access flags. Khronos seems to agree (issue #131) that it is a pointless value in src (basically equivalent to 0).
write–read: that's the obvious and most common one.
write–write: needed for a similar reason to write–read above; without it the order of the writes would be undefined. It is a bit pointless in most situations to overwrite something you haven't even read in between, but hey, now you have a way to synchronize it.
You can provide a bitmask combining several of these flags in both src and dst, in which case it makes sense to have both masks so the driver can sort the dependencies out for you. (I don't expect any performance overhead from this at the API level, so it is allowed as a convenience.)
From an API design perspective, separating them could have meant adding a different enum for srcAccess. But perhaps the _READ variants could simply have been forbidden in srcAccess through "Valid Usage" rules, which makes that argument weak. The src == READ variants might have been kept because they are benign.
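To ground this in code, a typical write–read dependency between a compute-shader write and a fragment-shader read might look like the following (a sketch using hypothetical Delphi Vulkan bindings whose names mirror the C API; CmdBuffer is assumed to be a command buffer in the recording state):
var
  Barrier: TVkMemoryBarrier;
begin
  Barrier.sType := VK_STRUCTURE_TYPE_MEMORY_BARRIER;
  Barrier.pNext := nil;
  Barrier.srcAccessMask := VK_ACCESS_SHADER_WRITE_BIT;  // make the writes available
  Barrier.dstAccessMask := VK_ACCESS_SHADER_READ_BIT;   // ...and visible to the reads
  vkCmdPipelineBarrier(CmdBuffer,
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,   // src: stages that must complete first
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,  // dst: stages that must wait
    0,              // dependency flags
    1, @Barrier,    // one global memory barrier
    0, nil,         // no buffer memory barriers
    0, nil);        // no image memory barriers
end;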
I use this code to determine whether a specific module has been injected into my application's process
(I use it to block some packet-sniffer software):
var
  H: Cardinal;
begin
  H := GetModuleHandle('WSock32.dll');
  if H > 0 then
    FreeLibrary(H);
end;
The problem is that when I call FreeLibrary, it does nothing!
I don't want to show a message and then terminate the application; I just want to unload the injected module silently.
Thanks in advance.
Well, first of all I'll attempt to answer the question as asked. And then, I'll try to argue that you are asking the wrong question.
Modules are reference counted. It's possible that there are multiple references to this module. So, keep calling FreeLibrary:
procedure ForceRemove(const ModuleName: string);
var
  hMod: HMODULE;
begin
  hMod := GetModuleHandle(PChar(ModuleName));
  if hMod = 0 then
    Exit;  // module not loaded, nothing to do
  // Each successful FreeLibrary drops the reference count by one;
  // keep going until the module is actually unloaded
  repeat
  until not FreeLibrary(hMod);
end;
If you were paranoid you might choose to add an alternative termination of the loop to avoid looping indefinitely.
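Such a guard might look like this (a sketch; the cap of 1000 iterations is arbitrary):
procedure ForceRemoveGuarded(const ModuleName: string);
var
  hMod: HMODULE;
  Attempts: Integer;
begin
  hMod := GetModuleHandle(PChar(ModuleName));
  if hMod = 0 then
    Exit;
  Attempts := 0;
  repeat
    Inc(Attempts);
  until (not FreeLibrary(hMod)) or (Attempts >= 1000);
end;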
I don't really know that this will work in your scenario. For instance, it's quite plausible that your process links statically to WSock32. In which case no amount of calling FreeLibrary will kick it out. And even if you could kick it out, the fact that your process statically linked to it probably means it's going to fail pretty hard.
Even if you can kick it out, it seems likely that other code in your process will hold references to functions in the module. And so you'll just fail somewhere else. I can think of very few scenarios where it makes sense to kick a module out of your process with complete disregard for the other users of that module.
Now, let's step back and look at what you are doing. You are trying to remove a standard system DLL from your process because you believe that it is only present because your process is having its packets sniffed. That seems unlikely to be true.
You state that your process is subject to a packet-sniffing attack, which means the process is communicating over TCP/IP. That in turn means it probably uses system modules to carry out that communication, one of which is WSock32. So you very likely link statically to WSock32. How is your process going to work if you kill one of the modules that supplies its functionality?
Are you quite sure that the presence of WSock32 in your process indicates that your process is under attack? If a packet sniffer was going to inject a DLL into your process, why would it inject the WSock32 system DLL? Did you check whether or not your process, or one of its dependencies, statically links to WSock32?
I rather suspect that you've just mis-diagnosed what is happening.
Some other points:
GetModuleHandle returns, and FreeLibrary accepts, an HMODULE. For 32 bit that is compatible with Cardinal, but not for 64 bit. Use HMODULE.
The not-found condition for GetModuleHandle is a return value of 0. Nowhere in the documentation is it stated that a value greater than 0 indicates success. I realise that Cardinal and HMODULE are unsigned, so <>0 is the same as >0, but it really makes no sense to test >0; it leaves the reader wondering, "what is so special about <0?"
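Putting both points together, the original check becomes:
var
  H: HMODULE;
begin
  H := GetModuleHandle('WSock32.dll');
  if H <> 0 then
    FreeLibrary(H);
end;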
This is a common question for other compilers (C#, VC++, GCC.) I would like to know the same thing for the Delphi compiler (any version; I'm currently using 2010 and XE2 and will use XE4 soon.)
I have a situation in high-performance code I'm writing where a condition has to be checked, but in most cases no action needs to be taken:
if UnlikelyCondition then
  HandleUnlikelyCondition
else
  HandleLikelyCondition;
Often nothing needs to be done for the likely case:
if UnlikelyCondition then
  HandleUnlikelyCondition
else
  Exit;
I would like to hint to the compiler that the second branch of the if statement is the one to optimize for. How can I do this in Delphi?
Current code
Currently, I have written my code assuming that the if statement's condition equalling true is the best thing to optimise for:
if LikelyCondition then
  HandleLikelyCondition
else
  HandleUnlikelyCondition;
or
if LikelyCondition then Exit;
HandleUnlikelyCondition;
In a test just now using the first of these two examples, I got a 50% performance boost from restructuring my if statements like this, i.e. assuming the if statement's condition is true. Perhaps another way of phrasing the question is: is this the best I can do?
If you have not encountered branch misprediction before, this epic answer is an illuminating read.
There is nothing in the language or compiler that allows you to supply hints for branch prediction. And in any case, modern architectures would ignore those hints even if the compiler emitted object code that contained hints.