PTHREAD_MUTEX_ROBUST vs PTHREAD_MUTEX_ROBUST_NP - pthreads

I've written some code (to run under Linux) that uses pthread robust mutexes for deadlock recovery.
Under CentOS 5 the mutex attribute name is PTHREAD_MUTEX_ROBUST_NP. However, under Fedora 16 the _NP suffix has been removed.
The POSIX standard does not include the suffix. What does the suffix mean, when was it removed, and what is the proper way to write code that compiles with either naming of the feature?
EDIT: It appears that in later pthreads releases the suffix was removed. However, defining _GNU_SOURCE keeps the '_np' versions defined, so the source can compile under either.

As cnicutar already suggested, the _NP suffix stands for non-portable and is appended by implementations that want to add functionality that is not (or not yet) in the standard. The standard will only consider including functions that are implemented in at least one major implementation, prove to be useful, and are not trivially achievable using existing standard functions.
Fedora generally uses more recent library versions than RHEL (or CentOS) and probably dropped the _np suffix now that robust mutexes and the associated API have been accepted into the standard.
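
Following the EDIT above, a minimal sketch of the _GNU_SOURCE approach (error handling omitted; the *_np spellings below are the glibc names that stay visible on newer glibc when _GNU_SOURCE is defined):

#define _GNU_SOURCE
#include <pthread.h>
#include <errno.h>

int main(void)
{
    pthread_mutex_t mtx;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* CentOS 5 glibc only knows the _np spelling; with _GNU_SOURCE the _np
       names remain defined on Fedora 16, where they alias the standard
       PTHREAD_MUTEX_ROBUST API. */
    pthread_mutexattr_setrobust_np(&attr, PTHREAD_MUTEX_ROBUST_NP);
    pthread_mutex_init(&mtx, &attr);

    if (pthread_mutex_lock(&mtx) == EOWNERDEAD) {
        /* The previous owner died holding the lock: recover the protected
           state, then mark the mutex consistent so it can be used again. */
        pthread_mutex_consistent_np(&mtx);
    }
    pthread_mutex_unlock(&mtx);

    pthread_mutex_destroy(&mtx);
    pthread_mutexattr_destroy(&attr);
    return 0;
}

Build with something like gcc -pthread robust.c (robust.c being just a placeholder file name).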

Related

In which operating systems must I use 'single dot followed by slash' when specifying a path name?

I know that in Windows I don't have to do so.
For example, ./dir/file.ext and dir/file.ext are equivalent.
Are these two forms possibly different on any other OS, e.g., Linux?
Or is it possibly application-dependent, in which case they might be treated differently even on Windows?
I am asking because I keep bumping into the usage of ./ at the beginning of path names (mostly, but not only, in NodeJS), and I would like to be sure that I can omit it safely (i.e., avoid making my code platform-dependent).
The ./ you're referring to is specific to UNIX-like operating systems (OS X and Linux are the major examples) and is not program-specific. On those platforms it is used to run an executable in the current directory, whereas on Windows merely typing an executable's filename will run it. I don't know whether the code you're referring to detects which OS you're on, but if it's true that Windows ignores the leading ./, then this is a useful cross-platform way to invoke an executable.
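
For example, on Linux or OS X, given an executable script named run.sh in the current directory (the name is just an illustration):

./run.sh     # runs: the leading ./ explicitly names a file in the current directory
run.sh       # usually fails with "command not found", unless "." is on your PATH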

How to obtain a file with the content of all include files explicitly included?

George "Mirage" Bakhtadze, the author of Cast II engine, has wrote about an include-based technique which can be used to create generic containers and algorithms. The source is avaiable from the repo at Github. For me, his include-based technique is very interesting and useful, because it can be used for older Delphi and it is compatible between Delphi and Free Pascal (and non-Windows OS ready).
It would be more useful for me if the _GenVector written in "gen_coll_vector.inc" has Sorted & Duplicates properties and related behaviors (behaving the same way as in TStringList).
However, it is less obvious for me to insert the code when there are many include directives (I wonder how George managed this in the first place). Therefore, I wonder whether it is possible to obtain a sample file with all include files explicitly included ? It might be more straightforward for me to start from there.
I mean that there is certain built-in pre-processor that works before the actual compiling and whether there is a way to keep these intermediate files ?
Delphi does not use a pre-processor. It is (and always has been, since Turbo Pascal days) a single-pass compiler. There is no intermediate step. When you use {$I} to include files, they are inserted in place in memory during the compilation process. Therefore, there is no "intermediate file" that can be kept.
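
As a rough illustration of what that means in practice (Helpers.inc and Demo.pas are made-up names, not part of George's code): if Helpers.inc contains

procedure SayHello;
begin
  WriteLn('Hello');
end;

and Demo.pas contains

unit Demo;
interface
procedure SayHello;
implementation
{$I Helpers.inc}  // the text of Helpers.inc is inserted right here during compilation
end.

then the unit compiles exactly as if the body of SayHello had been typed at the {$I} line. So to get the "flattened" file you're after, you would have to paste the include files in by hand (or with a small script that expands the {$I} directives textually).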

Detailed Explanation on "Re-Hosting" and "Retargeting" both for compilers and binary data (such as .exe or .obj)

Sometimes in s/w companies, customers provide data in multiple formats. There are linkable and executable data that are said to be "rehosted" and compiled object files that are said to be "retargeted". I am trying to understand what rehosting and retargeting mean in this area. Is it similar to bootstrapping in computer science? My understanding of the process is the following (correct me if it's wrong):
PROBLEM:
I need to write a compiler for a new language called "MyLang" to run on PowerPC
Solution:
1. I need to write a compiler for a language "MyLang-Mini"; a subset of "MyLang" to run on PowerPC.
2. I need to write a compiler for "MyLang" using "MyLang-Mini" to run on PowerPC.
3. I run the compiler obtained from no. 1 through the compiler obtained from no. 2 to obtain the compiler for MyLang to run on PowerPC.
IN BESPOKE "T" DIAGRAM (...ISH):
[Flattened T-diagrams: a MyLang-to-PowerPC compiler written in MyLangMini, run through a MyLangMini-to-PowerPC compiler (itself in PowerPC instructions), yields a MyLang-to-PowerPC compiler in PowerPC instructions.]
What I am getting confused about is rehosting and retargeting. How are they connected to this concept? What am I rehosting and retargeting if I have some binary data such as .exe or .obj? I would appreciate a detailed explanation if possible, please!
I know this will touch on "CROSS-COMPILERS", but I would prefer expert opinions to be sure.
Thanks in advance.
I now know that in s/w engineering:
REHOSTING - If you have a third-party application linkable/executable that needs to run on your host machine, you do rehosting. The host and target in this case are most often the same (OS platform, processor, etc.). In the worst case, virtualisation is required. The rehosted application will run as if it were one of the applications running on the host machine.
RETARGETING - If you have third-party source code, you might need to recompile it to match your target environment. It may also be that you have third-party .o or .obj compiled modules and you want to link them with your source code (retargeted) in order to host it on a host machine. Just like a rehosted application, it will behave as if the application had been installed on the host machine.
It would be good to know how this relates to compiler rehosting and retargeting. Sorry, I am a newbie in this area and will appreciate even a slap on the wrist.

Compile your Lua files

How can I build and compile my own Lua files on Windows? And make them executable.
I am reading Beginning Lua Programming, and I have both Windows 7 and MacOS Lion installed. I am having a hard time following the instructions; they do not work for me.
On MacOS I open the terminal and put these in:
export LUA_DIR=/usr/local/lib/lua/5.1
mkdir -p /usr/local/lib/lua/5.1 (it tells me mkdir: illegal option) and I cannot follow from here
SET LUA_DIR=”c:\program files\lua\5.1”
As for Windows I do this according to the book.
This is what I see in my shell: c:\Users\bd>
mkdir "c:\program files\utility" and it tells me access is denied
I have tried to right click on this folder and check off read only, but it does not work.
Any clues would be appreciated, this part has been really confusing for me.
To package your Lua files into an executable on Windows you have several options. There is srlua, there is wxLuaFreeze from wxLua (available as a binary for Windows), and there are more options in this SO answer.
Essentially, the main two options are: (1) append your Lua code to a precompiled exe file, such that it will be loaded and executed when that exe file is run, and (2) convert your Lua code into a real executable by compiling it to bytecode, then to C, and then compiling that for your target platform.
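
For option (1), srlua (mentioned above) ships a small gluing tool; from memory of its README the Windows invocation is roughly as follows (myscript.lua and myprogram.exe are placeholder names, so check the srlua documentation for the exact tool name and argument order):

glue.exe srlua.exe myscript.lua myprogram.exe

The result is a standalone myprogram.exe that runs myscript.lua when launched.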
As to your MacOS issue, mkdir -p means that mkdir is asked to create intermediate directories as needed (for example, if you ask it to create /a/b/c, it will also create /a and /a/b if they don't exist). As you don't say which version of MacOS you run, it's difficult to provide a more detailed answer.
For now, the standard distribution of Lua does not compile a script to native executable code; it executes your scripts by first compiling them to bytecode, then interpreting the bytecode with a reasonably fast static interpreter (this also means that it is easily portable across native or virtual systems, and very resistant to attacks that could target bugs in a native compiler itself).
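For instance, with the stock tools (file names here are just examples), precompiling to bytecode looks like this:

luac -o script.luac script.lua    # compiles to Lua bytecode, not native code
lua script.luac                   # the interpreter then loads and runs the bytecode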
Also, Lua still does not feature a runtime JIT compiler like Java and .Net do: Lua still does not feature a VM designed to produce a safe sandbox.
There exist Lua packages that convert your bytecode (or directly a source script) to C source, which can be used to convert a Lua library into native code via the same C compiler used to compile the Lua engine itself (this is how the built-in libraries are produced, though they are slightly hand-optimized in some time-critical parts).
However, it is possible to compile Lua to JavaScript source and run it with good performance, because today's JavaScript interpreters perform well, their VMs featuring a JIT compiler for their own bytecodes.
It is also possible to convert the Lua bytecode to a .Net or Java source that can then be executed directly (for that you need a version of Lua that has been ported to .Net, Java, or JavaScript; this is less complicated than developing, directly in C/C++, a VM with a JIT compiler. A moderately complex part is the bytecode verifier, and the really complex parts are the memory manager, its garbage collector, and its sandbox, so that your Lua script is fully isolated from the Lua engine itself for its own memory; but the most complex part is the runtime optimizer and the collection of profiling statistics: this has been done in the modern VMs for Java, .Net, JavaScript, PHP/Zend, Python, Perl...).
I don't know which other language VM would offer the best performance for porting Lua and implementing on it a compiler to its own bytecode running at near-native speed in that VM. But my own small experience with programs (in a much simpler language) that self-generate bytecode they can then run themselves has always shown Java winning in performance over .Net and JavaScript. This is most probably because Java features a profiling-based dynamic code optimizer.
(By contrast, the .Net optimizer runs only once, during program installation, using some profiling data collected during the installation of the .Net VM itself, or at first instantiation of the script, without really knowing any profiling data collected during execution of the compiled program itself, and based on some checked assumptions about the platform's capabilities.)
I also don't know if it would be faster in PHP, Python, or Perl; the comparison with newer JavaScript engines has never been attempted, though. Porting/compiling a Lua program to JavaScript is relatively easy because JavaScript implements closures, which makes the resolution of linkages relatively easy. The generated JavaScript will then compile to native code with the excellent JavaScript JIT compilers we have today (which never cease to improve in performance, so much so that I've seen various applications now running faster in JavaScript than they previously did written in C++ or plain C; the memory footprint has also been largely reduced, we no longer have memory leaks, and even though there's a garbage collector, today's JavaScript VMs have a very efficient one, even better than the GC implemented in native Lua).
But Lua remains useful because it is easy to secure and sandbox and offers various security benefits (there are security issues in Lua as well for some kinds of applications, where JavaScript offers some solutions, notably for side-channel attacks based on variations in execution time; but these side-channel attacks are very hard to solve, can affect any system, any program, and any programming language, and they are becoming a critical issue because they are now more easily exploitable; the reason comes from the hardware optimizations we depend on more and more today when we want to maximize performance). And with Lua you may be more immune to those problems, which a sandboxing software environment cannot solve on its own.
Probably we'll later see a true VM implementation of Lua with a JIT, self-generating code, and the possibility of instantiating new sandboxed VMs to run that self-generated code. It will take more time to generate an EXE file for distribution, notably because that generally also requires adding an installer and a distribution manager.
So for now we could imagine distributing Lua applications compiled to the bytecode of another JIT-capable VM: this generated bytecode would be faster than the Lua bytecode, and it would be extremely complex to reverse-engineer back to the semantics of Lua, because that would require two separate reverse-engineering steps: first from the bytecode of the other VM to the bytecode of Lua (both bytecodes losing some easily inferable rules along the way), and then again back to some Lua source.
For the OSX terminal issue:
This command should work
export LUA_DIR=/usr/local/lib/lua/5.1
This command will probably give you permission problems:
mkdir -p /usr/local/lib/lua/5.1
You may try this to solve that. You will be prompted for your password:
sudo mkdir -p /usr/local/lib/lua/5.1
This command has nothing to do with OSX and will not work there; it is a Windows command:
SET LUA_DIR=”c:\program files\lua\5.1”
You have a permissions problem on Windows: try launching cmd or PowerShell as Administrator. C:\Program Files is a protected directory that a regular user account doesn't have permission to write to.
As for the OS X issue, check the mkdir OS X manual page to make sure you have the command right.
So, if I understood your question correctly, you are trying to build Lua on Windows.
This is of course possible, but not easy for beginners. I would highly recommend using a binary distribution, which is much easier to install, unless you have special requirements.
Here are several Windows distributions:
Lua Binaries (Lua 5.1 and 5.2)
LuaForWindows (Lua 5.1)
LuaDist (Lua 5.2)

do not use com.sun.xml.internal.*?

Is this statement true:
The com.sun.xml.internal package is an internal package, as the name suggests. Users should not write code that depends on internal JDK implementation classes. Such classes are internal implementation details of the JDK and subject to change without notice.
One of my colleagues used one of these classes in his code, which caused the javac task in Ant to fail to compile our project because the compiler couldn't find the class. The answer from Sun/Oracle says that this is expected compiler behavior, as users shouldn't use the package.
The question is: why were the classes in the package made public in the first place?
Thanks,
Sarah
Sun classes in the JDK are prefixed sun.* and are not part of the public supported interface, so they should be used with care. From the Sun FAQ:
The classes that Sun includes with the Java 2 SDK, Standard Edition, fall into package groups java.*, javax.*, org.* and sun.*. All but the sun.* packages are a standard part of the Java platform and will be supported into the future. In general, packages such as sun.*, that are outside of the Java platform, can be different across OS platforms (Solaris, Windows, Linux, Macintosh, etc.) and can change at any time without notice with SDK versions (1.2, 1.2.1, 1.2.3, etc). Programs that contain direct calls to the sun.* packages are not 100% Pure Java. In other words:
The java.*, javax.* and org.* packages documented in the Java 2 Platform Standard Edition API Specification make up the official, supported, public interface.
If a Java program directly calls only API in these packages, it will operate on all Java-compatible platforms, regardless of the underlying OS platform.
The sun.* packages are not part of the supported, public interface.
A Java program that directly calls into sun.* packages is not guaranteed to work on all Java-compatible platforms. In fact, such a program is not guaranteed to work even in future versions on the same platform.
It's because Java visibility modifiers (especially at the type level, where there are only two options) don't currently have the granularity to achieve the sort of visibility you're hinting at. I don't know the specifics of the internal class or classes you're using, but basically making the classes private would have made them unfit for their intended purpose, so the only other choice was public.
Sadly, JAXB (bundled with Java 6) seems to rely on a non-public class, com.sun.xml.internal.bind.marshaller.NamespacePrefixMapper, to allow you to specify namespace prefixes when marshalling to XML.
You have to really go out of your way to get this compiling with ant:
http://pragmaticintegration.blogspot.com/
Summary:
Option 1.
Add jre libs as bootclasspathref
Add property: includeJavaRuntime="yes"
Option 2.
Use JAXB-RI libs - change property to "com.sun.xml.bind.marshaller.NamespacePrefixMapper"
Also mentioned here:
Define Spring JAXB namespaces without using NamespacePrefixMapper
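
A rough sketch of Option 2 with the JAXB-RI on the classpath (the bound class, sample namespace, and prefix below are illustrative, not from the question; with the JDK-internal copy the class and property would instead start with com.sun.xml.internal.bind):

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;
import com.sun.xml.bind.marshaller.NamespacePrefixMapper;

public class PrefixExample {

    // Trivial bound class, just to have something to marshal.
    @XmlRootElement(name = "greeting", namespace = "http://example.com/ns")
    public static class Greeting {
        public String text = "hi";
    }

    // Maps namespace URIs to the prefixes we want in the generated XML.
    public static class MyMapper extends NamespacePrefixMapper {
        @Override
        public String getPreferredPrefix(String uri, String suggestion, boolean requirePrefix) {
            return "http://example.com/ns".equals(uri) ? "ex" : suggestion;
        }
    }

    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Greeting.class);
        Marshaller m = ctx.createMarshaller();
        // Vendor-specific property understood by the JAXB reference implementation.
        m.setProperty("com.sun.xml.bind.namespacePrefixMapper", new MyMapper());
        m.marshal(new Greeting(), System.out);
    }
}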
