I am using boost::interprocess to implement an IPC mechanism based on shared memory. I want this to be cross-platform (including Windows), and the build system is autotools. On some platforms boost::interprocess requires that you link with the library librt, which implements some threading and realtime functions, but not on Windows or Apple platforms (I think). See the following link:
http://www.boost.org/doc/libs/1_64_0/doc/html/interprocess.html#interprocess.intro.introduction_building_interprocess
Now, in autotools I can do things conditionally depending on whether the host is Windows, but it would be better to test whether the required functions are available using AC_SEARCH_LIBS. The AC_SEARCH_LIBS macro first checks whether the functions are available with no extra libraries, then tries each library in a provided list in turn.
To do the test, I need to know which functions to search for.
To be even more specific, I need to use the following headers in my program:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/vector.hpp>
So what functions in librt are required by these headers, or what is the easiest way to find this out? (The shape of the configure.ac check I have in mind is sketched below.)
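The choice of shm_open as the probe symbol here is my assumption (it is the POSIX shared-memory call that librt provides on Linux); clock_gettime is another librt candidate on older glibc:

# Probe for the library providing shm_open: first no library, then -lrt.
AC_SEARCH_LIBS([shm_open], [rt],
  [],
  [AC_MSG_ERROR([shm_open not found; it is needed by boost::interprocess])])
# clock_gettime lived in librt before glibc 2.17, so probe it separately.
AC_SEARCH_LIBS([clock_gettime], [rt])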
Related
I'm running Linux and I want to access /proc/kallsyms from outside the OS. I'm writing an SMM driver (EDK2) in C, but I cannot use stdio functions like fopen, because the compiler (GCC) says
undefined reference to "fopen"
even though I use #include <stdio.h>, and other packages within EDK2 use stdio (go figure \o/).
So I'm wondering: is there a way I can access /proc/kallsyms without using functions like fopen or open (which, by the way, I tried as well - same error)?
It would need to be "pure C"-ish, if you know what I mean, given that it would not be executed at OS level. Any help or suggestion would be appreciated.
I'm compiling C++ to WebAssembly using clang --target=wasm32 --no-standard-libraries. Is there a way to convince clang to generate sqrt? It's not finding <math.h> with this target.
Have you already tried compiling without the flag --no-standard-libraries? If you remove it, clang will probably find math.h (because it's a standard library header).
This is because wasm32-unknown-unknown is a completely barebones target in Clang, and doesn't have any standard library - that is, no math.h, no I/O functions, not even memcpy.
However, you can usually get away with using --target wasm32-wasi + WASI SDK instead: https://github.com/WebAssembly/wasi-sdk
It includes the whole standard library, including even functions for interacting with the filesystem via the WASI standard in compatible environments.
If your code doesn't depend on filesystem / clock / other I/O, then you can safely use the WASI SDK to get math.h, memcpy, malloc and other standard functions, and the resulting WebAssembly will be compatible with non-WASI environments as well.
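For example, a minimal sketch (the /opt/wasi-sdk install path is an assumption - adjust it to wherever you unpacked the SDK):

// sqrt_demo.c
#include <math.h>

// With optimization, sqrt() typically lowers to the wasm f64.sqrt instruction.
double hypotenuse(double a, double b) {
    return sqrt(a * a + b * b);
}

Compiled as a library-style module with no main() (the flags follow the usual wasm-ld conventions for that case):

$ /opt/wasi-sdk/bin/clang --target=wasm32-wasi -O2 \
      -nostartfiles -Wl,--no-entry -Wl,--export=hypotenuse \
      -o sqrt_demo.wasm sqrt_demo.c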
I've tried to formulate the problem in an abstract way, but I give details about the actual libraries at the end.
The dynamic library Addon is statically linked against another library, WebRTC, which has some code in assembly; this code is linked into WebRTC as object files together with WebRTC's own object files. Let's call this assembly code VP8. The functions of VP8 are declared extern inside WebRTC. Some function Encode() in Addon calls functions of WebRTC, which eventually call functions from VP8.
Now, the application Firefox, which is going to load the library Addon, is quite complex and has its own statically linked version of the WebRTC library (let's call it WebRTC2), but an older one.
So here is the problem: when a call to Encode() is made from the application Firefox, the WebRTC functions get called (not WebRTC2, which is correct), BUT when WebRTC tries to call the VP8 functions, they get called from the WebRTC2 version (that is, the application's copy of WebRTC), not from WebRTC.
Is there a way to force WebRTC to make calls only to its local copy of VP8?
Application Firefox is the Firefox browser, WebRTC is the WebRTC library, VP8 is the VP8 codec library (inside WebRTC), and Addon is my Firefox C++ add-on.
UPDATE - DETAILED DESCRIPTION
Here is "unabstract" description of the problem:
So there is a C++ XPCOM add-on which is statically linked against latest version of WebRTC library.
At some point inside add-on a call for encoding a frame is made (method Encode of VP8Encoder class) and it crashes in Firefox all the time, while continue to work well on test programs using gtest framework.
The problem is that at some point inside WebRTC there is VP8 assembly code which is get called for encoding and functions of this assembly code are declared as extern in implementation files. Actually, it crashes on vp8_intra_pred_y_ve_sse2 function.
I've compared three assembly codes of this function: one is from my version of WebRTC (used in add-on), second - where debugger crashed and the third one - from source code of Mozilla's WebRTC.
It turned out that for some weird reason, Mozilla's code get called instead of add-on's WebRTC (they both have same names of course) and as Mozilla's WebRTC code is outdated, it crashes with EXC_BAD_ACCESS.
This probably won't be much help to you, but since no one else has responded, here I go...
You didn't mention whether you're running on Linux, Windows, or something else. My answer is for Linux; I believe Windows loads dynamic libraries differently.
I think what's happening is that you've statically linked against the WebRTC library's stub interfaces, those stub interfaces are dynamically linking to the actual implementation, and the lookup is getting the first instance of the WebRTC library loaded into Firefox instead of the second. Make sure that you're really linking against a statically compiled version of the WebRTC library.
The Linux dlopen(3) man page has an interesting flag that seems like it would help if your library were the one loading the WebRTC library:
void *dlopen(const char *filename, int flag);
dlopen()
The function dlopen() loads the dynamic library file named by the null-terminated string filename and returns an opaque "handle" for the dynamic library. [...]
RTLD_DEEPBIND (since glibc 2.3.4)
Place the lookup scope of the symbols in this library ahead of
the global scope. This means that a self-contained library will
use its own symbols in preference to global symbols with the
same name contained in libraries that have already been loaded.
This flag is not specified in POSIX.1-2001.
Unfortunately, Firefox is the one loading your library.
If your library were dynamically linked against WebRTC (which it seems to be) and you explicitly loaded the WebRTC library you want using dlopen() with the RTLD_DEEPBIND flag, then that would solve your problem.
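A minimal sketch of that approach (the library path and the Encode symbol name here are placeholders, not the real WebRTC ones):

#define _GNU_SOURCE /* RTLD_DEEPBIND is a GNU extension */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical path; substitute the WebRTC build you actually want. */
    void *h = dlopen("./libwebrtc_local.so", RTLD_NOW | RTLD_DEEPBIND);
    if (!h) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    /* With RTLD_DEEPBIND, the loaded library's own internal references
       prefer its local definitions over same-named symbols that are
       already present in the process (e.g. Firefox's WebRTC2 copy). */
    void (*encode)(void) = (void (*)(void))dlsym(h, "Encode");
    if (encode)
        encode();
    dlclose(h);
    return 0;
}

(On older glibc you also need to link with -ldl.)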
This won't help you much, since it's not your code that's binding to vp8_intra_pred_y_ve_sse2, but it's worth pointing out that glibc also has the dlsym() function, which can take a couple of special pseudo-handles:
void *dlsym(void *handle, const char *symbol);
dlsym()
The function dlsym() takes a "handle" of a dynamic library returned by dlopen() and the null-terminated symbol name, returning the address where that symbol is loaded into memory. [...]
There are two special pseudo-handles, RTLD_DEFAULT and RTLD_NEXT. The
former will find the first occurrence of the desired symbol using the
default library search order. The latter will find the next occurrence
of a function in the search order after the current library. This
allows one to provide a wrapper around a function in another shared
library.
You could use that in your code, at least to debug what's going on, by having it print out the values it gets for a lookup of vp8_intra_pred_y_ve_sse2.
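A debugging sketch along those lines (RTLD_DEFAULT and RTLD_NEXT require _GNU_SOURCE; the symbol name is the one from your crash):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

static void dump_symbol(const char *name)
{
    /* First definition in the default search order (what callers get). */
    void *first = dlsym(RTLD_DEFAULT, name);
    /* Next definition after the current library: a later copy, if any. */
    void *next = dlsym(RTLD_NEXT, name);
    printf("%s: default=%p next=%p\n", name, first, next);
}

Calling dump_symbol("vp8_intra_pred_y_ve_sse2") from inside the add-on would show whether two copies of the function are visible and which one wins.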
Finally, the same man page notes that Linux has a dlvsym() that takes a version string argument as well, allowing the XPCOM code to specify which version of the function it wants:
#define _GNU_SOURCE /* See feature_test_macros(7) */
#include <dlfcn.h>
void *dlvsym(void *handle, char *symbol, char *version);
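Usage would look something like this; note that the "VERS_1.0" version tag is purely hypothetical and must match a version name actually defined in the library's version script (see dlvsym(3)):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("./libwebrtc_local.so", RTLD_NOW); /* hypothetical path */
    if (!h)
        return 1;
    /* Ask for the definition bound to a specific symbol version. */
    void *f = dlvsym(h, "vp8_intra_pred_y_ve_sse2", "VERS_1.0");
    printf("versioned lookup: %p\n", f);
    dlclose(h);
    return 0;
}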
If it were me, and I were forced to dynamically link this stuff, I'd go with the brute-force approach: go into the libraries and change the function names (in both libraries). It's not elegant, and it will be a headache whenever a new version of either library comes out, but it's simple and direct.
One feature I miss in Delphi (I hope it is at all possible) is the ability to have units automatically include their dependent units. This is possible with C++ headers.
For example, in c++:
dependentHeader.h:
#include "baseHeader.h"
Any headers included in baseHeader.h are available in dependentHeader.h. Another example is the precompiled header: whatever I include in the precompiled header is available to all header files in the project. This is very useful for including frequently used headers throughout a project.
Now back to Delphi:
I have a unit called DebugService.
In order to use it, other units are required: DependentUnit1 and DependentUnit2.
So in every unit where I use DebugService, I have to manually add all the other dependent units: DependentUnit1 and DependentUnit2.
What I want is to be able to specify just DebugService as a dependency and have all its dependencies come along.
So, in other words I want:
uses
DebugService;
and NOT:
uses
DebugService, DependentUnit1, DependentUnit2;
Is this at all possible?
Thank you!
Ironic that you would ask this, when a better question would be, "Why doesn't C++ have modules yet, in the year 2013?"
Delphi's compilation units are not normally split into duplicate .h and .cpp files. You may have noticed that Delphi units have an Interface and an Implementation section. This amounts to a true module system: compiled .DCU files differ significantly from C/C++ compiler ".obj" files in that just the interface area can be read, very quickly, by the compiler when a "uses UnitX" clause is encountered.
Recently, CLANG/LLVM compiler developers at Apple started adding the rudiments of true module support to the latest CLANG/LLVM C and Objective-C compilers. This means that precompiled header support in XCode is no longer the preferred way of doing things, because true modules are better than precompiled headers. You could say that a precompiled header system is like having one module, and only one module: a shabby kludge that you are happy to have when you cannot have the real thing, which is called modules. You may say: you are a Windows developer, what do you care about CLANG/LLVM? Only that it is evidence that the world is slowly giving up on precompilation and moving, eventually, to modules. The C++ standardization committee, working at its current rate, will certainly deliver you a working C++ standard (but not an implementation) by 2113, at the latest.
In short, your question amounts to asking whether the horseless carriage is going to gain features for accelerating the caching and rapid deployment of oats to the equine power units.
We don't need that here. We have a real compiler with real module support. End of story. You may notice that modules (in clang/llvm) are faster than precompiled headers. They are also less of a source of problems than precompiled headers, which are a nearly endless source of crazy problems.
Pre-compiled headers don't have any semantic meaning that differs from standard headers. They are simply an optimisation to improve compilation times. Delphi compilation is typically much faster than C++ compilation, so the optimisation is not needed.
You cannot use unit A and transitively use all of unit A's dependencies. If you want to use definitions from a unit, it must be listed in the uses clause.
There is no equivalent to pre-compiled headers in Delphi. Adding the additional uses references is required if DebugService uses declarations from DependentUnit1 and DependentUnit2 in the declarations of its own interface section, and those declarations are in turn used by other units, which makes those units dependent on DependentUnit1 and DependentUnit2 as well. If you can design your units to reduce interface dependencies, using dependent units only in the implementation section instead, then you won't have to include DependentUnit1 and DependentUnit2 in the other units' uses clauses anymore. But I understand that is not always possible.
If you need to share code amongst multiple units, it is best to move that code to its own unit/package.
#include "baseHeader.h"
is equivalent to
{$I baseHeader.pas}
You can put anything you like into that file, even the whole interface section.
Another alternative for your problem is the use of conditional defines:
In the main project file:
{$DEFINE debugMyApp}
In each unit:
uses
  abc
  {$IFDEF debugMyApp}
  , additionalUnit1
  , additionalUnit2
  , etc
  {$ENDIF}
  ;
While discussing OpenCOBOL being utilized for FastCGI, I posted that replacing
#include <stdio.h>
with
#include <fcgi_stdio.h>
should exhibit no behaviour change for the vast majority of programs that don't care to call
FCGI_Accept()
Did I lie? Are there issues to consider? I'll admit to not having gone over the sources yet, only the docs from the website.
EDIT: 2013-03-08
I've done some experiments, and the statement is holding up so far, but I lack sufficient evidence to advertise it as true. I'd still appreciate any insider information. The pattern I've been testing is sketched below.
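For context, the usual fcgi_stdio.h pattern looks like this (adapted from the FastCGI documentation; the accept loop is the only FastCGI-specific part, and when the program is invoked as plain CGI the first FCGI_Accept() call is essentially a no-op and the second returns -1):

#include <fcgi_stdio.h> /* remaps printf & co. to FCGI_* wrappers */

int main(void)
{
    /* Each iteration serves one request. */
    while (FCGI_Accept() >= 0) {
        printf("Content-type: text/plain\r\n\r\n");
        printf("Hello from FastCGI\r\n");
    }
    return 0;
}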
As fcgi_stdio.h redefines many stdio symbols to its own set of FCGI_* symbols, there will with certainty be some differences. FastCGI also offers the possibility to #define NO_FCGI_DEFINES, though, which lets you use both sets - although you then have to specify the FCGI_ prefix explicitly.
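A sketch of that mixed usage (the FCGI_-prefixed functions are the ones fcgi_stdio.h declares; with NO_FCGI_DEFINES the plain names keep their normal stdio meaning):

#define NO_FCGI_DEFINES /* don't remap printf/stderr to FCGI_* */
#include <fcgi_stdio.h>
#include <stdio.h>

int main(void)
{
    while (FCGI_Accept() >= 0) {
        /* Explicit FCGI_ call: goes to the FastCGI output stream. */
        FCGI_printf("Content-type: text/plain\r\n\r\nvia FastCGI\r\n");
        /* Plain stdio call: really goes to the process's stderr. */
        fprintf(stderr, "log line via real stdio\n");
    }
    return 0;
}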
I was just thinking about adding a way to determine at runtime which set to use, so the same binaries could be used online and from the CLI, but on further thought I'll rather go with two make targets.
Also, compiling with libfcgi-dev v2.4.0, I seem to run into blank output in conjunction with -ldl / dlopen(), although both binaries link against the same libfcgi.so.0...
--
tl;dr: if you want to use dlopen() and see the output on stdout/stderr, don't #include <fcgi_stdio.h> (without defining NO_FCGI_DEFINES).