Using the minimum run-time packages to run a standalone GTK+ Windows application

Consider the simplest GTK+ application:
#include <gtk/gtk.h>

int main(int argc, char *argv[])
{
    GtkWidget *window;

    gtk_init(&argc, &argv);
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_widget_show(window);
    gtk_main();
    return 0;
}
I compile it successfully on Windows using this:
gcc -o hello.exe hello.c -mms-bitfields -IC:/gtk/include/gtk-2.0 -IC:/gtk/lib/gtk-2.0/include -IC:/gtk/include/atk-1.0 -IC:/gtk/include/cairo -IC:/gtk/include/gdk-pixbuf-2.0 -IC:/gtk/include/pango-1.0 -IC:/gtk/include/glib-2.0 -IC:/gtk/lib/glib-2.0/include -IC:/gtk/include -IC:/gtk/include/freetype2 -IC:/gtk/include/libpng14 -LC:/gtk/lib -lgtk-win32-2.0 -lgdk-win32-2.0 -latk-1.0 -lgio-2.0 -lpangowin32-1.0 -lgdi32 -lpangocairo-1.0 -lgdk_pixbuf-2.0 -lpango-1.0 -lcairo -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lglib-2.0 -lintl
In order to run hello.exe as a standalone program, I include with it all the GLib, cairo, Pango, ATK, gdk-pixbuf, gettext-runtime, fontconfig, freetype, expat, libpng and zlib run-time packages. The program runs very well; the problem is that all this run-time material takes about 40 MB of disk space.
Does this simple program need all the files in ./share/locale (25 MB)? Is there any way to use the minimum run-time packages to run my application on Windows?
Thanks

./share/locale/* contains the translations, so you have to choose for yourself: do you want translations, or do you want to save disk space?
Depending on how you ship your application (for example, simply packed in a .rar or with an installer), you can install those translations conditionally.
The only required files are the libraries, like libgtk-win32-2.0-0.dll, libglib-2.0-0.dll, …
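A minimal pruning sketch, assuming an MSYS-style shell and the runtime unpacked under C:/gtk (hello.exe is the example above):
# drop the translations entirely, or delete only the languages you don't ship
rm -rf C:/gtk/share/locale

# list the DLLs hello.exe imports directly; each of those DLLs has imports
# of its own, so repeat the check on them to build the full set to ship
objdump -p hello.exe | grep "DLL Name"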

Related

Using Clang with a self-built libstdc++ produces undefined symbol _ZSt15__once_callable

I have built libstdc++ with no modifications yet:
cd gccsrcdir/libstdc++-v3/build
../configure --prefix=$PWD/../install
make && make install
I am using Ubuntu 21.10 and I set the following environment variables:
export LIBRARY_PATH=gccsrcdir/libstdc++-v3/install/lib
export LD_LIBRARY_PATH=gccsrcdir/libstdc++-v3/install/lib
export CPLUS_INCLUDE_PATH=gccsrcdir/libstdc++-v3/install/include/c++/13.0.0
When I then use the system's GCC, I get no problems. When I use the system's Clang, it produces a symbol lookup error - even with no parameters:
clang++
clang++: symbol lookup error: /lib/x86_64-linux-gnu/libicuuc.so.67: undefined symbol: _ZSt15__once_callable, version GLIBCXX_3.4.11
In fact I only need to update LD_LIBRARY_PATH to arrive here. What am I doing wrong?
The symbol std::__once_callable is defined in your system libstdc++.so.6 (it has version GLIBCXX_3.4.11 in my build, which means it was added in GCC 4.4.0).
Your build of libstdc++.so.6 should define this symbol as well, but for some reason does not. That is a problem -- any binary which uses this symbol will fail at runtime when using your build of libstdc++.so.6 (which is happening because you've pointed LD_LIBRARY_PATH to it).
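This is easy to confirm with nm (the first path is the system library on the questioner's Ubuntu; adjust as needed):
# the system library defines the symbol...
nm -D /lib/x86_64-linux-gnu/libstdc++.so.6 | grep _ZSt15__once_callable
# ...while the standalone libstdc++ build does not
nm -D gccsrcdir/libstdc++-v3/install/lib/libstdc++.so.6 | grep _ZSt15__once_callable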
Note: in your case it's the clang++ binary that is failing to run -- any flags you add to it (such as -femulated-tls) are irrelevant -- they only affect the binary that would have been generated IF clang++ itself didn't fail.
I just repeated your configure && make steps, and the library built this way also doesn't define this symbol.
I then repeated the configure && make, but starting from the top-level GCC directory, and the libstdc++.so.6 built that way does define the symbol.
Conclusion: libstdc++ is configured differently during "normal" GCC build.
The definition comes from mutex.o, which is built from ./libstdc++-v3/src/c++11/mutex.cc, and which has this chunk of code:
#ifdef _GLIBCXX_HAS_GTHREADS
namespace std _GLIBCXX_VISIBILITY(default)
{
_GLIBCXX_BEGIN_NAMESPACE_VERSION

#ifdef _GLIBCXX_HAVE_TLS
  __thread void* __once_callable;
  __thread void (*__once_call)();
...
So it sounds like either _GLIBCXX_HAS_GTHREADS or _GLIBCXX_HAVE_TLS is not defined when running configure && make in the libstdc++-v3 directory directly.
Digging further, I see that libstdc++-v3 determines _GLIBCXX_HAS_GTHREADS by trying to compile #include "gthr.h", and that file is available in libgcc/gthr.h, but not in "standard" installed GCC.
../libstdc++-v3/configure && grep _GLIBCXX_HAS_GTHREADS config.h
/* #undef _GLIBCXX_HAS_GTHREADS */
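You can see where gthr.h does and does not exist (a quick check; the source path follows the question):
ls gccsrcdir/libgcc/gthr.h      # present in the GCC source tree
find /usr/lib/gcc -name gthr.h  # typically no hits for an installed GCC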
TL;DR: correctly configuring libstdc++.so is complicated, and you will be better off building complete GCC.
Once you have a complete build, you will have a properly configured libstdc++-v3 directory, and can just rebuild in that directory:
grep _GLIBCXX_HAS_GTHREADS ./x86_64-pc-linux-gnu/libstdc++-v3/config.h
#define _GLIBCXX_HAS_GTHREADS 1
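A sketch of such a complete build, assuming gccsrcdir from the question (the install prefix is an assumption; GCC recommends building in a separate directory, and download_prerequisites fetches gmp, mpfr, mpc and isl):
cd gccsrcdir
./contrib/download_prerequisites
mkdir build && cd build
../configure --prefix=$HOME/gcc-install --disable-multilib --enable-languages=c,c++
make -j$(nproc) && make install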

Debugging printf performance when compiled with Clang CFI

Setup
I have a simple helloworld program:
// content of main.c
#include <stdio.h>
#include <limits.h>

int main() {
    for (int i = 0; i < INT_MAX; ++i) {
        printf("simply helloworld!\n");
    }
    return 0;
}
I compile a baseline version with clang 13.0.0 using:
clang -flto=thin -fvisibility=hidden -fuse-ld=lld main.c
To experiment with CFI, I compile another version using:
clang -flto=thin -fsanitize=cfi -fsanitize-cfi-cross-dso -fno-sanitize-cfi-canonical-jump-tables -fsanitize-trap=cfi -fvisibility=hidden -fuse-ld=lld main.c
Expectation
I am expecting negligible performance overhead, as I am only calling into a shared library that I expect to run the same code for both. The disassembly of the main function looks the same for both binaries.
Reality
The baseline version completes execution in ~27s, while the CFI version completes execution in ~32s. Using perf stat -e instructions <binary>, I can see that the CFI version runs ~100,000,000,000 more instructions. With perf record and then perf diff, I can see that the difference is primarily in two functions, _pthread_cleanup_push_defer and _pthread_cleanup_pop_restore, which only the CFI version runs. Using gdb, I can see that these functions are called as the call stack of printf gets deeper.
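For reference, a sketch of the measurement workflow above (the binary names are assumptions):
# count retired instructions for each build
perf stat -e instructions ./main_baseline
perf stat -e instructions ./main_cfi

# profile both runs, then compare where the extra work goes
perf record -o baseline.data ./main_baseline
perf record -o cfi.data ./main_cfi
perf diff baseline.data cfi.data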
Question
How do I begin to explain the performance difference between these two binaries? What makes a simple call to printf take two different code paths in two different binaries?

How to print something to the console from the OpenMP source code?

I am modifying the OpenMP source code, and I want to make sure that it is indeed working.
For example, the following code:
#include "omp.h"

int main()
{
    int i = 0;
    #pragma omp parallel for schedule(dynamic,4)
    for (i = 0; i < 1000; ++i)
    {
        int x = 4 + i;
    }
}
should call __kmpc_dispatch_init_4(), and I have verified this using the -emit-llvm option in clang.
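A sketch of that verification, assuming the same clang build used later in the question:
# the emitted IR should contain a call to the runtime entry point
llvm_build/bin/clang -fopenmp -S -emit-llvm sc.c -o - | grep __kmpc_dispatch_init_4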
To double check I added the following print statement:
void __kmpc_dispatch_init_4(ident_t *loc, kmp_int32 gtid,
                            enum sched_type schedule, kmp_int32 lb,
                            kmp_int32 ub, kmp_int32 st, kmp_int32 chunk) {
  KMP_DEBUG_ASSERT(__kmp_init_serial);
  printf("%s\n", "Hello OpenMP");
#if OMPT_SUPPORT && OMPT_OPTIONAL
  OMPT_STORE_RETURN_ADDRESS(gtid);
#endif
  __kmp_dispatch_init<kmp_int32>(loc, gtid, schedule, lb, ub, st, chunk, true);
}
But when I compile and run the code, I am not getting this output on the terminal.
I am compiling the code like this:
llvm_build/bin/clang sc.c -L/usr/local/lib -fopenmp
And after building the OpenMP source code, the output of sudo make install is this:
Install the project...
-- Install configuration: "Release"
-- Up-to-date: /usr/local/lib/libomp.so
-- Up-to-date: /usr/local/include/omp.h
-- Up-to-date: /usr/local/include/omp-tools.h
-- Up-to-date: /usr/local/include/ompt.h
-- Up-to-date: /usr/local/lib/libomptarget.so
-- Up-to-date: /usr/local/lib/libomptarget.rtl.x86_64.so
To cross-check that this library is used, I ran:
ldd a.out
And the output is:
linux-vdso.so.1 (0x00007fff25bb6000)
libomp.so => /usr/local/lib/libomp.so (0x00007f75a52c6000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f75a50a7000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f75a4cb6000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f75a4ab2000)
/lib64/ld-linux-x86-64.so.2 (0x00007f75a558a000)
So I am assuming it should be using the code that I have modified, but I am not able to see the output of the print statement.
I have also tried using fprintf(stderr, ...), but that doesn't work either.
Thank you in advance.
First, you should flush the output using fflush(stdout) after the printf to force the line to be printed to the console.
Moreover, you should ensure that /usr/local/lib/libomp.so is the right one by checking the modification date of the file (with the stat command).
If the loaded file is not the right one, you can force it with the LD_LIBRARY_PATH, LIBRARY_PATH and LD_PRELOAD environment variables.
If this is not sufficient, you can use the nm or objdump tools to check the dynamic symbols: the function should be undefined in your program and provided by the modified LLVM OpenMP runtime.
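For example, a sketch of that check (the letters are what nm prints: U for undefined, T for defined text):
# the program should only reference the function (expect: U __kmpc_dispatch_init_4)...
nm -D a.out | grep __kmpc_dispatch_init_4
# ...and the installed runtime should define it (expect: T ...)
nm -D /usr/local/lib/libomp.so | grep __kmpc_dispatch_init_4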
PS: If none of this works, I strongly advise you to build your program and the OpenMP runtime library with debugging (and tracing) information so you can use debuggers like gdb in order to track function calls.

Clang showing compiler error with fuzzer argument

I am trying to experiment with the libFuzzer library and am going through the toy example [1].
keep-learnings-MacBook-Pro:Ccodeanalysis keep_learning$ cat Fuzzme.cpp
#include <stdint.h>
#include <stddef.h>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (size > 0 && data[0] == 'H')
    if (size > 1 && data[1] == 'I')
      if (size > 2 && data[2] == '!')
        __builtin_trap();
  return 0;
}
keep-learnings-MacBook-Pro:Ccodeanalysis keep_learning$ clang++ -fsanitize=address,fuzzer Fuzzme.cpp
ld: file not found: /Library/Developer/CommandLineTools/usr/lib/clang/10.0.1/lib/darwin/libclang_rt.fuzzer_osx.a
clang: error: linker command failed with exit code 1 (use -v to see invocation)
keep-learnings-MacBook-Pro:Ccodeanalysis keep_learning$ clang++ --version
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
A quick Google search showed me this [2], but other than that I could not find any meaningful information to resolve it, hence posting here. Could someone please tell me how to solve this? Thanks in advance.
[1] http://llvm.org/docs/LibFuzzer.html#toy-example
[2] https://bugs.llvm.org/show_bug.cgi?id=39794
As you have noticed, there is no fuzzer runtime shipped with the Apple developer tools. So you'd either report this issue to Apple, or build the runtime library yourself from the sources (or both).
As Anton stated, the Apple developer tools do not include the fuzzer library, leaving you to compile it from source or to ask Apple.
It turns out LLVM also hosts pre-compiled binaries for some releases on their downloads page: https://releases.llvm.org/download.html
On that page, find your LLVM version (e.g. "Download LLVM 10.0.0") and go a bit further until you see Pre-Built Binaries. Don't see binaries for your LLVM version? Pick the nearest lower version. The OP and I both have clang++ 10.0.1, so we'd pick 10.0.0.
Click the macOS link to download, pop into the Terminal to untar and copy the libraries, and you're done. I did it with a few environment variables (those paths are killer!), and a cp -n to preserve existing files.
export CLANG_ROOT=clang+llvm-10.0.0-x86_64-apple-darwin/lib/clang/10.0.0
export XCODE_ROOT=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/10.0.1
tar xvf clang+llvm-10.0.0-x86_64-apple-darwin.tar.xz $CLANG_ROOT/include/fuzzer $CLANG_ROOT/lib/darwin
sudo cp -rn $CLANG_ROOT/include/fuzzer $XCODE_ROOT/include
sudo cp -n $CLANG_ROOT/lib/darwin/* $XCODE_ROOT/lib/darwin
I did exactly the above, and my code compiled and linked right away.
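With the runtime library in place, the original invocation links, and the resulting fuzzer should quickly find the planted crash:
clang++ -fsanitize=address,fuzzer Fuzzme.cpp
./a.out   # libFuzzer mutates inputs until it reaches "HI!" and hits __builtin_trap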

Magick++ NoDecodeDelegateForThisImageFormat

I've googled my problem and found many pages, but none of them has exactly the same flavour, and I can't get my problem to go away.
I have a program which uses Magick++ and works fine on my PC, but it fails on another computer where I'm trying to run the code. A minimal example is this:
#include <iostream>
#include <Magick++.h>

int main(){
    Magick::Image im;
    im.read( "/fullpathtoimage.jpg" );
    std::cout << im.columns() << "\n";
    return 0;
}
(where of course "/fullpathtoimage.jpg" is a valid image).
This raises an exception:
terminate called after throwing an instance of 'Magick::ErrorMissingDelegate'
what(): ImageMagick: NoDecodeDelegateForThisImageFormat `/fullpathtoimage.jpg' # constitute.c/ReadImage/503
Aborted
Other reports of this problem basically say that the JPEG delegate is missing and that libjpeg etc. should be installed, but those reporters were getting the same errors when running convert. However, when I do
./convert /fullpathtoimage.jpg temp.png
It runs perfectly. Doing
./identify -list configure
lists, amongst others
DELEGATES bzlib mpeg fontconfig freetype jng jpeg pango png ps x xml zlib
LIBS -lfreetype -ljpeg -lpng12 -lfontconfig -lXext -lXt -lSM -lICE -lX11 -lbz2 -lpangocairo-1.0 -lpango-1.0 -lcairo -lgobject-2.0 -lgmodule-2.0 -lglib-2.0 -lxml2 -lz -lm
and
./identify -list format | grep JPEG
gives
JNG* rw- JPEG Network Graphics
JPEG* rw- Joint Photographic Experts Group JFIF format (62)
PJPEG* rw- Joint Photographic Experts Group JFIF format (62)
So, all seems fine, all binaries work, just my code doesn't.
The version I installed is ImageMagick-6.8.7-8. I built it from sources, as I don't have root access on that machine, and installed it to a location within my home directory (using ./configure --prefix=/home/...). I checked whether my program might be linked against the wrong ImageMagick (there is an older system one too), but ldd reveals all is fine, i.e. the program is linked against the one I installed, as I wanted, and running ldd on convert shows it is linked against exactly the same libraries.
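For what it's worth, a sketch of that ldd comparison (a.out stands for the compiled minimal example):
# both binaries should resolve the Magick libraries to the same
# home-built prefix, not the older system-wide installation
ldd ./a.out   | grep -i magick
ldd ./convert | grep -i magick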
