Recently I enabled /W4 warnings (MSVC) to clean up my project a bit and noticed that GLM uses a non-standard compiler extension, guarded by #define GLM_HAS_ANONYMOUS_UNION, that causes a very long warning spew.
There seems to be a compiler feature detection mechanism, but I can't disable compiler extensions entirely because of Windows SDK dependencies, and /Za is discouraged as buggy anyway. So what is the proper way to disable that particular thing in GLM?
I could slap an #undef everywhere I use GLM, but is there a "proper" place to configure these things, like a separate config file or something? I upgrade GLM from time to time, so I wouldn't want to modify that define in GLM's own code.
I ran into the same problem as you.
GLM tries to use all the capabilities of your compiler, and if it detects Visual Studio it will use non-standard extensions to do some fancy things.
If you want these non-standard things to go away (e.g. nameless unions/structs),
you can switch GLM into standard mode by adding
#define GLM_FORCE_CXX11
just before you include any GLM header.
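For example, a minimal sketch of that ordering (the vec3 usage is purely illustrative):

// Define GLM_FORCE_CXX11 before the first GLM include so the
// feature-detection headers pick the standard C++11 code paths.
#define GLM_FORCE_CXX11
#include <glm/glm.hpp>

int main() {
    glm::vec3 v(1.0f, 2.0f, 3.0f); // no anonymous-union extension involved
    return static_cast<int>(v.x);
}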
I pulled this info from the manual at:
http://glm.g-truc.net/0.9.7/glm-0.9.7.pdf
Alternatively, you can look into disabling this very specific warning via #pragma warning(push):
#pragma warning(push)
#pragma warning(disable : 4201) // nonstandard extension used: nameless struct/union
#include <glm/glm.hpp>
#pragma warning(pop)
more info at https://msdn.microsoft.com/en-us/library/aa273936(v=vs.60).aspx
I'm compiling C++ to WebAssembly using clang --target=wasm32 --no-standard-libraries. Is there a way to convince clang to generate sqrt? It's not finding <math.h> with this target.
Have you already tried to compile without the flag --no-standard-libraries? If you remove it, clang will probably find math.h (because it's a standard library).
This is because wasm32-unknown-unknown is a completely bare-bones target in Clang and doesn't have any standard library - that is, no math.h, no I/O functions, not even memcpy.
However, you can usually get away with using --target wasm32-wasi + WASI SDK instead: https://github.com/WebAssembly/wasi-sdk
It includes the whole standard library, including even functions for interacting with the filesystem via the WASI standard in compatible environments.
If your code doesn't depend on filesystem / clock / other I/O, then you can safely use WASI-SDK to get math.h, memcpy, malloc and other standard functions, and the resulting WebAssembly will be compatible with any non-WASI environments as well.
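For instance, a minimal sketch, assuming wasi-sdk is installed under /opt/wasi-sdk (the install path and the exported function name are assumptions):

// sqrt_demo.cpp - checks that the standard math header resolves under wasm32-wasi
#include <cmath>

extern "C" double hypotenuse(double a, double b) {
    return std::sqrt(a * a + b * b); // provided by wasi-libc's math support
}

// Build as a library-style module (no main(), so tell the linker not to expect an entry point):
// /opt/wasi-sdk/bin/clang++ --target=wasm32-wasi -O2 -nostartfiles -Wl,--no-entry -Wl,--export-all -o sqrt_demo.wasm sqrt_demo.cpp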
In iOS apps the default behaviour appears to be for Enable Testability to be set to Yes.
Why would I want that to be the default? Surely, at worst, I would only want Debug builds to have it enabled? What changes does Enable Testability actually make?
I happened upon this while tracking down another issue, but perhaps I can provide a scenario. Why would you ever not want to enable testability?
-fvisibility=hidden.
If you want to use GCC_SYMBOLS_PRIVATE_EXTERN (aka Symbols Hidden By Default), Enable Testability takes precedence and will override it.
In my case, I have a configuration which was copied from Debug and hence has Enable Testability == YES. I have an external static library that was built with -fvisibility=hidden, which is used to build one of my own static libs (built with Xcode). However, when building my debug builds, I get errors such as (function names and paths redacted):
Showing All Messages : Direct access in function ... means the weak
symbol cannot be overridden at runtime. This was likely caused by
different translation units being compiled with different visibility
settings.
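For context, here is a rough sketch of what -fvisibility=hidden changes at the symbol level (the function names are hypothetical):

// visibility_demo.cpp - compile with: clang++ -fvisibility=hidden -c visibility_demo.cpp

// Hidden under -fvisibility=hidden: not exported from the resulting binary.
int internal_helper() { return 42; }

// Explicitly re-exported despite the hidden-by-default flag.
__attribute__((visibility("default")))
int public_api() { return internal_helper(); }

The error above appears when some translation units hide a weak symbol this way while others (e.g. those built with Enable Testability) export it.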
From the Apple doc here:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/testing_with_xcode/chapters/04-writing_tests.html
When you set the Enable Testability build setting to Yes, which is
true by default for test builds in new projects, Xcode includes the
-enable-testing flag during compilation. This makes the Swift entities declared in the compiled module eligible for a higher level of access.
It would seem that it is also there for Swift accessibility. So if you are not using Xcode's testing or Swift, it would also seem you could do without it.
I have just finished porting a decent amount of C sources to the iOS platform and packaged them as a universal static framework. I then added the framework (not the project) to a sample iOS app in order to test linkage and proper function. That's when I ran into a humbling problem.
In my attempt to solve the problem described here, I also came across some symbols that are composed through heavy use of macros (I HATE those). Some of those macros use function attributes that are really extensions of GCC rather than of standard C.
Of course I can always add -std=gnu89, but even then, I am not sure it will resolve the original problem of undefined symbols in the static library.
Not only that, I am now worried that my port of those sources to iOS may not be an accurate port and may result in the kind of bugs/issues related to the compiler's code generation and/or optimization policies.
If you can share some of your experience/advice in how best to go about that port, I would really appreciate it.
Thanks!
From manual testing with clang 8.0, it seems that both __attribute__((constructor)) and __attribute__((__constructor__)) work for your purpose.
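A quick sketch showing both spellings side by side (the function bodies are just for illustration):

#include <cstdio>

// Both attribute spellings mark a function to run before main().
__attribute__((constructor))
static void init_plain() { std::puts("constructor ran"); }

__attribute__((__constructor__))
static void init_underscored() { std::puts("__constructor__ ran"); }

int main() { std::puts("main"); }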
While discussing OpenCOBOL being used for FastCGI, I posted that replacing
#include <stdio.h>
with
#include <fcgi_stdio.h>
should exhibit no behaviour change for the vast majority of programs that don't care to call
FCGI_Accept()
Did I lie? Are there issues to consider? I'll admit to not having gone over the sources yet, only the docs from the website.
EDIT: 2013-03-08
I've done some experiments, and the statement is proving positive, but I lack sufficient evidence to advertise it as true. I'd still appreciate any insider information.
As fcgi_stdio.h redefines many stdio symbols to its own set of FCGI_* symbols, there will certainly be some differences. FastCGI also offers the possibility to #define NO_FCGI_DEFINES, though, which lets you use both sets - although you'd then have to specify the FCGI_ prefix explicitly.
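A minimal sketch of that mixed usage (assuming a standard libfcgi installation; link with -lfcgi):

// With NO_FCGI_DEFINES the plain stdio names stay untouched,
// and the FCGI_* set has to be called explicitly.
#define NO_FCGI_DEFINES
#include <fcgi_stdio.h>
#include <cstdio>

int main() {
    while (FCGI_Accept() >= 0) {
        FCGI_printf("Content-type: text/plain\r\n\r\n"); // explicit FCGI_ prefix
        FCGI_printf("Hello from FastCGI\r\n");
        std::fprintf(stderr, "request served\n");        // plain stdio, real stderr
    }
    return 0;
}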
I was just thinking about adding a way to determine at runtime which set to use, to be able to use the same binaries online and from the CLI, but thinking further about it I'd rather go with two make targets.
Also, compiling with libfcgi-dev v2.4.0, I seem to run into blank output in conjunction with -ldl / dlopen(), although both binaries link to the same libfcgi.so.0...
--
tl;dr if you want to use dlopen() and see the output on stdout/stderr, don't #include <fcgi_stdio.h> (without defining NO_FCGI_DEFINES)
I have a chunk of code that crashes unless I build with optimizations off. I'm building with LLVM compiler 2.0.
I would like to turn off optimizations by wrapping the offending code in a #pragma compiler directive, or to turn off optimizations for an entire file.
I've been digging through the clang manual and code, but nothing jumps out at me.
Does anyone know how to change the optimizations for a single CU (as opposed to for the entire app)?
You can set per-file compiler flags in Xcode. In Xcode 4 (which I assume you're using because of the LLVM 2.0 reference), first select the project in the left-hand project browser. Go to the Build Phases tab and expand the Compile Sources build phase.
In there, you can set per-file compiler flags, so you could try going to the offending file and entering -O0 as a flag to try to disable optimizations for just that file.
GCC has some attributes you can set for this, as pointed out by Johannes in his answer here, but these might not be in LLVM. Also, from the comments there, it appears that these are not even in Apple's customized GCC used for building iOS applications.
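For reference, a sketch of the GCC-style attribute in question; as noted above, clang/LLVM (especially of that vintage) may ignore or reject it, so treat this as GCC-only until verified with your toolchain:

// Per-function optimization override - a GCC extension.
__attribute__((optimize("O0")))
void fragile_function(void) {
    // the code that misbehaves when optimized stays at -O0 here,
    // while the rest of the file keeps the project-level flags
}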