puts(NULL) - why doesn't WP+RTE complain? - standard-library

Consider this small C file:
#include <stdio.h>
void f(void) {
puts(NULL);
}
I'm running the WP and RTE plugins of Frama-C like this:
frama-c-gui puts.c -wp -rte -wp-rte
I would expect this code to generate a proof obligation of valid_read_string(NULL); or similar, which would be obviously unprovable. However, to my surprise, no such thing happens. Is this a deficiency in the ACSL specification of the standard library?

Basically yes. You can see in the version of stdio.h that is bundled with Frama-C that the specification for fputs (the situation for puts is similar) is
/*@ assigns *stream \from s[..]; */
extern int fputs(const char * restrict s,
FILE * restrict stream);
i.e. the bare minimum: an assigns clause (plus a \from clause for Eva), and no preconditions on s or stream. Adding a precondition on s would be easy; things are more complex for stream, since you need a model for the various objects of type FILE.
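For illustration, here is a minimal sketch of what a stronger contract for puts could look like. valid_read_string is the predicate the question mentions; the exact clauses below are an assumption on my part, not the specification actually bundled with Frama-C:
/*@ requires valid_read_string(s);
    assigns \result \from s[0..];
*/
extern int puts(const char *s);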

Related

Getting bison parser to divulge debug information

I am having trouble writing a bison parser, and unexpectedly ran into difficulties getting the parser to print debug information. I found two solutions on the web, but neither seems to work.
The first suggestion advocates putting this code in the main routine:
extern int yydebug;
yydebug = 1;
Unfortunately the C++ compiler detects an undefined reference to `yydebug'.
The second suggests putting
#if YYDEBUG == 1
extern yydebug;
yydebug = 1;
#endif
into the grammar file. It compiles but does not produce output.
What does work is to edit the parser file itself, replacing
int yydebug;
by
int yydebug = 1;
The big disadvantage is that I have to redo this every time I change the grammar file, which during debugging would happen constantly. Is there any other way I can provoke the parser into coughing up its secret machinations?
I am using bison v2.4.1 to generate the parser, with the following command-line options:
bison -ldv -p osil -o $(srcdir)/OSParseosil.tab.cpp OSParseosil.y
Although the output is a C++ file, I am using the standard C skeleton.
With bison and the standard C skeleton, to enable debug support you need to do one of the following:
Use the -t (POSIX) or --debug (bison extension) command-line option when you run bison to generate your parser (bison -t ...).
Use the -DYYDEBUG=1 command-line option (gcc or clang, at least) when you compile the generated parser (gcc -DYYDEBUG=1 parser.tab.c ...).
Add the %debug directive to your bison source.
Put #define YYDEBUG 1 in the prologue of your bison source (the part of the file between %{ and %}).
I'd use -t in the bison command line. It's simple, and since it is POSIX standard it will probably also work with other yacc-derived parser generators. However, adding %debug to the bison source is also simple; while it is not as portable, it works in bison 2.4.
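For example, taking the command line from the question, the first option just adds -t and leaves everything else unchanged:
bison -t -ldv -p osil -o $(srcdir)/OSParseosil.tab.cpp OSParseosil.y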
Once you've done that, simply setting yydebug to a non-zero value is sufficient to produce debug output.
If you want to set yydebug in some translation unit other than the generated parser itself, you need to be aware of the parser prefix you declared in the bison command line. (In the parser itself, yydebug is #defined to the prefixed name.) And you need to declare the debug variable (with the correct prefix) as extern. So in your main, you probably want to use:
extern int osildebug;
// ...
int main(int argc, char** argv) {
osildebug = 1;
// ...
}
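For completeness, here is a rough sketch of the %debug alternative inside the grammar file itself; the include and the rules are placeholders, and only the placement of %debug (or of #define YYDEBUG 1 in the prologue) is the point:
%debug        /* or put  #define YYDEBUG 1  inside the %{ ... %} prologue below */
%{
#include <stdio.h>
/* declarations used by the rules go here */
%}
%%
/* ... grammar rules as before ... */
%%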
If you're using bison, the best place to find information is the bison manual; most of the above answer can be found on that page.

What is the use of the # symbol in the C language

The symbol # was seen in one of the programs, but I could not find out why it is used.
The syntax is
const unsigned char Array_name[] #(INFO_Array+1) = {................};
The meaning of the # operator can differ depending on the compiler with which the code is compiled.
For example, in IAR Embedded Workbench's C/C++ compiler the # operator can be used for placing global and static variables at absolute addresses.
If you are using the IAR C/C++ compiler, the compiler will place Array_name at the address (INFO_Array+1).
The # operator can also be used to place a variable or object in a particular section of the object file:
uint32_t CTRL_OFFSET_x86 # "MY_RAM_SECTION";
The above line will place CTRL_OFFSET_x86 in the object file section MY_RAM_SECTION.
#pragma location can also be used for this purpose.
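As a hedged sketch, the #pragma location form could look like this (IAR-specific syntax; the section name mirrors the example above, and the variable is illustrative):
#include <stdint.h>

#pragma location = "MY_RAM_SECTION"
uint32_t CTRL_OFFSET_x86;   /* placed in the object file section MY_RAM_SECTION */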
To me, it looks like a compiler-specific marker that stops the compiler from interpreting the string "INFO_Array+1" as an ordinary expression. In C#, for comparison, you can prefix an identifier with the @ operator to tell the compiler to treat it verbatim rather than as a keyword.
A quick googling showed:
For example, this line will fail to compile:
int new = 1776; // 'new' is a keyword
However, this line compiles without error:
int @new = 1776;

JEDI JCL runtime compiler error E2040 when using JclWin32.hpp

I have installed the current stable JEDI Code Library in C++ Builder XE3 on Windows 7 x32. It works fine, but only as long as I don't include files like JclFileUtils.hpp which include JclWin32.hpp. Then I always get the compiler error E2040: "Declaration terminated incorrectly" (in file JclWin32.hpp, line 682, the second line in the following code snippet):
#define NetApi32 L"netapi32.dll"
static const System::Int8 CSIDL_PROGRAM_FILESX86 = System::Int8(0x2a);
#define RT_MANIFEST (System::WideChar *)(0x18)
I have no idea where this error comes from, nor could I find any hints about it. What could be the cause? Thanks in advance.
I got help and found the solution to this problem. Just replace the static const declaration:
static const System::Int8 CSIDL_PROGRAM_FILESX86 = System::Int8(0x2a);
with this macro definition:
#define CSIDL_PROGRAM_FILESX86 0x2a
This is a bug in JclWin32.pas.
In C/C++, the Win32 API declares CSIDL values in Microsoft's shlobj.h header using preprocessor #define statements, e.g.:
#define CSIDL_PROGRAM_FILESX86 0x002a
After the preprocessor is run and performs #define symbol replacements, the compiler ends up seeing the following invalid declaration in JclWin32.hpp:
static const System::Int8 0x002a = System::Int8(0x2a);
JCL should not be re-declaring CSIDL_PROGRAM_FILESX86 (or any other CSIDL value) at all. It should be either:
using Delphi's own Winapi.ShlObj unit, which already declares CSIDL values.
if not using the Winapi.ShlObj unit, then it should at least be declaring its manual CSIDL values as {$EXTERNALSYM} so they do not appear in the generated JclWin32.hpp file. If needed, JCL can include an {$HPPEMIT '#include <shlobj.h>'} statement to pull in the existing Win32 API declarations for C/C++ projects to use.

Parsing Command Line Parameters in C++. I'm having a strange error

I'm having a strange error when I try parsing command line parameters. Why do I call it strange? Well, that's because I did a lot of research about command line parsing in C++ beforehand, and nobody's test code works in my Visual Studio 2010 IDE. When I use the debugger, I find I always get FALSE returned when I try to check for the parameters. In the example below, it happens when I do an if (argv[1] == "-in") check. I tried testing it several different ways in the watch window, I tried passing it to a string first, and I tried using single quotes. Then I searched around the internet and used code from other people who supposedly got it working. What am I doing wrong? Is it a setting I have wrong in my Visual Studio environment?
This is what I had originally
#include <iostream>
#include <stdlib.h>
#include <string>
#include <sstream>
#include <fstream>
using namespace std;
int main(int argc, char * argv []) //Usage FileSplitter -in [filename] -sz [chunk size]
{
if (argc==5)
{
string strTest = argv[1];
if ((argv[1] == "-in") && (argv[3] == "-sz"))
{
//Code here
}
}
}
Anyway, that was my original code. I've tried tweaking it several times and I've tried using the code from the following websites...
http://www.cplusplus.com/forum/articles/13355/
He has some examples of comparing argv[1] with a string... and he says it works.
http://www.cplusplus.com/forum/unices/26858/
Here another user posted some code about a comparison, under Ryan Caywood's post.
They don't work for me when I try to do a comparison. I am thinking about just doing a plain strcmp, but I want to know WHY my Visual Studio environment is not behaving like it apparently does on everybody else's system.
Also, during debugging, I input the command line parameters in the debug section of the project properties. I don't know if that would have affected anything. I also tried building and running the project, but alas, all to no avail. Thanks in advance to anyone who can give me some good advice.
Arguments are passed in as C strings, so comparing them with == just compares the pointers to them. Try using strcmp() to compare two C strings, or convert both to C++ strings and compare them that way if you must.
You are doing the string compare incorrectly.
Either do it C-style using strcmp(), or (as suggested in the links you mention) convert to a C++ std::string first:
if (string(argv[i]) == "stuff") { ... }
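Putting the pieces together, here is a minimal sketch of the original main with the comparison done on string contents rather than on pointer values; the actual file-splitting logic is omitted:
#include <cstring>
#include <iostream>
#include <string>

int main(int argc, char* argv[]) // Usage: FileSplitter -in [filename] -sz [chunk size]
{
    if (argc == 5 &&
        std::strcmp(argv[1], "-in") == 0 &&  // compares contents, not pointer values
        std::strcmp(argv[3], "-sz") == 0)
    {
        std::string filename = argv[2];
        std::string chunkSize = argv[4];
        std::cout << "file: " << filename << ", chunk size: " << chunkSize << '\n';
        // ... file-splitting code would go here ...
    }
    return 0;
}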

How does CLR match the exported names during P/Invoke?

I work on a project that requires .NET interoperability with unmanaged code. I started to work with .NET a couple of weeks ago, though I have a lot of experience with C/C++, and I am surprised at how the CLR deals with P/Invoke. Here are the details. My colleague wrote this function
__declspec(dllexport) int __stdcall ReadIPWSensor(unsigned int deviceClassId, void *buffer) {...}
and I had to call it from C# module. I imported the function as
[DllImport("ipw", CallingConvention = CallingConvention.StdCall)]
extern static int ReadIPWSensor(uint deviceClassId, IntPtr buffer);
only to get an exception (System.EntryPointNotFoundException, "Unable to find an entry point named 'ReadIPWSensor' in DLL 'ipw'"). I used the Dependency Walker tool and found that the function was exported as ?ReadIPWSensor@@YGHIPAX@Z (my colleague forgot to export it in the DEF file). Just for a quick test (the unmanaged DLL compiles very slowly) I changed my import definition to:
[DllImport("ipw", EntryPoint = "#22", CallingConvention = CallingConvention.StdCall)]
extern static int ReadIPWSensor(uint deviceClassId, IntPtr buffer);
as the ordinal was 22. The test passed successfully with the new import definition.
My first question is: What are the good practices when dealing the mangled function exports? Is it a good practice to use the export ordinals?
In my case I had access to the C++ source code and the DEF file so I added the export and changed back the import definition to
[DllImport("ipw", CallingConvention = CallingConvention.StdCall)]
extern static int ReadIPWSensor(uint deviceClassId, IntPtr buffer);
I knew there is another function we already use in our project and wanted to compare my code with the existing one. The function is defined as
extern "C" __declspec(dllexport) int __stdcall LoadIPWData(void
*buffer)
and is imported as
[DllImport("ipw", CallingConvention = CallingConvention.StdCall)]
extern static int LoadIPWData(IntPtr buffer);
To my surprise, the Dependency Walker tool shows that the function is exported as _LoadIPWData@4 (my coworker forgot to export it in the DEF file again). However, with this function there is no System.EntryPointNotFoundException error. Obviously, the CLR somehow managed to resolve the right name. It seems there is some sort of fallback mechanism that allows the CLR to find the right function. I can easily imagine that it sums the sizes of the parameters and looks for "_function_name@the_sum_of_all_parameter_sizes", though that seems quite simplistic.
My second question is: How does CLR match the exported function names during P/Invoke?
In this scenario I think the CLR is so clever that it actually hides a bug - the LoadIPWData function should be accessible by its name from other unmanaged modules. Maybe I am a bit paranoid, but I prefer to know how the CLR actually works. Unfortunately, all my Google searches on that topic were fruitless.
The P/Invoke marshaller has built-in knowledge of a few common DLL export naming schemes. It knows that __cdecl functions often have a leading underscore, and that __stdcall in 32-bit mode is commonly decorated with a leading underscore and a trailing @x, where x is the size in bytes of the arguments passed on the stack. It also knows that Win32 API functions are exported with a trailing extra A or W, a naming scheme used to distinguish functions that accept strings and for which there is both an ANSI and a Unicode version. The corresponding [DllImport] property is CharSet. It just tries them all until it finds a match.
It doesn't know anything about C++ compiler name decoration rules (aka mangling) so that's why you have to use extern "C" to suppress that by hand.
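To illustrate that last point, here is a hedged sketch of how the export in the question could be written so that only the plain __stdcall decoration (and no C++ mangling) is applied; the body is a placeholder, not the real implementation:
// ipw.cpp (hypothetical), built into ipw.dll
extern "C" __declspec(dllexport) int __stdcall ReadIPWSensor(unsigned int deviceClassId, void *buffer)
{
    (void)deviceClassId;   // real code would read the sensor for this device class
    (void)buffer;          // real code would fill the caller-supplied buffer
    return 0;
}
// In a 32-bit build this is exported as _ReadIPWSensor@8 (8 = size of the two arguments),
// which the P/Invoke marshaller can match from the plain name in [DllImport].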
