Reading memory protection

I would like to know if there is a way on Linux to retrieve the protection of a region of memory. For example, I want to restore the protection that existed before changing it with mprotect.

Based on the answer by davidg, here's the function unsigned int read_mprotection(void* addr):
see re_mprot.c and re_mprot.h in the reDroid project on GitHub (the code works on Android; it should be portable to other Linux systems).
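For illustration, a minimal sketch of such a lookup (my own reconstruction, not the reDroid code) scans /proc/self/maps for the region containing the address and translates its permission field into PROT_* flags:
#include <stdio.h>
#include <sys/mman.h>

/*
 * Sketch: find the mapping containing addr and return its PROT_* flags.
 * A maps line looks like:
 *   08048000-08056000 r-xp 00000000 03:0c 64593   /usr/sbin/gpm
 * Returns (unsigned int)-1 if the address is not in any mapping.
 */
unsigned int read_mprotection(void *addr)
{
    FILE *f = fopen("/proc/self/maps", "r");
    char line[256];
    unsigned int prot = (unsigned int)-1;

    if (!f)
        return prot;
    while (fgets(line, sizeof line, f)) {
        unsigned long start, end;
        char perms[5];

        if (sscanf(line, "%lx-%lx %4s", &start, &end, perms) != 3)
            continue;
        if ((unsigned long)addr >= start && (unsigned long)addr < end) {
            prot = PROT_NONE;
            if (perms[0] == 'r') prot |= PROT_READ;
            if (perms[1] == 'w') prot |= PROT_WRITE;
            if (perms[2] == 'x') prot |= PROT_EXEC;
            break;
        }
    }
    fclose(f);
    return prot;
}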

The file /proc/self/maps on Linux contains information about the current layout of virtual memory, what each segment is and the memory protection of that segment. Changes made using mprotect will cause the file to be updated appropriately.
By parsing /proc/self/maps before you start modifying protections with mprotect, you should have enough information to restore the previous layout.
The following example shows the contents of /proc/self/maps in three scenarios:
Prior to any operation being performed;
After a mmap (which shows one additional entry in the file); and finally
After a mprotect (which shows the permission bits changing in the file).
(Tested with 32-bit Linux 2.6).
#include <sys/mman.h>
#include <stdio.h>
#include <errno.h>

#define PAGE_SIZE 4096

void show_mappings(void)
{
    int a;
    FILE *f = fopen("/proc/self/maps", "r");

    while ((a = fgetc(f)) >= 0)
        putchar(a);
    fclose(f);
    printf("-----------------------------------------------\n");
}

int main(void)
{
    void *mapping;

    /* Show initial mappings. */
    show_mappings();

    /* Map in some pages. */
    mapping = mmap(NULL, 16 * PAGE_SIZE, PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    printf("*** Returned mapping: %p\n", mapping);
    show_mappings();

    /* Change the mapping. */
    mprotect(mapping, PAGE_SIZE, PROT_READ | PROT_WRITE);
    show_mappings();

    return 0;
}
As far as I know, there is no mechanism other than the /proc/ interface that Linux provides to let you determine the layout of your virtual memory. Thus, parsing this file is about the best you can do.

Related

Possible runtime error with while loop - Polyspace

I am working with embedded C and recently ran the MathWorks Polyspace Code Prover (dynamic analysis) on the whole project to check for critical runtime errors. It found one bug (red warning) at a while loop where I am copying some ROM data into RAM via memory registers.
The code works as expected, but I would like to ask if there is any way to safely remove this warning. Please find the code example below:
register int32 const *source;
uint32 i = 0;
uint32 *dest;

source = (int32*)&ADDR_SWR4_BEGIN;
dest = (uint32*)&ADDR_ARAM_BEGIN;

if ( source != NULL )
{
    while ( i < 2048 )
    {
        dest[i] = (uint32)source[i];
        i++;
    }
}
My guess is that ADDR_SWR4_BEGIN and ADDR_ARAM_BEGIN are defined in the linker script, and Polyspace didn't compile and link the project, which is why it is complaining about a possible runtime error or infinite loop.
ADDR_SWR4_BEGIN and ADDR_ARAM_BEGIN are declared as extern in the respective header file.
extern uint32_t ADDR_SWR4_BEGIN;
extern uint32_t ADDR_ARAM_BEGIN;
The warning is red and the exact text is as follows:
Check: Non-terminating Loop
Detail: The Loop is infinite or contains a run-time error
Severity: Unset
Any suggestions would be appreciated.
The code is overall quite fishy.
Bugs
if ( source != NULL ). You just set this pointer to point at an address, so it will obviously not point at NULL. This line is superfluous.
You aren't using volatile when accessing registers/memory, so if this code is executed multiple times, the compiler might make all kinds of strange assumptions. This might be the cause of the diagnostic message.
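For instance, a volatile-qualified version of the copy (a sketch using the same symbols) would look like:
/* volatile tells the compiler every access really happens, so it
 * cannot cache or elide the reads/writes across executions. */
volatile const uint32_t *source = (volatile const uint32_t *)&ADDR_SWR4_BEGIN;
volatile uint32_t *dest = (volatile uint32_t *)&ADDR_ARAM_BEGIN;

for (uint32_t i = 0; i < 2048u; i++)
{
    dest[i] = source[i];
}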
Bad style/code smell (should be fixed)
Using the register keyword is fishy. This was once a thing in the 1980s when compilers were horrible and couldn't optimize code properly. Nowadays they can do this, and far better than the programmer, so any presence of register in new source code is fishy.
Accessing a register or memory location as int32 and then casting it to an unsigned type doesn't make any sense at all. If the data isn't signed, why are you using a signed type in the first place?
Using home-brewed uint32 types instead of stdint.h is poor style.
Nit-picks (minor remarks)
The (int32*) cast should be const qualified.
The loop is needlessly ugly; it could be replaced with a for loop:
for(uint32_t i = 0; i < 2048; i++)
{
    dest[i] = source[i];
}
If PolySpace does not know the value of ADDR_ARAM_BEGIN, it will assume it could be NULL (or any other value valid for its type). While you explicitly test for source being NULL, you do not do the same for dest.
Since both source and dest are assigned from linker constants, and in normal circumstances neither should be NULL, it is unnecessary to test for NULL in the control flow; an assert() would be preferable. PolySpace recognises assertions and will apply the constraint in subsequent analysis, but assert() resolves to nothing when NDEBUG is defined (normally in release builds), so it does not impose unnecessary overhead:
/* requires <assert.h> and <stdint.h> */
const uint32_t* source = (const uint32_t*)&ADDR_SWR4_BEGIN;
uint32_t* dest = (uint32_t*)&ADDR_ARAM_BEGIN;

// PolySpace constraints asserted
assert( source != NULL );
assert( dest != NULL );

for( int i = 0; i < 2048; i++ )
{
    dest[i] = source[i];
}
An alternative is to provide PolySpace with a "forced include" (the -include option) containing explicit definitions, so that PolySpace will not consider all possible values to be valid in its analysis. That will probably have the effect of speeding up the analysis as well.
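For example, a hypothetical forced-include header (file name and contents assumed here, not given in the answer) might provide definitions matching the extern declarations, so PolySpace sees concrete, non-NULL addresses:
/* polyspace_defs.h -- hypothetical, for PolySpace analysis only
 * (passed with -include); never compiled into the real firmware.
 * Definitions matching the extern declarations give the symbols
 * concrete, distinct, non-NULL addresses during analysis. Note this
 * alone does not tell PolySpace the extent of the regions behind them. */
#include <stdint.h>

uint32_t ADDR_SWR4_BEGIN;
uint32_t ADDR_ARAM_BEGIN;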
The reason why Polyspace gives a red error here is that source and dest are pointers to a single uint32. Indeed, when you write:
source= (int32*)&ADDR_SWR4_BEGIN
you take the address of the variable ADDR_SWR4_BEGIN and assign it to source.
Hence both pointers are pointing to a buffer of 4 bytes only.
It is then not possible to use these pointers like arrays of 2048 elements.
You should also see an orange check on source[i] giving you information on what's happening with the pointer source.
It seems that ADDR_SWR4_BEGIN and ADDR_ARAM_BEGIN actually contain addresses.
And in this case, the code should be:
source = (uint32*)ADDR_SWR4_BEGIN;
dest = (uint32*)ADDR_ARAM_BEGIN;
If you do this change in the code, the red error disappears.

How to exclude built-in / system functions during function analysis with clang libtooling

I'm trying to analyze functions by using clang libtooling.
Here is the source code that I want to analyze:
#include <stdio.h>

int main(){
    int a = 100;
    printf("a==%d", a);
}
When I run my tool to get all the function declarations in the above file, I found there are a lot of built-in / system functions, like:
decls:
_IO_cookie_init
__underflow
__uflow
__overflow
_IO_getc
_IO_putc
_IO_feof
_IO_ferror
_IO_peekc_locked
_IO_flockfile
_IO_funlockfile
_IO_ftrylockfile
_IO_vfscanf
_IO_vfprintf
_IO_padn
_IO_sgetn
_IO_seekoff
_IO_seekpos
_IO_free_backup_area
remove
rename
renameat
tmpfile
tmpfile64
tmpnam
tmpnam_r
tempnam
fclose
fflush
fflush_unlocked
fcloseall
fopen
(I think they are introduced by the header file "stdio.h".)
My question is:
How can I get rid of all these built-in/system functions from the "stdio.h" file, or other (system) header files?
Thanks in advance!!!
When you visit a function, check whether its location (start or end loc) is in a system header using the SourceManager API isInSystemHeader(loc). For example:
bool VisitFunctionDecl(FunctionDecl *D)
{
    // Skip declarations coming from system headers
    // (sourceManager is presumed available in the visitor).
    if (sourceManager.isInSystemHeader(D->getLocStart()))
        return true;
    // ... analyze D here ...
    return true;
}
Thanks,
Hemant
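To put that in context, here is a minimal self-contained sketch (the class name and setup are my own assumptions, not part of the answer above; newer clang versions spell getLocStart() as getBeginLoc()):
#include "clang/AST/ASTContext.h"
#include "clang/AST/RecursiveASTVisitor.h"
#include "llvm/Support/raw_ostream.h"

// Hypothetical visitor: prints only function declarations that
// do not come from system headers.
class FunctionVisitor
    : public clang::RecursiveASTVisitor<FunctionVisitor> {
public:
    explicit FunctionVisitor(clang::ASTContext &Ctx) : Context(Ctx) {}

    bool VisitFunctionDecl(clang::FunctionDecl *D) {
        clang::SourceManager &SM = Context.getSourceManager();
        if (SM.isInSystemHeader(D->getLocation()))
            return true; // skip, but keep traversing
        llvm::outs() << D->getNameAsString() << "\n";
        return true;
    }

private:
    clang::ASTContext &Context;
};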

Where are the parameters for Indy's TIdCompressorZLib.CompressStream Method documented?

The TIdCompressorZLib component is used for compression and decompression in the Delphi/C++ Builder Indy library. The CompressStream method has the following definition:
public: virtual __fastcall CompressStream(TStream AInStream, TStream AOutStream, const TIdCompressionLevel ALevel, const int AWindowBits, const int AMemLevel, const int AStrategy);
The complete description of those parameters in the help file is:
CompressStream is a public overridden procedure that implements the
abstract virtual method declared in the ancestor class.
AInStream is the stream containing the uncompressed contents used in
the compression operation.
AOutStream is the stream used to store the compressed contents from
the compression operation. AOutStream is cleared prior to outputting
the compressed contents from the operation. When AOutStream is
omitted, the stream in AInStream is cleared and reused for the output
from the compression operation.
Use ALevel to indicate the desired compression level for the
operation.
Use AWindowsBits and AMemLevel to control the memory footprint
required to perform in-memory compression using the ZLib library.
Use AStrategy to control the RLE-encoding strategy used in the
compression operation.
ALevel's values are defined on the help page for TIdCompressionLevel, but I cannot find any indication of what values should be used for AWindowBits, AMemLevel, or AStrategy, which are plain integers.
I looked in the source code, but CompressStream just delegates to IndyCompressStream, which is listed in the help file as:
IndyCompressStream(TStream InStream, TStream OutStream, const int level = Z_DEFAULT_COMPRESSION, const int WinBits = MAX_WBITS, const int MemLevel = MAX_MEM_LEVEL, const int Stratagy = Z_DEFAULT_STRATEGY);
The help for IndyCompressStream doesn't even list the minimal description of the parameters that CompressStream does.
I tracked down the file where (I think) those default constants mentioned in IndyCompressStream live, source\Indy10\Protocols\IdZLibHeaders.pas, and they are
Z_DEFAULT_STRATEGY = 0;
Z_DEFAULT_COMPRESSION = -1;
MAX_WBITS = 15; { 32K LZ77 window }
MAX_MEM_LEVEL = 9;
However, the value given for Z_DEFAULT_COMPRESSION is not even a legal value for that parameter according to the documentation for TIdCompressionLevel.
Is there some documentation somewhere about what AWindowBits, AMemLevel, and AStrategy mean to this component, and what values are reasonable to use for them? Are the values listed above the actual recommended defaults? Also, the source files include "indy", "Indy10", and "indyimpl" directories. Which of those should we be using to find the source for the current Indy components?
Thanks!
You will need to look at the zlib documentation in zlib.h, in particular the parameters to deflateInit2().
In nearly all cases, the only ones you should mess with are the compression level and the window bits. For window bits, you would normally leave the window size at 32K (15), but either add 16 for the gzip format (31), or negate (-15) to get the raw deflate format with no header or trailer. For some special kinds of data, you may get an improvement with a different compression strategy, e.g. image or other numerical arrays of data.
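As a sketch of how those recommendations map onto zlib's own API (deflateInit2() is documented in zlib.h; the values here just illustrate the gzip case):
#include <string.h>
#include <zlib.h>

int start_gzip_deflate(z_stream *strm)
{
    memset(strm, 0, sizeof *strm);
    /* windowBits 15 + 16 = 31 requests a 32K window with a gzip wrapper;
     * -15 instead would give raw deflate with no header or trailer. */
    return deflateInit2(strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                        15 + 16,            /* windowBits */
                        8,                  /* memLevel: zlib's default */
                        Z_DEFAULT_STRATEGY);
}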
Thank you for the comments and answers, especially Remy and Mark. I had not realized that the Indy units were wrappers around zlib, and that the parameters were defined in the zlib library.
I was trying to create a gzip format stream for uploading to a server that was expecting gzip.
Here is the working code for gzip compression and decompression:
void __fastcall TForm1::Button1Click(TObject *Sender)
{
    TStringStream* streamIn = new TStringStream(String("This is some data to compress"));
    TMemoryStream* streamCompressed = new TMemoryStream;
    TStringStream* streamOut = new TStringStream;

    /* this also works to compress to gzip format, but you must #include <IdZlib.hpp>
    CompressStreamEx(streamIn, streamCompressed, Idzlib::clDefault, zsGZip); */

    // NOTE: according to the docs, you can leave AOutStream null, and AInStream
    // will be replaced and reused, but I could not get that to work
    IdCompressorZLib1->CompressStream(
        streamIn,         // System::Classes::TStream* AInStream,
        streamCompressed, // System::Classes::TStream* AOutStream,
        1,                // const Idzlibcompressorbase::TIdCompressionLevel ALevel,
        15 + 16,          // const int AWindowBits -- add 16 to get gzip format
        8,                // const int AMemLevel -- see note below
        0);               // const int AStrategy

    streamCompressed->Position = 0;
    IdCompressorZLib1->DecompressGZipStream(streamCompressed, streamOut);
    String out = streamOut->DataString;
    ShowMessage(out);

    // free the streams (added cleanup; the original snippet leaked them)
    delete streamIn;
    delete streamCompressed;
    delete streamOut;
}
In particular, note that passing -1 for ALevel produces ZLib Error -2 (Z_STREAM_ERROR, meaning invalid parameter), in spite of the defaults I had found. Also, AWindowBits normally ranges from 8 to 15, but adding 16 gives you the gzip format, and negative values give you a raw format, as described in the zlib documentation referenced by Mark Adler, one of the authors of the zlib library. I changed AMemLevel from Indy's default per Mark Adler's comment.
Also, as noted the CompressStreamEx function will produce gzip compression using the parameters included in the comments above.
The above was tested in RAD Studio XE3. Thanks again for your help!

C++ Builder 2010 Strange Access Violations

I've got a program that is to become part of an already existing, larger product which is built using C++ Builder 2010.
The smaller program does not (yet) depend on C++ Builder. It works fine in MS Visual Studio, but with C++ Builder it produces strange access violations.
Please let me explain this.
Depending on the code and on compiler settings, the access violations happen or do not happen. They are reproducible: once the program is built, the access violation either never occurs or always occurs at the same place. If the program is rebuilt with the same settings, it shows the same behavior. (I'm really glad about that.)
The access violation happens at places where the delete operator is called. This can happen (depending on compiler settings and exact code) inside certain destructors, including destructors of own classes and inside the destructor of std::string.
The following things make the access violation less likely:
Build with "Debug" settings (instead of "Release").
No compiler optimizations.
Compiler switch "Slow exception epilogues".
Static RTL instead of dynamic.
Derive exceptions from std::exception instead of Borland's Exception class.
Use less "complicated" expressions (e.g. use string s = "..." + "..."; throw SomeException(s); instead of throw SomeException(string("...") + "...");).
Use try... __finally with manual cleanup instead of automatic variables with destructors.
Use a small console application instead of a VCL windows application.
The program makes use of several C++ features, including exceptions, STL, move constructors etc. and it of course uses the heap.
I already tried some tools, none of them reported problems:
Borland's CodeGuard.
Microsoft Application Verifier.
pageheap/gflags.
As already mentioned, there is absolutely no problem when building with MS Visual Studio.
Precompiled headers and incremental linking (both of which seem error-prone to me) are disabled.
Neither the C++ Builder compiler ("enable all warnings") nor Visual Studio's (/W4) produces a warning that might be related to this issue.
I do not have access to another version of C++ Builder.
As the program will become part of a larger product, it is not an option to switch to a different compiler, and it is not an option to tune the compiler settings until the access violation no longer happens. (I fear that if this really is a compiler bug, it might show up again.)
Putting this together, I'm guessing this might result from heap corruption related to some compiler bug. However, I was not able to find a matching bug on qc.embarcadero.com. I'm guessing further that it is related to cleanup code executed during stack unwinding when an exception has been thrown. But, well, maybe it's only a stupid bug in my code.
Currently, I do not have any idea how to proceed. Any help appreciated. Thank you in advance!
tl;dr: I believe the bug is that code is generated to destroy the std::string from both branches of the ternary operator during stack unwinding, although of course only one of them was actually created.
Here is a simpler MCVE, which shows the problem via outputs in XE5:
#include <vcl.h>
#include <tchar.h>
#include <stdio.h>

using namespace std;

struct S
{
    S() { printf("Create: %p\n", this); }
    S(S const &) { printf("Copy: %p\n", this); }
    void operator=(S const &) { printf("Assign: %p\n", this); }
    ~S() { printf("Destroy: %p\n", this); }

    char const *c_str() { return "xx"; }
};

S rX() { return S(); }
int foo() { return 2; }

#pragma argsused
int _tmain(int argc, _TCHAR* argv[])
{
    try
    {
        throw Exception( (foo() ? rX() : rX()).c_str() );
    }
    catch (const Exception& e)
    {
    }
    getchar();
    return 0;
}
This version shows the problem via output strings on the console. Check the edit history for this post to see a version that uses std::string and causes the segfault instead.
My output is:
Create: 0018FF38
Destroy: 0018FF2C
Destroy: 0018FF38
In the original code, the segfault comes from the bogus Destroy ending up calling delete on the bogus value it obtains by trying to retrieve the internal data pointer for a std::string which was actually never created at that location.
My conjecture is that the code generation for stack unwinding is bugged and tries to destroy the temporary string from both branches of the ternary operator. The presence of the temporary UnicodeString does have something to do with it, as the bug did not occur in any variation where I avoided that temporary.
In the debugger you can see the call stack and it is during global stack unwinding that this happens.
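If that conjecture is right, a plausible workaround (my own sketch, untested against this compiler) is to hoist the ternary result into a named object, so the unwinder only ever has one live temporary:
// Hypothetical workaround, using the names from the MCVE above:
// name the intermediate so no branch-dependent temporary needs
// to be destroyed during unwinding.
S tmp = foo() ? rX() : rX();
throw Exception(tmp.c_str());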
Phew, that was so simple that it took me some time:
#include <vcl.h>
#include <tchar.h>
#include <string>

using namespace std;

struct B
{
    B(const char* c) { }

    string X() const { return "xx"; }
    int Length() const { return 2; }
};

struct C
{
    void ViolateAccess(const B& r)
    {
        try
        {
            throw Exception(string("aoei").c_str());
        }
        catch (const Exception&) { }

        throw Exception(((string) "a" + (r.Length() < 10 ? r.X() : r.X() + "...") + "b").c_str());
    }
};

#pragma argsused
int _tmain(int argc, _TCHAR* argv[])
{
    try
    {
        C c;
        c.ViolateAccess("11");
    }
    catch (const Exception& e) { }

    return 0;
}
(Preemptive comment: No, this code does not make any sense.)
Create a new console application and make sure it uses the VCL. Whether there is an access violation might depend on the project settings; my debug builds always crashed, release builds didn't.
Crashes with C++ Builder 2010 and XE3 trial.
Thus, bug in compiler or in VCL or in STL or whatever.

Erlang C node related question

In the tutorial provided at:
http://www.erlang.org/doc/tutorial/cnode.html
There is the following example:
/* cnode_s.c */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

#include "erl_interface.h"
#include "ei.h"

#define BUFSIZE 1000

int main(int argc, char **argv) {
    int port;                    /* Listen port number */
    int listen;                  /* Listen socket */
    int fd;                      /* fd to Erlang node */
    ErlConnect conn;             /* Connection data */

    int loop = 1;                /* Loop flag */
    int got;                     /* Result of receive */
    unsigned char buf[BUFSIZE];  /* Buffer for incoming message */
    ErlMessage emsg;             /* Incoming message */

    ETERM *fromp, *tuplep, *fnp, *argp, *resp;
    int res;

    port = atoi(argv[1]);

    erl_init(NULL, 0);

    if (erl_connect_init(1, "secretcookie", 0) == -1)
        erl_err_quit("erl_connect_init");

    /* Make a listen socket */
    if ((listen = my_listen(port)) /* ... rest of the tutorial code elided ... */
I suspect that erl_receive_msg is a blocking call, and I don't know how to overcome this. In C network programming there is the select call, but I don't know whether the Erlang EI API offers anything similar.
Basically I want to build a C node that continuously sends messages to Erlang nodes. For simplicity, suppose there is only one Erlang node.
The Erlang node has to process the messages it receives from the C node. It is not supposed to acknowledge that it has received a message, nor does it have to reply with the result of processing. Therefore, once a message is sent I don't care about its fate.
One might think one could modify the code like this:
...
if (emsg.type == ERL_REG_SEND) {
    ...
    while (1) {
        /* generate tuple */
        erl_send(fd, fromp, tuple);
        /* free allocated resources */
    }
    ...
}
This will produce an infinite loop in which we produce and consume (send) messages.
But there is an important problem: if I do this, the C node might send too many messages to the Erlang node (so there should be a way for the Erlang node to tell the C node to slow down), or the Erlang node might think the C node is down.
I know that questions should be short and sweet (this one is long and ugly), but summing up:
What mechanism (procedure call, algorithm) might one use to implement an eager producer in C for a lazy consumer in Erlang, such that both parties are aware of the underlying context?
I use Port Drivers myself for the case you are describing (haven't touched the C nodes because I'd rather have more decoupling).
Have a look at the Port Driver library for Erlang: EPAPI. There is a project that leverages this library: Erlang DBus.
Did you check the ei_receive_msg_tmo function? I suppose it works similarly to Erlang's receive ... after construct, so if you set the timeout to 0, it will be non-blocking.
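A rough sketch of that idea, using the erl_interface timeout variant to match the tutorial code above (fd, buf, and emsg as in cnode_s.c; the flow-control handling is only an assumption):
#include <errno.h>

/* Poll for messages without blocking (timeout 0), then keep producing. */
for (;;) {
    int got = erl_receive_msg_tmo(fd, buf, BUFSIZE, &emsg, 0);

    if (got == ERL_TICK) {
        /* keep-alive from the Erlang node; nothing to do */
    } else if (got == ERL_MSG) {
        /* e.g. a hypothetical "slow down" message from the consumer */
        erl_free_term(emsg.msg);
        erl_free_term(emsg.to);
    } else if (got == ERL_ERROR && erl_errno != ETIMEDOUT) {
        break; /* real error; a timeout just means no message pending */
    }

    /* ... generate the next tuple and erl_send(fd, fromp, tuple); ... */
}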
I believe erl_interface is deprecated, and ei should be used instead. This might be a complete misinformation though...
You need to take a closer look at the tutorial link that you posted (search for "And finally we have the code for the C node client"). You will see that the author provides a client C node implementation. It looks rational.
