Why doesn't Coverity show CHECKED_RETURN in my program?

#include <stdio.h>
#include <stdlib.h>

int test() {
    const char* s = getenv("CNU");
    if (s != NULL)
        return 1;
    else
        return -1;
}

int main() {
    test();
    // some C code...
    return 0;
}
Commands that I use for Coverity analysis:
cov-build --dir Cov.build gcc test.c
cov-analyze --dir Cov.build --aggressiveness-level high --enable-callgraph-metrics --all
report:
Analysis summary report:
------------------------
Files analyzed : 1
Total LoC input to cov-analyze : 10926
Functions analyzed : 2
Paths analyzed : 6
Time taken by analysis : 00:00:01
Defect occurrences found : 0
About CHECKED_RETURN:
https://ondemand.coverity.com/reference/7.6.1/en/coverity

The CHECKED_RETURN checker is a statistical checker - it looks for examples where the return value is checked, and if a statistically significant (configurable) threshold is reached, defects will be issued for locations where you fail to check the return value.
If you want it to always issue a defect whenever you fail to check the return value, then you need to add __coverity_always_check_return__(), as shown in the example in the documentation you linked:
#include <cstdlib>
#include <iostream>
using namespace std;

int always_check_me(void) {
    __coverity_always_check_return__();
    return rand() % 2;
}

int main(int c, char **argv) {
    always_check_me(); // #defect#checked_return
    // the statement above is a defect because the value is not checked
    cout << "Hello world" << endl;
}
For obvious reasons, you'll need to also create a function stub for this for the source to compile (also mentioned in the documentation). If you want to make the code live only for Coverity, you can guard it with #if __COVERITY__.
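For instance, a minimal sketch of what that could look like (my illustration rather than the documentation's exact text; cov-build defines __COVERITY__, but verify the guard against your version):

#if __COVERITY__
// Stub declaration so the Coverity build compiles; the analysis recognizes the primitive.
void __coverity_always_check_return__(void);
#endif

int always_check_me(void) {
#if __COVERITY__
    __coverity_always_check_return__();
#endif
    return rand() % 2;
}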

Yes, CHECKED_RETURN is a statistical checker. If you check the return value of test() in 10 places and then miss the check in an 11th place, Coverity will report a CHECKED_RETURN defect there, and it will show statistics on how many call sites checked the value out of the total usage.

Related

MPI & number of processors

Hello, I'm using Windows, and I use both Code::Blocks and Visual Studio 2019 to program with MPI.
But after I run this code
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world! I am %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
it reports 1 process, while echo %NUMBER_OF_PROCESSORS% in a normal cmd window reports 4.
I was then using MPI_Send & MPI_Recv, and I got a fatal error after running this code:
#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int menum, nproc;
    int n, tag, num;
    MPI_Status info;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &menum);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    printf("menum: %d nproc: %d", menum, nproc);
    if (menum == 0) {
        printf("\ninsert number: ");
        scanf_s(" %d", &n);
        tag = 10;
        MPI_Send(&n, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    }
    else {
        tag = 10;
        MPI_Recv(&n, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &info);
    }
    MPI_Get_count(&info, MPI_INT, &num);
    MPI_Finalize();
    return 0;
}
I searched online, and as far as I could understand, MPI can't find another process to send the value to.
How can I fix this? Is there a way to use MPI with all my processors from Visual Studio or Code::Blocks?
job aborted:
[ranks] message
[0] fatal error
Fatal error in MPI_Send: Invalid rank, error stack:
MPI_Send(buf=0x0056F9F0, count=1, MPI_INT, dest=1, tag=10, MPI_COMM_WORLD) failed
Invalid rank has value 1 but must be nonnegative and less than 1
If you write an MPI program and just execute it directly you get what's called singleton init -- The MPI library will start itself up and you get one process in MPI_COMM_WORLD.
To launch a parallel MPI program on Windows, you'll need to consult your MPI implementation. For MS-MPI, you will use the HPC Job Manager (https://learn.microsoft.com/en-us/powershell/high-performance-computing/overview?view=hpc19-ps) .
For Intel-MPI you go through a similar process constructing a host file and registering your windows credentials with each machine (https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-guide-windows/top.htm)
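In both cases, for a quick local test on one machine, the mpiexec launcher that ships with the implementation can start multiple processes directly; something like the following (the executable name is a placeholder) starts 4 processes in MPI_COMM_WORLD, so MPI_Comm_size returns 4 and the send to rank 1 becomes valid:

mpiexec -n 4 my_mpi_program.exe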
I don't have any experience with MPI on Windows machines so this is as much help as I can give you. Hope it gets you pointed in the right direction.

Why do builds for various projects fail with ‘Operation not permitted’ using iOS on-device compiler/toolchain?

I am an intermediately skilled Linux/Unix user trying to compile software for an iPad on a (jailbroken) iPad.
Many builds (for example, make and tex-live) fail with some Operation not permitted error. This will either look like Can't exec "blah": Operation not permitted or execvp: blah: Operation not permitted, where blah is aclocal, a configure script, libtool, or just about anything. Curiously, finding the offending line in a Makefile or configure script and prefixing it with sudo -u mobile -E will solve the error for that line, only for it to reappear on a later line or in another file. Since I am running the build scripts as mobile, I do not understand how this could possibly fix the issue, yet it does. I have confirmed that making these changes does actually allow the script to work successfully up to that point. Running the build script with sudo or sudo -u mobile -E and/or running the entire build as root does not solve the issue; with either, I still must edit build scripts to add sudo’s.
I would like to know why this is happening, and if possible how I could address the issue without editing build scripts. Any information about these types of errors would be interesting to me even if they do not solve my problem. I am aware that the permissions/security/entitlements system is unusual on iOS and would like to learn more about how it works.
I am using an iPad Pro 4 on jailbroken iOS 13.5 with the build tools from sbingner’s and MCApollo’s repos (repo.bingner.com and mcapollo.github.io/Public). In particular, I am using a build of LLVM 5 (manually installed from sbingner’s old debs), Clang 10, Darwin CC tools 927 and GNU Make 4.2.1. I have set CC, CXX, CFLAGS, etc. to point to clang-10 and my iOS 13.5 SDK with -isysroot and have confirmed that these settings are working. I would like to replace these with updated versions, but I cannot yet build these tools for myself due to this issue and a few others. I do have access to a Mac for cross-compilation if necessary, but I would rather use only my iPad because I like the challenge.
I can attach any logs necessary or provide more information if that would be useful; I do not know enough about this issue to know what information is useful. Thanks in advance for helping me!
For anyone who ends up needing to address this issue on a jailbreak that does not have a fix for this issue, I have written (pasted below) a userland hook based on the posix_spawn implementation from the source of Apple’s xnu kernel.
Compile it with Theos, and inject it into all processes spawned by your shell by setting environment variable DYLD_INSERT_LIBRARIES to the path of the resulting dylib. Note: some tweak injectors (namely libhooker, see here) reset DYLD_INSERT_LIBRARIES, so if you notice this behavior, be sure to inject only your library.
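For example, from your shell (the dylib path below is a placeholder for wherever your Theos build installs it):

export DYLD_INSERT_LIBRARIES=/usr/lib/libspawnfix.dylib
./configure && make

Child processes spawned from that shell inherit the variable, so the hook rides along through the whole build.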
Because the implementation of the exec syscalls in iOS calls out to posix_spawn, this hook fixes all of the exec-related issues I’ve run into so far.
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <spawn.h>
#include <stdbool.h>

// Copied from bsd/kern/kern_exec.c
#define IS_WHITESPACE(ch) ((ch == ' ') || (ch == '\t'))
#define IS_EOL(ch) ((ch == '#') || (ch == '\n'))

// Copied from bsd/sys/imgact.h
#define IMG_SHSIZE 512

// Here, we provide an alternate implementation of posix_spawn which correctly handles #!.
// This is based on the implementation of posix_spawn in bsd/kern/kern_exec.c from Apple's xnu source.
// Thus, I am fairly confident that this posix_spawn has correct behavior relative to macOS.
%hookf(int, posix_spawn, pid_t *pid, const char *orig_path, const posix_spawn_file_actions_t *file_actions, const posix_spawnattr_t *attrp, char *const orig_argv[], char *const envp[]) {
    // Call orig before checking for anything.
    // This mirrors the standard implementation of posix_spawn because it first checks if we are spawning a binary.
    int err = %orig;
    // %orig returns EPERM when spawning a script.
    // Thus, if err is anything other than EPERM, we can just return like normal.
    if (err != EPERM)
        return err;
    // At this point, we do not need to check for exec permissions or anything like that,
    // because posix_spawn would have returned that error instead of EPERM.
    // Now we open the file for reading so that we can check if it's a script.
    // If it turns out not to be a script, the EPERM must be from something else,
    // so we just return err.
    FILE *file = fopen(orig_path, "r");
    if (file == NULL) {
        return err;
    }
    if (fseek(file, 0, SEEK_SET)) {
        fclose(file);
        return err;
    }
    // In exec_activate_image, the data buffer is filled with the first PAGE_SIZE bytes of the file.
    // However, in exec_shell_imgact, only the first IMG_SHSIZE bytes are used.
    // Thus, we read IMG_SHSIZE bytes out of our file.
    // The buffer is filled with newlines so that if the file is shorter than IMG_SHSIZE bytes,
    // the logic reads an IS_EOL. (An initializer like {'\n'} would zero-fill the tail,
    // so memset is used instead.)
    char vdata[IMG_SHSIZE];
    memset(vdata, '\n', sizeof(vdata));
    if (fread(vdata, 1, IMG_SHSIZE, file) < 2) { // If we couldn't read at least two bytes, it's not a script.
        fclose(file);
        return err;
    }
    // Now that we've filled the buffer, we don't need the file anymore.
    fclose(file);
    // Now we follow exec_shell_imgact.
    // The point of this is to confirm we have a script
    // and extract the usable part of the interpreter+arg string.
    // Where they return -1, we don't have a shell script, so we return err.
    // Where they return an error, we return that same error.
    // We don't bother doing any SUID stuff because SUID scripts should be disabled anyway.
    char *ihp;
    char *line_startp, *line_endp;
    // Make sure we have a shell script.
    if (vdata[0] != '#' || vdata[1] != '!') {
        return err;
    }
    // Try to find the first non-whitespace character
    for (ihp = &vdata[2]; ihp < &vdata[IMG_SHSIZE]; ihp++) {
        if (IS_EOL(*ihp)) {
            // Did not find interpreter, "#!\n"
            return ENOEXEC;
        } else if (IS_WHITESPACE(*ihp)) {
            // Whitespace, like "#! /bin/sh\n", keep going.
        } else {
            // Found start of interpreter
            break;
        }
    }
    if (ihp == &vdata[IMG_SHSIZE]) {
        // All whitespace, like "#! "
        return ENOEXEC;
    }
    line_startp = ihp;
    // Try to find the end of the interpreter+args string
    for (; ihp < &vdata[IMG_SHSIZE]; ihp++) {
        if (IS_EOL(*ihp)) {
            // Got it
            break;
        } else {
            // Still part of interpreter or args
        }
    }
    if (ihp == &vdata[IMG_SHSIZE]) {
        // A long line, like "#! blah blah blah" without end
        return ENOEXEC;
    }
    // Backtrack until we find the last non-whitespace
    while (IS_EOL(*ihp) || IS_WHITESPACE(*ihp)) {
        ihp--;
    }
    // The character after the last non-whitespace is our logical end of line
    line_endp = ihp + 1;
    /*
     * Now we have pointers to the usable part of:
     *
     * "#! /usr/bin/int first second third \n"
     *      ^ line_startp                 ^ line_endp
     */
    // Now, exec_shell_imgact copies the interpreter into another buffer and then null-terminates it.
    // Then, it copies the entire interpreter+args into another buffer and null-terminates it for later processing into argv.
    // This processing is done in exec_extract_strings, which goes through and null-terminates each argument.
    // We will just do this all at once since that's much easier.
    // Keep track of how many arguments we have.
    int i_argc = 0;
    ihp = line_startp;
    while (true) {
        // ihp is on the start of an argument.
        i_argc++;
        // Scan to the end of the argument.
        for (; ihp < line_endp; ihp++) {
            if (IS_WHITESPACE(*ihp)) {
                // Found the end of the argument
                break;
            } else {
                // Keep going
            }
        }
        // Null-terminate the argument
        *ihp = '\0';
        // Scan to the beginning of the next argument.
        for (; ihp < line_endp; ihp++) {
            if (!IS_WHITESPACE(*ihp)) {
                // Found the next argument
                break;
            } else {
                // Keep going
            }
        }
        if (ihp == line_endp) {
            // We've reached the end of the arg string
            break;
        }
        // If we are here, ihp is the start of an argument.
    }
    // Now line_startp is a bunch of null-terminated arguments possibly padded by whitespace.
    // i_argc is now the count of the interpreter arguments.
    // Our new argv should look like i_argv[0], i_argv[1], i_argv[2], ..., orig_path, orig_argv[1], orig_argv[2], ..., NULL
    // where i_argv is the arguments to be extracted from line_startp.
    // To allocate our new argv, we need to know orig_argc.
    int orig_argc = 0;
    while (orig_argv[orig_argc] != NULL) {
        orig_argc++;
    }
    // We need space for i_argc + 1 + (orig_argc - 1) + 1 char*'s
    char *argv[i_argc + orig_argc + 1];
    // Copy i_argv into argv
    int i = 0;
    ihp = line_startp;
    for (; i < i_argc; i++) {
        // ihp is on the start of an argument
        argv[i] = ihp;
        // Scan to the next null-terminator
        for (; ihp < line_endp; ihp++) {
            if (*ihp == '\0') {
                // Found it
                break;
            } else {
                // Keep going
            }
        }
        // Go to the next character
        ihp++;
        // Then scan to the next argument.
        // There must be another argument because we already counted i_argc.
        for (; ihp < line_endp; ihp++) {
            if (!IS_WHITESPACE(*ihp)) {
                // Found it
                break;
            } else {
                // Keep going
            }
        }
        // ihp is on the start of an argument.
    }
    // Then, copy orig_path into argv.
    // We need to make a copy of orig_path to avoid issues with const.
    char orig_path_copy[strlen(orig_path) + 1];
    strcpy(orig_path_copy, orig_path);
    argv[i] = orig_path_copy;
    i++;
    // Now, copy orig_argv[1...] into argv.
    for (int j = 1; j < orig_argc; i++, j++) {
        argv[i] = orig_argv[j];
    }
    // Finally, add the null.
    argv[i] = NULL;
    // Now, our argv is set up correctly.
    // Now, we can call out to posix_spawn again.
    // The interpreter is in argv[0], so we use that for the path.
    return %orig(pid, argv[0], file_actions, attrp, argv, envp);
}

Saxon-C CentOS8 Compile

I am trying to evaluate Saxon-C 1.2.1 HE on CentOS8 and installation seems to have gone ok. Trying out the samples by cd samples/cppTests && build64-linux.sh though leads to a myriad of compilation errors to the tune of the following:
../../Saxon.C.API/SaxonProcessor.h:599:32: error: division ‘sizeof (JNINativeMethod*) / sizeof (JNINativeMethod)’ does not compute the number of array elements [-Werror=sizeof-pointer-div]
gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
Before I summarily and trustingly switched off -Werror=sizeof-pointer-div, I checked the source code, and what's going on there does seem dubious:
bool registerCPPFunction(char * libName, JNINativeMethod * gMethods=NULL){
    if(libName != NULL) {
        setConfigurationProperty("extc", libName);
    }
    if(gMethods == NULL && nativeMethodVect.size()==0) {
        return false;
    } else {
        if(gMethods == NULL) {
            //copy vector to gMethods
            gMethods = new JNINativeMethod[nativeMethodVect.size()];
        }
        return registerNativeMethods(sxn_environ->env, "com/saxonica/functions/>
            gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
    }
    return false;
}
More specifically, sizeof(gMethods) / sizeof(gMethods[0]) does not seem to calculate anything useful by any margin. The intention was probably to arrive at the same value as nativeMethodVect.size(), but since I am seeing this project's source for the very first time, I might be mistaken and the division is in fact intentional?
I am inclined to guess the intention was in fact closer to b than to a in the following example:
#include <cstdio>

struct test
{
    int x, y, z;
};

int main()
{
    test *a = new test[32], b[32];
    printf("%zu %zu\n", sizeof(a)/sizeof(a[0]), sizeof(b)/sizeof(b[0]));
    delete[] a;
    return 0;
}
which outputs 0 32 (on a typical 64-bit system), as expected: sizeof(a) gives the size of a pointer, not the size of the array's memory region, so the integer division yields 0.
That bit of code is there to support user-defined extension functions in XSLT stylesheets and XQuery queries. If a user is not using these features, then they don't need that bit of code. In fact, user-defined extension functions are only available in Saxon-PE/C and Saxon-EE/C, so the code should not be in the Saxon-HE/C code base. I have created the following bug issue to investigate the error above: https://saxonica.plan.io/issues/4477
I would think the workaround would be either to remove the code in question, if the extension-function feature is not used, or to remove the compile flag -Werror=sizeof-pointer-div.
The intended code is as follows:
jobject JNICALL cppNativeCall(jstring funcName, jobjectArray arguments, jobjectArray argTypes){
    //native call code here
}

JNINativeMethod cppMethods[] =
{
    {
        fname,
        funcParameters,
        (void *)&cppNativeCall
    }
};

bool nativeFound = processor->registerNativeMethods(env, "NativeCall",
    cppMethods, sizeof(cppMethods) / sizeof(cppMethods[0]));
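Here the sizeof division is applied to a real array, so it does compute the element count. For the vector branch inside registerCPPFunction, a minimal sketch of one possible fix (my illustration, not Saxonica's actual patch) would track the count explicitly instead of applying sizeof to the pointer:

// Hypothetical rework of the vector branch (illustration only; needs <algorithm> for std::copy).
// A caller-supplied gMethods array would need an accompanying count parameter,
// which the current signature does not carry.
if (gMethods == NULL) {
    size_t numMethods = nativeMethodVect.size();
    if (numMethods == 0) {
        return false;
    }
    gMethods = new JNINativeMethod[numMethods];
    std::copy(nativeMethodVect.begin(), nativeMethodVect.end(), gMethods);
    return registerNativeMethods(sxn_environ->env,
                                 "com/saxonica/functions/...", // class name truncated in the quoted source
                                 gMethods, (int) numMethods);
}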

Why does NodeHandle hang?

I'm having an issue trying to create a subscriber in Indigo. I have a shared_ptr within a class to hold the NodeHandle object. I do this so that NodeHandle can be used in other class members. The problem is when the thread starts, it seems to hang on the make_shared call to the NodeHandle object within the MyClass constructor as it never reaches the next line after.
class MyClass
{
private:
    std::shared_ptr<ros::NodeHandle> nh;
    std::map<std::string, std::string> remap;
    // ...
};

MyClass::MyClass()
{
    // remap is empty
    ros::init(remap, "my_node");
    nh = std::make_shared<ros::NodeHandle>();
    std::cout << "does not reach this line" << std::endl;
}

int MyClass::run()
{
    // ...
}
I start the thread like so ...
{
    // ...
    myobj = std::make_shared<MyClass>();
    my_thread = std::thread(&MyClass::run, myobj);
    // ...
}
Thoughts?
It appears that the problem was due to having my own Boost logging system in place and not using the ROS logger (which made it difficult to find, as it seemingly has nothing to do with ros::NodeHandle, but probably does underneath). I commented out my entire code base and kept adding pieces back to see when ros::NodeHandle would run; at the point of removing my logger and adding it back, I could see the difference between it running and hanging.
Well, here's an example of using boost::make_shared for a NodeHandle.
Note that it makes use of ros::NodeHandlePtr, an existing Boost shared-pointer typedef, rather than std::make_shared.
This may not really answer the question, but I am suggesting an alternative way using the Boost library.
#include <ros/ros.h>
#include <std_msgs/Empty.h>
#include <boost/thread/thread.hpp>

void do_stuff(int* publish_rate)
{
    ros::NodeHandlePtr node = boost::make_shared<ros::NodeHandle>();
    ros::Publisher pub_b = node->advertise<std_msgs::Empty>("topic_b", 10);
    ros::Rate loop_rate(*publish_rate);
    while (ros::ok())
    {
        std_msgs::Empty msg;
        pub_b.publish(msg);
        loop_rate.sleep();
    }
}

int main(int argc, char** argv)
{
    int rate_b = 1; // 1 Hz
    ros::init(argc, argv, "mt_node");
    // spawn another thread
    boost::thread thread_b(do_stuff, &rate_b);
    ros::NodeHandlePtr node = boost::make_shared<ros::NodeHandle>();
    ros::Publisher pub_a = node->advertise<std_msgs::Empty>("topic_a", 10);
    ros::Rate loop_rate(10); // 10 Hz
    while (ros::ok())
    {
        std_msgs::Empty msg;
        pub_a.publish(msg);
        loop_rate.sleep();
        // process any incoming messages in this thread
        ros::spinOnce();
    }
    // wait for the second thread to finish
    thread_b.join();
    return 0;
}
In case you have trouble with the CMakeLists, here it is:
cmake_minimum_required(VERSION 2.8.3)
project(test_thread)
find_package(catkin REQUIRED COMPONENTS
    roscpp
    rospy
)
find_package(Boost COMPONENTS thread REQUIRED)
include_directories(${Boost_INCLUDE_DIR})
catkin_package(CATKIN_DEPENDS roscpp rospy std_msgs)
include_directories(include ${catkin_INCLUDE_DIRS})
add_executable(thread src/thread_test.cpp)
target_link_libraries(thread ${catkin_LIBRARIES} ${Boost_LIBRARIES})
Hope that helps!
Cheers,

Memory leak in a Tcl wrapper

I read all I could find about memory management in the Tcl API, but haven't been able to solve my problem so far. I wrote a Tcl extension to access an existing application. It works, except for one serious issue: a memory leak.
I tried to reproduce the problem with minimal code, which you can find at the end of the post. The extension defines a new command, recordings, in namespace vtcl. The recordings command creates a list of 10000 elements, each element being a new command. Each command has data attached to it, which is the name of a recording. The name subcommand of each command returns the name of the recording.
I run the following Tcl code with tclsh to reproduce the problem:
load libvtcl.so
for {set ii 0} {$ii < 1000} {incr ii} {
    set recs [vtcl::recordings]
    foreach r $recs {rename $r ""}
}
The line foreach r $recs {rename $r ""} deletes all the commands at each iteration, which frees the memory of the piece of data attached to each command (I can see that in gdb). I can also see in gdb that the reference count of variable recs goes to 0 at each iteration, so the contents of the list are freed. Nonetheless, I see the memory of the process running tclsh going up at each iteration.
I am out of ideas about what else I could try. Help will be greatly appreciated.
#include <stdio.h>
#include <string.h>
#include <tcl.h>

static void DecrementRefCount(ClientData cd);
static int ListRecordingsCmd(ClientData cd, Tcl_Interp *interp, int objc,
                             Tcl_Obj *CONST objv[]);
static int RecordingCmd(ClientData cd, Tcl_Interp *interp, int objc,
                        Tcl_Obj *CONST objv[]);

static void
DecrementRefCount(ClientData cd)
{
    Tcl_Obj *obj = (Tcl_Obj *) cd;
    Tcl_DecrRefCount(obj);
    return;
}

static int
ListRecordingsCmd(ClientData cd, Tcl_Interp *interp, int objc,
                  Tcl_Obj *CONST objv[])
{
    char name_buf[20];
    Tcl_Obj *rec_list = Tcl_NewListObj(0, NULL);
    for (int ii = 0; ii < 10000; ii++)
    {
        static int obj_id = 0;
        Tcl_Obj *cmd;
        Tcl_Obj *rec_name;
        cmd = Tcl_NewStringObj ("rec", -1);
        Tcl_AppendObjToObj (cmd, Tcl_NewIntObj (obj_id++));
        rec_name = Tcl_NewStringObj ("DM", -1);
        snprintf(name_buf, sizeof(name_buf), "%04d", ii);
        Tcl_AppendStringsToObj(rec_name, name_buf, (char *) NULL);
        Tcl_IncrRefCount(rec_name);
        Tcl_CreateObjCommand (interp, Tcl_GetString (cmd), RecordingCmd,
                              (ClientData) rec_name, DecrementRefCount);
        Tcl_ListObjAppendElement (interp, rec_list, cmd);
    }
    Tcl_SetObjResult (interp, rec_list);
    return TCL_OK;
}

static int
RecordingCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *CONST objv[])
{
    Tcl_Obj *rec_name = (Tcl_Obj *)cd;
    char *subcmd;
    subcmd = Tcl_GetString (objv[1]);
    if (strcmp (subcmd, "name") == 0)
    {
        Tcl_SetObjResult (interp, rec_name);
    }
    else
    {
        Tcl_Obj *result = Tcl_NewStringObj ("", 0);
        Tcl_AppendStringsToObj (result,
                                "bad command \"",
                                Tcl_GetString (objv[1]),
                                "\"",
                                (char *) NULL);
        Tcl_SetObjResult (interp, result);
        return TCL_ERROR;
    }
    return TCL_OK;
}

int
Vtcl_Init(Tcl_Interp *interp)
{
#ifdef USE_TCL_STUBS
    if (Tcl_InitStubs(interp, "8.5", 0) == NULL) {
        return TCL_ERROR;
    }
#endif
    if (Tcl_PkgProvide(interp, "vtcl", "0.0.1") != TCL_OK)
        return TCL_ERROR;
    Tcl_CreateNamespace(interp, "vtcl", (ClientData) NULL,
                        (Tcl_NamespaceDeleteProc *) NULL);
    Tcl_CreateObjCommand(interp, "::vtcl::recordings", ListRecordingsCmd,
                         (ClientData) NULL, (Tcl_CmdDeleteProc *) NULL);
    return TCL_OK;
}
The management of the Tcl_Obj * reference counts looks absolutely correct, but I do wonder whether you're freeing all the other resources associated with a particular instance in your real code. It might also be something else entirely; your code is not the only thing in Tcl that allocates memory! Furthermore, the default memory allocator in Tcl does not actually return memory to the OS, but instead holds onto it until the process ends. Figuring out what is wrong can be tricky.
You can try doing a build of Tcl with the --enable-symbols=mem passed to configure. That makes Tcl build in an extra command, memory, which allows more extensive checking of memory management behaviour (it also does things like ensure that memory is never written to after it is freed). It's not enabled by default because it has a substantial performance hit, but it could well help you track down what's going on. (The memory info subcommand is where to get started.)
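For example (an illustrative source build; adjust the version and paths to your environment):

cd tcl8.6/unix
./configure --enable-symbols=mem
make

Then, inside the resulting tclsh, run memory info between iterations of your test loop: if the allocation totals it reports keep growing, you are looking at a real leak rather than allocator retention.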
You could also try adding -DPURIFY to the CFLAGS when building; it completely disables the Tcl memory allocator, so memory-checking tools (like the commercial Purify or the open-source Electric Fence) can get accurate information instead of getting very confused by Tcl's high-performance thread-aware allocator, and it may allow you to figure out what is going on.
I found where the leak is. In function ListRecordingsCmd, I replaced line
Tcl_AppendObjToObj (cmd, Tcl_NewIntObj (obj_id++));
with
Tcl_Obj *obj = Tcl_NewIntObj (obj_id++);
Tcl_AppendObjToObj (cmd, obj);
Tcl_DecrRefCount(obj);
The memory allocated to store the object id was not released: Tcl_NewIntObj returns a new object with a reference count of zero, and Tcl_AppendObjToObj only copies its string representation without freeing it, so the temporary object leaked on every call. With the explicit Tcl_DecrRefCount, the memory used by the tclsh process is now stable.
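An alternative that avoids the temporary object entirely is Tcl_ObjPrintf, available since Tcl 8.5 (which this extension already requires via Tcl_InitStubs):

cmd = Tcl_ObjPrintf("rec%d", obj_id++);

Tcl_ObjPrintf returns a fresh object with reference count zero, so it can be handed straight to Tcl_CreateObjCommand and Tcl_ListObjAppendElement as before.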
