Why do builds for various projects fail with ‘Operation not permitted’ using an iOS on-device compiler/toolchain?

I am an intermediate-level Linux/Unix user trying to compile software for an iPad on the (jailbroken) iPad itself.
Many builds (for example, make and tex-live) fail with some Operation not permitted error. This will either look like Can't exec "blah": Operation not permitted or execvp: blah: Operation not permitted, where blah is aclocal, a configure script, libtool, or just about anything. Curiously, finding the offending line in a Makefile or configure script and prefixing it with sudo -u mobile -E will solve the error for that line, only for it to reappear on a later line or in another file. Since I am running the build scripts as mobile, I do not understand how this could possibly fix the issue, yet it does. I have confirmed that making these changes does actually allow the script to work successfully up to that point. Running the build script with sudo or sudo -u mobile -E and/or running the entire build as root does not solve the issue; with either, I still must edit the build scripts to add sudo prefixes.
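To illustrate the kind of edit that works (the rule here is a hypothetical example, not from any particular project), a Makefile recipe line such as
./config.status
only runs once it is changed to
sudo -u mobile -E ./config.status
and then the same error simply surfaces at the next exec further down.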
I would like to know why this is happening, and if possible how I could address the issue without editing build scripts. Any information about these types of errors would be interesting to me even if they do not solve my problem. I am aware that the permissions/security/entitlements system is unusual on iOS and would like to learn more about how it works.
I am using an iPad Pro 4 on jailbroken iOS 13.5 with the build tools from sbingner’s and MCApollo’s repos (repo.bingner.com and mcapollo.github.io/Public). In particular, I am using a build of LLVM 5 (manually installed from sbingner’s old debs), Clang 10, Darwin CC tools 927 and GNU Make 4.2.1. I have set CC, CXX, CFLAGS, etc. to point to clang-10 and my iOS 13.5 SDK with -isysroot and have confirmed that these settings are working. I would like to replace these with updated versions, but I cannot yet build these tools for myself due to this issue and a few others. I do have access to a Mac for cross-compilation if necessary, but I would rather use only my iPad because I like the challenge.
I can attach any logs necessary or provide more information if that would be useful; I do not know enough about this issue to know what information is useful. Thanks in advance for helping me!

For anyone who ends up needing to address this issue on a jailbreak that does not ship a fix for it, I have written a userland hook (pasted below) based on the posix_spawn implementation from the source of Apple’s xnu kernel.
Compile it with Theos, and inject it into all processes spawned by your shell by setting the environment variable DYLD_INSERT_LIBRARIES to the path of the resulting dylib. Note: some tweak injectors (namely libhooker) reset DYLD_INSERT_LIBRARIES, so if you notice this behavior, be sure to inject only your library.
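For example, assuming the compiled dylib ended up at /var/mobile/posixspawnfix.dylib (a hypothetical path), a single build step can be run as
DYLD_INSERT_LIBRARIES=/var/mobile/posixspawnfix.dylib make
so that the hook lands in make and, because children inherit the environment, in every process make spawns.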
Because the implementations of the exec syscalls in iOS call out to posix_spawn, this hook fixes all of the exec-related issues I’ve run into so far.
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdbool.h> // for `true` in the parsing loop below
#include <spawn.h>
// Copied from bsd/kern/kern_exec.c
#define IS_WHITESPACE(ch) ((ch == ' ') || (ch == '\t'))
#define IS_EOL(ch) ((ch == '#') || (ch == '\n'))
// Copied from bsd/sys/imgact.h
#define IMG_SHSIZE 512
// Here, we provide an alternate implementation of posix_spawn which correctly handles #!.
// This is based on the implementation of posix_spawn in bsd/kern/kern_exec.c from Apple's xnu source.
// Thus, I am fairly confident that this posix_spawn has correct behavior relative to macOS.
%hookf(int, posix_spawn, pid_t *pid, const char *orig_path, const posix_spawn_file_actions_t *file_actions, const posix_spawnattr_t *attrp, char *const orig_argv[], char *const envp[]) {
// Call orig before checking for anything.
// This mirrors the standard implementation of posix_spawn because it first checks if we are spawning a binary.
int err = %orig;
// %orig returns EPERM when spawning a script.
// Thus, if err is anything other than EPERM, we can just return like normal.
if (err != EPERM)
return err;
// At this point, we do not need to check for exec permissions or anything like that,
// because posix_spawn would have returned that error instead of EPERM.
// Now we open the file for reading so that we can check if it's a script.
// If it turns out not to be a script, the EPERM must be from something else
// so we just return err.
FILE *file = fopen(orig_path, "r");
if (file == NULL) {
return err;
}
if (fseek(file, 0, SEEK_SET)) {
fclose(file);
return err;
}
// In exec_activate_image, the data buffer is filled with the first PAGE_SIZE bytes of the file.
// However, in exec_shell_imgact, only the first IMG_SHSIZE bytes are used.
// Thus, we read IMG_SHSIZE bytes out of our file.
// The buffer is pre-filled with newlines so that if the file is shorter than IMG_SHSIZE bytes,
// the logic below always reads an IS_EOL.
char vdata[IMG_SHSIZE];
memset(vdata, '\n', IMG_SHSIZE); // note: = {'\n'} would only set the first byte; the rest would be zero
if (fread(vdata, 1, IMG_SHSIZE, file) < 2) { // If we couldn't read at least two bytes, it's not a script.
fclose(file);
return err;
}
// Now that we've filled the buffer, we don't need the file anymore.
fclose(file);
// Now we follow exec_shell_imgact.
// The point of this is to confirm we have a script
// and extract the usable part of the interpreter+arg string.
// Where they return -1, we don't have a shell script, so we return err.
// Where they return an error, we return that same error.
// We don't bother doing any SUID stuff because SUID scripts should be disabled anyway.
char *ihp;
char *line_startp, *line_endp;
// Make sure we have a shell script.
if (vdata[0] != '#' || vdata[1] != '!') {
return err;
}
// Try to find the first non-whitespace character
for (ihp = &vdata[2]; ihp < &vdata[IMG_SHSIZE]; ihp++) {
if (IS_EOL(*ihp)) {
// Did not find interpreter, "#!\n"
return ENOEXEC;
} else if (IS_WHITESPACE(*ihp)) {
// Whitespace, like "#! /bin/sh\n", keep going.
} else {
// Found start of interpreter
break;
}
}
if (ihp == &vdata[IMG_SHSIZE]) {
// All whitespace, like "#! "
return ENOEXEC;
}
line_startp = ihp;
// Try to find the end of the interpreter+args string
for (; ihp < &vdata[IMG_SHSIZE]; ihp++) {
if (IS_EOL(*ihp)) {
// Got it
break;
} else {
// Still part of interpreter or args
}
}
if (ihp == &vdata[IMG_SHSIZE]) {
// A long line, like "#! blah blah blah" without end
return ENOEXEC;
}
// Backtrack until we find the last non-whitespace
while (IS_EOL(*ihp) || IS_WHITESPACE(*ihp)) {
ihp--;
}
// The character after the last non-whitespace is our logical end of line
line_endp = ihp + 1;
/*
* Now we have pointers to the usable part of:
*
* "#! /usr/bin/int first second third \n"
* ^ line_startp ^ line_endp
*/
// Now, exec_shell_imgact copies the interpreter into another buffer and then null-terminates it.
// Then, it copies the entire interpreter+args into another buffer and null-terminates it for later processing into argv.
// This processing is done in exec_extract_strings, which goes through and null-terminates each argument.
// We will just do this all at once since that's much easier.
// Keep track of how many arguments we have.
int i_argc = 0;
ihp = line_startp;
while (true) {
// ihp is on the start of an argument.
i_argc++;
// Scan to the end of the argument.
for (; ihp < line_endp; ihp++) {
if (IS_WHITESPACE(*ihp)) {
// Found the end of the argument
break;
} else {
// Keep going
}
}
// Null-terminate the argument
*ihp = '\0';
// Move past the terminator, then scan to the beginning of the next argument.
// (Starting at ihp+1 avoids treating the '\0' we just wrote as the next argument,
// which would inject empty arguments when separators are wider than one space.)
for (ihp++; ihp < line_endp; ihp++) {
if (!IS_WHITESPACE(*ihp)) {
// Found the next argument
break;
} else {
// Keep going
}
}
if (ihp >= line_endp) {
// We've reached the end of the arg string
break;
}
// If we are here, ihp is the start of an argument.
}
// Now line_startp is a bunch of null-terminated arguments possibly padded by whitespace.
// i_argc is now the count of the interpreter arguments.
// Our new argv should look like i_argv[0], i_argv[1], i_argv[2], ..., orig_path, orig_argv[1], orig_argv[2], ..., NULL,
// where i_argv is the list of arguments extracted from line_startp.
// To allocate our new argv, we need to know orig_argc.
int orig_argc = 0;
while (orig_argv[orig_argc] != NULL) {
orig_argc++;
}
// We need space for i_argc + 1 + (orig_argc - 1) + 1 char*'s
char *argv[i_argc + orig_argc + 1];
// Copy i_argv into argv
int i = 0;
ihp = line_startp;
for (; i < i_argc; i++) {
// ihp is on the start of an argument
argv[i] = ihp;
// Scan to the next null-terminator
for (; ihp < line_endp; ihp++) {
if (*ihp == '\0') {
// Found it
break;
} else {
// Keep going
}
}
// Go to the next character
ihp++;
// Then scan to the next argument.
// There must be another argument because we already counted i_argc.
for (; ihp < line_endp; ihp++) {
if (!IS_WHITESPACE(*ihp)) {
// Found it
break;
} else {
// Keep going
}
}
// ihp is on the start of an argument.
}
// Then, copy orig_path into argv.
// We need to make a copy of orig_path to avoid issues with const.
char orig_path_copy[strlen(orig_path)+1];
strcpy(orig_path_copy, orig_path);
argv[i] = orig_path_copy;
i++;
// Now, copy orig_argv[1...] into argv.
for (int j = 1; j < orig_argc; i++, j++) {
argv[i] = orig_argv[j];
}
// Finally, add the null.
argv[i] = NULL;
// Now, our argv is setup correctly.
// Now, we can call out to posix_spawn again.
// The interpreter is in argv[0], so we use that for the path.
return %orig(pid, argv[0], file_actions, attrp, argv, envp);
}

Related

Saxon-C CentOS8 Compile

I am trying to evaluate Saxon-C 1.2.1 HE on CentOS8 and installation seems to have gone OK. Trying out the samples by running cd samples/cppTests && build64-linux.sh, though, leads to a myriad of compilation errors to the tune of the following:
../../Saxon.C.API/SaxonProcessor.h:599:32: error: division ‘sizeof (JNINativeMethod*) / sizeof (JNINativeMethod)’ does not compute the number of array elements [-Werror=sizeof-pointer-div]
gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
Before I summarily and trustfully switched off -Werror=sizeof-pointer-div, I checked the source code, and what's going on there does seem dubious.
bool registerCPPFunction(char * libName, JNINativeMethod * gMethods=NULL){
if(libName != NULL) {
setConfigurationProperty("extc", libName);
}
if(gMethods == NULL && nativeMethodVect.size()==0) {
return false;
} else {
if(gMethods == NULL) {
//copy vector to gMethods
gMethods = new JNINativeMethod[nativeMethodVect.size()];
}
return registerNativeMethods(sxn_environ->env, "com/saxonica/functions/>
gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
}
return false;
}
More specifically, sizeof(gMethods) / sizeof(gMethods[0]) would not seem to calculate anything useful by any margin. The intention was probably to arrive at the same value as nativeMethodVect.size(), but seeing this project's source for the very first time, I might be mistaken and the division may in fact be intentional?
I am inclined to guess the intention was in fact closer to b than to a in the following example:
#include <cstdio>
struct test
{
int x, y, z;
};
int main()
{
test *a = new test[32], b[32];
printf("%d %d\n", sizeof(a)/sizeof(a[0]), sizeof(b)/sizeof(b[0]));
return 0;
}
which outputs 0 32, which is expected, as sizeof(a) gives the size of a pointer, not the size of an array's memory region.
That bit of code is there to support user-defined extension functions in XSLT stylesheets and XQuery queries. If a user is not using these features then they don't need that bit of code. In fact, user-defined extension functions are only available in Saxon-PE/C and Saxon-EE/C, so the code should not be in the Saxon-HE/C code base. I have created the following bug issue to investigate the error above: https://saxonica.plan.io/issues/4477
I would think the workaround would be to either remove the code in question if the extension function feature is not used or remove the compile flag -Werror=sizeof-pointer-div.
The intended code is as follows:
jobject JNICALL cppNativeCall(jstring funcName, jobjectArray arguments, jobjectArray argTypes){
//native call code here
}
JNINativeMethod cppMethods[] =
{
{
fname,
funcParameters,
(void *)&cppNativeCall
}
};
bool nativeFound = processor->registerNativeMethods(env, "NativeCall",
cppMethods, sizeof(cppMethods) / sizeof(cppMethods[0]));

Debugging Fatal Error - alloc: invalid block: 0000000001F00AEF0: 0 0

I have a GUI written in R that utilizes the Tcl/Tk package as well as a C .dll that also uses the Tcl library. I have done some research on this issue, and it seems to be memory related. I am an inexperienced programmer, so I am not sure where I should be looking for this memory issue. Each call of malloc() has a matching free(), and the same goes for the analogous Tcl_Alloc() and Tcl_Free(). This error is very hard to reproduce as well; thus I am afraid I cannot provide a reproducible example, as it is seemingly random in nature. One pattern, however, is that it seems to only happen upon closure of the program, though this is very inconsistent.
By making this post, I am hoping to gain a logical process that one should take in an attempt to debug this problem in a general context under Tcl/Tk - C - R applications. I am not looking for a solution specific to my code, but rather what an individual should think about when encountering this problem.
The message comes from the function Ptr2Block() in tclThreadAlloc.c (or from something else that produces the same error message; possible but unlikely), which is Tcl's thread-specific memory allocator (used widely inside Tcl to reduce the number of times global locks are hit). Specifically, it's this bit:
if (blockPtr->magicNum1 != MAGIC || blockPtr->magicNum2 != MAGIC) {
Tcl_Panic("alloc: invalid block: %p: %x %x",
blockPtr, blockPtr->magicNum1, blockPtr->magicNum2);
}
The problem? Those zeroes should be MAGIC (which is equal to 0xEF). This indicates that something has overwritten the memory block's metadata — which also should include the size of the block, but that is now likely hot garbage — and program memory integrity can no longer be trusted. Alas, at this point we're now dealing with a program in a broken state where the breakage happened some time previously; the place where the panic happened is merely where detection of the bug happened, not the actual location of the bug.
Debugging further is usually done by building a version of everything with fancy memory allocators turned off (in Tcl's code, this is done by defining the PURIFY symbol when building) and then running the resulting code — which hopefully still has the bug — with a tool like electricfence or purify (hence the special symbol name) to see what sort of out-of-bounds errors are found; they're very good at hunting down this sort of issue.
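For example (a hedged sketch, not verified against any particular Tcl version's build system), a Unix-style rebuild of Tcl with its custom allocator disabled might look like
CFLAGS=-DPURIFY ./configure --enable-symbols
make
after which the reproduction can be run under a tool like electricfence to catch the overrun where it actually happens.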
I would advise you to start by having a closer look at the sizeof() values provided to your Tcl_Alloc() calls in this C .dll.
I'm writing myself a Tcl binding for a C library, and I recently faced exactly the same problem, so I'm assuming you may have the same error in your code.
Below is a minimal example that reproduces the problem:
#include <tcl.h>
#include <stdlib.h> // malloc
static unsigned int dataCtr;
struct tDataWrapper {
const char *str; // Tcl_GetCommandName(interp, cmd)
unsigned int n; // dataCtr value
void *data; // pointer to wrapped object
};
static void wrapDelCmd(ClientData clientData)
{
struct tDataWrapper *wrap = (struct tDataWrapper *) clientData;
if (wrap != NULL) {
/* with the false sizeof value provided while creating the wrapper
* (see newCmd below), this data pointer would overwrite the
* overhead section of the allocated Tcl memory block,
* from what I understood, and this is what can cause
* a panic with a message like the following one when the
* memory is freed with ckfree (here, after calling unload):
* alloc: invalid block: 0000018F2624E760: 0 0 */
printf("DEBUG: #%s(%s) &wrap->data #%p\n",
__func__, wrap->str, &wrap->data);
if (wrap->data != NULL) {
// call your wrapped API to deinstantiate the object
}
ckfree(wrap);
}
}
static int wrapCmd(ClientData clientData, Tcl_Interp *interp,
int objc, Tcl_Obj *const objv[])
{
struct tDataWrapper *wrap = (struct tDataWrapper *) clientData;
if (wrap == NULL)
return TCL_ERROR;
else if (wrap->data != NULL) {
// call your wrapped API to do something with instantiated object
return TCL_OK;
} else {
Tcl_Obj *obj = Tcl_ObjPrintf("wrap: {str=\"%s\", n=%u, data=%llx}",
wrap->str, wrap->n, (unsigned long long) wrap->data);
if (obj != NULL) {
Tcl_SetObjResult(interp, obj);
return TCL_OK;
} else
return TCL_ERROR;
}
}
static int newCmd(ClientData clientData, Tcl_Interp *interp,
int objc, Tcl_Obj *const objv[])
{
struct tDataWrapper *wrap;
Tcl_Obj *obj;
Tcl_Command cmd;
// 3) this is correct
// if ((wrap = attemptckalloc(sizeof(struct tDataWrapper))) == NULL)
// 2) still incorrect but GCC gives more warning regarding the inconsistent pointer handling
// if ((wrap = malloc(sizeof(struct tDataWrapper *))) == NULL)
// 1) this is incorrect
if ((wrap = attemptckalloc(sizeof(struct tDataWrapper *))) == NULL)
Tcl_Panic("%s:%u: attemptckalloc failed\n", __func__, __LINE__);
else if ((obj = Tcl_ObjPrintf("data%u", dataCtr+1)) == NULL)
Tcl_Panic("%s:%u: Tcl_ObjPrintf failed\n", __func__, __LINE__);
else if ((cmd = Tcl_CreateObjCommand(interp, Tcl_GetString(obj),
wrapCmd, (ClientData) wrap, wrapDelCmd)) == NULL)
Tcl_Panic("%s:%u: Tcl_CreateObjCommand failed\n", __func__, __LINE__);
else {
wrap->str = Tcl_GetCommandName(interp, cmd);
wrap->n = dataCtr;
wrap->data = NULL; // call your wrapped API to instantiate an object
dataCtr++;
Tcl_SetObjResult(interp, obj);
}
return TCL_OK;
}
int Allocinvalidblock_Init(Tcl_Interp *interp)
{
dataCtr = 0;
return (Tcl_CreateObjCommand(interp, "new",
newCmd, (ClientData) NULL, NULL)
== NULL) ? TCL_ERROR : TCL_OK;
}
int Allocinvalidblock_Unload(Tcl_Interp *interp, int flags)
{
Tcl_Namespace *ns = Tcl_GetGlobalNamespace(interp);
Tcl_Obj *obj;
Tcl_Command cmd;
unsigned int i;
for(i=0; i<dataCtr; i++) {
if ((obj = Tcl_ObjPrintf("data%u", i+1)) != NULL) {
if ((cmd = Tcl_FindCommand(interp,
Tcl_GetString(obj), ns, TCL_GLOBAL_ONLY)) != NULL)
Tcl_DeleteCommandFromToken(interp, cmd);
Tcl_DecrRefCount(obj);
}
}
return TCL_OK;
}
Once built (for example with Code::Blocks as a shared-library project linking against C:/msys64/mingw64/lib/libtcl.dll.a), the error can be triggered when more than one data object is created and the library is immediately unloaded:
load bin/Release/libAllocInvalidBlock.dll
new
new
unload bin/Release/libAllocInvalidBlock.dll
If used otherwise, the crash may not even be triggered... Anyway, such an error in the C code is not particularly obvious to identify (although it is easy to fix), because compilation runs without any warning (even with the -Wall compiler flag set).

iOS Mach-O – Make __TEXT segment temporarily writable

I've tried a lot to finally get this working, but it still doesn't work yet.
I'm trying to change some variables in the __TEXT segment, which is read-only by default, like changing the cryptid (and other stuff).
It kind of worked a while ago, back on 32-bit devices. But somehow, it always fails since I switched to the 64-bit commands.
It currently crashes when it hits the following lines:
tseg->maxprot = tseg->initprot = VM_PROT_READ | VM_PROT_EXECUTE
or
crypt->cryptid = 1.
struct mach_header_64* mach = (struct mach_header_64*) _dyld_get_image_header(0);
uint64_t header_size = 0;
struct encryption_info_command_64 *crypt;
struct segment_command_64 *tseg;
struct dylib_command *protector_cmd;
// clean up some commands
void *curloc = (void *)mach + sizeof(struct mach_header);
for (int i=0;i<mach->ncmds;i++) {
struct load_command *lcmd = curloc;
if (lcmd->cmd == LC_ENCRYPTION_INFO_64) {
// save crypt cmd
crypt = curloc;
} else if (lcmd->cmd == LC_SEGMENT_64) {
struct segment_command_64 *seg = curloc;
if (seg->fileoff == 0 && seg->filesize != 0) {
header_size = seg->vmsize;
tseg = curloc;
}
}
if(i == mach->ncmds-1){
protector_cmd = curloc;
}
curloc += lcmd->cmdsize;
}
kern_return_t err;
// make __TEXT temporarily writable
err = vm_protect(mach_task_self(), (vm_address_t)mach, (vm_size_t)header_size, false, VM_PROT_ALL);
if (err != KERN_SUCCESS) exit(1);
// modify the load commands
// change protection of __TEXT segment
tseg->maxprot = tseg->initprot = VM_PROT_READ | VM_PROT_EXECUTE;
// change cryptid
crypt->cryptid = 1;
There's no point in changing the load command. The load commands were already processed when the program was loaded (which must be before this code of yours can run). They have no further effect on the protection of pages.
You're apparently already aware of the vm_protect() function. So why aren't you using that to make the text segment itself writable rather than trying to make the load commands writable?
And it's surely simpler to use getsegmentdata() to locate the segment in memory than looking at the load commands (to which you'd have to add the slide).
Beyond that, I would be surprised if iOS lets you do that. There's a general prohibition against run-time modifiable code (with very narrow exceptions).
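For concreteness, here is a minimal sketch of the approach suggested above (hedged: it demonstrates the API shape, and on stock iOS the call should simply be refused for __TEXT):
#include <mach-o/dyld.h>
#include <mach-o/getsect.h>
#include <mach/mach.h>
#include <stdint.h>
static kern_return_t try_make_text_writable(void) {
// getsegmentdata() returns a slide-adjusted pointer, so there is no
// need to walk the load commands and add the slide by hand.
const struct mach_header_64 *mh = (const struct mach_header_64 *)_dyld_get_image_header(0);
unsigned long textsize = 0;
uint8_t *textdata = getsegmentdata(mh, "__TEXT", &textsize);
if (textdata == NULL)
return KERN_INVALID_ARGUMENT;
// Ask for writability on the segment itself; VM_PROT_COPY requests a
// copy-on-write remap in case the maximum protection forbids plain writes.
return vm_protect(mach_task_self(), (vm_address_t)textdata, (vm_size_t)textsize, FALSE, VM_PROT_READ | VM_PROT_WRITE | VM_PROT_COPY);
}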

Lua 'require' but files are only in memory

Setting: I'm using Lua from a C/C++ environment.
I have several lua files on disk. Those are read into memory and some more memory-only lua files become available during runtime. Think e.g. of an editor, with additional unsaved lua files.
So, I have a list<identifier, lua_file_content> in memory. Some of these files have require statements in them. When I try to load all these files into a Lua instance (currently via lua_dostring), I get attempt to call global 'require' (a nil value).
Is there a possibility to provide a require function which replaces the old one and just uses the provided in-memory files (those files are on the C side)?
Is there another way of allowing require in these files without having the required files on disk?
An example would be to load the lua stdlib from memory only without altering it. (This is actually my test case.)
Instead of replacing require, why not add a function to package.loaders? The code is nearly the same.
int my_loader(lua_State* state) {
// get the module name (require passes it as the single argument)
const char* name = lua_tostring(state, 1);
// find if you have such a module loaded
if (mymodules.find(name) != mymodules.end())
{
// buffer/size stand for the stored source text of that module
luaL_loadbuffer(state, buffer, size, name);
// the chunk is now at the top of the stack
return 1;
}
// didn't find anything
return 0;
}
// When you load the lua state, insert this into package.loaders
http://www.lua.org/manual/5.1/manual.html#pdf-package.loaders
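If you would rather do that insertion from C than from a Lua chunk, a minimal sketch for Lua 5.1 could look like this (it appends my_loader after the stock loaders instead of inserting it at position 2; on Lua 5.2+ the table is package.searchers):
#include <lua.h>
#include <lauxlib.h>
static void register_my_loader(lua_State* L) {
lua_getglobal(L, "package");
lua_getfield(L, -1, "loaders"); // "searchers" on Lua 5.2+
lua_pushcfunction(L, my_loader);
// append after the existing loaders (lua_objlen is the 5.1 length API)
lua_rawseti(L, -2, (int)lua_objlen(L, -2) + 1);
lua_pop(L, 2); // pop package.loaders and package
}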
A pretty straightforward C++ function that would mimic require could be: (pseudocode)
int my_require(lua_State* state) {
// get the module name
const char* name = lua_tostring(state, 1);
// find if you have such a module loaded
if (mymodules.find(name) != mymodules.end())
luaL_loadbuffer(state, buffer, size, name);
// the chunk is now at the top of the stack
lua_call(state, 0, 1);
return 1;
}
Expose this function to Lua as require and you're good to go.
I'd also like to add that to completely mimic require's behaviour, you'd probably need to take care of package.loaded, to avoid the code being loaded twice.
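For instance, a hedged sketch of that check at the top of my_require, reusing state and name from the pseudocode above (Lua 5.1 API):
// Return the cached module if this one was already required.
lua_getglobal(state, "package");
lua_getfield(state, -1, "loaded");
lua_getfield(state, -1, name);
if (!lua_isnil(state, -1))
return 1; // the cached value is already on top of the stack
lua_pop(state, 3); // discard nil, package.loaded and package
// ...then fall through to luaL_loadbuffer/lua_call as above, and store the
// result into package.loaded[name] before returning it.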
There is no package.loaders in Lua 5.2.
It is called package.searchers now.
#include <stdio.h>
#include <string>
#include <lua.hpp>
std::string module_script;
int MyLoader(lua_State *L)
{
const char *name = luaL_checkstring(L, 1); // Module name
// std::string result = SearchScript(name); // Search your database.
std::string result = module_script; // Just for demo.
if( luaL_loadbuffer(L, result.c_str(), result.size(), name) )
{
printf("%s", lua_tostring(L, -1));
lua_pop(L, 1);
lua_pushnil(L); // keep the stack balanced: a searcher must return one value
}
return 1;
}
void SetLoader(lua_State* L)
{
lua_register(L, "my_loader", MyLoader);
std::string str;
// str += "table.insert(package.loaders, 2, my_loader) \n"; // Older than lua v5.2
str += "table.insert(package.searchers, 2, my_loader) \n";
luaL_dostring(L, str.c_str());
}
void SetModule()
{
std::string str;
str += "print([[It is add.lua]]) \n";
str += "return { func = function() print([[message from add.lua]]) end } \n";
module_script=str;
}
void LoadMainScript(lua_State* L)
{
std::string str;
str += "dev = require [[add]] \n";
str += "print([[It is main.lua]]) \n";
str += "dev.func() \n";
if ( luaL_loadbuffer(L, str.c_str(), str.size(), "main") )
{
printf("%s", lua_tostring(L, -1));
lua_pop(L, 1);
return;
}
}
int main()
{
lua_State* L = luaL_newstate();
luaL_openlibs(L);
SetModule(); // Write the module into memory. Lua does not load it yet.
SetLoader(L);
LoadMainScript(L);
lua_pcall(L,0,0,0);
lua_close(L);
return 0;
}

Bad file descriptor on pthread_detach

My pthread_detach calls fail with a "Bad file descriptor" error. The calls are in the destructor for my class and look like this -
if(pthread_detach(get_sensors) != 0)
printf("\ndetach on get_sensors failed with error %m", errno);
if(pthread_detach(get_real_velocity) != 0)
printf("\ndetach on get_real_velocity failed with error %m", errno);
I have only ever dealt with this error when using sockets. What could be causing this to happen in a pthread_detach call that I should look for? Or is it likely something in the thread callback that could be causing it? Just in case, the callbacks look like this -
void* Robot::get_real_velocity_thread(void* threadid) {
Robot* r = (Robot*)threadid;
r->get_real_velocity_thread_i();
return NULL; // unreachable in practice (the loop never exits), but keeps the signature honest
}
inline void Robot::get_real_velocity_thread_i() {
while(1) {
usleep(14500);
sensor_packet temp = get_sensor_value(REQUESTED_VELOCITY);
real_velocity = temp.values[0];
if(temp.values[1] != -1)
real_velocity += temp.values[1];
} //end while
}
/*Callback for get sensors thread*/
void* Robot::get_sensors_thread(void* threadid) {
Robot* r = (Robot*)threadid;
r->get_sensors_thread_i();
return NULL;
} //END GETSENSORS_THREAD
inline void Robot::get_sensors_thread_i() {
while(1) {
usleep(14500);
if(sensorsstreaming) {
unsigned char receive;
int read = 0;
read = connection.PollComport(port, &receive, sizeof(unsigned char));
if((int)receive == 19) {
read = connection.PollComport(port, &receive, sizeof(unsigned char));
unsigned char rest[54];
read = connection.PollComport(port, rest, 54);
/* ***SET SENSOR VALUES*** */
//bump + wheel drop
sensor_values[0] = (int)rest[1];
sensor_values[1] = -1;
//wall
sensor_values[2] = (int)rest[2];
sensor_values[3] = -1;
...
...
lots more setting just like the two above
} //end if header == 19
} //end if sensors streaming
} //end while
} //END GET_SENSORS_THREAD_I
Thank you for any help.
The pthread_* functions return an error code; they do not set errno. (Well, they may of course, but not in any way that is documented.)
Your code should capture the value returned by pthread_detach and print that.
The Single Unix Specification documents two return values for this function: ESRCH (no thread with that ID was found) and EINVAL (the thread is not joinable).
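In other words, using the same variables as the snippet in the question, the reporting could look like this (strerror comes from <string.h>):
int err = pthread_detach(get_sensors);
if (err != 0)
printf("\ndetach on get_sensors failed: %s (%d)", strerror(err), err);
err = pthread_detach(get_real_velocity);
if (err != 0)
printf("\ndetach on get_real_velocity failed: %s (%d)", strerror(err), err);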
Detaching threads in the destructor of an object seems silly. Firstly, if they are going to be detached eventually, why not just create them that way?
If there is any risk that the threads can use the object that is being destroyed, they need to be stopped, not detached. I.e. you somehow indicate to the threads that they should shut down, and then wait for them to reach some safe place after which they will not touch the object any more. pthread_join is useful for this.
Also, it is a little late to be doing that from the destructor. A destructor should only be run when the thread executing it is the only thread with a reference to that object. If threads are still using the object, then you're destroying it from under them.
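A hedged sketch of that shutdown pattern, with a hypothetical stop_requested flag standing in for whatever signalling the class really needs (a production version would use an atomic or a mutex rather than a plain volatile):
// Hypothetical member, polled by the thread loops in place of while(1):
// while (!stop_requested) { ... }
volatile bool stop_requested = false;
// Shutdown, performed while the Robot object is still alive:
stop_requested = true;
pthread_join(get_sensors, NULL); // blocks until the loop exits
pthread_join(get_real_velocity, NULL); // likewise for the velocity thread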
