Format function does not work well with BCC32C - C++Builder

Another difference between BCC32 and BCC32C: the Format function does not produce correct output with BCC32C.
In the following example, BCC32 shows "7 test test 7" (correct), but BCC32C shows "7 test test".
I am using RAD Studio 10.1. You can reproduce it by creating an empty project, adding a button, and using the following code:
void __fastcall TForm1::Button2Click(TObject *Sender)
{
    int i = 7;
    String str = "test";
    ShowMessage(Format("%d %s %s %d", ARRAYOFCONST((i, str, str, i))));
}
Is there a workaround for this? I am starting to think that BCC32C is not ready for production. I am finding a lot of problems.

I believe the correct way is to do this:
void __fastcall TForm1::Button2Click(TObject *Sender)
{
    int i = 7;
    String str = "test";
    TVarRec Args[] = {i, str, str, i};
    ShowMessage(Format("%d %s %s %d", Args, 3));
}
Sam
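For anyone puzzled by the hard-coded 3: the last parameter of Format is the index of the final element of the open array (the element count minus one), not the count itself. Here is a small sketch along the same lines, computing that index from sizeof so it does not have to be hard-coded (the variable names are illustrative only):
void __fastcall TForm1::Button2Click(TObject *Sender)
{
    int i = 7;
    String str = "test";
    TVarRec args[] = {i, str, str, i};
    // High index of the open array = element count - 1
    const int args_high = (sizeof(args) / sizeof(args[0])) - 1;
    ShowMessage(Format("%d %s %s %d", args, args_high));
}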

After researching, it seems I need to wait for a new release of RAD Studio.
Meanwhile, I have created a workaround that seems to work for now:
#if defined(__clang__)
#define VRARRAY(...) (TVarRec[]){__VA_ARGS__}, sizeof((TVarRec[]){__VA_ARGS__})/sizeof(TVarRec)
#else
#define VRARRAY(...) System::OpenArray<System::TVarRec>(__VA_ARGS__), sizeof(System::OpenArrayCounter<System::TVarRec>::Count (__VA_ARGS__))-1
#endif
Then, I can use:
ShowMessage(Format("%d %s %s %d", VRARRAY(i, str, str, i)));

Saxon-C CentOS8 Compile

I am trying to evaluate Saxon-C 1.2.1 HE on CentOS 8, and installation seems to have gone OK. Trying out the samples with cd samples/cppTests && build64-linux.sh, though, leads to a myriad of compilation errors to the tune of the following:
../../Saxon.C.API/SaxonProcessor.h:599:32: error: division ‘sizeof (JNINativeMethod*) / sizeof (JNINativeMethod)’ does not compute the number of array elements [-Werror=sizeof-pointer-div]
gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
Before summarily and trustfully switching off -Werror=sizeof-pointer-div, I checked the source code, and what's going on there does seem dubious.
bool registerCPPFunction(char * libName, JNINativeMethod * gMethods=NULL){
    if(libName != NULL) {
        setConfigurationProperty("extc", libName);
    }
    if(gMethods == NULL && nativeMethodVect.size()==0) {
        return false;
    } else {
        if(gMethods == NULL) {
            //copy vector to gMethods
            gMethods = new JNINativeMethod[nativeMethodVect.size()];
        }
        return registerNativeMethods(sxn_environ->env, "com/saxonica/functions/>
            gMethods, sizeof(gMethods) / sizeof(gMethods[0]));
    }
    return false;
}
More specifically, sizeof(gMethods) / sizeof(gMethods[0]) does not seem to calculate anything useful. The intention was probably to arrive at the same value as nativeMethodVect.size(), but since I am seeing this project's source for the very first time, I might be mistaken and the division is in fact intentional?
I am inclined to guess the intention was in fact closer to b than to a in the following example:
#include <cstdio>

struct test
{
    int x, y, z;
};

int main()
{
    test *a = new test[32], b[32];
    // %zu is the correct conversion specifier for the size_t results of sizeof
    printf("%zu %zu\n", sizeof(a)/sizeof(a[0]), sizeof(b)/sizeof(b[0]));
    return 0;
}
which outputs 0 32, as expected, since sizeof(a) gives the size of a pointer, not the size of the array's memory region.
That bit of code is there to support user-defined extension functions in XSLT stylesheets and XQuery queries. If a user is not using these features then they don't need that bit of code. In fact, user-defined extension functions are only available in Saxon-PE/C and Saxon-EE/C, so it should not be in the Saxon-HE/C code base. I have created the following bug issue to investigate the error above: https://saxonica.plan.io/issues/4477
I would think the workaround would be to either remove the code in question if the extension function feature is not used or remove the compile flag -Werror=sizeof-pointer-div.
The intended code is as follows:
jobject JNICALL cppNativeCall(jstring funcName, jobjectArray arguments, jobjectArray argTypes){
    //native call code here
}

JNINativeMethod cppMethods[] =
{
    {
        fname,
        funcParameters,
        (void *)&cppNativeCall
    }
};

bool nativeFound = processor->registerNativeMethods(env, "NativeCall",
    cppMethods, sizeof(cppMethods) / sizeof(cppMethods[0]));
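As for the registerCPPFunction shown in the question, where sizeof is applied to a pointer parameter, one possible shape for a fix (a sketch only, not Saxonica's actual change; it assumes nativeMethodVect is a std::vector<JNINativeMethod>) is to pass the element count alongside the array:
bool registerCPPFunction(char *libName, JNINativeMethod *gMethods = NULL,
                         int gMethodsCount = 0) {
    if (libName != NULL) {
        setConfigurationProperty("extc", libName);
    }
    if (gMethods == NULL && nativeMethodVect.size() == 0) {
        return false;
    }
    if (gMethods == NULL) {
        // Take the methods and their count from the vector instead of copying them
        // into an array whose length can no longer be recovered with sizeof.
        gMethods = nativeMethodVect.data();
        gMethodsCount = (int) nativeMethodVect.size();
    }
    return registerNativeMethods(sxn_environ->env,
        "com/saxonica/functions/..." /* class name truncated in the original post */,
        gMethods, gMethodsCount);
}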

Why doesn't Coverity show CHECKED_RETURN in my program?

#include <stdio.h>
#include <stdlib.h>

int test() {
    const char* s = getenv("CNU");
    if (s != NULL)
        return 1;
    else
        return -1;
}

int main() {
    test();
    // some C code..
    return 0;
}
Commands that I use for coverity analysis:
cov-build --dir Cov.build gcc test.c
cov-analyze --dir Cov.build --aggressiveness-level high --enable-callgraph-metrics --all
report:
Analysis summary report:
------------------------
Files analyzed : 1
Total LoC input to cov-analyze : 10926
Functions analyzed : 2
Paths analyzed : 6
Time taken by analysis : 00:00:01
Defect occurrences found : 0
About CHECKED_RETURN:
https://ondemand.coverity.com/reference/7.6.1/en/coverity
The CHECKED_RETURN checker is a statistical checker - it looks for examples where the return value is checked, and if a statistically significant (configurable) threshold is reached, defects will be issued for locations where you fail to check the return value.
If you want it to always issue a defect whenever you fail to check the return value, then you need to add __coverity_always_check_return__(), as shown in the example in the documentation you linked:
int always_check_me(void) {
    __coverity_always_check_return__();
    return rand() % 2;
}

int main(int c, char **argv) {
    always_check_me(); #defect#checked_return
    // the statement above is a defect because the value is not checked
    cout << "Hello world" << endl;
}
For obvious reasons, you'll need to also create a function stub for this for the source to compile (also mentioned in the documentation). If you want to make the code live only for Coverity, you can guard it with #if __COVERITY__.
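One way to read that advice in practice, applied to the test() function from the question (a sketch only; the no-op fallback macro is my own convention, not something taken from Coverity's documentation):
#include <stdio.h>
#include <stdlib.h>

#ifdef __COVERITY__
/* Visible only to cov-build: declare the analysis primitive. */
void __coverity_always_check_return__(void);
#else
/* Ordinary builds: compile the call away. */
#define __coverity_always_check_return__() ((void) 0)
#endif

int test() {
    __coverity_always_check_return__();
    const char* s = getenv("CNU");
    if (s != NULL)
        return 1;
    else
        return -1;
}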
Yes, CHECKED_RETURN is a statistical checker. If you check the return value of test() in 10 places, then in an 11th place where you fail to check the return value of test(), Coverity will report a CHECKED_RETURN defect, and it shows statistics on how many call sites checked the value out of the total number of uses.
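In other words, as a hypothetical illustration building on the same test() (not output from the tool): once enough call sites check the value, the odd one out gets flagged.
int use_test_often(void) {
    if (test() > 0) { /* checked */ }
    if (test() > 0) { /* checked */ }
    if (test() > 0) { /* checked */ }
    /* ...enough checked calls to cross the checker's threshold... */
    test(); /* unchecked: the call CHECKED_RETURN would report */
    return 0;
}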

Memory leak in a Tcl wrapper

I read all I could find about memory management in the Tcl API, but haven't been able to solve my problem so far. I wrote a Tcl extension to access an existing application. It works, except for a serious issue: a memory leak.
I tried to reproduce the problem with minimal code, which you can find at the end of the post. The extension defines a new command, recordings, in namespace vtcl. The recordings command creates a list of 10000 elements, each element being a new command. Each command has data attached to it, which is the name of a recording. The name subcommand of each command returns the name of the recording.
I run the following Tcl code with tclsh to reproduce the problem:
load libvtcl.so
for {set ii 0} {$ii < 1000} {incr ii} {
    set recs [vtcl::recordings]
    foreach r $recs {rename $r ""}
}
The line foreach r $recs {rename $r ""} deletes all the commands at each iteration, which frees the memory of the piece of data attached to each command (I can see that in gdb). I can also see in gdb that the reference count of the variable recs goes to 0 at each iteration, so the contents of the list are freed. Nonetheless, I see the memory of the process running tclsh going up at each iteration.
I have no idea what else I could try. Help will be greatly appreciated.
#include <stdio.h>
#include <string.h>
#include <tcl.h>
static void DecrementRefCount(ClientData cd);
static int ListRecordingsCmd(ClientData cd, Tcl_Interp *interp, int objc,
Tcl_Obj *CONST objv[]);
static int RecordingCmd(ClientData cd, Tcl_Interp *interp, int objc,
Tcl_Obj *CONST objv[]);
static void
DecrementRefCount(ClientData cd)
{
Tcl_Obj *obj = (Tcl_Obj *) cd;
Tcl_DecrRefCount(obj);
return;
}
static int
ListRecordingsCmd(ClientData cd, Tcl_Interp *interp, int objc,
Tcl_Obj *CONST objv[])
{
char name_buf[20];
Tcl_Obj *rec_list = Tcl_NewListObj(0, NULL);
for (int ii = 0; ii < 10000; ii++)
{
static int obj_id = 0;
Tcl_Obj *cmd;
Tcl_Obj *rec_name;
cmd = Tcl_NewStringObj ("rec", -1);
Tcl_AppendObjToObj (cmd, Tcl_NewIntObj (obj_id++));
rec_name = Tcl_NewStringObj ("DM", -1);
snprintf(name_buf, sizeof(name_buf), "%04d", ii);
Tcl_AppendStringsToObj(rec_name, name_buf, (char *) NULL);
Tcl_IncrRefCount(rec_name);
Tcl_CreateObjCommand (interp, Tcl_GetString (cmd), RecordingCmd,
(ClientData) rec_name, DecrementRefCount);
Tcl_ListObjAppendElement (interp, rec_list, cmd);
}
Tcl_SetObjResult (interp, rec_list);
return TCL_OK;
}
static int
RecordingCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *CONST objv[])
{
Tcl_Obj *rec_name = (Tcl_Obj *)cd;
char *subcmd;
subcmd = Tcl_GetString (objv[1]);
if (strcmp (subcmd, "name") == 0)
{
Tcl_SetObjResult (interp, rec_name);
}
else
{
Tcl_Obj *result = Tcl_NewStringObj ("", 0);
Tcl_AppendStringsToObj (result,
"bad command \"",
Tcl_GetString (objv[1]),
"\"",
(char *) NULL);
Tcl_SetObjResult (interp, result);
return TCL_ERROR;
}
return TCL_OK;
}
int
Vtcl_Init(Tcl_Interp *interp)
{
#ifdef USE_TCL_STUBS
if (Tcl_InitStubs(interp, "8.5", 0) == NULL) {
return TCL_ERROR;
}
#endif
if (Tcl_PkgProvide(interp, "vtcl", "0.0.1") != TCL_OK)
return TCL_ERROR;
Tcl_CreateNamespace(interp, "vtcl", (ClientData) NULL,
(Tcl_NamespaceDeleteProc *) NULL);
Tcl_CreateObjCommand(interp, "::vtcl::recordings", ListRecordingsCmd,
(ClientData) NULL, (Tcl_CmdDeleteProc *) NULL);
return TCL_OK;
}
The management of the Tcl_Obj * reference counts looks absolutely correct, but I do wonder whether you're freeing all the other resources associated with a particular instance in your real code. It might also be something else entirely; your code is not the only thing in Tcl that allocates memory! Furthermore, the default memory allocator in Tcl does not actually return memory to the OS, but instead holds onto it until the process ends. Figuring out what is wrong can be tricky.
You can try doing a build of Tcl with the --enable-symbols=mem passed to configure. That makes Tcl build in an extra command, memory, which allows more extensive checking of memory management behaviour (it also does things like ensure that memory is never written to after it is freed). It's not enabled by default because it has a substantial performance hit, but it could well help you track down what's going on. (The memory info subcommand is where to get started.)
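For example, a typical way to get such a build (run from Tcl's unix build directory; the essential part is the configure switch):
./configure --enable-symbols=mem
make
# the resulting tclsh then has the extra command, e.g.:
#   memory info
#   memory validate on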
You could also try adding -DPURIFY to the CFLAGS when building; it completely disables the Tcl memory allocator (so memory checking tools like — commercial — Purify and — OSS — Electric Fence can get accurate information, instead of getting very confused by Tcl's high-performance thread-aware allocator) and may allow you to figure out what is going on.
I found where the leak is. In the function ListRecordingsCmd, I replaced the line
Tcl_AppendObjToObj (cmd, Tcl_NewIntObj (obj_id++));
with
Tcl_Obj *obj = Tcl_NewIntObj (obj_id++);
Tcl_AppendObjToObj (cmd, obj);
Tcl_DecrRefCount(obj);
The memory allocated to store the object id was not released: Tcl_NewIntObj returns an object with a reference count of zero, and Tcl_AppendObjToObj only copies its string representation without taking ownership, so the temporary object was never freed. The memory used by the tclsh process is now stable.
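For reference, an equivalent and slightly more conventional way to handle such a temporary inside ListRecordingsCmd (a sketch, not from the thread) is to hold a reference explicitly while it is in use:
Tcl_Obj *idObj = Tcl_NewIntObj(obj_id++);
Tcl_IncrRefCount(idObj);           /* take a reference for the duration of use     */
Tcl_AppendObjToObj(cmd, idObj);    /* copies the string rep; takes no ownership    */
Tcl_DecrRefCount(idObj);           /* drop the reference; the object is freed here */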

OpenCV image editing library for Windows 8 and Windows Phone 8

Is there any support for the OpenCV graphics library on Windows Phone 8 and Windows 8? I searched Google but didn't find any resource relating OpenCV to Windows Phone 8 / Windows 8. If any of you know more about this, please help me and provide some links to the library.
This is the latest information I got from the OpenCV team.
The OpenCV development team is working on a port for Windows RT. Here is the current development branch for WinRT (https://github.com/asmorkalov/opencv/tree/winrt). You can build it for ARM using Visual Studio Express for Windows 8 and the Platform SDK.
1. Open the Visual Studio development console.
2. Set up the environment for cross compilation with "C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_arm\vcvarsx86_arm.bat".
3. cd <opencv_source_dir>/platforms/winrt/
4. Run scripts/cmake_winrt.cmd
5. Run ninja
Alternatively, you can use nmake instead of ninja. You need to edit cmake_winrt.cmd and change the project generator from -GNinja to -G "NMake Makefiles". Only the algorithmic part of the library is supported now: no TBB, no UI, no video I/O.
Please check the URL below for more details.
http://answers.opencv.org/question/9847/opencv-for-windows-8-tablet/?answer=9851#post-id-9851
By windows-8, I guess you mean WinRT? AFAIK, there is no official port to WinRT. You need to compile it yourself, for instance as a Win8 Store DLL, so that you can reference it from a Win8 Store Application.
Just start with opencv-core, then add the libs you need one by one, because not all the components will compile (for instance, opencv-highgui is highly dependent on the Windows API, which is not fully compatible with Win8 Store Apps).
You'll also need to implement yourself some Win32 functions that OpenCV uses but that are not accessible from a Win8 Store App, like GetSystemInfo(), GetTempPathA(), GetTempFileNameA() and everything related to thread local storage (TLS).
I've been able to use a small subset of OpenCV in WinRT by compiling opencv_core, opencv_imgproc and zlib as three separate static libs. I've added another one, called opencv_winrt, which contains only the following two files:
opencv_winrt.h
#pragma once
#include "combaseapi.h"
void WINAPI GetSystemInfo(
_Out_ LPSYSTEM_INFO lpSystemInfo
);
DWORD WINAPI GetTempPathA(
_In_ DWORD nBufferLength,
_Out_ char* lpBuffer
);
UINT WINAPI GetTempFileNameA(
_In_ const char* lpPathName,
_In_ const char* lpPrefixString,
_In_ UINT uUnique,
_Out_ char* lpTempFileName
);
DWORD WINAPI TlsAlloc();
BOOL WINAPI TlsFree(
_In_ DWORD dwTlsIndex
);
LPVOID WINAPI TlsGetValue(
_In_ DWORD dwTlsIndex
);
BOOL WINAPI TlsSetValue(
_In_ DWORD dwTlsIndex,
_In_opt_ LPVOID lpTlsValue
);
void WINAPI TlsShutdown();
# define TLS_OUT_OF_INDEXES ((DWORD)0xFFFFFFFF)
opencv_winrt.cpp
#include "opencv_winrt.h"
#include <vector>
#include <set>
#include <mutex>
#include "assert.h"
void WINAPI GetSystemInfo(LPSYSTEM_INFO lpSystemInfo)
{
GetNativeSystemInfo(lpSystemInfo);
}
DWORD WINAPI GetTempPathA(DWORD nBufferLength, char* lpBuffer)
{
return 0;
}
UINT WINAPI GetTempFileNameA(const char* lpPathName, const char* lpPrefixString, UINT uUnique, char* lpTempFileName)
{
return 0;
}
// Thread local storage.
typedef std::vector<void*> ThreadLocalData;
static __declspec(thread) ThreadLocalData* currentThreadData = nullptr;
static std::set<ThreadLocalData*> allThreadData;
static DWORD nextTlsIndex = 0;
static std::vector<DWORD> freeTlsIndices;
static std::mutex tlsAllocationLock;
DWORD WINAPI TlsAlloc()
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
// Can we reuse a previously freed TLS slot?
if (!freeTlsIndices.empty())
{
DWORD result = freeTlsIndices.back();
freeTlsIndices.pop_back();
return result;
}
// Allocate a new TLS slot.
return nextTlsIndex++;
}
_Use_decl_annotations_ BOOL WINAPI TlsFree(DWORD dwTlsIndex)
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
assert(dwTlsIndex < nextTlsIndex);
assert(find(freeTlsIndices.begin(), freeTlsIndices.end(), dwTlsIndex) == freeTlsIndices.end());
// Store this slot for reuse by TlsAlloc.
try
{
freeTlsIndices.push_back(dwTlsIndex);
}
catch (...)
{
return false;
}
// Zero the value for all threads that might be using this now freed slot.
for each (auto threadData in allThreadData)
{
if (threadData->size() > dwTlsIndex)
{
threadData->at(dwTlsIndex) = nullptr;
}
}
return true;
}
_Use_decl_annotations_ LPVOID WINAPI TlsGetValue(DWORD dwTlsIndex)
{
ThreadLocalData* threadData = currentThreadData;
if (threadData && threadData->size() > dwTlsIndex)
{
// Return the value of an allocated TLS slot.
return threadData->at(dwTlsIndex);
}
else
{
// Default value for unallocated slots.
return nullptr;
}
}
_Use_decl_annotations_ BOOL WINAPI TlsSetValue(DWORD dwTlsIndex, LPVOID lpTlsValue)
{
ThreadLocalData* threadData = currentThreadData;
if (!threadData)
{
// First time allocation of TLS data for this thread.
try
{
threadData = new ThreadLocalData(dwTlsIndex + 1, nullptr);
std::lock_guard<std::mutex> lock(tlsAllocationLock);
allThreadData.insert(threadData);
currentThreadData = threadData;
}
catch (...)
{
if (threadData)
delete threadData;
return false;
}
}
else if (threadData->size() <= dwTlsIndex)
{
// This thread already has a TLS data block, but it must be expanded to fit the specified slot.
try
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
threadData->resize(dwTlsIndex + 1, nullptr);
}
catch (...)
{
return false;
}
}
// Store the new value for this slot.
threadData->at(dwTlsIndex) = lpTlsValue;
return true;
}
// Called at thread exit to clean up TLS allocations.
void WINAPI TlsShutdown()
{
ThreadLocalData* threadData = currentThreadData;
if (threadData)
{
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
allThreadData.erase(threadData);
}
currentThreadData = nullptr;
delete threadData;
}
}
And I modified the file cvconfig.h: I commented out every #define except PACKAGE* and VERSION, and added #include "opencv_winrt.h" at the end.
Just a hint - there is a C# wrapper for OpenCV called EmguCV (http://www.emgu.com/wiki/index.php/Main_Page). Looking at the forum posts, I see there is some activity towards using it on Windows 8, but it's hard to tell whether it works now, since the posts reporting issues are quite old. I'd suggest you just give it a try and see if this C# wrapper is able to run on Windows Phone 8; I think it should definitely run on Windows 8.

Redirecting output of an external application started with GLib

I'm trying to use Vala to start an external application using GLib with spawn_command_line_sync().
According to the documentation (http://valadoc.org/#!api=glib-2.0/GLib.Process.spawn_sync) you can pass a string to store the output of the external application.
While this works fine when starting a script which prints a couple of lines, I need to call a program which will print the content of a binary file.
(for example "cat /usr/bin/apt-get")
Is there any way I can receive the output of the external program not in a string, but in a DataStream or something like that?
I'm planning to write the output of the external program to a file, so just calling "cat /usr/bin/apt-get > outputfile" would be an alternative (not as nice), but it doesn't seem to work.
Anyway, I would prefer to get some kind of output stream.
I would appreciate any help.
Code I'm using:
using GLib;

static void main(string[] args) {
    string execute = "cat /usr/bin/apt-get";
    string output = "out";

    try {
        GLib.Process.spawn_command_line_sync(execute, out output);
    } catch (SpawnError e) {
        stderr.printf("spawn error!");
        stderr.printf(e.message);
    }

    stdout.printf("Output: %s\n", output);
}
GLib.Process.spawn_async_with_pipes will let you do just that. It spawns the processes and returns a file descriptor for each of stdout, stderr, and stdin. There's a sample of code in the ValaDoc on how to set up IOChannels to monitor the output.
Thanks for that; I must have overlooked that spawn_async_with_pipes() returns ints (file descriptors) rather than strings.
Is there anything wrong with doing it this way (besides the buffer size of 1)?
using GLib;

static void main(string[] args) {
    string[] argv = {"cat", "/usr/bin/apt-get"};
    string[] envv = Environ.get();
    int child_pid;
    int child_stdin_fd;
    int child_stdout_fd;
    int child_stderr_fd;

    try {
        Process.spawn_async_with_pipes(
            ".",
            argv,
            envv,
            SpawnFlags.SEARCH_PATH,
            null,
            out child_pid,
            out child_stdin_fd,
            out child_stdout_fd,
            out child_stderr_fd);
    } catch (SpawnError e) {
        stderr.printf("spawn error!");
        stderr.printf(e.message);
        return;
    }

    FileStream filestream1 = FileStream.fdopen(child_stdout_fd, "r");
    FileStream filestream2 = FileStream.open("./stdout", "w");
    uint8 buf[1];
    size_t t;
    while ((t = filestream1.read(buf, 1)) != 0) {
        filestream2.write(buf, 1);
    }
}
