In a first phase, I collect a list of constraints. Then I would like to store this "session", i.e. all the constraints as well as all the associated variables, in a file so that I can, in a second phase, read the constraints back and assert them, or even negate some of them before asserting.
What is the best way (fast and reliable) to store such a "session" in a file and read it back? Would the Z3_parse_smtlib2_file() API be the right way? I have tried the Z3_open_log() API, but I can't find an API to read the log file generated by Z3_open_log(). And what about z3_log_replay()? This API does not seem to be exposed yet.
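For reference, the SMT-LIB2 round-trip I have in mind would look roughly like this (an untested sketch: the exact signature of Z3_parse_smtlib2_file varies between Z3 versions, and save_session/load_session are just illustrative names):
#include <stdio.h>
#include "z3.h"
/* dump the constraints (conjoined into a single formula) as an SMT-LIB2 benchmark */
void save_session(Z3_context ctx, Z3_ast formula, const char * path)
{
    Z3_string s = Z3_benchmark_to_smtlib_string(ctx, "session", "", "unknown", "",
                                                0, NULL, formula);
    FILE * f = fopen(path, "w");
    fputs(s, f);
    fclose(f);
}
/* read the benchmark back; older versions return a single Z3_ast
   (the conjunction of the assertions), which can be negated before asserting */
void load_session(Z3_context ctx, Z3_solver solver, const char * path, int negate)
{
    Z3_ast a = Z3_parse_smtlib2_file(ctx, path, 0, NULL, NULL, 0, NULL, NULL);
    if (negate) {
        a = Z3_mk_not(ctx, a);
    }
    Z3_solver_assert(ctx, solver, a);
}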
Thanks in advance.
AG
The log file created by Z3_open_log() can be replayed with Z3.exe (the standalone interpreter, not the library) through the command-line option /log myfile. As of today, I haven't seen any API in the Z3 library that allows such a replay. For the time being, my understanding is that the replay is intended for debugging analysis.
However, you can hack the library (just expose the z3_replayer class in z3_replayer.h) and use it to replay any log file; it is quite easy. The source code of my little proof of concept is given below, and it works fine as far as I can tell. I think it is very nice to be able to do this, because sometimes I need to replay a session for debugging purposes. It is good to be able to replay it from a file rather than from my whole program, which is a bit heavy.
Any feedback would be very welcome. Also, I would be interested to know whether this functionality could be integrated into the lib, or not.
AG.
#include <fstream>
#include <iostream>
#include "z3.h"              // for Z3_context
#include "api/z3_replayer.h" // only available after exposing z3_replayer (see above)

int main(int argc, char * argv[])
{
    const char * filename = argv[1];
    std::ifstream in(filename);
    if (in.bad() || in.fail()) {
        std::cerr << "Error: failed to open file: " << filename << "\n";
        exit(EXIT_FAILURE);
    }
    // replay the API calls recorded by Z3_open_log()
    z3_replayer r(in);
    r.parse();
    // object 0 is the context created during the replayed session
    Z3_context ctx = reinterpret_cast<Z3_context>(r.get_obj(0));
    check(ctx, Z3_L_TRUE); // this function is taken from the C examples
    return 0;
}
This question came to mind yesterday while I was thinking about server logging.
Normally, we open a terminal connected to a local computer or remote server, run an executable, and print (printf, cout) some debug/log information to the terminal.
But for processes/executables/scripts running on the server that are not connected to a terminal, what are the standard input and output?
For example:
Suppose I have a crontab task that runs a program on the server many times a day. If I write something like cout << "blablabla" << endl; in the program, what is going to happen? Where will that output go?
Another example I came up with: if I write a CGI program (in C or C++) for, let's say, an Apache web server, what are the standard input and output of my CGI program? (According to this C++ CGI tutorial, I guess the standard input and output of the CGI program are somehow redirected to the Apache server, because it uses cout to output the HTML contents rather than returning them.)
I've read What is "standard input"? before asking; it told me that standard input isn't necessarily tied to the keyboard, and standard output isn't necessarily tied to a terminal/console/screen.
OS is Linux.
The standard input and standard output (and standard error) streams can point to basically any I/O device. This is commonly a terminal, but it can also be a file, a pipe, a network socket, a printer, etc. What exactly those streams direct their I/O to is usually determined by the process that launches your process, be that a shell or a daemon like cron or apache, but a process can also redirect those streams itself if it would like.
I'll use Linux as an example, but the concepts are similar on most other OSes. On Linux, the standard input and standard output streams are represented by file descriptors 0 and 1. The macros STDIN_FILENO and STDOUT_FILENO are just for convenience and clarity. A file descriptor is just a number that matches up to a file description maintained by the OS kernel, which tells the kernel how to write to that device. That means that from a user-space process's perspective, you write to pretty much anything the same way: write(some_file_descriptor, some_string, some_string_length) (higher-level I/O functions like printf or cout are just wrappers around one or more calls to write). To the process, it doesn't matter what type of device some_file_descriptor represents. The OS kernel will figure that out for you and pass your data to the appropriate device driver.
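For example, this tiny program writes to standard output without knowing or caring what kind of device it is connected to:
#include <unistd.h>  // write, STDOUT_FILENO
#include <cstring>   // strlen

int main()
{
    // the same call works whether stdout is a terminal, a file, or a pipe:
    // the kernel routes the bytes to the appropriate device driver
    const char msg[] = "hello\n";
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}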
The standard way to launch a new process is to call fork to duplicate the parent process, and then later to call one of the exec family of functions in the child process to start executing some new program. In between, it will often close the standard streams it inherited from its parent and open new ones to redirect the child process's output somewhere new. For instance, to have the child pipe its output back to the parent, you could do something like this in C++:
#include <unistd.h>    // pipe, fork, dup2, close, read, execlp
#include <sys/types.h> // pid_t
#include <iostream>
#include <string>
#include <cstdlib>

int main()
{
    // create a pipe for the child process to use for its
    // standard output stream
    int pipefds[2];
    pipe(pipefds);

    // spawn a child process that's a copy of this process
    pid_t pid = fork();

    if (pid == 0)
    {
        // we're now in the child process

        // we won't be reading from this pipe, so close its read end
        close(pipefds[0]);

        // we won't be reading anything
        close(STDIN_FILENO);

        // close the stdout stream we inherited from our parent
        close(STDOUT_FILENO);

        // make stdout's file descriptor refer to the write end of our pipe
        dup2(pipefds[1], STDOUT_FILENO);

        // we don't need the old file descriptor anymore.
        // stdout points to this pipe now
        close(pipefds[1]);

        // replace this process's code with another program
        execlp("ls", "ls", nullptr);
    } else {
        // we're still in the parent process

        // we won't be writing to this pipe, so close its write end
        close(pipefds[1]);

        // now we can read from the pipe that the
        // child is using for its standard output stream
        std::string read_from_child;
        ssize_t count;
        constexpr size_t BUF_SIZE = 100;
        char buf[BUF_SIZE];
        while ((count = read(pipefds[0], buf, BUF_SIZE)) > 0) {
            std::cout << "Read " << count << " bytes from child process\n";
            read_from_child.append(buf, count);
        }
        std::cout << "Read output from child:\n" << read_from_child << '\n';
        return EXIT_SUCCESS;
    }
}
Note: I've omitted error handling for clarity
This example creates a child process and redirects its output to a pipe. The program run in the child process (ls) can treat the standard output stream just as it would if it were referencing a terminal (though ls changes some behaviors if it detects its standard output isn't a terminal).
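(That detection is typically done with isatty; a minimal sketch of the same check:)
#include <unistd.h>  // isatty, STDOUT_FILENO
#include <iostream>

int main()
{
    // ls-style check: is standard output a terminal or a redirection?
    if (isatty(STDOUT_FILENO))
        std::cout << "stdout is a terminal\n";
    else
        std::cout << "stdout is redirected (a file, a pipe, ...)\n";
    return 0;
}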
This sort of redirection can also be done from a terminal. When you run a command, you can use the redirection operators to tell your shell to redirect that command's standard streams to some location other than the terminal. For instance, here's a convoluted way to copy a file from one machine to another using an sh-like shell:
gzip < some_file | ssh some_server 'zcat > some_file'
This does the following:
create a pipe
run gzip redirecting its standard input stream to read from "some_file" and redirecting its standard output stream to write to the pipe
run ssh and redirect its standard input stream to read from the pipe
on the server, run zcat with its standard input redirected from the data read from the ssh connection and its standard output redirected to write to "some_file"
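The same operators apply to the crontab example from the question: if the crontab entry doesn't redirect the command's streams, cron picks their destination itself (traditionally, it mails any output to the job's owner). A redirected entry might look like this (paths are hypothetical):
0 * * * * /home/me/myprog >> /home/me/myprog.log 2>&1
Here stdout is appended to a log file, and 2>&1 sends stderr to the same place.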
I am trying to detect objects using the ssd_mobilenet_v1_coco model. My own trained .pb model file is used for detection. After a successful build, I clicked the run button and got the error below.
"Not found: Op type not registered 'NonMaxSuppressionV2' in binary running on IPhone. Make sure the Op and Kernel are registered in the binary running in this process. "
I can execute and launch the iOS app with an already-trained .pb model file from the link below:
https://github.com/JieHe96/iOS_Tensorflow_ObjectDetection_Example
Please suggest a solution to fix the above issue and launch the iOS app.
The problem is exactly what the error says: the operation NonMaxSuppressionV2 is used by the model (.pb file), but it is not registered with the TensorFlow library when it is compiled for the iOS platform.
This is because TensorFlow excludes many operations (especially ones usually required only for training) on iOS/Android platforms so that the compiled libraries stay small.
To rectify the problem, you can do the following:
Update the ops_to_register.h file.
Add "NonMaxSuppressionV2Op<CPUDevice>" to the kNecessaryOpKernelClasses array (don't forget to add a comma if you're adding it in the middle of the array).
Like this:
constexpr const char* kNecessaryOpKernelClasses[] = {
  "BinaryOp< CPUDevice, functor::add<float>>",
  "BinaryOp< CPUDevice, functor::add<int32>>",
  "AddNOp< CPUDevice, float>",
  "NonMaxSuppressionOp<CPUDevice>",
  // Added NonMaxSuppressionV2Op
  "NonMaxSuppressionV2Op<CPUDevice>",
  ...
  // Other operations
  ...
};
Also add isequal(op, "NonMaxSuppressionV2") to constexpr inline bool ShouldRegisterOp(const char op[]).
Like this:
constexpr inline bool ShouldRegisterOp(const char op[]) {
  return false
      || isequal(op, "Add")
      || isequal(op, "NoOp")
      || isequal(op, "NonMaxSuppression")
      // Added NonMaxSuppressionV2
      || isequal(op, "NonMaxSuppressionV2")
      || isequal(op, "Pack")
      // other operations
      ...
      ;
}
After you modify this file, re-run everything from scratch as described in the quick-start section of the repo's readme.
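If I remember that repo's setup correctly, re-running from scratch means rebuilding the TensorFlow iOS static libraries with the makefile scripts before rebuilding the app, along the lines of (path assumes a standard TensorFlow checkout):
tensorflow/contrib/makefile/build_all_ios.sh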
If you are still missing some other operations, repeating the same procedure for them will work too.
Hope that helped.
I would like to know which library I should use to manipulate mote LEDs.
I tried to use the Contiki LED module, but I don't know which header file to include.
Can someone help me?
Look in platform-conf.h in e.g. contiki/platform/sky/. There you'll find the definitions you can use, e.g. LEDS_RED.
Then, in your application,
#include "leds.h"
...
leds_on(LEDS_RED);
leds_off(LEDS_GREEN);
leds_toggle(LEDS_RED);
leds_off(LEDS_ALL);
You can look in leds-arch.h to see how the implementation sets/gets the LED state.
E.g. in contiki-2.7/core/dev you can find leds.c and leds.h.
leds.c implements the functions for accessing the LEDs, e.g. leds_on(...) and leds_off(...).
leds.h specifies, inter alia, the LED names and their numeric values, e.g. #define LEDS_GREEN 1.
To your question: #include "dev/leds.h"
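For completeness, here is a minimal (untested) Contiki process that uses that header to toggle a LED once per second; the process name is made up for the example:
#include "contiki.h"
#include "dev/leds.h"

PROCESS(blink_process, "LED blink example");
AUTOSTART_PROCESSES(&blink_process);

PROCESS_THREAD(blink_process, ev, data)
{
  static struct etimer et;
  PROCESS_BEGIN();
  while(1) {
    /* wait one second, then toggle the red LED */
    etimer_set(&et, CLOCK_SECOND);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));
    leds_toggle(LEDS_RED);
  }
  PROCESS_END();
}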
I have a situation where the disk becomes full and my program hangs because fflush is used on stdout. I have written a small piece of code to mimic the problem. Redirect this program's stdout to a file on a disk that is already full.
#include <iostream>
#include <cstdio>   // fflush
#include <cstring>  // strerror
#include <cerrno>
#include <cstdlib>
using namespace std;
int main()
{
    while (1) {
        cout << "a big data to be written here";
        int ret = fflush(stdout);
        if (ret != 0) {
            cerr << "got error: " << strerror(errno) << endl;
            exit(1);
        }
    }
}
And this code hangs forever. I tried to use fcntl with O_NONBLOCK on stdout; somehow even that doesn't work. Please note I cannot use the write system call here, even though it avoids this kind of hang when the disk is full: my system widely uses library calls in many places, so using the write system call only in this one place would produce interleaved output. Can anyone suggest how to avoid the hang?
I have also tried fsync and fdatasync; they hang the same way.
Update: fcntl fixed this problem, even with the cout and fflush combination.
You are mixing C++ stream I/O and C stdio functions. Instead of using fflush(stdout), use cout.flush() if needed. Instead of checking the return code from fflush, you should instead check cout.good() or use cout.rdstate(). I assume the cout operations are failing but fflush is not the part seeing the failure.
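A sketch of that suggestion applied to the code from the question (whether a full disk actually sets the stream's error state depends on the underlying streambuf, so treat this as an outline rather than a guaranteed fix):
#include <iostream>
#include <cstdlib>
using namespace std;
int main()
{
    while (true) {
        cout << "a big data to be written here";
        cout.flush();           // stay within the C++ stream layer
        if (!cout.good()) {     // badbit/failbit indicate a write failure
            cerr << "write to stdout failed" << endl;
            return EXIT_FAILURE;
        }
    }
}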
Do any editors honor C #line directives with regard to goto-line features?
Context:
I'm working on a code generator and need to jump to a line of the output, but the line is specified relative to the #line directives I'm adding.
I can drop them, but then finding the input line is an even worse pain.
If the editor is scriptable it should be possible to write a script to do the navigation. There might even be a Vim or Emacs script that already does something similar.
FWIW, when I was writing a lot of Bison/Flex, I wrote a Zeus Lua macro script that attempted to do something similar (i.e. move from the input file to the corresponding line of the output file by searching for the #line marker).
For anyone who might be interested, here is that particular macro script.
#line directives are normally inserted by the preprocessor, not into source code, so editors won't usually honor them if the file extension is .c.
However, the normal file extension for preprocessed files is .i (or .gch for precompiled headers), so you might try using that and see what happens.
I've used the following in a header file occasionally to produce clickable items in the VC6 and recent VS (2003+) compiler output window.
Basically, this exploits the fact that items in the compiler output are essentially parsed for "PATH(LINENUM): message".
It relies on the Microsoft compiler's #pragma message directive, wrapped here in a remind macro.
This isn't exactly what you asked... but it might be generally helpful in arriving at something you can get the compiler to emit that some editors might honor.
// The following definitions will allow you to insert
// clickable items in the output stream of the Microsoft compiler.
// The error and warning variants will be reported by the
// IDE as actual warnings and errors... which means you can make
// them occur in the task list.
// In theory, the coding standards could be checked to some extent
// in this way and reminders that show up as warnings or even
// errors inserted...
#define strify0(X) #X
#define strify(X) strify0(X)
#define remind(S) message(__FILE__ "(" strify( __LINE__ ) ") : " S)
// example usage
#pragma remind("warning: fake warning")
#pragma remind("error: fake error")
I haven't tried it in a while but it should still work.
Use sed or a similar tool to translate the #line directives into something the compiler doesn't interpret, so you get C error messages on the real lines but still have a reference to the original input file nearby.
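For example, turning the directives into comments keeps the original location visible right next to the generated code (a hypothetical one-liner; adjust the file names to your generator):
sed 's|^#line|// #line|' generated.c > generated.clean.c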