I'm solving a large optimization problem with Z3, which is not likely to reach the optimum in a reasonable amount of time. Is there any way I can get intermediate solutions? Perhaps set an internal timeout so it gives me the best solution it has found so far?
Thanks,
Ofer
You can interrupt Z3 from the API directly or by setting a timeout. From the text front-end you can interrupt it (Ctrl+C) or set a timeout. It returns the upper/lower bounds and a model for the best bound found so far.
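For example, from the Python API you can set a timeout on an Optimize object and then read off the bounds and, if available, the model afterwards. The following is only a minimal sketch with a made-up toy objective and an assumed 60-second budget:

from z3 import *

opt = Optimize()
x, y = Ints('x y')
opt.add(x >= 0, y >= 0, x + y <= 100)
h = opt.maximize(x + 2*y)

opt.set("timeout", 60000)           # assumed budget of 60 s, in milliseconds

r = opt.check()                     # sat if optimality was proved, unknown if the timeout hit
print(opt.lower(h), opt.upper(h))   # best lower/upper bounds found so far
if r == sat:
    print(opt.model())              # model for the optimum

On a timeout, check() typically returns unknown, but the lower/upper queries above still report the best bounds reached so far.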
Can anyone help me run a task at a high execution rate (around 6 kHz)?
I need to do an SPI transmission at this frequency (the task code is already written). I can achieve over 7 kHz without any control (just one task with no timing control, running all the time), so the execution time itself is not the problem.
The problem is that the tick rate (TICK_RATE) has a resolution of milliseconds, which is too coarse for what I need. Doing some research, I found that increasing the tick rate to get a finer resolution would cause unwanted overhead.
So the way to go would be to use an ISR. Is that right? I couldn't find an example of how to do that, and I have almost no experience with FreeRTOS.
I'm using the Toradex FreeRTOS version on a Toradex IMX7D.
Thanks in advance.
Are you asking how to do this using FreeRTOS? In that case the FreeRTOS book has examples, as does the website (that is just one way of doing it). However, as you point out yourself, at this frequency you really need to be doing this in an interrupt, in which case you should review the hardware manual to see what facilities the hardware has for DMA'ing data to peripherals, etc.
You need to describe your task more clearly. What MCU? Is the transmission bidirectional? Do you have DMA?
You can try using a hardware timer of your MCU for the timing and, in its ISR, calling
xSemaphoreGiveFromISR().
In the RTOS task, block on the semaphore with
xSemaphoreTake( xSemaphore, LONG_TIME ) == pdTRUE
I resolved it based on the example in examples/imx7_colibri_m4/driver_examples/gpt (Toradex FreeRTOS version).
I just used GPTB derived from the ccmRootmuxGptOsc24m clock. This is important because the Linux kernel was hanging on startup when the default Pfd0 clock was used.
To get the frequency I needed, I just divided the GPTB clock frequency by the desired frequency and passed the result to GPT_SetOutputCompareValue().
I'm using the Z3 SMT solver through its Python API. I have the source code, and I would like some indication, any at all, that the process is running. Is it possible to use some verbose option or anything else to get an idea of what the solver is currently doing? I know the algorithm behind it, but I want to visualize, even with printf-style output, what is happening in the code.
Thanks!
You can use:
set_option(verbose=10)
to obtain verbose output on standard error.
After a solver has finished, you can get statistics using the
statistics()
method.
In debug mode you can use
enable_trace("arith")
to get low level traces (here given with "arith" as an example tag).
This is intended for debugging only.
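Put together, a minimal end-to-end sketch (with a throwaway constraint just to show where the calls go) might look like:

from z3 import *

set_option(verbose=10)    # verbose progress messages (if any) go to standard error

x = Int('x')
s = Solver()
s.add(x > 0, x < 10)
print(s.check())

print(s.statistics())     # counters such as decisions, conflicts, memory, ...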
Say I have sensor data from hardware that samples every 100 ms. A switch is automatically turned on based on some set of features of the device, and then turned off again based on the values of those same features.
I have built a model that looks only at the sensor data in the vicinity (in time) of the turn-on trigger point (labelled as 1) and likewise around the turn-off point (labelled as 0).
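Roughly, the windowing looks like the following simplified sketch (made-up array names; the window of 10 samples on each side is only an example):

import numpy as np

SAMPLE_MS = 100    # sensor sampling period
HALF_WINDOW = 10   # example: 10 samples = 1 s of context on each side

def windows_around(sensor, trigger_indices, label):
    # Cut fixed-size windows of sensor data centred on each trigger point.
    X, y = [], []
    for i in trigger_indices:
        lo, hi = i - HALF_WINDOW, i + HALF_WINDOW
        if lo >= 0 and hi <= len(sensor):
            X.append(sensor[lo:hi])
            y.append(label)
    return np.array(X), np.array(y)

# on_idx / off_idx come from the recorded switch events
# X_on,  y_on  = windows_around(sensor, on_idx,  label=1)
# X_off, y_off = windows_around(sensor, off_idx, label=0)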
But the classifier performs very poorly. It detects the turn-on point seemingly at random, never detects the turn-off, and sometimes misses an on/off cycle entirely.
Any suggestions on how to attack these kinds of problems?
There could be a number of reasons for the failure. I do not know what hardware or software is being employed for the problem, but here are a few possibilities that come to mind:
Timing - Perhaps there is some delay in your system
Algorithm - Perhaps there is something that is incorrect in your code that is causing random behavior in the system.
Wiring - Perhaps there is something that is sending the wrong signals.
The best way to attack these problems is to try and break the system down and test each individual component. This way, you may be able to diagnose the fault and resolve the issue.
Good luck!
For security reasons I have a feeling that this testing should be done server-side. Nonetheless, that would be rather taxing on the server, right? Given the gear and buffs a player is wearing, they will have a higher movement speed, so each time they move I would need to calculate that new constant and see if their movement is legitimate (I'm using TCP, so I don't need to worry about lost or unordered packets). I realize I could instead just save the last movement speed and only recalculate it when they change something affecting their speed, but even then that's another check.
Another idea I had is that the server randomly samples the data the client sends, verifies it, and gives each client a trust rating. A low enough trust rating would mean every message from that client gets inspected and all of their actions are logged more verbosely. I would then know they're hacking by inspecting the logs and could ban/suspend them, as well as undo any benefits they may have spread around through hacking.
Any advice is appreciated, thank you.
Edit: I just realized there's also the case where a hacker could send tiny movements (each within the capability of their regular speed) in very rapid succession. Each individual movement alone would be legitimate, but the cumulative effect would be speed hacking. What are some ways around this?
The standard way to deal with this problem is to have the server calculate all movement. The only things that the clients should send to the server are commands, e.g. "move left"; the server should then calculate how fast the player moves, etc., and finally send the updated position back to the client.
If you leave any calculation at all on the client, the chances are that sooner or later someone will find a way to cheat.
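A bare-bones sketch of that shape (hypothetical Player type, assumed 20 Hz tick; a real game loop will differ in the details):

TICK_SECONDS = 0.05   # assumed 20 Hz server tick

class Player:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.speed = 3.0        # units/second, recomputed server-side from gear and buffs
        self.pending = (0, 0)   # last movement intent received this tick

DIRECTIONS = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}

def handle_command(player, command):
    # Clients only send intent ("move left"), never positions.
    player.pending = DIRECTIONS.get(command, (0, 0))

def tick(players):
    # The server alone decides how far each player actually moved this tick.
    for p in players:
        dx, dy = p.pending
        p.x += dx * p.speed * TICK_SECONDS
        p.y += dy * p.speed * TICK_SECONDS
        p.pending = (0, 0)
    # ...then broadcast the authoritative positions back to the clients.

Because the distance moved is a function of server time rather than of how many packets the client sends, this also covers the "many tiny movements" case from the edit.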
[...] testing should be done server side. Nonetheless, that would be rather taxing on the server, right?
Nope. This is the way to do it. It's the only way to do it. All talk of checking trust or whatever is inherently flawed, one way or another.
If you're letting the player send positions:
Check where someone claims they are.
Compare that to where they were a short while ago. Allow a tiny bit of deviation to account for network lag.
If they're moving too quickly, reposition them somewhere more reasonable. Small errors may be due to long periods of lag, so clients should use interpolation to smooth out these corrections.
If they're moving far too quickly, disconnect them. And check for bugs in your code.
Don't forget to handle legitimate traversals over long distances, e.g. teleports. (A rough sketch of these checks follows below.)
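A rough sketch of those checks, assuming the clients do send positions (the tolerance and hard-limit factors are arbitrary placeholders to tune):

import math

ALLOWED_SLACK = 1.10   # assumed: 10% tolerance for lag and jitter
HARD_LIMIT = 3.0       # assumed: beyond this factor, treat it as cheating

def validate_move(old_pos, new_pos, max_speed, dt, teleporting=False):
    # Returns the position the server accepts, or None to disconnect the client.
    if teleporting:
        return new_pos                      # legitimate long-distance traversal
    dist = math.dist(old_pos, new_pos)
    budget = max_speed * dt                 # how far they could have moved since the last update
    if dist <= budget * ALLOWED_SLACK:
        return new_pos                      # plausible move
    if dist <= budget * HARD_LIMIT:
        # Too fast: snap them back to somewhere more reasonable along the same direction.
        scale = budget / dist
        return (old_pos[0] + (new_pos[0] - old_pos[0]) * scale,
                old_pos[1] + (new_pos[1] - old_pos[1]) * scale)
    return None                             # far too fast: disconnect, and check your own code

Because dt is the elapsed server time since the last accepted update, a burst of tiny movements still shares one distance budget, which addresses the edit in the question.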
The way around this is that all action is done on the server. Never trust any information that comes from the client. If anybody actually plays your game, somebody will reverse-engineer the communication to the server and figure out how to take advantage of it.
You can't assign a random trust rating, because cautious cheaters will cheat only when they really need to. That gives them a considerable advantage with a low chance of being spotted cheating.
And, yes, this means you can't get by with a low-grade server, but there's really no other method of preventing client-side cheating.
If you are developing in a language that has access to Windows API function calls, I have found from my own studies of speed hacking that you can easily identify a speed hacker by calling two functions and comparing their results.
timeGetTime
and...
GetTickCount
Both functions return the number of milliseconds since the system started. However, timeGetTime is much more accurate than GetTickCount: timeGetTime is accurate to about 1 ms, whereas GetTickCount is only accurate to around 50 ms.
Even though there is a small lag between these two functions, if you turn on a speed hacking application (pick your poison), you should see a very large difference between the two result sets, sometimes even up to a couple of seconds. The difference is very noticeable.
Write a simple application that reports the GetTickCount and timeGetTime results, run it without the speed hacking application, and leave it running. Compare the results and display the difference; you should see only a very small difference between the two. Then, with your application still running, turn on the speed hacking application and you will see a very large difference between the two result sets.
The trick is figuring out what threshold constitutes suspicious activity.
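If the client is written in something other than C/C++, the same comparison can still be made. For instance, a Python sketch via ctypes (Windows only; the 500 ms threshold is just a guess you would tune):

import ctypes, time

winmm = ctypes.windll.winmm
kernel32 = ctypes.windll.kernel32
winmm.timeGetTime.restype = ctypes.c_uint32
kernel32.GetTickCount.restype = ctypes.c_uint32

def drift():
    # Both calls return milliseconds since the system started; compare the two clocks.
    return int(winmm.timeGetTime()) - int(kernel32.GetTickCount())

baseline = drift()
while True:
    time.sleep(1.0)
    delta = drift() - baseline
    if abs(delta) > 500:   # assumed threshold; tune it against real clients
        print("suspicious clock divergence: %d ms" % delta)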
In regards to operating system concepts: can a process have two working sets, one that represents data and another that represents code?
A "Working Set" is a term associated with Virtual Memory Manangement in Operating systems, however it is an abstract idea.
A working set is just the concept that there is a set of virtual memory pages that the application is currently working with and other pages it isn't working with. Any page that is currently being used by the application is by definition part of the 'working set', so it's impossible to have two.
Operating systems often do distinguish between code and data in a process using page permissions and memory protection, but this is a different concept from a "working set".
This depends on the OS.
But on common OSes like Windows there is no real difference between data and code, so no, it can't split up its working set into data and code.
As you know, the working set is the set of pages that a process needs to have in primary store to avoid thrashing. If some of these are code, and others data, it doesn't matter - the point is that the process needs regular access to these pages.
If you want to subdivide the working set into code and data and possibly other categorizations, to try to model what pages make up the working set, that's fine, but the working set as a whole is still all the pages needed, regardless of how these pages are classified.
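To make the model concrete, here is a tiny sketch of the classic formulation: the working set at time t is whatever pages were referenced in the last tau references (the reference string below is made up):

def working_set(reference_string, t, tau):
    # Pages referenced in the window (t - tau, t]; code and data pages
    # are treated identically, since membership only depends on recent use.
    window = reference_string[max(0, t - tau + 1): t + 1]
    return set(window)

refs = [0, 1, 0, 2, 0, 3, 1, 0, 2]      # made-up page reference string
print(working_set(refs, t=5, tau=4))    # -> {0, 2, 3}

You could tag each reference as code or data and report the two subsets separately, but the set that must stay resident is still the union.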
EDIT: Blocking on I/O - does this affect the working set?
Remember that the working set is a model of the pages used over a given time period. When the length of time the process is blocked is short compared to the time period being modelled, then it changes little - the wait is insignificant and the working set over the time period being considered is unaffected.
But when the I/O wait is long compared to the modelled period, then it changes a lot. During the period the process is blocked, its working set is empty. An OS could theoretically swap out all of the process's pages on the basis of this.
The working set model attempts to predict what pages the process will need based on its past behaviour. In this case, if the process is still blocked at time t+1, then the model of an empty working set is correct, but as soon as the process is unblocked, its working set will be non-empty; the prediction by the model still says no pages are needed, so the predictive power of the model breaks down. But this is to be expected; you can't really predict the future. Normally. And the working set is expected to change over time.
This question is from the book "Operating System Concepts". The answer they are looking for (found elsewhere on the web) is:
Yes, in fact many processors provide two TLBs for this very reason. As an example, the code being accessed by a process may retain the same working set for a long period of time. However, the data the code accesses may change, thus reflecting a change in the working set for data accesses.
Which seems reasonable but is completely at odds with some of the other answers...