As in the question, is there any way to implement DFS with a queue and BFS with a stack?
I couldn't find any related discussion about it.
Why would you want to do that? I don't think it would work well in practice, if it's possible at all; even if it is, it would be very inefficient. Why? We need to keep track of vertices in specific ways with these algorithms. Take DFS with a queue as an example: at some point while running your algorithm, you reach a vertex that cannot visit any other node, and you need to go back to a previous node. But a queue does not let you retrieve elements at arbitrary indexes, unless you add that ability to your implementation, which then makes it not a queue. The same happens with BFS on a stack: you would have to remove all the elements from the stack while searching. So why do that?
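For comparison, the standard iterative versions show how the choice of container drives the traversal order: a stack dives deep, a queue goes level by level. A minimal sketch on a small adjacency-list graph (the graph itself is just an example):

```python
from collections import deque

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}  # small example graph

def dfs(start):
    """Iterative DFS: a stack (LIFO) goes deep before backtracking."""
    stack, seen, order = [start], set(), []
    while stack:
        v = stack.pop()                    # take the most recently discovered vertex
        if v in seen:
            continue
        seen.add(v)
        order.append(v)
        stack.extend(reversed(graph[v]))   # reversed() keeps left-to-right neighbor order
    return order

def bfs(start):
    """Iterative BFS: a queue (FIFO) visits vertices level by level."""
    queue, seen, order = deque([start]), {start}, []
    while queue:
        v = queue.popleft()                # take the earliest discovered vertex
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

print(dfs(1))  # [1, 2, 4, 3]
print(bfs(1))  # [1, 2, 3, 4]
```

Swapping the containers changes which vertex gets expanded next, which is exactly why the bookkeeping above stops making sense.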
I want to have a controller that somehow runs 3 processes to run the robot's code.
I am trying to simulate a humanoid soccer robot in Webots. To run our robot's code, we run 3 processes: one for the servomotors' power management, another for image processing and communications, and the last for motion control.
Now I want a controller that lets me simulate something like this, or at least something similar to it. Does anyone have an idea how I can do this?
Good news: the Webots API is thread safe :-)
Generally speaking, I would not recommend using multiple threads, because programming with threads is a big source of issues. So, if you have any possibility to merge your threads into a single-threaded application, that's the way to go!
If you would like to go in this direction, the best solution is certainly to create a single controller running your 3 threads, and synchronize them with the main thread (thread 0).
The tricky part is to deal correctly with time management and the simulation steps. A solution could be to set the Robot.synchronization field to FALSE and to use the main thread to call the wb_robot_step(duration) function every duration milliseconds of real time.
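The step-synchronization pattern between a main thread and three worker threads could look like the sketch below. This is plain Python threading, not Webots code: the worker names are illustrative, and a barrier stands in for the point where a real controller would call wb_robot_step(duration):

```python
import threading

NUM_WORKERS = 3
STEPS = 5
results = []                                   # (step, worker) pairs, for illustration
barrier = threading.Barrier(NUM_WORKERS + 1)   # 3 workers + the main thread

def worker(name):
    """One subsystem (power / vision / motion); names are illustrative."""
    for step in range(STEPS):
        barrier.wait()                    # wait until the main thread starts this step
        results.append((step, name))      # do this subsystem's work for the step
        barrier.wait()                    # tell the main thread this step is done

threads = [threading.Thread(target=worker, args=(n,))
           for n in ("power", "vision", "motion")]
for t in threads:
    t.start()

for step in range(STEPS):
    # In a real Webots controller, this is where the main thread would call
    # wb_robot_step(duration), with Robot.synchronization set to FALSE.
    barrier.wait()   # release the workers for this step
    barrier.wait()   # wait until all workers have finished the step
for t in threads:
    t.join()
```

The barrier guarantees that no worker runs ahead of the simulation step, which is the main hazard when mixing threads with stepped simulation time.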
NSURLSessionTaskMetrics was introduced in iOS 10, and it is very helpful.
But I still feel confused about the transactionMetrics property, which is an array of NSURLSessionTaskTransactionMetrics.
I want to know: if there is more than one transaction metrics object in the array, how do I measure the performance of the task? Do I simply sum all their times, or just pick one of them? If so, which one should I use?
I have a rough idea that during the task's execution, the session may use more than one transaction to accomplish the task.
But can anyone give a more detailed description of that? Are the transactions executed in order, or concurrently?
Looking forward to anyone who could help.
I'm working on a Python project, where I'm currently trying to speed things up in some horrible ways: I set up my Z3 solvers, then I fork the process, and have Z3 perform the solve in the child process and pass a pickle-able representation of the model back to the parent.
This works great, and represents the first stage of what I'm trying to do: the parent process is now no longer CPU-bound. The next step is to multi-thread the parent, so that we can solve multiple Z3 solvers in parallel.
I'm pretty sure I've mutexed away any concurrent accesses of Z3 in the setup phase, and only one thread should be touching Z3 at any one time. However, despite this, I'm getting random segfaults in libz3.so. It's important to note, at this point, that it's not always the same thread that touches Z3 -- the same object (not the solvers themselves, but the expressions) might be handled by different threads at different times.
My question is, is it possible to multi-thread Z3? There is a brief note here (http://research.microsoft.com/en-us/um/redmond/projects/z3/z3.html) saying "It is not safe to access Z3 objects from multiple threads.", which I guess would answer my question, but I'm holding out hope that it means to say that one shouldn't access Z3 from multiple threads simultaneously. Another resource (Again: Installing Z3 + Python on Windows) states, from Leonardo himself, that "Z3 uses thread local storage", which, I guess, would sink this whole undertaking, but a) that answer is from 2012, so maybe things have changed, and b) maybe it uses thread-local storage for some unrelated stuff?
Anyways, is multi-threading Z3 possible (from Python)? I'd hate to have to push the setup phase into the child processes...
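The fork-and-pickle stage described above can be sketched with plain multiprocessing. Note this is a generic illustration, with a trivial stand-in computation in place of the actual Z3 solve:

```python
import multiprocessing as mp

def solve_in_child(conn, problem):
    """Runs in the forked child: do the heavy work, send back a picklable result."""
    model = {"x": sum(problem)}   # stand-in for extracting a model from a Z3 solver
    conn.send(model)              # the Pipe pickles the object for the parent
    conn.close()

def solve(problem):
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=solve_in_child, args=(child_conn, problem))
    p.start()
    result = parent_conn.recv()   # blocks until the child sends the model
    p.join()
    return result

print(solve([1, 2, 3]))  # → {'x': 6}
```

Because the child never touches the parent's Z3 objects after the fork, the parent stays responsive; the open question is only what happens once multiple parent threads enter the picture.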
Z3 does indeed use thread local storage, but as far as I can see, there is only one point left in the code where it does so (to track how much memory each thread is using; in memory_manager.cpp), but that should not be responsible for the symptoms you see.
Z3 should behave nicely in a multi-threaded setting, if every thread strictly uses only its own context object (Z3_context, or in Python, class Context). This means that any object created through one of the Contexts cannot interact in any way with objects from any of the other Contexts; if that is required, all objects have to be translated from one Context to another first, e.g. in Python via functions like translate(...) in class ASTRef.
That said, there surely are some bugs left to fix. My first target when seeing random segfaults would be the garbage collector, because it might not interact nicely with Z3's reference counting (which is the case in other APIs). There is also a known bug that's triggered when many Context objects are created at the same time (on my todo list though...)
I am parsing n images using NSURL delegates, but I'm not getting the results in the same order.
How can I get the results in the same order the requests were sent?
If you're running these concurrently (i.e. just initiating a whole bunch of NSURLConnection requests), this behavior is not at all surprising because while you may initiate them in a particular order, you have no assurances that they'll necessarily finish in that same order. You could address this by initiating these requests serially (i.e. don't start the next request until the prior one finishes), but I'd discourage you from doing that as you'll pay a significant performance penalty. It is much better to refactor your code to handle the fact that they may complete in a non-sequential fashion, rather than placing an artificial constraint that they must finish in a particular order.
So, it's best to employ a mechanism that supports concurrency, but lets you constrain the degree of concurrency (namely, an operation queue). It's not too hard to wrap your NSURLConnection requests in individual NSOperation subclass objects, but rather than reinventing the wheel, you might want to consider using AFNetworking, which does a lot of this for you.
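The "let them complete out of order, then reassemble by request index" idea is language-agnostic. Here is a Python sketch of the pattern, with a fake fetch() standing in for the network call, placeholder URLs, and a thread pool capping the degree of concurrency:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

urls = [f"https://example.com/image{i}.png" for i in range(6)]  # placeholder URLs

def fetch(url):
    """Stand-in for a network request; finishing time is deliberately random."""
    time.sleep(random.uniform(0, 0.05))
    return f"data for {url}"

# Requests run concurrently (up to 3 at a time), but map() hands the results
# back in submission order, keyed by index rather than by completion time.
with ThreadPoolExecutor(max_workers=3) as pool:
    images = list(pool.map(fetch, urls))
```

The requests still finish in arbitrary order under the hood; only the result list is indexed by request order, which is the refactoring suggested above.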
Recently, I have encountered many difficulties while developing with C++ and Lua. My situation is: for some reason, there can be thousands of Lua states in my C++ program, but these states should all be identical just after initialization. Of course, I can call luaL_loadlibs() and lua_loadfile() for each state, but that is pretty heavy (in fact, it takes a rather long time for me to initialize even one state). So I am wondering about the following scheme: what about keeping a separate Lua state (the only state that has to be initialized) which is then cloned for the other Lua states? Is that possible?
When I started with Lua, like you I once wrote a program with thousands of states, had the same problem and thoughts, until I realized I was doing it totally wrong :)
Lua has coroutines and threads, you need to use these features to do what you need. They can be a bit tricky at first but you should be able to understand them in a few days, it'll be well worth your time.
Take a look at the following Lua API call; I think it is exactly what you need.
lua_State *lua_newthread (lua_State *L);
This creates a new thread, pushes it on the stack, and returns a pointer to a lua_State that represents this new thread. The new thread returned by this function shares with the original thread its global environment, but has an independent execution stack.
There is no explicit function to close or to destroy a thread. Threads are subject to garbage collection, like any Lua object.
Unfortunately, no.
You could try Pluto to serialize the whole state. It does work pretty well, but in most cases it costs roughly the same time as normal initialization.
I think it will be hard to do exactly what you're requesting here given that just copying the state would have internal references as well as potentially pointers to external data. One would need to reconstruct those internal references in order to not just have multiple states pointing to the clone source.
You could serialize out the state after one starts up and then load that into subsequent states. If initialization is really expensive, this might be worth it.
I think the closest thing to doing what you want that would be relatively easy would be to put the states in different processes by initializing one state and then forking, however your operating system supports it:
http://en.wikipedia.org/wiki/Fork_(operating_system)
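The initialize-once-then-fork idea can be sketched generically (the question is C++, but the pattern is the same in any language with fork; here expensive_init() is a stand-in for luaL_loadlibs()/lua_loadfile()):

```python
import os
import pickle

def expensive_init():
    """Stand-in for the costly one-time state setup (loading libs, scripts, ...)."""
    return {"lib": list(range(1000)), "ready": True}

state = expensive_init()          # initialize exactly one "state"...

read_fd, write_fd = os.pipe()
pid = os.fork()                   # ...then fork: the child inherits a copy for free
if pid == 0:
    # Child process: the inherited state is already initialized; use it and exit.
    os.close(read_fd)
    os.write(write_fd, pickle.dumps(state["ready"]))
    os._exit(0)
else:
    # Parent process: confirm the child saw the initialized state.
    os.close(write_fd)
    with os.fdopen(read_fd, "rb") as r:
        child_saw_ready = pickle.loads(r.read())
    os.waitpid(pid, 0)
```

Copy-on-write semantics mean the children share the initialized memory until they modify it, which is what makes this cheaper than re-running the initialization thousands of times.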
If you want something available from within Lua, you could try something like this:
How do you construct a read-write pipe with lua?