I have a PDB structure that I'm analyzing; it has a putative binding pocket for some endogenous ligand.
I'd like to determine, for the relevant amino acids within, say, 3 Å of the pocket, where the optimal hydrogen bond donor/acceptor pair for the ligand would be within the pocket. I have done this already for determining locations of optimal pi-pi stacking (e.g. find aromatic residues, determine the plane of the ring face, go out N angstroms orthogonal to the face), but I'm struggling to work this out for hydrogen bonds.
Can this be done?
Well, I'll try to write out how I would approach it.
First of all, it's not clear to me whether your pocket is described by a grid that represents the pocket surface, or by a grid that represents the whole pocket space (let's call it the pocket cloud).
With Biopython, assuming you have a cloud described by your grid:
Loop over all the cloud-grid points:
    for every point, loop over all the PDB atoms that are H-bond donors or acceptors:
        if the distance is in the desired target range (~2.7-3.3 Å, the distance
        for an optimal donor/acceptor pair):
            select the corresponding AA/atom/point
            add the point to your result list as donor, acceptor, or both, together
            with the selected atom/AA
        else:
            pass
For Biopython and distances, see here: Biopython PDB: calculate distance between an atom and a point
Hydrogen bonds are generally 2.7 to 3.3 Å long.
I am not sure my logic is correct; the idea is to end up with a subset of your grid points where you have red grid points where you could pose a donor and blue ones where you could pose an acceptor.
We are talking only about distances here; if you introduce geometric factors of the bond, I think you would also need a ligand with its own geometry.
Of course, with this approach you would waste a lot of time on unproductive computation. If you find a way to select only the surface grid points, you could select the subset of PDB atoms that are close to the surface (within 3 Å) and then use the same approach as above.
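The loop above can be sketched in plain Python/NumPy. The residue names, coordinates, and grid below are purely illustrative; in practice the donor/acceptor coordinates would come from a Biopython Structure (each atom's `atom.coord` is a NumPy array):

```python
import numpy as np

# Hypothetical protein H-bond partners near the pocket; in practice you
# would pull these coordinates from a Biopython Structure via atom.coord.
donors    = {"SER_10_OG":  np.array([0.0, -3.0, 0.0])}   # protein donor
acceptors = {"ASP_25_OD1": np.array([5.0,  3.0, 0.0])}   # protein acceptor

# Toy "pocket cloud": grid points along x at 0.5 A spacing
grid = np.array([[x, 0.0, 0.0] for x in np.arange(0.0, 6.0, 0.5)])

H_MIN, H_MAX = 2.7, 3.3  # typical H-bond heavy-atom distance range (Angstrom)

def classify(grid, donors, acceptors, lo=H_MIN, hi=H_MAX):
    """Label grid points where a ligand donor (red) or ligand acceptor
    (blue) could sit at H-bond distance from a protein partner."""
    red, blue = [], []
    for p in grid:
        for name, xyz in acceptors.items():  # protein acceptor -> ligand donor
            if lo <= np.linalg.norm(p - xyz) <= hi:
                red.append((tuple(p), name))
        for name, xyz in donors.items():     # protein donor -> ligand acceptor
            if lo <= np.linalg.norm(p - xyz) <= hi:
                blue.append((tuple(p), name))
    return red, blue

red, blue = classify(grid, donors, acceptors)
```

This only encodes the distance criterion; adding donor-hydrogen-acceptor angle checks would require the hydrogen positions (or idealized geometry) as discussed above.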
I would like to ask what would be an appropriate workflow for solving a trajectory optimization problem that takes into account contact interactions between different bodies in an MBP in Drake, for problems like the ones addressed in this paper. I've tried using the DirectCollocation class, but apparently it does not consider contact forces as optimization variables. Is there any way to incorporate them into the mathematical program generated by DirectCollocation? Thanks.
There are several things to consider when you plan through contact:
The decision variables are not only the state/control anymore; they should also include the contact forces.
The constraints include not just the dynamic constraint x[n+1] = f(x[n], u[n]), but x[n+1] = f(x[n], u[n], λ[n]), where λ is the contact force, together with the complementarity constraint 0 ≤ λ ⊥ φ(q) ≥ 0.
Instead of using direct collocation (which assumes the state trajectory is a piecewise cubic spline), Michael's paper uses plain backward Euler integration, which assumes piecewise linear interpolation.
So you would need the following features:
Add the contact forces as decision variables.
Add the complementarity constraints.
Use backward Euler integration instead of DirectCollocationConstraint.
Unfortunately Drake doesn't have an implementation doing these yet. The closest match inside Drake is in this folder, and the starting point could be the StaticEquilibriumProblem. This class does feature 1 and part of feature 2 (it has the complementarity constraint, but only static equilibrium constraint instead of dynamic constraint), but doesn't do feature 3 (the backward Euler step).
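To make the transcription concrete, here is a hand-written sketch (deliberately not using the Drake API) of the backward Euler dynamics residual and the complementarity condition for the simplest contact problem: a 1-DoF point mass above flat ground, with gap function φ(q) = q. All constants are illustrative; in a real program these residuals would become constraints on the decision variables at each knot point:

```python
# 1-DoF point mass of mass m above flat ground, gap function phi(q) = q.
# Decision variables per knot: q[n], v[n], lam[n] (the contact force).
m, g, h = 1.0, 9.81, 0.01  # mass, gravity, time step (illustrative values)

def backward_euler_residual(q0, v0, q1, v1, lam1):
    """Dynamics constraint x[n+1] = f(x[n], u[n], lam[n+1]) written as a
    residual the optimizer drives to zero (backward Euler step)."""
    rv = v1 - (v0 + h * (-g + lam1 / m))   # velocity update
    rq = q1 - (q0 + h * v1)                # position update, implicit in v1
    return rq, rv

def complementarity_ok(q1, lam1, tol=1e-9):
    """0 <= lam  perp  phi(q) >= 0: force and gap nonnegative, product zero."""
    return lam1 >= -tol and q1 >= -tol and abs(lam1 * q1) <= tol

# Resting on the ground (q = v = 0) with lam = m*g satisfies both conditions:
rq, rv = backward_euler_residual(0.0, 0.0, 0.0, 0.0, m * g)
```

In a MathematicalProgram you would add one such dynamics constraint and one complementarity constraint per time step, with q, v, and λ all as decision variables.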
I'm trying to add generalized forces on a free-floating system that has been linearized from a MultibodyPlant. For a fixed base, the generalized forces basically correspond to the joint torques, but in the free-floating case they include the base torques/forces as well. However, these are not included in the actuation port of the MultibodyPlant system, and hence, when providing the actuation port for linearization (or LQR), the base wrenches are not included in the actuation matrix B post-linearization. For my work, I'd like to include the base torques/forces in the linearization as well. I can think of 2 approaches in Drake, but I'm not sure which would be the better/correct decision architecturally. I plan to use this later for making a TVLQR controller to stabilize trajectories.
Idea 1: Change the B matrix in LQR after linearization: manually edit B after using LinearQuadraticRegulator(system, context, Q, R, plant_actuation_input_port())
Idea 2: Create a LeafSystem class (similar to this) with an output port connecting to get_applied_generalized_force_input_port() and an input port exposed outside the diagram (using ExportOutput()), and then linearize this system as a whole (the CustomLeafSystem connected to the MultibodyPlant). I assume doing this will add the base actuation to the B matrix of the linearized system.
Actually, these two ideas are what I would have done myself. For your case it would seem that Idea 1 is simpler to implement?
Matrix B maps actuation indexes to generalized velocity indexes. Therefore you have to make sure that you place the necessary "ones" in the right places of B. You can find the free-floating body indexes with Body::floating_velocities_start(), which returns the start index within the full state x (therefore you'd need to subtract MultibodyPlant::num_positions() for indexing within B). For floating bodies, the generalized velocities are the angular velocity w_WB and translational velocity v_WB, in that order. Therefore the generalized forces correspond to the torque t_Bo_W and force f_Bo_W applied at B about its origin Bo, expressed in the world frame.
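As a sketch of what placing those "ones" amounts to for Idea 1: below, the velocity-row block of B is extended with six identity columns for the base wrench. All dimensions and indices here are hypothetical stand-ins for what you would query from the plant (num_velocities(), num_actuators(), floating_velocities_start()), and only the selection structure is shown, not the full state-space linearization:

```python
import numpy as np

# Hypothetical sizes; in Drake these come from the plant:
#   nv = plant.num_velocities(), nu = plant.num_actuators(),
#   v0 = body.floating_velocities_start() - plant.num_positions()
nv, nu, v0 = 8, 2, 0   # 6 base velocities first, then 2 actuated joints

B_act = np.zeros((nv, nu))        # stand-in for the actuation block of B
B_act[6, 0] = B_act[7, 1] = 1.0   # joint torques act on the joint velocities

# Append six columns mapping the base wrench [t_Bo_W; f_Bo_W]
# (angular first, then translational) onto the base velocity rows:
B_base = np.zeros((nv, 6))
B_base[v0:v0 + 6, :] = np.eye(6)
B_full = np.hstack([B_act, B_base])
```

For TVLQR you would then hand the correspondingly extended B (and an R of matching size) to the LQR call.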
Context
My task is to design and build a velocity PID controller for a micro quadcopter that flies indoors. The room where the quadcopter flies is equipped with a camera-based, high-accuracy indoor tracking system that can provide both velocity and position data for the quadcopter. The resulting system should accept a target velocity for each axis (x, y, z) and drive the quadcopter at that velocity.
The control inputs for the quadcopter are roll/pitch/yaw angles and thrust percentage for altitude.
My idea is to implement a PID controller for each axis where the SP is the desired velocity in that direction, the measured value is the velocity provided by the tracking system and the output value is the roll/pitch/yaw angle and respectively thrust percent.
Unfortunately, because this is my first contact with control theory, I am not sure if I am heading in the right direction.
Questions
I understand the basic PID controller principle, but it is still unclear to me how it can convert velocity (m/s) into roll/pitch/yaw (radians) by only summing errors and multiplying by a constant. Velocity and roll/pitch are roughly proportional, so does multiplying by the right constant yield a correct result?
For the case of the vertical velocity controller, if the velocity is set to 0, the quadcopter should simply maintain its altitude without ascending or descending. How can this be integrated with the PID controller so that the thrust value is not 0 when the error is 0 (it should keep hovering, not fall)? Should I add a constant term to the output?
Once the system has been implemented, what would be a good approach to tuning the PID gain parameters? Manual trial and error?
The next step in the development of the system is an additional layer of position PID controllers that take a desired position (x, y, z) as the setpoint, with measured positions provided by the indoor tracking system, and output x/y/z velocities. Is this a good approach?
The reason for the separation of these PID control layers is that the project is part of a larger framework that promotes re-usability. Would it be better to just use a single layer of PID controllers that directly take position coordinates as setpoints and output roll/pitch/yaw/thrust values?
That's the basic idea. You could theoretically wind up with several constants in the system to convert from one unit to another. In your case, your P, I, and D constants will implicitly include the right conversion factors. For example, if in an idealized system you would want to control the angle by feedback with P = 0.5 in degrees, but all you have access to is speed, and you know that 5 degrees of tilt gives 1 m/s, then you would program the P controller to multiply the measured speed error by 0.5 × 5 = 2.5, and the result would be the control input for the angle.
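To make the point about gains absorbing unit conversions concrete, here is a minimal discrete PID sketch. Gain values and time step are illustrative, not tuned for any real vehicle; the conversion from m/s of velocity error to degrees of tilt lives entirely inside the gains:

```python
class PID:
    """Minimal discrete PID. The unit conversion (e.g. m/s of velocity
    error -> degrees of tilt) is absorbed into the gains themselves."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                 # accumulated error
        deriv = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt          # error rate
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# kp chosen (illustratively) so 1 m/s of velocity error commands 2.5 deg of tilt
velocity_pid = PID(kp=2.5, ki=0.0, kd=0.0, dt=0.02)
angle_cmd = velocity_pid.update(1.0, 0.0)   # setpoint 1 m/s, measured 0 m/s
```

One such controller per axis, fed the tracking-system velocity as the measured value, matches the architecture described in the question.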
I think the best way to do that is to implement a SISO PID controller not on height (h) but on vertical speed (dh/dt), with thrust percentage as the output, but I am not sure about that.
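One common way to handle the hover case is a feed-forward term: the PID on vertical speed outputs a correction around a nominal hover thrust, and the result is clamped to the actuator range. The 50% hover value below is purely hypothetical; you would determine yours experimentally:

```python
HOVER_THRUST = 50.0  # percent; hypothetical value, found experimentally

def thrust_command(pid_correction, hover=HOVER_THRUST):
    """Feed-forward hover thrust plus the PID correction on vertical
    speed, clamped to the actuator's valid range (0-100 %)."""
    return max(0.0, min(100.0, hover + pid_correction))
```

With this structure, a zero velocity error yields the hover thrust rather than zero thrust.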
Unfortunately, if you have no advance knowledge about the parameters, that will be the only way to go. I recommend an environment that won't do much damage if the quadcopter crashes...
That sounds like a good way to go
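The cascaded structure can be sketched as follows (P-only controllers for brevity; gains are illustrative): the outer loop turns position error into a velocity setpoint, which the inner loop turns into an attitude command:

```python
class P:
    """P-only controller; a full PID would add I and D terms."""
    def __init__(self, kp):
        self.kp = kp
    def update(self, setpoint, measured):
        return self.kp * (setpoint - measured)

# Hypothetical gains for one axis: outer position loop, inner velocity loop
pos_ctrl = P(1.0)   # m of position error -> m/s of velocity setpoint
vel_ctrl = P(2.5)   # m/s of velocity error -> degrees of tilt

def cascaded_step(pos_sp, pos_meas, vel_meas):
    vel_sp = pos_ctrl.update(pos_sp, pos_meas)   # outer loop
    return vel_ctrl.update(vel_sp, vel_meas)     # inner loop -> tilt command
```

Keeping the two loops separate, as planned, also lets the velocity layer be reused on its own.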
My blog may be of interest to you: http://oskit.se/quadcopter-control-and-how-to-really-implement-it/
The basic principle is easy to understand. Tuning is much harder. It took me the most time to figure out what to actually feed into the PID controller, and to get small things right like the + and - signs on variables. But eventually I solved these problems.
The best way to test a PID controller is to have some way to set your values dynamically, so that you don't have to recompile your firmware every time. I use MAVLink in the quadcopter firmware that I work on. With MAVLink, the copter can easily be configured using any ground-control software that supports the MAVLink protocol.
Start small: build a working copter using a ready-made controller, then write your own, then test it, tune it, and experiment. You will never truly understand PID controllers unless you try building a copter yourself. My bare-metal software development library can assist you by making it easier to write your software without worrying about hardware driver implementation. Link: https://github.com/mkschreder/martink
If you are using the Python programming language, you can use the ddcontrol framework.
You can estimate a transfer function from SISO data with the tfest function, and then optimize a PID controller using the pidopt function.
I'm trying to determine skeleton joints (or at the very least to be able to track a single palm) using a regular webcam. I've looked all over the web and can't seem to find a way to do so.
Every example I've found is using Kinect. I want to use a single webcam.
There's no need for me to calculate the depth of the joints - I just need to be able to recognize their X, Y position in the frame. Which is why I'm using a webcam, not a Kinect.
So far I've looked at:
OpenCV (its "skeleton" functionality is a process of simplifying graphical models, not detection and/or skeletonization of a human body).
OpenNI (with NiTE) - the only way to get the joints is to use the Kinect device, so this doesn't work with a webcam.
I'm looking for a C/C++ library (but at this point would look at any other language), preferably open source (but, again, will consider any license) that can do the following:
Given an image (a frame from a webcam) calculate the X, Y positions of the visible joints
[Optional] Given a video capture stream call back into my code with events for joints' positions
Doesn't have to be super accurate, but would prefer it to be very fast (sub-0.1 sec processing time per frame)
Would really appreciate it if someone can help me out with this. I've been stuck on this for a few days now with no clear path to proceed.
UPDATE
2 years later a solution was found: http://dlib.net/imaging.html#shape_predictor
Tracking a hand using a single camera without depth information is a serious task and the topic of ongoing scientific work. I can supply you with a bunch of interesting and/or highly cited scientific papers on the topic:
M. de La Gorce, D. J. Fleet, and N. Paragios, “Model-Based 3D Hand Pose Estimation from Monocular Video.,” IEEE transactions on pattern analysis and machine intelligence, vol. 33, Feb. 2011.
R. Wang and J. Popović, “Real-time hand-tracking with a color glove,” ACM Transactions on Graphics (TOG), 2009.
B. Stenger, A. Thayananthan, P. H. S. Torr, and R. Cipolla, “Model-based hand tracking using a hierarchical Bayesian filter.,” IEEE transactions on pattern analysis and machine intelligence, vol. 28, no. 9, pp. 1372–84, Sep. 2006.
J. M. Rehg and T. Kanade, “Model-based tracking of self-occluding articulated objects,” in Proceedings of IEEE International Conference on Computer Vision, 1995, pp. 612–617.
Hand tracking literature survey in the 2nd chapter:
T. de Campos, “3D Visual Tracking of Articulated Objects and Hands,” 2006.
Unfortunately I don't know of any freely available hand-tracking library.
There is a simple way of detecting a hand using skin tone. Perhaps this could help; you can see the results in this YouTube video. Caveat: the background shouldn't contain skin-colored things like wood.
Here is the code:
'''Detect human skin tone and draw a boundary around it.
Useful for gesture recognition and motion tracking.
Inspired by: http://stackoverflow.com/a/14756351/1463143
Date: 08 June 2013
'''
# Required modules
import cv2
import numpy

# Constants for finding range of skin color in YCrCb
min_YCrCb = numpy.array([0, 133, 77], numpy.uint8)
max_YCrCb = numpy.array([255, 173, 127], numpy.uint8)

# Create a window to display the camera feed
cv2.namedWindow('Camera Output')

# Get pointer to video frames from primary device
videoFrame = cv2.VideoCapture(0)

# Process the video frames
keyPressed = -1  # -1 indicates no key pressed
while keyPressed < 0:  # any key pressed has a value >= 0
    # Grab video frame, decode it and return next video frame
    readSuccess, sourceImage = videoFrame.read()
    if not readSuccess:
        break

    # Convert image to YCrCb
    imageYCrCb = cv2.cvtColor(sourceImage, cv2.COLOR_BGR2YCR_CB)

    # Find region with skin tone in YCrCb image
    skinRegion = cv2.inRange(imageYCrCb, min_YCrCb, max_YCrCb)

    # Do contour detection on skin region
    contours, hierarchy = cv2.findContours(skinRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Draw the large contours on the source image
    for i, c in enumerate(contours):
        area = cv2.contourArea(c)
        if area > 1000:
            cv2.drawContours(sourceImage, contours, i, (0, 255, 0), 3)

    # Display the source image
    cv2.imshow('Camera Output', sourceImage)

    # Check for user input to close program
    keyPressed = cv2.waitKey(1)  # wait 1 millisecond in each iteration

# Close window and camera after exiting the while loop
cv2.destroyWindow('Camera Output')
videoFrame.release()
cv2.findContours is quite useful; you can find the centroid of a "blob" by using cv2.moments after you find the contours. Have a look at the OpenCV documentation on shape descriptors.
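For illustration, the centroid that cv2.moments gives you (m10/m00, m01/m00) can be computed with plain NumPy on a binary mask; this sketch avoids the camera dependency:

```python
import numpy as np

def blob_centroid(mask):
    """Centroid (cx, cy) of a binary mask, i.e. the same
    (m10/m00, m01/m00) ratio that cv2.moments provides."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:   # empty blob: zeroth moment is zero
        return None
    return xs.mean(), ys.mean()

mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 2:5] = True          # a 3x3 blob
cx, cy = blob_centroid(mask)   # -> (3.0, 2.0)
```

In the camera loop above, the mask would be the skinRegion image (or a single filled contour).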
I haven't yet figured out how to make the skeletons that lie in the middle of the contour, but I was thinking of "eroding" the contour until it is a single line. In image processing this process is called "skeletonization" or the "morphological skeleton". Here is some basic info on skeletonization.
Here is a link that implements skeletonization in OpenCV and C++.
Here is a link for skeletonization in OpenCV and Python.
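As a sketch of the erosion idea, here is Lantuéjoul's morphological skeleton implemented in pure NumPy with a 3x3 cross structuring element; a real implementation would use cv2.erode/cv2.dilate (or scikit-image), but the logic is the same:

```python
import numpy as np

def erode(img):
    # Binary erosion: keep a pixel only if it and its 4 neighbors are set.
    p = np.pad(img, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(img):
    # Binary dilation: set a pixel if it or any 4-neighbor is set.
    p = np.pad(img, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def skeletonize(img):
    """Morphological skeleton (Lantuejoul's formula): union over n of
    the n-fold erosion of img minus its opening."""
    img = img.astype(bool)
    skel = np.zeros_like(img)
    while img.any():
        opened = dilate(erode(img))     # opening of the current erosion
        skel |= img & ~opened           # keep what the opening removes
        img = erode(img)
    return skel

bar = np.ones((3, 7), dtype=bool)
skel = skeletonize(bar)   # thin horizontal midline (plus corner artifacts)
```

Note that this classic formulation is known to leave small artifacts and disconnected pixels; thinning algorithms give cleaner skeletons, as the linked articles show.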
Hope that helps :)
--- EDIT ----
I would highly recommend that you go through these papers by Deva Ramanan (scroll down after visiting the linked page): http://www.ics.uci.edu/~dramanan/
C. Desai and D. Ramanan, "Detecting Actions, Poses, and Objects with Relational Phraselets," European Conference on Computer Vision (ECCV), Florence, Italy, Oct. 2012.
D. Park and D. Ramanan, "N-Best Maximal Decoders for Part Models," International Conference on Computer Vision (ICCV), Barcelona, Spain, Nov. 2011.
D. Ramanan, "Learning to Parse Images of Articulated Objects," Neural Information Processing Systems (NIPS), Vancouver, Canada, Dec. 2006.
The most common approach can be seen in the following YouTube video: http://www.youtube.com/watch?v=xML2S6bvMwI
This method is not very robust, as it tends to fail if the hand is rotated too much (e.g. if the camera is looking at the side of the hand, or at a partially bent hand).
If you do not mind using two cameras, you can look into the work of Robert Wang. His current company (3GearSystems) uses this technology, augmented with a Kinect, to provide tracking. His original paper uses two webcams, but with much worse tracking.
Wang, Robert, Sylvain Paris, and Jovan Popović. "6d hands: markerless hand-tracking for computer aided design." Proceedings of the 24th annual ACM symposium on User interface software and technology. ACM, 2011.
Another option (again, if using more than a single webcam is possible) is to use an IR emitter. Your hand reflects IR light quite well, whereas the background does not. By adding a filter to the webcam that blocks visible light (and removing the standard filter that does the opposite), you can create quite effective hand tracking. The advantage of this method is that segmenting the hand from the background is much simpler. Depending on the distance and the quality of the camera, you would need more IR LEDs in order to reflect sufficient light back into the webcam. The Leap Motion uses this technology to track the fingers and palms (it uses 2 IR cameras and 3 IR LEDs to also get depth information).
All that being said, I think the Kinect is your best option here. Yes, you don't need the depth, but the depth information does make it a lot easier to detect the hand (by using it for segmentation).
My suggestion, given your constraints, would be to use something like this:
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html
Here is a tutorial for using it for face detection:
http://opencv.willowgarage.com/wiki/FaceDetection?highlight=%28facial%29|%28recognition%29
The problem you have described is quite difficult, and I'm not sure that trying to do it using only a webcam is a reasonable plan, but this is probably your best bet. As explained here (http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html?highlight=load#cascadeclassifier-load), you will need to train the classifier with something like this:
http://docs.opencv.org/doc/user_guide/ug_traincascade.html
Remember: Even though you don't require the depth information for your use, having this information makes it easier for the library to identify a hand.
At last I've found a solution. It turns out the open-source dlib project has a "shape predictor" that, once properly trained, does exactly what I need: it guesstimates (with pretty satisfactory accuracy) the "pose". A "pose" is loosely defined as "whatever you train it to recognize as a pose", by training it with a set of images annotated with the shapes to extract from them.
The shape predictor is described here on dlib's website.
I don't know about possible existing solutions. If supervised (or semi-supervised) learning is an option, training decision trees or neural networks might already be enough (the Kinect uses random forests, from what I have heard). Before you go down such a path, do everything you can to find an existing solution. Getting machine learning right takes a lot of time and experimentation.
OpenCV has machine learning components, what you would need is training data.
With the motion-tracking features of the open-source Blender project it is possible to create a 3D model from 2D footage. No Kinect needed. Since Blender is open source, you might be able to use its Python scripts outside the Blender framework for your own purposes.
Have you ever heard about EyesWeb?
I have been using it for one of my projects, and I thought it might be useful for what you want to achieve.
Here are some interesting publications: LNAI 3881 - Finger Tracking Methods Using EyesWeb, and Powerpointing-HCI using gestures.
Basically the workflow is:
You create your patch in EyesWeb
Prepare the data you want to send with a network client
Use the processed data on your own server (your app)
However, I don't know if there is a way to embed the real-time image-processing part of EyesWeb into software as a library.