Rotate a certain portion of a PDB file around a provided vector - biopython

Can I rotate a certain portion of a PDB file (say ATOM-10 through ATOM-100) around a given vector and angle using Biopython?
Thanks!
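A minimal sketch of one way this could be done with Bio.PDB, assuming the atoms to rotate are selected by serial number (10 through 100 here) and that the rotation axis passes through the origin; the file names, axis, and angle are placeholders, not anything prescribed by Biopython:

```python
import numpy as np
from Bio.PDB import PDBParser, PDBIO, Vector, rotaxis

# Parse the structure (file name is a placeholder).
structure = PDBParser(QUIET=True).get_structure("prot", "input.pdb")

# Rotation of 30 degrees about an axis through the origin.
# If the axis should pass through some other point (e.g. a bond atom),
# subtract that point from the coordinates first and add it back afterwards.
axis = Vector(0.0, 0.0, 1.0)
angle = np.deg2rad(30.0)
rot = rotaxis(angle, axis)  # 3x3 rotation matrix

# Apply the rotation only to atoms with serial numbers 10..100.
for atom in structure.get_atoms():
    if 10 <= atom.get_serial_number() <= 100:
        vec = Vector(atom.get_coord())
        atom.set_coord(vec.left_multiply(rot).get_array())

io = PDBIO()
io.set_structure(structure)
io.save("rotated.pdb")
```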

Related

Point Cloud Stitching and Processing

I have pointclouds from two RealSense D415 cameras, which are mounted so that their views overlap. I am able to stitch the output pointclouds in real time into a single, larger-FOV pointcloud by using PCL ICP to find the transforms and transforming one pointcloud to match the other. Now I would like to detect planes in this output pointcloud and further detect people with the help of the detected planes and the ground plane. I have implemented this with the PCL libraries once again.
Now issues arise in two cases:
(1) The final stitched output is unorganized, which means a lot of PCL functions cannot use the pointcloud. To overcome this, say I resize my pointcloud to match the final dimensions so that its height is not 1; then I run into (2).
(2) Upon passing the resized pointcloud, the normal-estimation algorithm I am using reports the error "Input not from a projective device, using only XXXX points" (XXXX is a very small fraction of the total number of points in the pointcloud). Any other available normal-estimation algorithm performs really slowly and cannot be used in real-time applications.
Any ideas how to proceed with this? Happy to provide more information.

Estimate 3D Line from Image projections with known Camera Pose and Calibration

I know the principle of triangulation for 3D Point estimation from images.
However, how would you solve the following problem: I have images of a line in 3D space with known camera positions and known calibration. But since I don't know how much of the line, or which segment of it, is seen in each image, I am not sure how to form an equation for the line estimation. See the image (I have more than 2 images available; in all images most of the visible part of the line should be the same, but not exactly the same):
I am thinking of spanning a plane from the camera through the line in the image and intersecting all the planes spanned from each perspective to get an estimate of the line?
However, I don't really know if that's possible or how I could do this.
Thanks for any help.
Cheers
Afraid other answers and comments have it backwards, pun intended.
Backprojecting ("triangulating") image lines into 3D space and then trying to fit them together with some ad-hoc heuristics may be good for an initial approximation.
However, to refine this approximation, you should then assume that a 3D line exists with unknown parameters (a point and a unit vector), plus additional scalar parameters identifying the initial and final points along the line of the segments you observe. Using the projection equations, you then set up an optimization problem whose goal is to find the set of parameters that minimize the projection errors of the 3D line with those parameters onto the images. This is essentially bundle adjustment, but expressed in the language of your problem, and in fact you can use any good software package for bundle adjustment (hint: Ceres) to solve it. The initial approximation computed with some ad-hoc heuristic will be used as the starting point of the bundle adjustment.
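As a rough illustration of that refinement step (the answer suggests Ceres; SciPy's least_squares stands in here), a minimal sketch under simplifying assumptions: each observation is a 2D line fitted to the detected segment, given in normalized homogeneous form (a, b, c) with a^2 + b^2 = 1, each camera is a known 3x4 projection matrix, and the residual is the distance of two projected points on the 3D line to the observed image line. The per-segment endpoint parameters and robust losses of the full bundle-adjustment formulation are omitted, and all variable names are placeholders:

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project a 3D point X (3,) with a 3x4 camera matrix P; return a pixel (2,)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def residuals(params, cameras, image_lines):
    """params = [px, py, pz, dx, dy, dz]: a point on the 3D line and its direction.
    cameras: list of 3x4 projection matrices; image_lines: list of normalized
    homogeneous 2D lines fitted to the observed segment in each image."""
    p, d = params[:3], params[3:]
    d = d / np.linalg.norm(d)  # keep the direction a unit vector
    res = []
    for P, l in zip(cameras, image_lines):
        # Project two points on the 3D line; both point-to-line distances are
        # zero exactly when the projected line coincides with the observed one.
        for t in (0.0, 1.0):
            x = project(P, p + t * d)
            res.append(l @ np.append(x, 1.0))  # signed point-to-line distance
    return np.asarray(res)

# Usage (cameras, image_lines, and the initial guess p0/d0 come from your data
# and from the ad-hoc/plane-intersection initialization):
# x0 = np.hstack([p0, d0])
# sol = least_squares(residuals, x0, args=(cameras, image_lines))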
A plane can be defined by a point and a normal direction.
If both cameras are calibrated and their positions are known, then...
Make two planes: the first from the first camera, using the eye (camera centre) and the projected line; the second plane in the same way from the second camera.
The line you see on the screen, together with the camera centre, gives you that plane in space, because you have the camera position.
What is left is to intersect those two planes.
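A small numpy sketch of this plane-intersection idea, assuming each camera is described by calibration K, rotation R, and translation t (image point x ~ K(RX + t)) and the observed line is given in homogeneous image coordinates; all names here are placeholders for your own data:

```python
import numpy as np

def backprojected_plane(K, R, t, image_line):
    """Plane (in world coordinates) spanned by the camera centre and an image
    line given as a homogeneous 3-vector l = (a, b, c)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])  # 3x4 projection matrix
    return P.T @ image_line                  # plane pi = (n, d): n.X + d = 0

def intersect_planes(pi1, pi2):
    """Intersect two planes (n, d); return a point on the line and its direction.
    Degenerates if the planes are (nearly) parallel."""
    n1, d1 = pi1[:3], pi1[3]
    n2, d2 = pi2[:3], pi2[3]
    direction = np.cross(n1, n2)
    # Solve n1.X = -d1 and n2.X = -d2 for one (minimum-norm) point on the line.
    A = np.vstack([n1, n2])
    b = -np.array([d1, d2])
    point = np.linalg.lstsq(A, b, rcond=None)[0]
    return point, direction / np.linalg.norm(direction)
```

With more than two views you would intersect all the backprojected planes in a least-squares sense rather than just the first pair, and then refine as described in the other answer.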

How do I get my robot coordinates in (x,y,z) using hector_gazebo_plugin and python2?

I have a quadrotor, and I want to read its position in (x,y,z) using hector_gazebo_plugin and Python. For now I am using the libhector_gazebo_ros_gps.so plugin to get the latitude and longitude of the quadrotor.
But I would like to have the position of the quadrotor.
How do I do that?
In ROS, there are several different frames, as BTables was trying to get to. If you're trying to get a position estimate from sensor data using the robot_localization package, familiarize yourself with the different kinds of frames and the ROS message types / data that go into and out of that process.
Normally in ROS there are 3 main frames: map -> odom -> base_link. base_link is the point on the robot that represents it. The odom frame tracks integrated velocity/acceleration sensor updates to give a continuous position; its origin is usually wherever the robot boots up. The map frame is the "real world" location of the robot. It does require an origin position and yaw, because otherwise it's arbitrary.
In your case, it seems like you want to anchor the robot within the longitude/latitude coordinate frame. My recommendation is still to pick an origin in your environment, if you can, otherwise you can use the boot-up location as your origin.
So to do that, I'm assuming your odom->base_link transform is an EKF or UKF node (from robot_localization), using the IMU data from your quadcopter. Then your map->odom transform is another EKF or UKF that also takes in an absolute position in your map frame as an update. (See the example launch file (navsat notes) and yaml config file (navsat notes), and more from the GitHub repo.) You can then feed your fix topic (sensor_msgs/NavSatFix) from the hector_gazebo GPS plugin into navsat_transform_node to get a position update for the estimator above, plus map transforms to the global or UTM coordinate system.
If you don't care about getting your position with respect to the world back out, this gets a bit simpler; otherwise this setup also has the features to report your position back out as lat/long.
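Once the estimator is running, reading (x, y, z) from Python 2 is just a matter of subscribing to its fused odometry output. A minimal rospy sketch, assuming the robot_localization default topic name /odometry/filtered (adjust it to whatever your launch files remap it to):

```python
#!/usr/bin/env python
# Python 2 compatible node: read the fused pose published by robot_localization.
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    # Position of base_link in the estimator's world frame (map or odom).
    p = msg.pose.pose.position
    rospy.loginfo("x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

if __name__ == "__main__":
    rospy.init_node("position_reader")
    rospy.Subscriber("/odometry/filtered", Odometry, callback)
    rospy.spin()
```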

How to load a high vertices and high polygons object using SceneKit?

I started to build a 3D scene using SceneKit. I converted a .obj file into a .scn file and dragged in a camera, but I couldn't see anything, only a plain white view. Then I found that the object I converted has a high polygon and vertex count, almost 21,000 vertices and 7,220 polygons. I think this is the problem. So what can I do? Is it possible to display objects with this many vertices and polygons?
Edit
I have solved this 'problem': simply give the camera's zFar property a higher value.

Saving bounding box coordinates for each frame in a video

I have a video from a camera with humans in the scene. I need to go through each frame of that video and manually save the coordinates of the bounding box of each detected human (go through each frame and draw a square around each human) and the coordinates of the center of the head - so basically top-left, bottom-right, and head-center coordinates. The bounding box has to be a square.
An additional program will then read a file with the coordinates of the square, the center of the head, and the frame number, and extract the boxes as images.
For anybody who has experience with computer vision - is there any open-source software that can accomplish what I am requesting? If not, what technology would you recommend building this tool on? Any starter code?
I don't know of any programs that can do specifically this, but I think it is an easy problem and you can code it yourself in no time.
As you are in the computer vision field, you are probably used to OpenCV. You can use it to extract the frames from a video and to select the box and head center; a minimal sketch is included after the links below.
Here are some links that can help you out:
Extract video frames
Detect mouse events
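
A minimal sketch of such an annotation tool built on those two pieces, assuming placeholder file names and a simple click order (top-left, bottom-right, then head centre); it does not enforce the square constraint, which you would add when writing the box out:

```python
import cv2

clicks = []

def on_mouse(event, x, y, flags, param):
    # Record left clicks: top-left, bottom-right, head centre.
    if event == cv2.EVENT_LBUTTONDOWN:
        clicks.append((x, y))

cap = cv2.VideoCapture("input.mp4")        # placeholder video file
cv2.namedWindow("frame")
cv2.setMouseCallback("frame", on_mouse)

frame_idx = 0
with open("annotations.csv", "w") as out:  # frame, x1, y1, x2, y2, head_x, head_y
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        clicks[:] = []
        while True:
            vis = frame.copy()
            if len(clicks) >= 2:
                cv2.rectangle(vis, clicks[0], clicks[1], (0, 255, 0), 2)
            if len(clicks) >= 3:
                cv2.circle(vis, clicks[2], 3, (0, 0, 255), -1)
            cv2.imshow("frame", vis)
            key = cv2.waitKey(20) & 0xFF
            if key in (ord("n"), ord("q")):  # 'n' = next frame, 'q' = quit
                break
        if len(clicks) >= 3:
            out.write("%d,%d,%d,%d,%d,%d,%d\n" % (
                frame_idx, clicks[0][0], clicks[0][1],
                clicks[1][0], clicks[1][1], clicks[2][0], clicks[2][1]))
        if key == ord("q"):
            break
        frame_idx += 1

cap.release()
cv2.destroyAllWindows()
```

The companion program that extracts the boxes as images would then read this CSV, seek to each frame number, and crop the saved rectangles.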
