We are using an LMC078 motion controller from Schneider Electric in our project to control 12 motors simultaneously. A SoftMotion encoder will act as the master for all the axes.
We are using the SoMachine software, which uses CODESYS, to program the controller.
We created a SoftMotion encoder under IME_EncIn in the device tree and created a logical drive under it. Please see the image; also find the logical encoder page.
In our motion sequence, we enable camming between the logical encoder master axis and a physical axis.
In the sequence, the encoder master moves forward and we reset the master axis position via MC_SetPosition (we tried using the MC_Home function block, but that was not working). We are setting the position to zero. But we can see that even though the encoder position is reset to 0, at some point in time the encoder overflows, as seen in the video.
Since the slave axes are engaged to the master axis, when the actual position suddenly changes to a negative value, all the slaves stop and try to catch up with the new position, which creates a sudden stop and the whole machine jerks.
What we observe is that even though fActPosition changes when we call MC_SetPosition with 0, some of the internal variables in the SoftMotion encoder (dwActPosition, ...) do not change, because of which fActPosition overflows at some point in time.
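For illustration only, here is a minimal sketch of the behaviour we suspect (an assumption on our side, since we cannot see the library internals): a raw 32-bit counter that keeps incrementing and is never cleared by MC_SetPosition will eventually wrap around, so the position derived from it jumps to a large negative value in a single cycle, even though the visible axis position had been set back to 0.

```python
# Illustration only (assumption about the internals, not SoftMotion code):
# a 32-bit counter that MC_SetPosition does not clear eventually wraps,
# so the derived position jumps negative in one cycle.
counter = 2**31 - 3                      # raw counter near the signed 32-bit limit
for _ in range(6):
    counter = (counter + 1) & 0xFFFFFFFF              # 32-bit wrap-around
    signed = counter - 2**32 if counter >= 2**31 else counter
    print(signed)                        # ... 2147483647, -2147483648, -2147483647 ...
```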
Is there any workaround for this? Currently we are powering off the PLC every 4 hours so that all internal variables reset to their default values.
https://www.youtube.com/watch?v=7EdX3ENiC2c
My goal is to place an object on an ARCore plane in a room, then save the plane's and object's data to a file. After the app exits and starts again, the saved object can be loaded from the file and displayed at the same position as last time.
To persist virtual objects, we could probably use VPS (Visual Positioning Service, not released yet) to localize the device within a room.
However, there's no API to achieve this in the developer preview version of ARCore.
You can save anchor positions in ARCore using Augmented Images.
All you have to do is place your objects wherever you want, go back to one (or more) Augmented Images, and save the positions of the corners of your Augmented Images to a text or binary file on your device.
Then, in the next session, let's say you used one Augmented Image and 4 points (the corners of the image): you load these positions and calculate a transformation matrix between the two sessions using these two groups of 4 points, which are common to both sessions. You need this because ARCore's coordinate system changes in every session depending on the device's initial position and rotation.
At the end, you can calculate the positions and rotations of your anchors in the new session using this transformation matrix. Each anchor will be placed at the same physical location, with an error margin caused by the accuracy of Augmented Image tracking. If you use more points, this error margin will be relatively lower.
I have tested this with 4 points in each group, and it is quite accurate considering my anchors were placed at arbitrary locations, not attached to any Trackable.
In order to calculate the transformation matrix, you can refer to this.
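For what it's worth, here is a minimal sketch of that calculation in Python/NumPy (illustration only, not ARCore API code). It assumes a rigid transform (rotation plus translation, no scale) and uses placeholder corner coordinates; in practice you would feed in the corner positions saved from the old session and the ones detected in the new session.

```python
# Minimal Kabsch-style sketch: estimate R, t such that new ≈ R @ old + t,
# then re-map an anchor position saved in the old session's coordinates.
import numpy as np

def rigid_transform(points_old, points_new):
    P = np.asarray(points_old, dtype=float)      # Nx3 corners, old session
    Q = np.asarray(points_new, dtype=float)      # Nx3 same corners, new session
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Placeholder data: the 4 image corners as seen in each session (world space).
old_corners = [[0, 0, 0], [0.2, 0, 0], [0.2, 0, 0.3], [0, 0, 0.3]]
new_corners = [[1.0, 0.1, 2.0], [1.2, 0.1, 2.0], [1.2, 0.1, 2.3], [1.0, 0.1, 2.3]]

R, t = rigid_transform(old_corners, new_corners)
old_anchor = np.array([0.5, 0.0, 0.5])           # anchor saved in the old session
new_anchor = R @ old_anchor + t                  # same physical spot, new session
print(new_anchor)
```

In a real app you would build R and t from the corner poses ARCore gives you for the detected Augmented Image, apply the same mapping to every saved anchor position, and rotate the saved orientations by R as well.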
We have a simple robotic model with revolute joints in V-REP. The joints are in force/torque mode and are controlled via a non-threaded child script using the simulator's simSetJointTargetVelocity function. Collision is enabled in the model, and some toy weights are set on the connecting poles.
The error we have is that the blue part of the joint (the movable part) "wiggles" around and eventually comes out of the red part of the joint (the fixed case). Here's a screenshot showing the error.
(The blue part of the upper joint should be inside the red part, as is in the lower joint)
How can we fix the moving part of the joint in place so that it doesn't move around, but only rotates as requested by the velocity settings?
What do you mean by "toy weights"?
You should keep in mind that physical simulations are relatively fragile and that some restrictions apply. In your case, it seems the masses you set are making the simulation behave strangely. Try to keep the mass ratio between linked objects below 1/10.
You can also modify the simulation settings to increase its precision. You can do that in the simulation settings dialog (http://www.coppeliarobotics.com/helpFiles/) and in the general dynamics properties dialog. You can also try whether your simulation works better with a physics engine other than Bullet (I suggest Newton).
For more info you should take a look at http://www.coppeliarobotics.com/helpFiles/en/designingDynamicSimulations.htm, especially at the "Design considerations" section.
Summary including updates:
Unity 5 scene using deferred lighting, containing approximately 200 lights spread along 800 units of space.
Most of the lights are point lights, some are spots - the spots work fine.
The point lights cut instantly to dark at around 150-200 units away from the camera.
If large numbers of point lights are moved inside this range they work without issue.
Switching the lights' Render Mode between Auto and Important makes no difference.
If I play a different scene in the editor which allows me to load this scene, it displays correctly! It still does not display correctly when played directly or when running the build.
I've got a basic scene together, which consists of 5 cloned sections of corridor, each with 12 lights, so 60 total.
However, only the first couple of sections show properly, the others are almost completely dark:
(please be kind, I only started this one today :P)
And from the editor, with the end section highlighted to show the distance better:
As you can faintly see, the lights are actually there, just very, very dim:
When you walk down the corridor, they snap to full brightness as you approach.
This is on a build with deferred lighting set, and the pixel light count turned right up, just in case.
I guessed this might relate to LOD or camera range in some way, but I can't seem to get anything to affect the issue currently.
(This scene is actually based around the lights cutting out and switching to emergency lighting so I really do need to be able to control them!)
UPDATE
The lights you can see are spot lights. The lights that are disappearing are all point lights.
You can just see the spot lights uplighting onto the ceiling in the distance, but the main ceiling lights just go completely dark.
UPDATE 2
I've added a frankly silly number of lights into the scene and extended the corridor, to run some tests.
There are now 24 lights per section, and a total of 8 sections, making 192 lights in all:
I wanted to check if more lights would cut out, and they don't. It seems to be entirely based around range - around 150-200 units in my scene.
To confirm this, I also walked to the center of the tunnel to see if the number of visible lights would effectively double, including the ones behind me, and it does.
I also moved all of the sections close to the camera to confirm that all the lights can be displayed at once without problems, and this works as well.
UPDATE 3
I have found a situation where the scene displays correctly!
If I press play in the editor on my main menu scene, then press the UI button that loads this scene, it displays correctly!
It still does not display correctly either when playing the scene directly, or when running the final build.
My solution for this problem is (I'm new, but I found this helped):
Go to quality settings
Search for Pixel Light Count
Raise this number to the actual number of lights in your scene.
This parameter sets the number of light sources that can be displayed at the same time, I think.
I'm writing an app using SceneKit where the client wishes to push the limits of animation in iOS. This particular app has requirements where I'm pushing over 1,500 redraws to the screen. Even with this many redraws, I've locked the FPS at 60, which is great, but when I add all the elements the client wants, the count is pushed to 7,500 redraws (and yes, this isn't a mistake or a joke; this is the redraw number, even though it's almost 50-80 times more than most redraw counts I've seen with SceneKit). At this level of redrawing, the screen contains 1.7 million vertices and around 800k polygons. This is a lot of stuff, and it's really too much for this app to be useful to anyone, because my FPS now drops to 15-30 FPS, which is expected when drawing over 3K geometry elements on screen. What I've done so far:
I clone all nodes; cloning allows me to push the limits of SceneKit. I was able to fit over 1.5K constant CAAnimations on screen, with over 1.8K unique geometries placed at different locations across the screen.
I've forced all windows, views, and screens in the app to be opaque by looping through all windows and setting their opaque property to YES.
My question is this: I can deal with the performance issues, but I'm having a problem with the node cloning. Well, the node cloning works, but each geometry that is pushed to the screen must have a different size, and it seems there is no way to change the geometry of each separate clone. I know that I can change the geometry of a "copied" node (SCNNode *node = [masterNode copy];), and I know I can change the materials property of a cloned node, but is there a way to change the geometry of the cloned node? Apple doesn't give any insight into changing the geometry, but they do talk about changing the materials. Am I to assume that I can't change the size of the clone's geometry? I can change the transform, pivot, rotation, animation, position, etc. of the clone, but the size of the geometry won't change. For my purposes, I just need the "height" of a cylinder to be changeable; I have everything else in good order. And there's no other way to push over 2K redraws to the screen without node cloning: I've tried it without cloning, and the FPS drops below 10 with just 300 redraws when each node and its geometry is declared as its own unique variable.
Lastly, given this same scenario, how much of a performance increase should I expect by moving from SceneKit to Metal? I'm not worried about the math, the level of detail, or the time-consuming work of setting up the rendering pipeline or whatever else might come my way; I'm merely trying to find the best solution for my problem, and I haven't used Metal yet because I'm not sure I'd get different results given how many polygons, vertices, and redraws are required. Thanks.
is there a way to change the geometry of the cloned node
I believe you can change the baked geometry itself, but not the parametric one (not the SCNCylinder). So you can (to change the height):
Scale the node
Change the transformation matrix (so scaling too, just done a different way)
Add a geometry shader modifier that moves the points up/down along the axis you want
Changing the actual geometry kind of defeats the whole purpose of cloning, so I don't think there is a way around that.
Lastly, given this same scenario, how much of a performance increase should I expect by moving from Scenekit to Metal.
A lot. Around 30% from what I've seen, but again it will depend on your setup. Metal comes with iOS 9, and you won't have to do anything to get it for your scene, so just update one of your devices and try it there, to see if it helps!
Out of curiosity: why do you need so many cylinders? Could you not cheat the way they are rendered?
When I capture camera images of projected patterns using OpenCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the image taken does not respect the constant 30 Hz refresh of the projector. The result is the typical horizontal band familiar to anyone who has pointed a video camera at a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (i.e., 'good enough') informal projector-camera sync in OpenCV?
Below are two solutions I'm considering, but I was hoping this is a common enough problem that an elegant solution might exist. My less-than-elegant thoughts are:
Add a slider control in the cvWindow displaying the video for the user to control a timing offset from 0 to 1/30th second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, theoretically the user would be able to use the slider to reduce the scan line artifact, provided that the timer resolution is sufficient.
After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values for a vertical column of pixels. Naturally this would only work when the subject being photographed contains a fiducial strip of uniform color under smoothly varying lighting.
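For the second idea, here is a rough sketch of the detection step (illustration only; shown with the Python cv2 bindings rather than the old C API, and the column index, strip width, and threshold are guesses you would tune for your fiducial strip): grab a frame, look at the V channel along a vertical strip, and re-grab if there is a sharp row-to-row jump, which would indicate the scan band crossed the strip.

```python
# Rough sketch of idea 2 (illustration only; parameters are placeholders):
# re-grab frames until the brightness profile along a vertical strip shows
# no sharp step, i.e. the projector's scan band was not caught mid-refresh.
import cv2
import numpy as np

def has_scan_band(frame_bgr, col=10, strip_width=5, jump_threshold=30.0):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    strip = hsv[:, col:col + strip_width, 2].astype(np.float32)  # V channel strip
    profile = strip.mean(axis=1)            # mean brightness per row in the strip
    deltas = np.abs(np.diff(profile))       # row-to-row change down the column
    return deltas.max() > jump_threshold    # sharp step => scan line in the frame

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
retries = 0
while ok and has_scan_band(frame) and retries < 10:   # give up after a few tries
    ok, frame = cap.read()
    retries += 1
cap.release()
```

As noted above, this only helps when the strip covers a region of roughly uniform color under smoothly varying lighting.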
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think that your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory-mapped region, or similar, depending on your driver implementation).
In any case, the timing of the cvQueryFrame call has no effect on when the image was captured.
So, as you suggested, hardware sync is really the only route, unless you have a special camera, like a Point Grey camera, which gives you explicit software control of the frame-integration start trigger.
I know this has nothing to do with synchronizing, but have you tried extending the exposure time? Or intentionally "blending" two or more images into one?