I'm trying to make Pepper do a really basic sequence of movements with Choregraphe: a rotation, then one meter forward, then another rotation, and finally one meter forward.
Most of the times I run the behaviour, the sequence cannot be completed because the robot freezes. Every time I can hear the noise of the motors, but most of the time the robot won't move. Please note that it is on a perfectly smooth surface.
Does anybody know what could be the reason for this problem? Do you have any suggestions on how to fix it?
The version of NAOqi is 2.5.5.5
The robot has a lot of safety features. If the robot can't move because of an obstacle, the Choregraphe box will report that your movement failed (grey output on the Move To box) and cancel your flow. In your program, the flow will only continue if the movement succeeds.
As mcaniot says, the robot has some aggressive safety features and may stop suddenly. However, if you know what you are doing and accept the risk, you can disable the security in the web settings.
Read the specifics of the collision avoidance here:
http://doc.aldebaran.com/2-5/naoqi/motion/reflexes-external-collision.html#pepp-pepper
You can find the setting to disable it in the robot's web settings. To enable/disable it programmatically, use this method:
http://doc.aldebaran.com/2-5/naoqi/motion/reflexes-external-collision-api.html#ALMotionProxy::setExternalCollisionProtectionEnabled__ssCR.bCR
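If you do decide to go that route, here is a minimal sketch using the NAOqi Python SDK (the IP address is a placeholder, and as noted above the deactivation normally has to be allowed in the robot's web settings first, otherwise the call is refused):

from naoqi import ALProxy

PEPPER_IP = "pepper.local"   # placeholder; use your robot's address
motion = ALProxy("ALMotion", PEPPER_IP, 9559)

# "Move" covers the base only; "All" would also cover the arms.
motion.setExternalCollisionProtectionEnabled("Move", False)

motion.moveTo(1.0, 0.0, 0.0)   # one meter forward

# Re-enable the protection when the sequence is done.
motion.setExternalCollisionProtectionEnabled("Move", True)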
I have made an application for the HoloLens with the HoloToolkit. The app gets an array of 3D positions and spawns GameObjects based on these positions.
After walking around the room a bit and changing some real-world objects (maybe opening a door or something), the HoloLens will recognize it, then a window opens saying 'Finding your space' and all of the GameObjects end up in the wrong position.
How can I use SpatialMapping without letting it displace the GameObjects I place?
After walking around the room a bit and changing some real-world objects (maybe opening a door or something), the HoloLens will recognize it
I'm not sure about the details of the changes you made to real-world objects, but some changes in the real world can have a great impact on the lighting conditions. For example, after opening an outward door in a dimly lit room, a lot of daylight will enter the room.
Lighting can make a difference in the visual features that HoloLens detects, so changing lighting conditions might cause HoloLens to lose tracking.
Besides, environments where the visual features change because most objects move will also cause the same problem.
I recommend that you try the steps in "I see a message that says Finding your space" to fix this problem. In short, make sure your artificial lighting conditions and Wi-Fi signal are stable, and try moving more slowly.
I'm currently working on a license plate detection system and need some guidance on how to proceed.
I can capture (via video playback) and, with the help of an open-source library called OpenALPR, display the license plates directly to the terminal. The issue is that it works on a frame-by-frame basis, so it captures the same license plate multiple times. I added a frame-skip variable and now it skips however many frames I want it to, but the issue is still there.
Furthermore, I'd like to distinguish between different license plates if possible but don't know how to work around that; I've attempted employing basic object detection and tracking but failed miserably.
Below is an image of the program running. As seen, it detects a single license plate and displays multiple instances of it. I expect it to move on to the next car and display Plate #1, but unfortunately it does not and continues feeding into Plate #0.
Program Running
The function that actually displays the license plate text is below; really, the first line does all the work. OpenALPR is pretty powerful.
# 'frame' is the current video frame as a NumPy array (e.g. from OpenCV)
results = alpr.recognize_ndarray(frame)
for i, plate in enumerate(results['results']):
    # candidates are sorted by confidence, so the first entry is the best guess
    best_candidate = plate['candidates'][0]
    print('Plate #{}: {} ({}%)'.format(i,
          best_candidate['plate'].upper(),
          best_candidate['confidence']))
I'd like some guidance on how to solve this problem, which is basically distinguishing between different license plates.
It is a general problem without a general solution, because it highly depends on context. Some thoughts:
If it is a video feed, you can track the plate movement; the track will "jump" when it detects another plate. Let's say the maximum optical-flow velocity is 100 px/frame: if it jumps more than this threshold, you can assume it is a new plate.
Depending on your video quality and detector, there may be spurious jumps; I would add a Kalman filter or any simple filter.
Perhaps there is a minimum time lapse between one plate going out of the image and the next arriving. You can use a time threshold to trigger the "changed plate" event, as in the sketch below.
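To make that last idea concrete, here is a rough sketch in Python; the time gap, the similarity cutoff and the report_if_new helper are illustrative assumptions, not part of OpenALPR:

import time
from difflib import SequenceMatcher

MIN_GAP_S = 2.0        # assume at least 2 s pass between two different cars
MIN_SIMILARITY = 0.7   # treat near-identical strings as the same plate

last_plate = None
last_seen = 0.0
plate_index = 0

def report_if_new(plate_text, confidence):
    # Print the plate only when it looks like a new vehicle.
    global last_plate, last_seen, plate_index
    now = time.time()
    same_text = (last_plate is not None and
                 SequenceMatcher(None, plate_text, last_plate).ratio() >= MIN_SIMILARITY)
    if same_text and (now - last_seen) < MIN_GAP_S:
        last_seen = now   # same car still in view; just refresh the timer
        return
    print('Plate #{}: {} ({}%)'.format(plate_index, plate_text, confidence))
    plate_index += 1
    last_plate, last_seen = plate_text, now

In the loop above you would then call report_if_new(best_candidate['plate'].upper(), best_candidate['confidence']) instead of printing directly.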
I am working on an iOS app with tracking. I have implemented Kalman smoothing in order to present a pleasing path. This is working pretty well at this point.
I am having a bit of trouble dealing with the user-not-moving case, though. When the user IS moving we get very good readings back from the CLLocationManager, and even when a reading is a bit off, the Kalman algorithm takes care of it.
When standing still, the CLLocationManager delegate still receives "accurate" locations: they have good accuracy and no unbelievable speed. Looking at the screen with human eyes, it's clear that the user is standing still, with all these points just scattered around, some very close and a few of them far out.
I have tried setting the CLLocationManager property pausesLocationUpdatesAutomatically, but it doesn't seem to work that well. It doesn't always stop when it should, and there has been difficulty restarting the tracking again once the antennas are powered down.
So I'm looking to keep the tracking on the whole time but filter out the jitter in post-processing: determine programmatically that the user is stopped and discard (or ignore) all locations until the user is moving again.
I'm not really sure how to go about this. What algorithm is appropriate to achieve something like this?
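One simple approach is a sliding window over the recent fixes: if they all stay within a small radius of their centroid, treat the user as stationary. A language-agnostic sketch follows (written in Python for brevity; the window size and radius are assumptions to tune against real traces):

import math
from collections import deque

WINDOW = 10      # number of recent locations to consider (assumption)
RADIUS_M = 8.0   # if every fix is within this radius of the centroid, call it stationary (assumption)

recent = deque(maxlen=WINDOW)

def meters_between(a, b):
    # Equirectangular approximation; accurate enough over a few meters.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y)

def is_stationary(new_fix):
    # new_fix is a (latitude, longitude) tuple
    recent.append(new_fix)
    if len(recent) < WINDOW:
        return False
    centroid = (sum(p[0] for p in recent) / len(recent),
                sum(p[1] for p in recent) / len(recent))
    return all(meters_between(p, centroid) <= RADIUS_M for p in recent)

While is_stationary(...) returns True you could drop incoming points (or pin them to the centroid), and resume normal tracking as soon as it flips back to False.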
Is it possible to calculate small distances with CoreMotion?
For example, a user moves his iOS device up or down, or left and right, while holding the device in front of him (landscape).
EDIT
Link as promised...
https://www.youtube.com/watch?v=C7JQ7Rpwn2k position stuff starts at about 23 minutes in.
His summary...
The best thing to do is to try and not use position in your app.
There is a video that I will find to show you. But short answer... no. The margin for error is too great, and the integration that you have to do (twice) just amplifies this error.
At best you will end up with the device telling you it is slowly moving in one direction all the time.
At worst it could think it's hurtling around the planet.
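To illustrate why the double integration blows up, here is a minimal numeric sketch (plain Python, not device code; the bias and noise values are made-up assumptions):

import random

dt = 0.01      # 100 Hz sample rate (assumption)
bias = 0.02    # constant accelerometer bias in m/s^2 (illustrative)
noise = 0.05   # random noise amplitude in m/s^2 (illustrative)

velocity = 0.0
position = 0.0
for _ in range(1000):             # 10 seconds of a device that never moves
    accel = bias + random.uniform(-noise, noise)
    velocity += accel * dt        # first integration
    position += velocity * dt     # second integration

print("apparent drift after 10 s: {:.2f} m".format(position))
# The bias alone contributes roughly 0.5 * 0.02 * 10**2 = 1 m of drift,
# even though the device has not moved at all.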
2020 Update
So, iOS has added the Measure app, which does what the OP wanted. It uses a combination of the accelerometer, gyroscope and magnetometer in the phone, along with ARKit, to get the external reference that I was talking about in this answer.
I'm not 100% certain, but if you wanted to do something like the OP was asking, you should be able to dig into ARKit and find some APIs in there that do what you want.
I am scanning barcodes on iOS with ZBar (ZBarReaderViewController), but when the camera view appears, the image is very blurry; it always takes about two seconds of focusing to become clear. Why does this happen? Is there a setting that can make it better?
You can play around with the flash/torch settings to enable better focusing. Bad/low light is a big enemy of auto-focus. Setting the focus mode to continuous will help with scanning.
Keep in mind the minimum focus distance of the camera; closer than that, it won't focus.