Why is NodeMCU triggering gpio in reverse when using Lua? - lua

When using Lua and the GPIO module with my NodeMCU, the high and low values behave in reverse.
I downloaded my build from NodeMCU custom builds: Link
To turn on the blue LED on the ESP8266, normally you set GPIO pin 0 to high. What's happening for me is I have to set it to low.
This is what I'm executing in the serial console to light up the blue LED:
gpio.write(0, gpio.LOW)
If I take this pin and connect it directly to ground, it also lights up the blue LED, which I believe is correct.
What's causing my low and high values to be read incorrectly in NodeMCU?

This is normal - the on-board LED turns on with a LOW value and turns off with a HIGH value.
I've programmed these both in Lua and Arduino and the on-board LED works the same way.
Try attaching a regular LED to the same pin. You'll notice that it behaves the other way around: it will turn on with a HIGH value and off with a LOW value.

HIGH means the pin is set to supply voltage (it is "sourcing" voltage) and LOW means it is set to 0V (it is "sinking" voltage).
Assuming this board is wired like most of them, this is the rough schematic of the LED (note that "0" in gpio.write refers to the GPIO16 hardware pin, per the diagram here):
Diagram of the GPIO16 pin
You can see the diode is "pointing" in the direction that current should flow through it for the diode to light, which is "towards" GPIO16. So to get current to flow you need to set GPIO16 to LOW (0V) so there is a voltage difference. Otherwise both sides of the diode are at 3.3V and no current flows.
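In other words, you turn the LED on by driving the pin low and off by driving it high. A minimal sketch using the stock NodeMCU gpio module (index 0 maps to GPIO16):
-- The on-board LED is active-low: pulling the pin to 0 V lights it,
-- driving it to 3.3 V turns it off.
gpio.mode(0, gpio.OUTPUT)
gpio.write(0, gpio.LOW)   -- LED on
gpio.write(0, gpio.HIGH)  -- LED off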

Related

Understanding AVCaptureDevice exposureDuration, exposureTargetOffset, exposureTargetBias

Coming from Android, I'm trying to understand the AVCaptureDevice API and find a match between the different parameters on iOS and Android.
I'm working with auto-continuous exposure mode.
I'm having trouble with the exposure parameters above:
To my understanding:
exposureDuration - This is the length of time over which the exposure actually happens. It can be normalized to seconds using the value and timescale of this property.
exposureTargetOffset, exposureTargetBias - I'm not sure what these values represent. Are they some kind of correction applied to reach the desired exposure level? What is this exposure target value?
You aren't alone. I'm not a professional photographer either, so it's pretty confusing. I think your gut is leading you in the right direction.
If you set exposureDuration, you're out of "auto-exposure mode" and it'll freeze that exposure duration and current or specified ISO setting. If the light changes, you're stuck with that setting.
If you set the exposureTargetBias, it will mimic a fancy camera and move the automatically calculated exposure settings up or down an exposure value (combination of f-number and exposure duration). There's a standard value for exposure of an image, but sometimes you want to over-expose or under-expose for style or shutter-speed priority. Changing the bias tells the automatic exposure system to aim for a value over or under the "correct" standard value.
Here's a great article explaining it in iOS: https://www.imore.com/camera-api-ios-8-explained
Exposure compensation is expressed in f-stops. +1 f-stop doubles the brightness, -1 f-stop halves the brightness.
Developers can currently set exposure target biases between -8 and +8 for all existing iOS devices. However, Apple warns that that could change in the future.
If you have a new iPhone (11 or newer) you can even change the bias in real time.
Exposure Bias is explained here: https://digital-photography-school.com/using-exposure-bias-to-improve-picture-detail/
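In other words, the relationship is just a power of two: a bias of b stops scales the light reaching the sensor by 2^b. A quick sketch of that arithmetic (the helper name is purely illustrative):
-- Relative brightness for an exposure bias given in f-stops:
-- +1 stop doubles the light, -1 stop halves it.
local function brightnessFactor(biasInStops)
    return 2 ^ biasInStops
end
print(brightnessFactor(1))   -- 2.0  (twice as bright)
print(brightnessFactor(-2))  -- 0.25 (a quarter as bright)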
exposureTargetOffset tells you how well the camera is hitting your requested bias value. Sometimes it just can't adjust enough to darken the image (aiming at the sun, the camera tries to shorten the exposure time and drops the ISO very low) or lighten it (pitch-black closet, the camera tries to expose the image sensor for a long time and bumps up the ISO a ton to gather all the light, resulting in a dark and grainy image). If the camera can't hit the target or is in the process of adjusting to it, the offset tells you how far off it currently is. For video, the exposure is obviously limited by framerate.

BBB: GPIO signal won't stay high

So I have a BeagleBone Black board, and I want to be able to set some GPIO pin from a low value to a high value.
To achieve this I'm using the BlackLib library [1] (a C++ library that offers general access to all of the BeagleBone's pins).
That library has a class called BlackGPIO that offers the functionality I want.
BlackLib::BlackGPIO NSLP_pin(BlackLib::GPIO_61, BlackLib::output, BlackLib::SecureMode);
auto NSLP_pinMode = NSLP_pin.getValue();
NSLP_pin.setValue(BlackLib::low);
I expect these lines of code to set the signal from a low value to a high one (the signal is low by default).
The problem is that the signal goes high only for about ~10ms (measured on a scope), and after that it goes low again.
What am I doing wrong?
How can I set a GPIO pin to a certain value and have it stay there until I change it?
[1] link
The link describes how to export the BBB pins from the command line and set them HIGH or LOW. You can write a small C/C++ function that sends those same commands to the kernel (via sysfs) to export a pin and switch it on or off. I'm using this method in my C application and it works perfectly.
Example code snippet in C to configure the (already exported) pin as an output and drive it high:
FILE *GPIO;

/* Set the pin as an output */
GPIO = fopen("/sys/class/gpio/gpio65/direction", "w");
fprintf(GPIO, "out");
fclose(GPIO);

/* Drive the pin high; it stays high until "0" is written to the value file */
GPIO = fopen("/sys/class/gpio/gpio65/value", "w");
fprintf(GPIO, "1");
fclose(GPIO);

Revolute joints jump out of frame - vrep, bullet engine

We have a simple robotic model with revolute joints in V-REP. The joints are in force/torque mode, and they are controlled via a non-threaded child script using the simulator's simSetJointTargetVelocity function. Collision is enabled in the model, and some toy weights are set to the connecting poles.
The error we have is that the blue part of the joint (the movable part) "wiggles" around and eventually out of the red part of the joint (the fixed case). Here's a screenshot showing the error.
(The blue part of the upper joint should be inside the red part, as is in the lower joint)
How can we fix the moving part of the joint in place so that it doesn't wander around, but only rotates as requested by the velocity setting?
What do you mean by "toy weights" ?
You should keep in mind that physics simulations are relatively fragile and that some restrictions apply. In your case, it seems the masses you set are making the simulation behave strangely. Try to keep the mass ratio between linked objects within about 1:10.
You can also modify the simulation settings to increase its precision. You can do that in the simulation settings dialog (http://www.coppeliarobotics.com/helpFiles/) and in the general dynamics properties dialog. You can also try whether your simulation works better with a physics engine other than Bullet (I suggest Newton).
For more info you should take a look at http://www.coppeliarobotics.com/helpFiles/en/designingDynamicSimulations.htm, especially at the "Design considerations" section.
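For reference, a minimal non-threaded child script of the kind the question describes might look roughly like this (the joint name 'revoluteJoint' and the target velocity are assumptions; adjust them to your model):
-- Legacy V-REP non-threaded child script sketch.
if (sim_call_type == sim_childscriptcall_initialization) then
    jointHandle = simGetObjectHandle('revoluteJoint')  -- assumed joint name
end
if (sim_call_type == sim_childscriptcall_actuation) then
    -- The joint is in force/torque mode with its motor enabled; the engine
    -- applies up to the configured maximum torque to reach this velocity.
    simSetJointTargetVelocity(jointHandle, math.pi / 4)  -- rad/s
end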

Is it possible to use 1 Wi-Fi access point to locate an Android mobile, or would I need 3 to triangulate?

I'm working on a project and I want to see whether I can determine my position using one Wi-Fi access point, because I only have one at home, or whether I would need to go somewhere with at least three.
It depends on how accurate you need the reading to be. A typical Wi-Fi access point has a range of something like 30 m indoors or 90 m outdoors.
So, if you can locate the access point exactly and you don't need better than 90 m resolution (although, technically, that's probably an up-to-180 m error), one should be fine.
If you need more resolution, the more points you can get (assuming of course the points aren't sitting on top of one another in a rack or something) should allow you to refine your position.
Think of it like an XYZ coordinate system. With one Wi-Fi AP you will know your distance from the AP but not the direction. With two APs you can narrow the location down within a 2D plane (up to a mirror ambiguity between the two intersection points). You will need a third AP for the third dimension.
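The "distance from the AP" part is usually estimated from signal strength. A rough sketch using the log-distance path-loss model (the reference power at 1 m and the path-loss exponent are assumptions that have to be calibrated for your AP and environment):
-- Rough distance estimate (in metres) from an RSSI reading.
-- txPower is the expected RSSI at 1 m; n is the path-loss exponent.
local function distanceFromRssi(rssi, txPower, n)
    return 10 ^ ((txPower - rssi) / (10 * n))
end
print(distanceFromRssi(-67, -40, 2.7))  -- roughly 10 m for these example values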

openCV: is it possible to time cvQueryFrame to synchronize with a projector?

When I capture camera images of projected patterns using openCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the image taken does not respect the constant 30Hz refresh of the projector. The result is that typical horizontal band familiar to those who have turned a video camera onto a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (e.g., 'good enough') informal projector-camera sync in openCV?
Below are two solutions I'm considering, but I was hoping this is a common enough problem that an elegant solution already exists. My less-than-elegant thoughts are:
Add a slider control in the cvWindow displaying the video for the user to control a timing offset from 0 to 1/30th second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, theoretically the user would be able to use the slider to reduce the scan line artifact, provided that the timer resolution is sufficient.
After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values for a vertical column of pixels. Naturally this would only work when the subject being photographed contains a fiducial strip of uniform color under smoothly varying lighting.
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think that your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory-mapped region, or whatever your particular driver implementation does).
In any case, the timing of the cvQueryFrame call has no effect on when the image was captured.
So as you suggested, hardware sync is really the only route, unless you have a special camera, like a Point Grey camera, which gives you explicit software control of the frame integration start trigger.
I know this has nothing to do with synchronizing, but have you tried extending the exposure time? Or intentionally "blending" two or more images into one?
