Reading a negative floating point value from an energy meter using MicroPython on an ESP8266

I am trying to read the live power from an EM6400 energy meter out of two holding registers and convert it to a float value using the following MicroPython code:
zedregister_value = modbus_obj.read_input_registers(zedCfg["zedmeterid"],zedCfg["zedmeterregstart"],zedCfg["zedmeterregcount"])
zedMeterReadpower = int(struct.unpack('<f', struct.pack('<h', int(zedregister_value[1])) + struct.pack('<h', int(zedregister_value[0])))[0])
This works for positive values, but when the meter shows a negative value, the code above still returns a positive value.
Note: since I am using an ESP8266, I am not able to use the pymodbus library due to memory restrictions.
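For reference, a minimal sketch of the register-combining step that preserves the sign. The key point is to treat each 16-bit register as unsigned when repacking; packing with a signed format ('<h') corrupts register values whose top bit is set, which is exactly the case for negative IEEE-754 floats. The big-endian, high-word-first ordering here is an assumption; check the EM6400 documentation for the actual word order.

```python
import struct

def registers_to_float(high_word, low_word):
    # Pack both registers as UNSIGNED 16-bit words ('>HH'), then
    # reinterpret the 4 bytes as a big-endian IEEE-754 float ('>f').
    raw = struct.pack('>HH', high_word & 0xFFFF, low_word & 0xFFFF)
    return struct.unpack('>f', raw)[0]
```

Note also that wrapping the unpacked result in int() truncates the fractional part; keep the float if you need the decimals.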


How to play back a Lidar pointcloud from a bag file in ROS/rviz?

I got a bag dataset and want to play back the messages containing Velodyne VLP-16 points, but I got an incomplete result.
I've tried:
- increasing the fps in rviz
- using timestamps from another simulation
I expected to get an uncut result: a full 360° ring of lidar beam points.
This is the result I got in rviz
As you can see, the points are available but they are not visible long enough, because the points are not shown simultaneously. The reason is that a single message does not contain all points of a complete (360°) beam. A beam typically gets split across several messages, of which only the latest is shown by default. Checking the rviz point cloud documentation you will find a parameter called decay time:
The amount of time to keep a cloud/scan around before removing it. A value of 0 means to only display the most recent data.
Try increasing the value of this parameter in rviz, then you should be able to see more points simultaneously.

Measure (frequency-weighted) sound levels with AudioKit

I am trying to implement an SLM app for iOS using AudioKit. Therefore I need to determine different loudness values to a) display the current loudness (averaged over a second) and b) do further calculations (e.g. to calculate the "Equivalent Continuous Sound Level" over a longer time span). The app should be able to track frequency-weighted decibel values like dB(A) and dB(C).
I do understand that some of the issues I'm facing are related to my general lack of understanding in the field of signal and audio processing. My question is how one would approach this task with AudioKit. I will describe my current process and would like to get some input:
- Create an instance of AKMicrophone and an AKFrequencyTracker on this microphone.
- Create a Timer instance with some interval (currently 1/48_000.0).
- Inside the timer: retrieve the amplitude and frequency. Calculate a decibel value from the amplitude with 20 * log10(amplitude) + calibrationOffset (the calibration offset will be determined per device model with the help of a professional SLM). Calculate offsets for the retrieved frequency according to the frequency weighting (A and C) and apply these to the initial dB value. Store the dB, dB(A) and dB(C) values in arrays.
- Calculate the average of the arrays over the given timeframe (1 second).
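The per-frequency weighting step above can be sketched with the standard IEC 61672 analog A-weighting curve. This is a plain-Python illustration of the math (not AudioKit API; a Swift port would be mechanical):

```python
import math

def a_weighting_db(f):
    """A-weighting correction in dB for frequency f in Hz
    (IEC 61672 analog curve; approximately 0 dB at 1 kHz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # +2.0 normalizes the curve to 0 dB at 1 kHz

def weighted_db(amplitude, f, calibration_offset=0.0):
    # dB from amplitude as described in the question, plus the A-weighting offset.
    return 20.0 * math.log10(amplitude) + calibration_offset + a_weighting_db(f)
```

The C-weighting curve has the same structure with fewer pole pairs, so the same approach applies for dB(C).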
I read somewhere else that using a Timer is not the best approach. What else is there that I could use for the "sampling"? What exactly is the frequency of AKFrequencyTracker? Will this frequency be sufficient to determine dB(A) and dB(C) values, or will I need an AKFFTTap for this? How are the values retrieved from the AKFrequencyTracker averaged, i.e. what time frame is used for the RMS?
Possibly related questions: Get dB(a) level from AudioKit in swift, AudioKit FFT conversion to dB?

How do I find the required maxima in acceleration data obtained from an iPhone?

I need to find the number of times the accelerometer value stream attains a maximum. I made a plot of the accelerometer values obtained from an iPhone against time, using the CoreMotion method to obtain DeviceMotionUpdates. While the data was being recorded, I shook the phone 9 times (where each extremity was one of the highest points of acceleration).
I have marked the 18 (i.e. 9*2) times when the acceleration attained a maximum in red boxes on the plot.
But, as you see, there are some local maxima that I do not want to consider. Can someone direct me towards an idea that will help me detect only the maxima of importance to me?
Edit: I think I have to use a low pass filter. But, how do I implement this in Swift? How do I choose the frequency of cut-off?
Edit 2:
I implemented a low pass filter and passed the raw motion data through it and obtained the graph as shown below. This is a lot better. I still need a way to avoid the insignificant maxima that can be observed. I'll work in depth with the filter and probably fix it.
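A low-pass filter of the kind mentioned above can be as simple as a one-pole (exponential) filter. Here is a generic Python sketch of the idea (a Swift port is mechanical); the cutoff frequency is a tunable assumption, and a few Hz is a common starting point for hand-shake motion:

```python
import math

def low_pass(samples, cutoff_hz, sample_rate_hz):
    """One-pole low-pass: y[i] = y[i-1] + alpha * (x[i] - y[i-1])."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # filter time constant
    dt = 1.0 / sample_rate_hz                # sample period
    alpha = dt / (rc + dt)                   # smoothing factor in (0, 1)
    out = []
    y = samples[0] if samples else 0.0
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out
```

A lower cutoff smooths more of the insignificant maxima away, but also adds lag to the signal, so the cutoff is best tuned against recorded shake data.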
Instead of trying to find the maxima, I would look for cycles. In particular, we note that the (main) minima seem to be a lot more consistent than the maxima.
I am not familiar with Swift, so I'll lay out my idea in pseudocode. Suppose we have our values in v[i] and the derivative in dv[i] = v[i] - v[i - 1]. You can use any other differentiation scheme if you get a better result.
I would try something like
cycles = []            // list of (start, end) pairs
cstart = -1
cend = -1
v_threshold = 1.8      // completely guessing these figures from the plot
dv_threshold = 0.01
for i in 1 .. len(v) - 1:
    if cstart < 0 and v[i] > v_threshold and dv[i] < dv_threshold then:
        // cycle is starting here
        cstart = i
    else if cstart >= 0 and v[i] < v_threshold and dv[i] < dv_threshold then:
        // cycle ended
        cend = i
        cycles.add(pair(cstart, cend))
        cstart = -1
        cend = -1
    end if
end for
Now you note in comments that the user should be able to shake with different force and you should still be able to recognise the motion. I would start with a simple 'hard-coded' case like the one above and see if you can get it to work sufficiently well. There are a lot of things you could try to get a variable threshold, but you will nevertheless always need one. However, from the data you show I strongly suggest at least limiting yourself to looking at the minima and not the maxima.
Also: the code I suggested is written assuming you have the full data set; however, you will want to run this in real time. This will be no problem, and the algorithm will still work (that is, the idea will still work, but you'll have to code it somewhat differently).
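A runnable Python version of the pseudocode above, for anyone who wants to experiment (the thresholds remain guesses to tune against real data):

```python
def find_cycles(v, v_threshold=1.8, dv_threshold=0.01):
    """Detect (start, end) index pairs where the signal rises above
    v_threshold with a flattening slope and later falls back below it."""
    cycles = []
    cstart = -1
    for i in range(1, len(v)):
        dv = v[i] - v[i - 1]           # simple backward difference
        if cstart < 0 and v[i] > v_threshold and dv < dv_threshold:
            cstart = i                  # cycle is starting here
        elif cstart >= 0 and v[i] < v_threshold and dv < dv_threshold:
            cycles.append((cstart, i))  # cycle ended
            cstart = -1
    return cycles
```

On a synthetic trace with two peaks above the threshold, this reports exactly two cycles, which is the behaviour the pseudocode intends.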

FSK demodulation with GNU Radio

I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text data generated by a device called GeoStamp Audio. The device generates audio from GPS data fed into it in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is an FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1's and 0's, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1's and 0's, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.
So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in the frequency domain so that mark and space are at ±500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (the middle between 2200 and 1200 Hz) ends up at 0 Hz, and is then filtered by a low pass (gain = 1.0, stopband starts at 1 kHz, passband ends at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth still present in the signal is much lower than the sample rate, so you could also just downsample without losses (set the decimation to something higher, e.g. 16), but for the sake of analysis, we won't do that.
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just trying the polyphase clock recovery instead of Mueller & Müller; choosing almost any "somewhat round" pulse shape could work.
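The quadrature-demod "sign decision" idea can be illustrated numerically. This pure-Python sketch shows the math the GNU Radio Quadrature Demod block performs (instantaneous frequency from the phase difference between consecutive complex samples); the tone generation stands in for the shifted mark/space signal:

```python
import cmath, math

def quadrature_demod(samples, sample_rate):
    """Instantaneous frequency in Hz:
    f[n] = angle(x[n] * conj(x[n-1])) * fs / (2*pi)."""
    return [
        cmath.phase(samples[n] * samples[n - 1].conjugate())
        * sample_rate / (2 * math.pi)
        for n in range(1, len(samples))
    ]

fs = 48000.0
# After the frequency shift, mark and space sit at -500 Hz and +500 Hz.
mark = [cmath.exp(2j * math.pi * -500.0 * n / fs) for n in range(100)]
space = [cmath.exp(2j * math.pi * 500.0 * n / fs) for n in range(100)]
```

The demod output is negative for one tone and positive for the other, so slicing at 0 recovers the bits; the remaining work is the clock recovery discussed above.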
For fun and giggles, I clicked together a quick demo demod (GRC here):
which shows:

How to update the TI SensorTag to increase the accelerometer and gyroscope output rate

Hi, I am using a TI SensorTag and I want to draw the path of the moving sensor using the values of the accelerometer and gyroscope.
I have found pitch and roll with these equations:
pitch = (atan2(-ACy, ACz)*180.0)/M_PI;
roll = (atan2(ACx, sqrt(ACy*ACy + ACz*ACz))*180.0)/M_PI;
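For reference, the same equations as runnable Python (the axis conventions are the asker's; ACx/ACy/ACz are the accelerometer components in g):

```python
import math

def pitch_roll(ACx, ACy, ACz):
    # Same formulas as above, with the radians-to-degrees conversion
    # done via math.degrees instead of multiplying by 180/pi.
    pitch = math.degrees(math.atan2(-ACy, ACz))
    roll = math.degrees(math.atan2(ACx, math.sqrt(ACy * ACy + ACz * ACz)))
    return pitch, roll
```

For a device lying flat (ACx=0, ACy=0, ACz=1 g) both angles are 0°; tilting fully onto the x axis gives a roll of 90°.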
But the sensor gives only 3-4 data values per second, and for accurate path drawing I need 20-30 values per second.
Is there any way to update the sensors or the firmware of the SensorTags?
Follow the answer given at How to modify the TI SensorTag Firmware to advertise indefinitely? by @Mathijs to update the TI SensorTags.
Hello @Gorav Grover, please download the Multitool application. It provides an option to upgrade your sensors. For the update you also need the img/A and img/B files for your firmware.
If you open the SensorTag project in IAR Embedded Workbench, in the file SensorTag.c you will see the following constants at the beginning of the file. Each of these constants sets an update interval:
// How often to perform sensor reads (milliseconds)
#define TEMP_DEFAULT_PERIOD 1000
#define HUM_DEFAULT_PERIOD 1000
#define BAR_DEFAULT_PERIOD 1000
#define MAG_DEFAULT_PERIOD 2000
#define ACC_DEFAULT_PERIOD 1000
#define GYRO_DEFAULT_PERIOD 1000
If you can't gain access to IAR Embedded Workbench or don't want to deal with embedded programming, you can still use the Accelerometer Period characteristic (0xAA13) in the Accelerometer service and perform a Write Characteristic of a byte value from 0-255 from your application. 100 is the default value (1 notification per second), and if you write the value 10, you will receive 10 notifications per second.
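From the numbers above, the period byte appears to be in units of 10 ms (100 → 1000 ms → 1 notification/s; 10 → 100 ms → 10 notifications/s). Under that assumption, converting a desired notification rate into the byte to write could look like this sketch (the hardware's minimum achievable period still applies regardless of what you write):

```python
def rate_to_period_byte(notifications_per_second):
    """Byte for the Accelerometer Period characteristic (0xAA13),
    assuming one unit = 10 ms (an assumption inferred from the
    documented defaults, not from the firmware source)."""
    period_ms = 1000.0 / notifications_per_second
    byte = int(round(period_ms / 10.0))
    return max(1, min(255, byte))  # characteristic takes 0-255; avoid 0
```

For the 20-30 values per second the question asks for, this would mean writing a value around 4-5.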
