Wrong temperature/humidity results using SHT11 ZIG001 - contiki

I have an issue with Contiki 2.7 (I use Instant Contiki) using Z1 motes and the ZIG001 Ziglet (temperature & humidity).
I tried the code "test-sht11.c" in the examples/z1 directory to read the temperature and the humidity, but I get wrong results:
Rime started with address 227.15
MAC e3:0f:00:00:00:00:00:00 Contiki 2.7 started. Node id is set to 4067.
CSMA ContikiMAC, channel check rate 8 Hz, radio channel 26
Starting 'SHT11 test'
Temperature: 615 degrees Celsius
Rel. humidity: 2650%
Temperature: 615 degrees Celsius
Rel. humidity: 2650%
I saw that the I2C driver has to be disabled (http://sourceforge.net/p/contiki/mailman/message/29682840/), but it still doesn't work; I get the same results.
Code:
#include "contiki.h"
#include "dev/sht11.h"
#include <stdio.h>
PROCESS(test_sht11_process, "SHT11 test");
AUTOSTART_PROCESSES(&test_sht11_process);
PROCESS_THREAD(test_sht11_process, ev, data)
{
static struct etimer et;
static unsigned rh;
PROCESS_BEGIN();
i2c_disable();
sht11_init();
for (etimer_set(&et, CLOCK_SECOND);; etimer_reset(&et)) {
PROCESS_YIELD();
printf("Temperature: %u degrees Celsius\n",
(unsigned) (-39.60 + 0.01 * sht11_temp()));
rh = sht11_humidity();
printf("Rel. humidity: %u%%\n",
(unsigned) (-4 + 0.0405*rh - 2.8e-6*(rh*rh)));
}
PROCESS_END();
}
I'm quite sure it's not a hardware problem (I tried with different ZIG001 Ziglets and different Z1 motes).
Thank you for your help, I'm desperate…
Jibus.

The maintainer of the repository advised using the SHT25 driver instead of the SHT11 one. Those are new files that may not exist if you're stuck on an old Contiki, so you'll have to fetch them manually; if you switch to a newer release, you'll get them out of the box.
If you just drop the files (.c and .h) next to the old sht11 ones (without updating Contiki), don't forget to add sht25.c to the Makefile (after sht11.c), as in the sketch below.
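For example (the exact variable name is an assumption from memory; add sht25.c to whichever source list in the Z1 platform Makefile already contains sht11.c):
CONTIKI_TARGET_SOURCEFILES += sht25.c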
To use the new sensor API (see the new examples for Z1):
PROCESS_THREAD(test_sht25_process, ev, data)
{
  static struct etimer et;   /* must be static: protothread locals don't survive yields */
  int16_t temperature, humidity;

  PROCESS_BEGIN();

  SENSORS_ACTIVATE(sht25);

  while(1) {
    etimer_set(&et, CLOCK_SECOND);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));
    temperature = sht25.value(SHT25_VAL_TEMP);
    printf("Temperature %d.%d °C\n", temperature / 100, temperature % 100);
    humidity = sht25.value(SHT25_VAL_HUM);
    printf("Humidity %d.%d %%RH\n", humidity / 100, humidity % 100);
  }

  PROCESS_END();
}
Also, don't use the sht11 driver for the Z1's built-in temperature sensor like I did; use it only for the Ziglet attached to the mote (this doesn't apply to the OP, but it may help someone else).

Related

Can iOS boot time drift?

I'm using this code to determine when my iOS device last rebooted:
int mib[MIB_SIZE];
size_t size;
struct timeval boottime;

mib[0] = CTL_KERN;
mib[1] = KERN_BOOTTIME;
size = sizeof(boottime);

if (sysctl(mib, MIB_SIZE, &boottime, &size, NULL, 0) != -1) {
    return boottime.tv_sec;
}
return 0;
I'm seeing some anomalies with this time. In particular, I save the long, and days or weeks later I check the saved long against the value returned by the code above.
I'm not sure, but I think I'm seeing some drift. This doesn't make any sense to me. I'm not converting to NSDate, to avoid drift. I would think that the boot time is recorded by the kernel when it boots and isn't computed again; it is just stored. But could iOS be saving the boot time as an NSDate, with whatever drift problems are inherent to that?
While the iOS kernel is closed-source, it's reasonable to assume most of it is the same as the OS X kernel, which is open-source.
Within osfmk/kern/clock.c there is the function:
/*
 *  clock_get_boottime_nanotime:
 *
 *  Return the boottime, used by sysctl.
 */
void
clock_get_boottime_nanotime(
    clock_sec_t  *secs,
    clock_nsec_t *nanosecs)
{
    spl_t s;

    s = splclock();
    clock_lock();

    *secs = (clock_sec_t)clock_boottime;
    *nanosecs = 0;

    clock_unlock();
    splx(s);
}
and clock_boottime is declared as:
static uint64_t clock_boottime; /* Seconds boottime epoch */
and finally the comment to this function shows that it can, indeed, change:
/*
 *  clock_set_calendar_microtime:
 *
 *  Sets the current calendar value by
 *  recalculating the epoch and offset
 *  from the system clock.
 *
 *  Also adjusts the boottime to keep the
 *  value consistent, writes the new
 *  calendar value to the platform clock,
 *  and sends calendar change notifications.
 */
void
clock_set_calendar_microtime(
    clock_sec_t  secs,
    clock_usec_t microsecs)
{
    ...
Update to answer a query from the OP
I am not certain how often clock_set_calendar_microtime() is called, as I am not familiar with the inner workings of the kernel; however, it adjusts the clock_boottime value, and clock_boottime is also initialized in clock_initialize_calendar(), so I would say it can be called more than once. I have been unable to find any call to it using:
$ find . -type f -exec grep -l clock_set_calendar_microtime {} \;
RE my comment above...
"to my understanding, when the user goes into settings and changes the
time manually, the boot time is changed by the delta to the new time
to keep the interval between boot time and system time, equal. but it
does not "drift" as it is a timestamp, only the system clock itself
drifts."
I'm running NTP in my iOS app and talking to Google's time servers.
I feed NTP the uptime since boot (which doesn't pause and is correctly adjusted if some nefarious user starts messing with the system time, which is the whole point of this in the first place), and then add the offset between the uptime since boot and epoch time to my uptime.
inline static struct timeval uptime(void) {
    struct timeval before_now, now, after_now;
    after_now = since_boot();
    do {
        before_now = after_now;
        gettimeofday(&now, NULL);
        after_now = since_boot();
    } while (after_now.tv_sec != before_now.tv_sec && after_now.tv_usec != before_now.tv_usec);

    struct timeval systemUptime;
    systemUptime.tv_sec = now.tv_sec - before_now.tv_sec;
    systemUptime.tv_usec = now.tv_usec - before_now.tv_usec;
    return systemUptime;
}
I sync with the time servers once every 15 minutes and calculate the offset drift (i.e. the system clock drift) every time.
static void calculateOffsetDrift(void) {
    static dispatch_queue_t offsetDriftQueue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT);
    static double lastOffset;

    dispatch_barrier_sync(offsetDriftQueue, ^{
        double newOffset = networkOffset();
        if (lastOffset != 0.0f) printf("offset difference = %f \n", lastOffset - newOffset);
        lastOffset = newOffset;
    });
}
On my iPhone Xs Max the system clock usually runs around 30ms behind over 15 minutes.
Here are some figures from a test I just ran using LTE in NYC:
+47.381592 ms
+43.325684 ms
-67.654541 ms
+24.860107 ms
+5.940674 ms
+25.395264 ms
-34.969971 ms

Convert Win32 FILETIME to Unix timestamp in Delphi 7 [duplicate]

I have a trace file in which each transaction time is represented in the Windows FILETIME format. These time numbers are something like this:
128166372003061629
128166372016382155
128166372026382245
Could you please let me know whether there are any C/C++ libraries in Unix/Linux to extract the actual time (especially the seconds) from these numbers, or should I write my own conversion function?
It's quite simple: the Windows epoch starts at 1601-01-01T00:00:00Z, which is 11644473600 seconds before the UNIX/Linux epoch (1970-01-01T00:00:00Z). Windows ticks are 100 nanoseconds each. Thus, a function that returns seconds since the UNIX epoch looks as follows:
#define WINDOWS_TICK 10000000
#define SEC_TO_UNIX_EPOCH 11644473600LL

unsigned WindowsTickToUnixSeconds(long long windowsTicks)
{
    return (unsigned)(windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
}
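As a quick sanity check, applying this to the first timestamp from the question (the result agrees with the <chrono>-based answer further down; printf requires <stdio.h>):
/* 128166372003061629 / 10000000 = 12816637200 s since 1601-01-01
 * 12816637200 - 11644473600     = 1172163600  s since 1970-01-01,
 * i.e. 2007-02-22 17:00:00 UTC */
printf("%u\n", WindowsTickToUnixSeconds(128166372003061629LL));   /* prints 1172163600 */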
The FILETIME type holds the number of 100 ns increments since January 1, 1601.
To convert this into a Unix time_t you can use the following:
#define TICKS_PER_SECOND 10000000
#define EPOCH_DIFFERENCE 11644473600LL

time_t convertWindowsTimeToUnixTime(long long int input)
{
    long long int temp;
    temp = input / TICKS_PER_SECOND; // convert from 100ns intervals to seconds
    temp = temp - EPOCH_DIFFERENCE;  // subtract number of seconds between epochs
    return (time_t) temp;
}
You may then use the ctime functions to manipulate it.
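For example, a minimal usage fragment building on the function above (the value is the first timestamp from the question; needs <stdio.h> and <time.h>):
time_t t = convertWindowsTimeToUnixTime(128166372003061629LL);
printf("%s", ctime(&t));   /* prints the time in the local time zone */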
(I discovered I can't enter readable code in a comment, so...)
Note that Windows can represent times outside the range of POSIX epoch times, so a conversion routine should return an "out-of-range" indication as appropriate. The simplest method is:
... (as above)
long long secs;
time_t t;

secs = (windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
t = (time_t) secs;
if (secs != (long long) t)   // checks for truncation/overflow/underflow
    return (time_t) -1;      // value not representable as a POSIX time
return t;
New answer for old question.
Using C++11's <chrono> plus this free, open-source library:
https://github.com/HowardHinnant/date
One can very easily convert these timestamps to std::chrono::system_clock::time_point, and also render them in a human-readable format in the Gregorian calendar:
#include "date.h"
#include <iostream>
std::chrono::system_clock::time_point
from_windows_filetime(long long t)
{
using namespace std::chrono;
using namespace date;
using wfs = duration<long long, std::ratio<1, 10'000'000>>;
return system_clock::time_point{floor<system_clock::duration>(wfs{t} -
(sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}))};
}
int
main()
{
using namespace date;
std::cout << from_windows_filetime(128166372003061629) << '\n';
std::cout << from_windows_filetime(128166372016382155) << '\n';
std::cout << from_windows_filetime(128166372026382245) << '\n';
}
For me this outputs:
2007-02-22 17:00:00.306162
2007-02-22 17:00:01.638215
2007-02-22 17:00:02.638224
On Windows, you can actually skip the floor, and get that last decimal digit of precision:
return system_clock::time_point{wfs{t} -
           (sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1})};
2007-02-22 17:00:00.3061629
2007-02-22 17:00:01.6382155
2007-02-22 17:00:02.6382245
With optimizations on, the sub-expression (sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}) will translate at compile time to days{134774}, which will further compile-time-convert to whatever units the full expression requires (seconds, 100-nanoseconds, whatever). As a cross-check, 134774 days × 86400 s/day = 11644473600 s, the same epoch-difference constant used in the other answers. Bottom line: this is both very readable and very efficient.
The solution that divides and adds will not work correctly with daylight saving time.
Here is a snippet that works, but it is for Windows.
time_t FileTime_to_POSIX(FILETIME ft)
{
    FILETIME localFileTime;
    FileTimeToLocalFileTime(&ft, &localFileTime);

    SYSTEMTIME sysTime;
    FileTimeToSystemTime(&localFileTime, &sysTime);

    struct tm tmtime = {0};
    tmtime.tm_year  = sysTime.wYear - 1900;
    tmtime.tm_mon   = sysTime.wMonth - 1;
    tmtime.tm_mday  = sysTime.wDay;
    tmtime.tm_hour  = sysTime.wHour;
    tmtime.tm_min   = sysTime.wMinute;
    tmtime.tm_sec   = sysTime.wSecond;
    tmtime.tm_wday  = 0;
    tmtime.tm_yday  = 0;
    tmtime.tm_isdst = -1;

    time_t ret = mktime(&tmtime);
    return ret;
}
Assuming you are asking about the FILETIME structure, then FileTimeToSystemTime does what you want; you can get the seconds from the SYSTEMTIME structure it produces.
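A minimal sketch of that approach (Windows-only; packing the first timestamp from the question into a FILETIME is an illustrative choice, not part of the original answer):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* first timestamp from the question, packed into a FILETIME */
    ULARGE_INTEGER uli;
    uli.QuadPart = 128166372003061629ULL;

    FILETIME ft;
    ft.dwLowDateTime  = uli.LowPart;
    ft.dwHighDateTime = uli.HighPart;

    SYSTEMTIME st;
    if (FileTimeToSystemTime(&ft, &st))
        printf("%04u-%02u-%02u %02u:%02u:%02u UTC\n",
               st.wYear, st.wMonth, st.wDay,
               st.wHour, st.wMinute, st.wSecond);   /* 2007-02-22 17:00:00 */
    return 0;
}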
Here's essentially the same solution, except this one encodes negative numbers from LDAP properly and lops off the last 7 digits before conversion.
public static int LdapValueAsUnixTimestamp(SearchResult searchResult, string fieldName)
{
    var strValue = LdapValue(searchResult, fieldName);
    if (strValue == "0") return 0;
    if (strValue == "9223372036854775807") return -1;
    return (int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600);
}
If somebody needs to convert it in MySQL:
SELECT timestamp,
       FROM_UNIXTIME(ROUND((((timestamp) / CAST(10000000 AS UNSIGNED INTEGER)))
                           - CAST(11644473600 AS UNSIGNED INTEGER), 0))
         AS Converted
FROM events
LIMIT 100
Also, here's a pure C# way to do it:
(Int32)(DateTime.FromFileTimeUtc(129477880901875000).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
Here's the result of both methods in my immediate window:
(Int32)(DateTime.FromFileTimeUtc(long.Parse(strValue)).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
1303314490
(int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600)
1303314490
DateTime.FromFileTimeUtc(long.Parse(strValue))
{2011-04-20 3:48:10 PM}
Date: {2011-04-20 12:00:00 AM}
Day: 20
DayOfWeek: Wednesday
DayOfYear: 110
Hour: 15
InternalKind: 4611686018427387904
InternalTicks: 634389112901875000
Kind: Utc
Millisecond: 187
Minute: 48
Month: 4
Second: 10
Ticks: 634389112901875000
TimeOfDay: {System.TimeSpan}
Year: 2011
dateData: 5246075131329262904

Corrupted netcdf output file related to chunk size or dimension naming

I noticed that the netcdf file created by the following program is corrupted, i.e., executing ncdump -h out.nc produces errors.
#include <netcdf>

/**
 * This file produces a corrupted nc output file.
 * Compile with `g++ -std=c++11 -o test test.cpp -lnetcdf -lnetcdf_c++4`
 */

// this is the first non-working chunk size. It does work with 1048576.
// 1048576 is representable by exactly 20 bits.
#define CHUNK_SIZE 1048577

using namespace std;
using namespace netCDF;
using namespace netCDF::exceptions;

int main()
{
    typedef std::vector<size_t> vs;
    typedef std::vector<netCDF::NcDim> vd;

    try
    {
        NcFile outFile = NcFile("out.nc", NcFile::replace);

        // create the dimensions complying to the AMBER specs
        NcDim frameDim   = outFile.addDim("frame");
        NcDim atomDim    = outFile.addDim("atom");
        NcDim spatialDim = outFile.addDim("spatial", 3);
        NcDim radiusDim  = outFile.addDim("radius", 1);

        // create the variables
        NcVar coords = outFile.addVar("coordinates", ncFloat, vd({frameDim, atomDim, spatialDim}));
        NcVar radii  = outFile.addVar("radius", ncFloat, vd({frameDim, atomDim}));

        // set up chunking
        vs chunk_coords({1, CHUNK_SIZE, 3});
        vs chunk_radii({1, CHUNK_SIZE, 1});
        coords.setChunking(NcVar::nc_CHUNKED, chunk_coords);
        radii.setChunking(NcVar::nc_CHUNKED, chunk_radii);

        // set up compression
        coords.setCompression(false, true, 1);
        radii.setCompression(false, true, 1);

        return 0;
    }
    catch(NcException& e)
    {
        return -1;
    }
}
The out.nc becomes a valid and working netcdf file when ...
... the CHUNK_SIZE becomes less than 1048577
... CHUNKING is disabled
... the unused "radius" dimension is named differently or not added at all
Note that the largest working CHUNK_SIZE, 1048576, is exactly 2^20.
What causes this behaviour? It is easy to work around by renaming the radius dimension, but I am still curious why this is in any way related to the chunking of HDF5/netcdf.

Extract some YUV frames from large YUV file

I am looking for a Win32 program to copy parts of a large 1920x1080px 4:2:0 .YUV file (ca. 43 GB) into smaller .YUV files. All of the programs I have used, i.e. YUV players, can only copy/save one frame at a time. What is the easiest/most appropriate method to cut YUV raw data into smaller YUV videos (images)? Something similar to the ffmpeg command:
ffmpeg -ss [start_seconds] -t [duration_seconds] -i [input_file] [outputfile]
Here is a minimum working example of the code, written in C++, in case anyone is looking for a simple solution:
// include libraries
#include <fstream>
#include <cstdio>

using namespace std;

#define P420 1.5

const int IMAGE_SIZE = 1920*1080;          // full HD image size in pixels
const double IMAGE_CONVERSION = P420;

int n_frames = 300;     // set number of frames to copy
int skip_frames = 500;  // set number of frames to skip from the beginning of the input file

char in_string[] = "F:\\BigBucksBunny\\yuv\\BigBuckBunny_1920_1080_24fps.yuv";
char out_string[] = "out.yuv";

//////////////////////
// main
//////////////////////
int main(int argc, char** argv)
{
    double image_size = IMAGE_SIZE * IMAGE_CONVERSION;
    long file_size = 0;

    // IO files
    ofstream out_file(out_string, ios::out | ios::binary);
    ifstream in_file(in_string, ios::in | ios::binary);

    // error checking, e.g. check n_frames+skip_frames overflow
    //
    // TODO

    // image buffer
    char* image = new char[(int)image_size];

    // skip frames
    in_file.seekg(skip_frames*image_size);

    // read/write image buffer one by one
    for(int i = 0; i < n_frames; i++)
    {
        in_file.read(image, image_size);
        out_file.write(image, image_size);
    }

    // close the files
    out_file.close();
    in_file.close();

    printf("Copy finished ...");
    return 0;
}
If you have Python available, you can use this approach to store each frame as a separate file:
import sys

src_yuv = open(self.filename, 'rb')
for i in xrange(NUMBER_OF_FRAMES):
    data = src_yuv.read(NUMBER_OF_BYTES)
    fname = "frame" + "%d" % i + ".yuv"
    dst_yuv = open(fname, 'wb')
    dst_yuv.write(data)
    sys.stdout.write('.')
    sys.stdout.flush()
    dst_yuv.close()
src_yuv.close()
Just change the capitalized variables into valid numbers, e.g.
NUMBER_OF_BYTES for one 1080p frame should be 1920*1080*3/2 = 3110400
Or, if you install Cygwin, you can use the dd tool, e.g. to get the first frame of a 1080p clip:
dd bs=3110400 count=1 if=sample.yuv of=frame1.yuv
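To cut out a range of frames rather than just the first one, dd's skip= and count= options can be combined; a sketch reusing the frame counts from the C++ example above (file names are placeholders):
dd bs=3110400 skip=500 count=300 if=sample.yuv of=clip.yuv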
Method 1:
If you are using GStreamer and you just want the first X yuv frames from a large yuv file, then you can use the method below:
gst-launch-1.0 filesrc num-buffers=X location="Your_large.yuv" ! videoparse width=x height=y format="xy" ! filesink location="FirstXframes.yuv"
Method 2:
Calculate the size of one frame and then use the split utility to divide the large file into smaller files.
Use
split -b size_in_bytes Large_file prefix
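For example, to split a 1080p 4:2:0 clip into one file per frame (the input name is a placeholder; GNU split also accepts -d for numeric suffixes):
split -b 3110400 input_1920x1080.yuv frame_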

Can we create an n-channel image in OpenCV? n will be around 20

Presently I am working on finding the disparity of a stereo pair. I have run into a situation where I need to create a 20-channel data set; when I declare a 3-dimensional array it gives an error. Instead, can I create an image of 20 channels so that I can store the data? If I can, what additional conditions do I have to take into account to avoid memory-allocation errors and the like? Creating an image of 20 channels would be even more convenient for me.
The C++ interface of OpenCV provides cv::Mat, which replaces and improves the IplImage type of the C interface. This new type provides several constructors, including the one below, which can be used to specify the desired number of channels through the type parameter:
Mat::Mat(int rows, int cols, int type)
Sample code:
#include <cv.h>
#include <highgui.h>
#include <iostream>
void test_mat(cv::Mat mat)
{
std::cout << "Channels: " << mat.channels() << std::endl;
}
int main(int argc, char* argv[])
{
cv::Mat mat20(1024, 768, CV_8UC(20));
test_mat(mat20);
return 0;
}
OpenCV implements a template class for small matrices whose type and size are known at compile time:
template<typename _Tp, int m, int n> class Matx {...};
You can create a specialization of a partial case of Matx, namely cv::Vec, like those already defined in OpenCV for 1, 2 or 3 "channels", like this:
typedef Vec<uchar, 3> Vec3b;   // 3 channels -- already defined in OpenCV
typedef Vec<uchar, 20> Vec20b; // the one you need
And then declare a matrix of your new element type (20 channels of uchar):
cv::Mat_<Vec20b> myMat;
myMat.at<Vec20b>(i,j)(10) = ..  // access the 10th channel of pixel (i,j)
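Putting the two answers together, a minimal self-contained sketch (OpenCV 2.x headers assumed; the size and values are arbitrary placeholders):
#include <opencv2/core/core.hpp>
#include <iostream>

typedef cv::Vec<uchar, 20> Vec20b;   // 20-channel element type

int main()
{
    // 480x640 matrix whose elements each hold 20 uchar channels
    cv::Mat_<Vec20b> myMat(480, 640, Vec20b::all(0));

    myMat(100, 200)[10] = 255;       // write channel 10 of pixel (100,200)

    std::cout << "Channels: " << myMat.channels() << std::endl;   // prints 20
    std::cout << (int)myMat(100, 200)[10] << std::endl;           // prints 255
    return 0;
}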

Resources