Inconsistent speed and distance units - RinSim

When changing the distance and speed units of the road model from km and km/h to m and m/s (and also adjusting the speed value of our vehicle accordingly, since it is expressed in those same units), we get inconsistent results. With the default units, the vehicle moves very slowly even when given a decent speed of 70 km/h. With m and m/s, after converting the speed, the vehicle moves very fast, even though its behaviour should be the same.
We are using the same graph of Leuven, and the same methods to construct it, as the Taxi example. On closer inspection, we saw that 1000 is used as the speed for the taxis with the default units (km/h and km, if we are correct).
Did we miss something, or is there something wrong with the units of the parsed graph?
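For reference, here is a minimal sketch (plain Python, not RinSim code) of the conversion we assume should hold when switching units: if both the model's units and the speed value are changed consistently, the distance covered per one-second tick should be identical in both unit systems.

# Sketch of the unit conversion we expect to be equivalent (not RinSim API code).
def kmh_to_ms(speed_kmh):
    # 1 km = 1000 m, 1 h = 3600 s
    return speed_kmh * 1000.0 / 3600.0

vehicle_kmh = 70.0    # our vehicle's speed with the default units
taxi_kmh = 1000.0     # the speed used for the taxis in the Taxi example

print(kmh_to_ms(vehicle_kmh))   # ~19.44 m/s
print(kmh_to_ms(taxi_kmh))      # ~277.78 m/s

# Distance covered in a single 1-second tick should match in both systems:
tick_s = 1.0
tick_h = tick_s / 3600.0
assert abs(vehicle_kmh * tick_h * 1000.0 - kmh_to_ms(vehicle_kmh) * tick_s) < 1e-9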

Related

Temporal autocorrelation in spatial bam (gam)

I am modelling fish depth in a river based on acoustic tag detections (meaning the data are not a perfectly spaced, continuous time series). I predict that depth will differ with spatial location in the river (because different areas have different depths available), with time of day (because depth responds to light), with day of year (for the same reason), and among individuals. The basic model is then
depth ~ s(lon, lat) + s(hour) + s(yday) + s(ID, bs="re")
There are a few million detections, so the model is a bam:
bam(depth ~ s(lon, lat) + s(hour) + s(yday) + s(ID, bs="re"), data = df)
The depth for each individual should be autocorrelated with the previous recording (of course this depends on how recently the fish was last registered, but I don't quite know how to account for the discrete spacing in time).
I understand the rho parameter in bam acts as a sort of corAR1 structure, which I guess can account for the autocorrelation. I also considered including lag(depth, by=ID) as a predictor, and it performed quite well, but I wasn't sure of the validity of this approach.
I followed several breadcrumbs to find that rho can be estimated from a model without a correlation structure: rho <- acf(resid(m1), plot = FALSE)$acf[2]
For each individual I added an ARSTART variable, passed as AR.start = df$ARSTART, to account for the time series differing among individuals, so my model is
m2 <- bam(depth ~ s(lon, lat) + s(yday) + s(hour, bs = "cc") + s(fID, bs = "re"), AR.start = df$ARSTART, discrete = TRUE, rho = rho, data = df)
Everything works swimmingly, and the model with the autocorrelation structure fits better (way better) according to AIC, but the posterior estimates of the effects are wildly inaccurate (or badly scaled). The spatial effects from the lon, lat smoother become extreme (and homogeneous) compared to the model without the structure, in which the spatial smoother seems to capture the spatial variance quite effectively: fish are predicted to be deeper in the deeper areas and shallower in the shallower areas.
I can provide example code if desired, but the question is, essentially: does it make any sense that the autocorrelation structure would change the posterior estimates so dramatically compared to the model without it, and is the temporal autocorrelation structure absorbing all the variance that would otherwise be associated with the spatial effects (which appear to be negated in the model with the autocorrelation structure)?
Some ideas (I cannot figure out which is best):
- blindly follow the AIC without really understanding why the posterior estimates are so odd (huge) or why the spatial effects disappear despite clearly being important based on biological knowledge of the system
- report that we fit an autocorrelation structure to the data and it fit well, but that it didn't change the shape of the relationships, and therefore present the results of the model without the structure
- model without the autocorrelation structure but with an s(lagDepth) term as a fixed effect
- model the change in depth rather than depth itself, which seems to eliminate some of the autocorrelation.
All help greatly appreciated. Thanks so much!

How can I split EEG evoked potentials into different frequency bands using MNE-python?

So far, I have calculated the evoked potentials. However, I would like to see whether there is relatively more activity in the theta band compared to the other bands. When I use mne.Evoked.filter, I get a plot that looks a lot like a sine wave, containing no useful information. Furthermore, the edge regions (time goes from -0.2 s to 1 s) are highly distorted.
Filtering will always result in edge artifacts, especially for low frequencies like theta (which require longer filters). To perform analyses on low-frequency signals you should epoch your data into longer segments than the time period you are interested in.
Also, if you are interested in theta oscillations it is better to perform a time-frequency analysis than to filter the ERP. The ERP contains only time-locked activity, while a time-frequency representation will show theta even in time periods where it was not phase-aligned across trials. You may want to follow this tutorial, for example.
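As a rough sketch of that time-frequency approach with MNE (the epochs object, channel index, and frequency range below are assumptions; you would build epochs from your own raw data with a window longer than the period of interest so the edges can be cropped away):

import numpy as np
from mne.time_frequency import tfr_morlet

# `epochs` is assumed to be an mne.Epochs object cut with generous padding,
# e.g. -1.0 to 2.0 s, even if you only care about -0.2 to 1.0 s.
freqs = np.arange(4.0, 31.0, 1.0)   # 4-30 Hz covers theta through beta
n_cycles = freqs / 2.0              # fewer wavelet cycles at low frequencies

power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
                   return_itc=False, average=True)

# Baseline-correct, then crop off the padded edges that absorb the artifacts.
power.apply_baseline(baseline=(-0.5, -0.2), mode="logratio")
power.crop(tmin=-0.2, tmax=1.0)
power.plot([0])   # time-frequency plot for the first channel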
Also make sure to see the many rich tutorials and examples in the MNE docs.
If you have any further problems we use Discourse now: https://mne.discourse.group/

Stationary and nonstationary time series data

This question might sound trivial to some, but I just got interested in time series analysis and have been reading up about it for the last couple of days. However, I have yet to understand how to identify stationary versus non-stationary time-series data. I generated some two-dimensional time-series data using a tool I found. Plotting it, I get something like this image:
Looking at the plot, I think it shows some seasonality (with the spike in the middle) and I would say it's not stationary. However, doing the stationarity test as described in Machine Learning Mastery, it passed (the test says it's stationary). Now I'm confused; maybe I didn't understand what seasonality and trend mean in time-series data. Am I wrong in thinking that the spikes hint at seasonality?
Judging from the plot, your data looks like white noise, which is a type of stationary random data. A stationary time series has constant mean (in your case zero), variance, autocorrelation, etc. across time.
Seasonality refers to regular patterns that occur at specific calendar intervals within a year, for example quarterly, monthly, or daily patterns. Accordingly, large spikes in a plot do not usually indicate seasonality.
In contrast, the following time series (using R) exhibits an upward trend, monthly seasonality, and increasing variance:
plot(AirPassengers)
In sum, the AirPassengers time series is not stationary.
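As a rough sketch of the same kind of check in Python (statsmodels implements the Augmented Dickey-Fuller test that the Machine Learning Mastery article uses), white noise rejects the unit-root null while a random walk does not:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
white_noise = rng.normal(size=500)      # stationary: constant mean and variance
random_walk = np.cumsum(white_noise)    # non-stationary: a unit-root process

for name, series in [("white noise", white_noise), ("random walk", random_walk)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.4f}")

# Expected outcome: a tiny p-value for the white noise (the test calls it
# stationary) and a large p-value for the random walk (non-stationary).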

Objective-C normalise dB level from iPhone's built-in mic

I am building a dB meter as part of an app I am creating. I have got it receiving the peak and average powers from the mic on my iPhone (values ranging from -60 to 0.4), and now I need to figure out how to convert these power levels into dB levels like the ones in this chart: http://www.gcaudio.com/resources/howtos/loudness.html
Does anyone have any idea how I could do this? I can't figure out an algorithm for the life of me, and it is kind of crucial, as the whole point of the app is to do with real-world audio levels, if that makes sense.
Any help will be greatly appreciated.
Apple has done a pretty good job of making the frequency response of the microphone flat and consistent between devices, so the only thing you'll need to determine is a calibration point. For this you will require a reference audio source and a calibrated sound pressure level (SPL) meter.
It's worth noting that sound pressure measurements are often reported on the A-weighting scale, which is frequency-weighted for the human auditory system. To measure this, you will need to apply the relevant filter curve to the results taken from the microphone.
Also be aware of the difference between peak and mean (in this case RMS) measurements.
As far as I can tell from looking at the documentation, the "power level" returned from an AVAudioRecorder (assuming that's what you're using – you didn't specify) is already in decibels. Here is the relevant excerpt from the documentation for averagePowerForChannel:
Return Value
The current average power, in decibels, for the sound being recorded. A return value of 0 dB indicates full scale, or maximum power; a return value of -160 dB indicates minimum power (that is, near silence).
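Putting the two answers together: the reading is already in dB, but relative to full scale (dBFS), so the remaining step is a constant calibration offset determined against an SPL meter. A rough sketch of that arithmetic (written in Python for brevity; all numbers below are made-up example values, and A-weighting is ignored):

# Hypothetical calibration: play a steady reference tone, note what the
# calibrated SPL meter shows and what the app reads at the same moment.
METER_READING_DB_SPL = 94.0    # assumed SPL meter reading for the reference tone
APP_READING_DBFS = -26.0       # assumed reading from averagePowerForChannel

CALIBRATION_OFFSET = METER_READING_DB_SPL - APP_READING_DBFS   # 120.0 here

def dbfs_to_db_spl(dbfs):
    # Map a dBFS reading from the mic to an approximate dB SPL value.
    # This ignores A-weighting, which would require filtering the signal first.
    return dbfs + CALIBRATION_OFFSET

print(dbfs_to_db_spl(-60.0))   # ~60 dB SPL with this made-up offset
print(dbfs_to_db_spl(0.0))     # ~120 dB SPL (full scale)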

What are the latitude/longitude decimal values?

I want to develop a geolocation-based app. Currently I am reading the phone's position and getting the longitude and latitude values.
For example, these are currently:
Lat 50.89808359344173
Lon 4.482628385834053
I have 14 decimal places for the latitude and 15 for the longitude.
What is the meaning of these places?
A man being shown around a natural history museum by a guide sees a fossilised tyrannosaurus skeleton. He asks the guide how old it is, and the guide tells him it's 67 million and 13 years old.
"Wow, how can you be so accurate?"
"Well, I was told it was 67million years old when I started here, and that was 13 years ago".
The precision of your result is similar to this. Or to calculating your taxes on a transaction and realising you need to pay USD 1492.19482019331. The software has done some calculations based on GPS and/or Wi-Fi location signals, and that's the answer it came up with, but it isn't really so precise as to be accurate to the nearest atom's width. It's false precision: precision in a result that is greater than the accuracy of that result.
Just how accurate it is will depend on which services are turned on (GPS, or Wi-Fi-based only), the availability of distinctive Wi-Fi signals (which makes the Wi-Fi-based location better), whether you are indoors or in a built-up area (which makes GPS less accurate), and the speed at which you are moving. It could be better than a few meters or as bad as 1 km. There's also a difference between absolute and relative accuracy, so the position might be, say, 30 m out, but after the phone has moved 100 m it could still be 30 m out while having tracked that relative movement of 100 m to within a meter or so.
The phone might not only be inaccurate, but also inaccurate in trying to judge how inaccurate it is! This is a reason in itself not to round the figures: by how much would you round them? It's also a good general principle to do any rounding at the last possible point; better to have false precision greater than your accuracy than to introduce a rounding error by rounding early and actually lose accuracy.
If you're storing the positions in a memory-conscious way, or presenting the figures directly, then 5 or 6 decimal places is probably more than enough to be sure you're being more precise than you can really claim to be.
If you're plotting a point on a map, you may as well just feed in the ridiculously over-precise figures you get; it won't hurt anything, and it's simpler than trying to second-guess just how accurate you really are.
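To put that false precision in perspective, a rough back-of-the-envelope calculation (plain Python; 111,320 m is the approximate length of one degree of latitude, and longitude degrees shrink with the cosine of the latitude):

import math

METERS_PER_DEGREE_LAT = 111_320.0   # approximate; varies slightly with latitude
lat = 50.89808359344173

for decimals in (1, 2, 5, 6, 14):
    step = 10.0 ** -decimals
    print(f"{decimals:2d} decimal places ~ {METERS_PER_DEGREE_LAT * step:.10f} m of latitude")

# One degree of longitude is shorter away from the equator:
meters_per_degree_lon = METERS_PER_DEGREE_LAT * math.cos(math.radians(lat))
print(f"At latitude {lat:.2f}, 6 decimal places of longitude ~ "
      f"{meters_per_degree_lon * 1e-6:.4f} m")

So the 5th or 6th decimal place already corresponds to roughly a meter or a tenth of a meter, well beyond what the phone can genuinely resolve.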
They are just the decimal portion of the degrees. It's a lot easier to do calculations with decimal degrees than with degrees, minutes, and seconds.
So for example: 32.5° = 32° 30'.
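A small sketch of that conversion (decimal degrees to degrees, minutes, seconds and back; the helper names are just for illustration):

def decimal_to_dms(deg):
    # Split decimal degrees into (degrees, minutes, seconds).
    d = int(deg)
    minutes_full = abs(deg - d) * 60.0
    m = int(minutes_full)
    s = (minutes_full - m) * 60.0
    return d, m, s

def dms_to_decimal(d, m, s):
    # Combine degrees, minutes, seconds back into decimal degrees.
    sign = -1.0 if d < 0 else 1.0
    return d + sign * (m / 60.0 + s / 3600.0)

print(decimal_to_dms(32.5))               # (32, 30, 0.0)  ->  32° 30'
print(dms_to_decimal(32, 30, 0.0))        # 32.5
print(decimal_to_dms(50.89808359344173))  # (50, 53, 53.10...)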

Resources