raw.plot() not plotting time series in eeg data - mne-python

I am using MNE to read my EEG data. When I call raw.plot(), I get an image of this type rather than a time series, and I am unable to interpret it. Please help.

See this thread: https://mne.discourse.group/t/plotting-eeg-data-too-big-and-scroll-does-not-work/2533. I had the same problem, and passing scalings='auto' actually helped. Hope it helps you too :)
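For context on why raw.plot(scalings='auto') helps: EEG data is stored in volts, so the raw values are tiny (tens of microvolts), and a display scale that doesn't match the data flattens or clips the traces; 'auto' tells MNE to derive the scalings from the data itself. As a rough, illustrative sketch of the idea only (this is not MNE's actual algorithm, and auto_scaling is a made-up helper):

```python
# Illustrative only: the idea that a display scale should come from the
# data's own amplitude. EEG values in volts are tiny (~1e-5 V), so a fixed
# display scale that assumes larger values makes the traces look flat.

def auto_scaling(channel_data):
    """Toy stand-in: use half the peak-to-peak amplitude as the display scale."""
    return (max(channel_data) - min(channel_data)) / 2

# Synthetic EEG-like channel alternating between +20 uV and -20 uV (in volts).
eeg_volts = [2e-5 * ((-1) ** i) for i in range(10)]
scale = auto_scaling(eeg_volts)
print(scale)  # 2e-05, i.e. a display range matched to the signal amplitude
```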

Related

Why do some time series models (ARIMA, BATS, ETS) generate flat lines for forecasts?

What is the reason behind flat-line forecasts from some time series models? Is it because they assume a constant trend for the forecast? Please help me with the logical reasoning; I have read some material online but wasn't convinced.
Thanks
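One concrete reason: some models' forecast functions are constant by construction. Simple exponential smoothing (the level-only, no-trend, no-seasonality member of the ETS family) forecasts every horizon with the same final smoothed level, and an ARIMA(0,1,0) random walk forecasts every horizon with the last observed value, so both plot as horizontal lines. A minimal, dependency-free sketch of the former (the function and data are illustrative):

```python
def ses_forecast(series, alpha=0.3, horizon=5):
    """Simple exponential smoothing: level-only ETS.
    The h-step forecast is the final smoothed level, for every h."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level  # update the smoothed level
    return [level] * horizon  # same value at every horizon -> flat line

data = [10, 12, 11, 13, 12, 14, 13]
fc = ses_forecast(data)
print(fc)  # five identical values: the forecast plots as a horizontal line
```

Models with a trend component (e.g., Holt's method, or ARIMA with drift) extrapolate a sloped line instead, which is why only some family members go flat.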

OpenCV Background Model Component Extraction

I am working with the BackgroundSubtractorMOG2 class in OpenCV (Python) and am trying to extract the individual components of the background model. As I understand it, each pixel is modeled by a mixture of a varying number of Gaussian distributions, each defined by a mean and a variance. So how can I determine what all of these components (means and variances) are after feeding the background subtractor a given number of frames?
The documentation here:
https://docs.opencv.org/3.4.3/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html#adbb1d295befaff88a54a929e50aaf879
does not seem to discuss this.
This information must be contained somewhere in the background subtractor object. Does anyone know how to get to it?
Thanks!
Edit: A little more searching has led me to believe that the cv2.Algorithm class is required to read the parameters from the BackgroundSubtractorMOG2 object. I think the two questions posed here:
http://answers.opencv.org/question/28008/how-to-derive-from-algorithm/
Reading algorithm parameters from file in OpenCV
are similar to what I am asking, but I am unable to interpret the answers. I thought the solution would be something along the lines of:
Parameters = cv2.Algorithm.read('name_of_backgroundsubtractorMOG2_object')
but this returns the error: 'Required argument 'fn' (pos 1) not found'.
Edit 2: Unfortunately I think this question has been answered here:
Save opencv BackgroundSubtractorMOG to file?
Short answer: It cannot be done! Sad!
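For intuition about what the inaccessible state looks like: MOG2 maintains, per pixel, a mixture of Gaussians, each with a weight, mean, and variance, updated online with a learning rate. A simplified single-Gaussian sketch of that update (illustrative only, not OpenCV's internal code; the real model keeps several weighted components per pixel):

```python
# Simplified sketch of the per-pixel state MOG2 maintains internally:
# a running Gaussian (mean, variance) updated with learning rate alpha.
# The real MOG2 keeps a weighted *mixture* of several such Gaussians.

def update_gaussian(mean, var, pixel, alpha=0.05):
    """Running update of one Gaussian component for one pixel."""
    diff = pixel - mean
    mean = mean + alpha * diff
    var = var + alpha * (diff * diff - var)
    return mean, var

mean, var = 128.0, 15.0 ** 2                 # initial model for one pixel
for pixel in [130, 126, 129, 131, 127]:      # stable background observations
    mean, var = update_gaussian(mean, var, pixel)

# MOG2-style background test: squared distance vs. varThreshold * variance
# (OpenCV's default varThreshold is 16).
pixel_200_is_bg = (200 - mean) ** 2 < 16 * var
print(round(mean, 2), pixel_200_is_bg)  # mean stays near 128; 200 is foreground
```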

encogmodel selectmethod configuration

Could someone point me to examples of how to configure EncogModel with selectMethod? This is an overloaded method: the first overload takes just a dataset and a method type. The second, however, allows the following:
dataset
methodtype
methodArgs
trainingType
trainingArgs
I am unable to get this working; the following error appears: "Layer can't have zero neurons, Unknown architecture element:". Any help is appreciated. Thank you.
Also, could I get some insight on how to dump the weights in this approach? When the model is built by constructing the network directly (BasicNetwork), it is possible to dump the weights via network.flat. In this EncogModel-driven approach, how do we dump the weights, gradients, etc.? Thank you.
There are three examples for EncogModel; you can find them here:
If that does not help, let me know more specifically what you are trying to do, or provide some code that is not working, and I will update this with a more specific answer.
The weights can be accessed directly via BasicNetwork.dumpWeights(), BasicNetwork.dumpWeightsVerbose(), or, at the individual-weight level, BasicNetwork.getWeight().

Silhouette coefficient - Information retrieval

I have been trying to get my hands dirty with information retrieval. My professor gave us this problem to solve, but I can't get my head around it. If the given matrix is a distance matrix, the diagonal elements should all be 0, but in the table they are given as 1. The other entries are also less than 1. How is this possible? Can someone please explain?
Please see question 5.c. I could not enter the table manually, and I apologize for that.
In every similarity measure, 1 means totally similar and 0 means no similarity between documents. So your table is a similarity matrix rather than a distance matrix: the diagonal is 1 because each document is perfectly similar to itself.
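Concretely, you can convert such a similarity matrix into the distance matrix the silhouette coefficient expects via d = 1 - s, after which the diagonal becomes 0 as required. A small sketch (the values below are made up for illustration):

```python
# Convert a similarity matrix (diagonal = 1) into a distance matrix
# (diagonal = 0) via d = 1 - s. Values are illustrative, not from the exam.
sim = [
    [1.0, 0.8, 0.3],
    [0.8, 1.0, 0.5],
    [0.3, 0.5, 1.0],
]
dist = [[1.0 - s for s in row] for row in sim]
print(dist)  # diagonal entries are now 0.0, as a distance matrix requires
```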

What should I do for multiple histograms?

I'm working with OpenCV and I'm a newbie in this field. I'm researching CamShift, and I want to extend the method by using multiple histograms. When the tracked object has more than one appearance (e.g., a Rubik's cube with six faces), CamShift with only one histogram will most likely fail.
I know the calcHist function in OpenCV (http://docs.opencv.org/modules/imgproc/doc/histograms.html#calchist) has an "accumulate" parameter, but I don't know how or when to use it (applied to camshiftdemo.cpp in the OpenCV samples folder). Can this function help me solve the problem, or do I need a different solution?
My idea is to create an array of histograms for the object: for every appearance that varies strongly in color, we pre-compute a histogram and store it in the array. But when should we compute a new histogram? That is, what is the precondition for starting to compute a new one?
And what happens if I have to track multiple objects with the same color?
Any help is much appreciated. Thank you so much!
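One way to sketch the "array of histograms" idea: keep one normalized histogram per stored appearance and, for each candidate region, pick the stored histogram with the best match score. Histogram intersection is used as the score below (OpenCV's compareHist offers comparable metrics); all functions and bin values here are illustrative, not OpenCV API:

```python
# Sketch: store one normalized histogram per appearance, then match the
# current candidate region against all of them and use the best scorer.

def normalize(hist):
    """Scale histogram bins so they sum to 1."""
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Pre-computed appearance histograms (e.g., two faces of a cube); toy 4-bin hue data.
appearances = [normalize([9, 1, 0, 0]), normalize([0, 0, 2, 8])]

current = normalize([1, 0, 1, 8])  # histogram of the current candidate region
scores = [intersection(current, h) for h in appearances]
best = scores.index(max(scores))
print(best)  # 1: the second stored appearance matches best
```

A natural precondition for computing a new histogram falls out of this: if the best match score drops below a threshold, no stored appearance explains the current region well, so record it as a new one.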
