In SPSS, when defining the measure of a variable, the usual options are "Scale", "Ordinal", and "Nominal" (see image).
However, when using actual dialog boxes to do analyses, SPSS will often ask us to describe whether the data are "Continuous" or "Categorical". E.g., I was watching this video by James Gaskin (a great YouTube teacher by the way), and saw this dialog box (image below).
My Question: In the second image, you can see that the narrator put some "Ordinal" variables in the "Continuous" box. Is it okay to do that? How come?
For most procedures, the treatment of a variable is determined by how you use it. The measurement level is just a reminder, so you can treat a variable however it makes sense.
Some procedures do determine how to treat a variable from its declared measurement level, including CTABLES, the Chart Builder, and TREE, but even there you can change the level temporarily in the dialog box or in syntax, or change it persistently via the VARIABLE LEVEL command or in the Data Editor. Most of the statistical extension commands also use the declared measurement level to decide whether a variable is continuous or a factor.
I am using the ELKI MiniGUI to run LOF. I have found out how to normalize the data before the run using -dbc.filter, but I would like the output to show the original data records rather than the normalized ones.
It seems that there is a flag called -normUndo, which can be set when using the command line, but I cannot figure out how to use it in the MiniGUI.
This functionality used to exist in ELKI, but it has effectively been removed (for now), for several reasons:
- Only a few normalizations ever supported this; most would fail.
- There is no longer a well-defined "end" at the visualization stage. Some users will want to visualize the normalized data, others will not.
- It requires carrying the normalization information along, which makes the data structures more complex (although the hierarchical approach we have now would allow this again).
- Due to the numerical imprecision of floating-point math, you would frequently not get back exactly the same values you put in.
- Keeping the original data in memory may be too expensive for some use cases, so we would need yet another parameter, "keep non-normalized data"; furthermore, you would need to choose which version (normalized or non-normalized) to use for analysis, and which for visualization. This would not be hard with a full-blown GUI, but you are looking at a command-line interface. (This is easy to do with Java, too...)
We would of course appreciate patches that contribute such functionality to ELKI.
The easiest workaround is this: add a (non-numerical) label column, and you can then identify the original objects in your original data by this label.
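For example, a made-up input file might look like this (assuming the default whitespace-separated parser, which treats trailing non-numeric tokens as labels; the object names are invented for illustration):

    2.3 4.1 0.7 object_001
    1.9 3.8 0.9 object_002
    2.7 4.4 0.6 object_003

The label column is carried through unchanged, so each result row can be matched back to the corresponding non-normalized record in your original data.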
We have a simple DTM global variable set to D=g; in our case it is eVar4. This value is not getting set ~30% of the time.
We have another eVar, eVar3, that is set with a direct call. This direct call simply sets s.campaign from a data element. The data element returns a value no matter what (either an actual cookie value or a default). Again, ~30% comes through as unspecified.
I can see referring-domain information for these unspecified hits.
So my question is: if we can collect referring-domain information, why can't we collect the value for these eVars? Is this cache related, or prefetch/prerender related?
BTW, webkitVisibilityState is used in the 1.5.1 file, so Adobe knows about prerender. We are using AppMeasurement 1.5.1.
There are 2 issues to consider here:
1. Using dynamic variables is a smart idea, but DTM doesn't always treat them the way you'd expect. Given that they are sometimes flaky, I recommend setting your eVars with a data element or JS instead, to see if that improves your data set.
2. If you're using the global variables section of the Adobe Analytics tool, remember that those values execute on an s.t() call. And if you're using a direct call rule after the original page view, the original values may not cascade or be available the way you'd expect. One strategy to consider is to use a "global" page load rule instead of the global variables section in the AA tool; if you set your campaign vars there, a value should be set on every page load.
In summary:
- Populate your vars with data elements or JS vars directly.
- Consider moving your AA "Global Variables" to a global page load rule for better flexibility and control over timing.
Hope this helps.
You might not have classified your data. To solve this, create a classification export file and classify the appropriate columns.
Here are a couple more suggestions:
https://helpx.adobe.com/analytics/kb/none-unspecified-and-unknown.html
I'm trying to use a Butterworth filter. The input data come from an "Index Array" node (the data are acquired through DAQ, and I want to process the voltage signal, which is in an array of waveforms). When I use this filter in a case structure, it doesn't work; yet when I use the filters from the "waveform conditioning" section, there is no problem. What exactly is the difference between these two types of filters?
A little add-on to my problem: the second picture is from when I tried to reassemble the initial combination and the error occurred.
You are comparing offline filtering to online filtering.
In LabVIEW, the PtByPt VIs are intended to be used in an online setting, that is, iteratively.
Each new sample that is obtained is fed directly into the VI. The VI stores the state from previous iterations to perform the filtering.
The "normal" filter VIs are intended for offline analysis and expect an array containing the full data of the signal.
The following whitepaper explains the Point-by-Point VIs. Note that this paper is quite old, so the concepts should still apply, but other details might be outdated.
http://www.ni.com/pdf/manuals/370152b.pdf
If VoltageBuf is an array of consecutive values of the same signal (the one that you want to filter), you only need to wire VoltageBuf directly to the filter.
Is there a way to filter based on historical data?
For example: "Show me all objects who had "Attribute_X" == True on 01/01/2013"
As Steve stated, this would require an advanced DXL script.
I'm not sure about creating a filter for this, but I might be able to help with identifying the objects you are looking for. Having recently solved a similar task, I recommend starting with Tony Goodman's really excellent Smart History Viewer (this code could be used as a DXL tutorial!), which has almost all the code you need. You just need to find and understand the relevant parts.
Let me elaborate. Besides other nifty stuff, the history viewer basically does the following: for all (selected) baselines, explicitly including the un-baselined current version, it gathers all module changes and puts them into two-dimensional Skip lists, one each for module, object, and session changes. Focus on the object changes.
There is an unused function printObjectHistory in the code which helps in understanding the data structures. Have a look at the inner loop:
for hist in skipHistory do
Inside this loop, consider only changes which happened before "01/01/2013" (check hist->HIST_DATE to obtain this information). The history viewer code already classifies the detected changes, so you want to watch out for changes which contain the string "Modify Attribute: Attribute_X". Assign the new value to a buffer. Outside the loop, check whether the buffer contains "True". If so, this is one of the objects you wanted to find.
I would like to perform a binary classification of documents (.txt, .pdf, .jpeg, .img, etc.) into two categories: printable and non-printable. Essentially, our school runs a free printing service for clubs, but the reality is that many clubs abuse the free printing and end up printing their homework, papers, etc., which amounts to thousands of dollars in ink and paper. Thus we would like to apply some unsupervised methods to help limit this by determining whether a document is, with high probability, not club related (e.g., a biophysics paper; there is no biophysics club!).
So this is a very simple binary classification problem. I am not looking for low-level implementation details or which ML algorithms I should use, but rather how I should discover the relevant features that will then be fed to the training, etc.
My first idea was to gather all the documents that students print in the library. The idea is that if you have actual club printing, you'll do it for free at the club printing center rather than pay for it at the library. That would be a massive dataset, assuming every document printed at the library is assigned the non-printable (non-club) category. Unfortunately, the school is very liberal and opposed to allowing this due to privacy concerns, so it is not really an option without legal risks.
A similar-minded option would be to collect documents that are tied to courses / school work, e.g. course syllabi, available course documents online (homeworks, papers, etc.) and do feature extraction / selection on these. The assumption is that students would be abusing the printing to generally print material relevant to their studies.
While this approach should have reasonable performance for .pdf- and .txt-based documents, I am at a loss as to how to classify image-based documents, besides perhaps using the title of the document and other metadata. A clever violator could simply convert all their text documents to image format to circumvent this system. However, that is outside the scope of this question and should be saved for a future question / research. For now the scope is just text-based documents.
Note that there are previous questions on topics similar to this, but mine is very specific and I believe it may pose challenges that something like movie review classification might not have to face.
I just wanted to leave a comment, but it ended up way longer than I imagined.
While this is an interesting problem, I'm not sure ML will get you what you need easily.
Firstly, your classification problem is of the type "A vs. the world", and A isn't strictly defined. Unless you know exactly what kind of material the clubs print, you can't really say whether new material belongs to that class or not.
This will prove particularly difficult when you need to assemble a training set large enough to cover whatever can or cannot be printed. Such a task will be extremely tedious, and as you said, you won't have access to what the clubs usually print, so at best you will have a large class imbalance in your training set.
Since the goal is to make the system automated (if there is human interaction anyway, it's faster to check what will be printed than to build an ML algorithm whose score a human will have to investigate anyway), the number of false positives and false negatives will also be problematic. There will be cases where the clubs won't be able to print things they have the right to print.
As you said, you could simplify the problem greatly by classifying Course Material vs. Not Course Material. For that I would look toward bag-of-words (BoW) features, because some words are more frequent in papers and course material (anything remotely technical) than elsewhere. The number of words as well as the overall size of the file also seem like sensible things to extract. The structure is often distinctive too, so it might be a good idea to extract things such as "number of lines with fewer than x words", "number of lines per page", or "number of pictures" (if that's something you can extract from the file); a rough sketch of such a feature pipeline is shown below.
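For concreteness, here is a minimal Python sketch combining scikit-learn's CountVectorizer for the BoW part with a couple of the hand-crafted structural features mentioned above. The example documents, the 1000-feature cap, and the short-line threshold are illustrative assumptions of mine, not something prescribed in the question:

    from sklearn.feature_extraction.text import CountVectorizer
    import numpy as np

    def structural_features(text, short_line_words=5):
        # Hand-crafted features: total characters, total words, and the share
        # of "short" lines (fewer than short_line_words words).
        lines = text.splitlines()
        n_lines = max(len(lines), 1)
        short_lines = sum(1 for l in lines if len(l.split()) < short_line_words)
        return [len(text), len(text.split()), short_lines / n_lines]

    docs = [
        "Problem set 3: prove the following lemma about compact operators ...",
        "Club bake sale this Friday! Free cookies for everyone.",
    ]

    bow = CountVectorizer(max_features=1000).fit_transform(docs).toarray()
    extra = np.array([structural_features(d) for d in docs])
    X = np.hstack([bow, extra])  # combined feature matrix for a downstream classifier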
For pictures, the main thing to check would be whether the file is a scan of something (I guess people will often scan and print course-related things); the image format is already a good indication, but I don't see other features that would be particularly "course related".
So, for me: if you can't really define one of your two classes precisely, don't go with classification, or reduce the problem to something you can really define (course-related material).
If you are able to compile a "blacklist" of documents students are not allowed to print, you can then implement a multi-layer rejection mechanism.
I would suggest these 3 levels:
1) Compare the MD5 of the file they want to print against a database of the MD5s of all blacklisted documents.
2) If 1) is passed, repeat 1) at the page level rather than at the document level (perhaps they want to print just a few pages rather than the entire document).
3) If 2) is passed, compare each page they want to print with the pages of the blacklisted documents using an image-similarity method, like SSIM. If you get a high score between a page they want to print and one of the blacklisted items, do not print, and update your MD5 database accordingly.
If 3) is passed: print! (A minimal sketch of the MD5 layers 1) and 2) follows right after this list.)
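Here is a minimal Python sketch of layers 1) and 2), assuming the caller has already split the document into per-page byte blobs (e.g. with a PDF library); the function and variable names are my own and only illustrate the idea:

    import hashlib

    def md5_hex(data: bytes) -> str:
        return hashlib.md5(data).hexdigest()

    def document_blacklisted(doc_bytes: bytes, blacklisted_doc_md5s: set) -> bool:
        # Layer 1: whole-document MD5 lookup.
        return md5_hex(doc_bytes) in blacklisted_doc_md5s

    def any_page_blacklisted(page_blobs, blacklisted_page_md5s: set) -> bool:
        # Layer 2: per-page MD5 lookup; page extraction itself is omitted here.
        return any(md5_hex(page) in blacklisted_page_md5s for page in page_blobs)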
A few words about SSIM: this method is quite robust to noise, so even a smart student who added some sort of noise to the image would be caught (a minimal comparison sketch appears after the caveats below).
However:
- You have to find a proper way to extract a region of interest (ROI) from the page and from the documents in the database (if the two ROIs are in two different areas of the page, SSIM will be negative).
- SSIM might be slow! A C implementation is definitely needed here.
- I think SSIM is not rotation invariant, so the check will fail if they print the page upside down (unless you have a smart way to rotate the page).
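For completeness, here is a minimal sketch of step 3) using scikit-image's structural_similarity. The 0.9 threshold and the assumption that pages have already been rasterized to same-sized 8-bit grayscale arrays are mine, not part of the answer (and ROI extraction, speed, and rotation are exactly the caveats listed above):

    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def page_matches_blacklist(page_img: np.ndarray, blacklisted_imgs, threshold: float = 0.9) -> bool:
        # page_img and each blacklisted image are assumed to be uint8 grayscale
        # arrays of the same shape (rasterized pages, already cropped to the ROI).
        for ref in blacklisted_imgs:
            if ssim(page_img, ref) >= threshold:  # high similarity -> reject the print job
                return True
        return False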