I want to use foreach in Stata to cut the data into equally sized samples.
I have written the following code snippet:
foreach i of numlist 0/11 {
preserve
keep id projectno nickname
gen start=`i'*30000+1
gen end=(`i'+1)*30000
outsheet using d:\profile\nickname_`i'.xls in `start'/`end'
restore
}
However, I receive the error below despite having revised it many times:
'/' invalid observation number
How can I correct my code?
This isn't a complete answer -- and focuses on a side-issue to your question -- but it won't fit easily into a comment.
Together with changes explained elsewhere, I would change the order of your commands to
preserve
keep id projectno nickname
forval i = 0/11 {
    local start = `i' * 30000 + 1
    local end = (`i' + 1) * 30000
    outsheet using d:\profile\nickname_`i'.xls in `start'/`end'
}
restore
The in qualifier in the outsheet command is wrong because start and end are generated as variables, not as local macros. You need to initialize both start and end as follows:
local start = `i' * 30000 + 1
local end = (`i' + 1) * 30000
Consider the following toy example using Stata's auto dataset:
sysuse auto, clear
foreach i of numlist 0/11 {
    preserve
    keep price mpg make
    local start = (`i' * 3) + 1
    local end = (`i' + 1) * 3
    list in `start'/`end'
    restore
}
Results:
+---------------------------+
| make price mpg |
|---------------------------|
1. | AMC Concord 4,099 22 |
2. | AMC Pacer 4,749 17 |
3. | AMC Spirit 3,799 22 |
+---------------------------+
+-----------------------------+
| make price mpg |
|-----------------------------|
4. | Buick Century 4,816 20 |
5. | Buick Electra 7,827 15 |
6. | Buick LeSabre 5,788 18 |
+-----------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
7. | Buick Opel 4,453 26 |
8. | Buick Regal 5,189 20 |
9. | Buick Riviera 10,372 16 |
+------------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
10. | Buick Skylark 4,082 19 |
11. | Cad. Deville 11,385 14 |
12. | Cad. Eldorado 14,500 14 |
+------------------------------+
+-------------------------------+
| make price mpg |
|-------------------------------|
13. | Cad. Seville 15,906 21 |
14. | Chev. Chevette 3,299 29 |
15. | Chev. Impala 5,705 16 |
+-------------------------------+
+---------------------------------+
| make price mpg |
|---------------------------------|
16. | Chev. Malibu 4,504 22 |
17. | Chev. Monte Carlo 5,104 22 |
18. | Chev. Monza 3,667 24 |
+---------------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
19. | Chev. Nova 3,955 19 |
20. | Dodge Colt 3,984 30 |
21. | Dodge Diplomat 4,010 18 |
+------------------------------+
+-------------------------------+
| make price mpg |
|-------------------------------|
22. | Dodge Magnum 5,886 16 |
23. | Dodge St. Regis 6,342 17 |
24. | Ford Fiesta 4,389 28 |
+-------------------------------+
+----------------------------------+
| make price mpg |
|----------------------------------|
25. | Ford Mustang 4,187 21 |
26. | Linc. Continental 11,497 12 |
27. | Linc. Mark V 13,594 12 |
+----------------------------------+
+---------------------------------+
| make price mpg |
|---------------------------------|
28. | Linc. Versailles 13,466 14 |
29. | Merc. Bobcat 3,829 22 |
30. | Merc. Cougar 5,379 14 |
+---------------------------------+
+-----------------------------+
| make price mpg |
|-----------------------------|
31. | Merc. Marquis 6,165 15 |
32. | Merc. Monarch 4,516 18 |
33. | Merc. XR-7 6,303 14 |
+-----------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
34. | Merc. Zephyr 3,291 20 |
35. | Olds 98 8,814 21 |
36. | Olds Cutl Supr 5,172 19 |
+------------------------------+
Note that it is not necessary for the commands preserve, keep, and restore to be inside your loop: they are one-time operations, and repeating them is simply inefficient.
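One more caveat, which is my addition rather than something from your post: the in qualifier errors out if the upper bound runs past the number of observations, so if your dataset is not an exact multiple of 30,000 rows you may want to cap end at _N and skip empty blocks. A sketch along those lines (the quoted file path and the replace option are also my additions):
preserve
keep id projectno nickname
forvalues i = 0/11 {
    local start = `i' * 30000 + 1
    local end = min((`i' + 1) * 30000, _N)    // don't run past the last observation
    if `start' <= _N {
        outsheet using "d:\profile\nickname_`i'.xls" in `start'/`end', replace
    }
}
restore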
I am using EmoKit (https://github.com/openyou/emokit) to retrieve data. The sample data looks as follows:
+========================================================+
| Sensor | Value | Quality | Quality L1 | Quality L2 |
+--------+----------+----------+------------+------------+
| F3 | -768 | 5672 | None | Excellent |
| FC5 | 603 | 7296 | None | Excellent |
| AF3 | 311 | 7696 | None | Excellent |
| F7 | -21 | 296 | Nothing | Nothing |
| T7 | 433 | 104 | Nothing | Nothing |
| P7 | 581 | 7592 | None | Excellent |
| O1 | 812 | 7760 | None | Excellent |
| O2 | 137 | 6032 | None | Excellent |
| P8 | 211 | 5912 | None | Excellent |
| T8 | -51 | 6624 | None | Excellent |
| F8 | 402 | 7768 | None | Excellent |
| AF4 | -52 | 7024 | None | Excellent |
| FC6 | 249 | 6064 | None | Excellent |
| F4 | 509 | 5352 | None | Excellent |
| X | -2 | N/A | N/A | N/A |
| Y | 0 | N/A | N/A | N/A |
| Z | ? | N/A | N/A | N/A |
| Batt | 82 | N/A | N/A | N/A |
+--------+----------+----------+------------+------------+
|Packets Received: 3101 | Packets Processed: 3100 |
| Sampling Rate: 129 | Crypto Rate: 129 |
+========================================================+
Are these values in microvolts? If so, how can they be more than 200 microvolts? EEG data is in the range of 0-200 microvolts. Or does this require some kind of processing? If so, what?
As described in the emokit frequently asked questions:
What unit is the data I'm getting back in? How do I get volts out of it?
One least-significant-bit of the fourteen-bit value you get back is 0.51 microvolts. See the specification for more details.
Looking for the details in the specification (via archive.org), we find the following for the "Emotiv EPOC Neuroheadset":
Resolution | 14 bits 1 LSB = 0.51μV (16 bit ADC,
| 2 bits instrumental noise floor discarded)
Dynamic range (input referred) | 8400μV (pp)
As a validation, we can check that for a 14-bit linear ADC, the 8400 microvolts (peak-to-peak) would be divided into steps of 8400 / 16384, or approximately 0.5127 microvolts.
For the EPOC+, the comparison chart indicates a 14-bit and a 16-bit version (with a +/- 4.17 mV dynamic range, or 8340 microvolts peak-to-peak). The 16-bit version would then have raw data steps of 8340 / 65536, or approximately 0.127 microvolts. If that is what you are using, then the largest value of 812 you listed would correspond to 812 * 0.127 = 103 microvolts.
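As a quick sanity check of that arithmetic, here is a minimal Python sketch (EmoKit itself is Python); the function is mine, and the raw reading of 812 is just the largest value from the sample output above:
# Scale a raw EmoKit reading by the ADC step size to get microvolts.
STEP_14_BIT = 8400 / 2 ** 14   # ~0.5127 microvolts per LSB (14-bit EPOC)
STEP_16_BIT = 8340 / 2 ** 16   # ~0.1273 microvolts per LSB (16-bit EPOC+)

def raw_to_microvolts(raw_value, step=STEP_14_BIT):
    """Convert a raw sensor reading to microvolts."""
    return raw_value * step

print(raw_to_microvolts(812, STEP_14_BIT))   # ~416 microvolts
print(raw_to_microvolts(812, STEP_16_BIT))   # ~103 microvolts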
I currently have two datasets, RTWANEW2016.sav and MERGED.sav.
RTWANEW2016:
+----+------------+--------+--------+--------+--------+--------+--------+
| id | date | value1 | value2 | value3 | value4 | value5 | value6 |
+----+------------+--------+--------+--------+--------+--------+--------+
| 1 | 01-03-2006 | 3 | 9 | 85 | 766 | 3 | 45 |
| 1 | 03-23-2010 | 56 | 34 | 23 | 33 | 556 | 43 |
| 2 | 12-04-2014 | 56 | 655 | 523 | 566 | 9 | 9 |
| 3 | 07-23-2011 | 34 | 56 | 661 | 23 | 22 | 11 |
| 4 | 03-05-2007 | 45 | 345 | 222 | 556 | 4566 | 4 |
+----+------------+--------+--------+--------+--------+--------+--------+
MERGED:
+----+------------+--------+--------+--------+
| id | date | value4 | value5 | value6 |
+----+------------+--------+--------+--------+
| 1 | 01-03-2006 | 345 | 44 | 5345 |
| 2 | 12-04-2014 | 522 | 55 | 5444 |
| 4 | 03-05-2007 | 234 | 88 | 9001 |
+----+------------+--------+--------+--------+
I want to update RTWANEW2016 with the values from variables "value4", "value5" and "value6" from MERGED.
Notice that RTWANEW2016 has some duplicate IDs with different dates, so I would need to sort by both id and date.
See the UPDATE command, which is designed to achieve this.
Overview (UPDATE command)
UPDATE replaces values in a master file with updated values recorded
in one or more files called transaction files. Cases in the master
file and transaction file are matched according to a key variable.
The master file and the transaction files must be IBM® SPSS®
Statistics data files or datasets available in the current session,
including the active dataset. UPDATE replaces values and creates a new
active dataset, which replaces the original active dataset.
UPDATE is designed to update values of existing variables for existing
cases. Use MATCH FILES to add new variables to a data file and ADD
FILES to add new cases.
UPDATE FILE='/RTWANEW2016.sav'
/FILE='/MERGED.sav'
/BY=ID Date.
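Note that UPDATE expects both files to be sorted in ascending order of the key variables, so, as you suspected, sort by both id and date first. A minimal sketch, assuming the variables are named ID and Date and keeping the paths from above (the final output file name is just a placeholder):
GET FILE='/MERGED.sav'.
SORT CASES BY ID Date.
SAVE OUTFILE='/MERGED.sav'.

GET FILE='/RTWANEW2016.sav'.
SORT CASES BY ID Date.
SAVE OUTFILE='/RTWANEW2016.sav'.

UPDATE FILE='/RTWANEW2016.sav'
 /FILE='/MERGED.sav'
 /BY=ID Date.
SAVE OUTFILE='/RTWANEW2016_updated.sav'.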
I have an SPSS dataset with information on different household members, and I need to generate a new variable that counts the number of people in each of these households. The original dataset is something like:
ID | age | height
332 | 23 | 1.78
332 | 27 | 1.65
344 | 56 | 1.79
344 | 34 | 1.98
344 | 15 | 1.58
etc., and I need to generate a new variable that counts the ID repetitions, such as 'n' in:
ID | age | height | n
332 | 23 | 1.78 | 2
332 | 27 | 1.65 | 2
344 | 56 | 1.79 | 3
344 | 34 | 1.98 | 3
344 | 15 | 1.58 | 3
Is there any straightforward way to do it through the menus, or do I need to use command syntax?
Look up the AGGREGATE command.
AGGREGATE OUTFILE=* MODE=ADDVARIABLES /BREAK=ID /Count=N.
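MODE=ADDVARIABLES writes the aggregated result back onto every case instead of collapsing the file, so each household member gets the household size. If you want the new variable to be called n, as in your example, just change the target name:
AGGREGATE OUTFILE=* MODE=ADDVARIABLES
 /BREAK=ID
 /n=N.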
Using the Searchable plugin in Grails (which uses Compass/Lucene under the hood), we're trying to share a search index between two different web applications. One application accesses the data in a read-only fashion. The other application allows the data to be modified and is in charge of updating the index on any change or doing a full re-index on demand.
To store the index, we're using the JDBC store, with both applications pointing to the same DB: http://www.compass-project.org/docs/latest/reference/html/core-connection.html.
Unfortunately, as soon as we rebuild the whole index in one application, the other application seems to have invalid data cached and an exception is thrown if a search is performed:
| Error 2012-05-30 09:22:07,560 [http-bio-8080-exec-8] ERROR errors.GrailsExceptionResolver - IndexOutOfBoundsException occurred when processing request: [POST] /search
Index: 45, Size: 13. Stacktrace follows:
Message: Index: 45, Size: 13
Line | Method
->> 547 | RangeCheck in java.util.ArrayList
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 322 | get in ''
| 265 | fieldInfo . in org.apache.lucene.index.FieldInfos
| 254 | fieldName in ''
| 86 | read . . . in org.apache.lucene.index.TermBuffer
| 127 | next in org.apache.lucene.index.SegmentTermEnum
| 158 | scanTo . . in ''
| 271 | get in org.apache.lucene.index.TermInfosReader
| 332 | terms . . . in ''
| 717 | terms in org.apache.lucene.index.SegmentReader
| 93 | generate . in org.apache.lucene.search.PrefixGenerator
| 58 | getDocIdSet in org.apache.lucene.search.PrefixFilter
| 116 | <init> . . in org.apache.lucene.search.ConstantScoreQuery$ConstantScorer
| 81 | scorer in org.apache.lucene.search.ConstantScoreQuery$ConstantWeight
| 230 | scorer . . in org.apache.lucene.search.BooleanQuery$BooleanWeight
| 131 | search in org.apache.lucene.search.IndexSearcher
| 112 | search . . in ''
| 204 | search in org.apache.lucene.search.MultiSearcher
| 113 | getMoreDocs in org.apache.lucene.search.Hits
| 90 | <init> in ''
| 61 | search . . in org.apache.lucene.search.Searcher
| 146 | findByQuery in org.compass.core.lucene.engine.transaction.support.AbstractTransactionProcessor
| 259 | doFind . . in org.compass.core.lucene.engine.transaction.readcommitted.ReadCommittedTransactionProcessor
| 246 | find in org.compass.core.lucene.engine.transaction.support.AbstractConcurrentTransactionProcessor
| 352 | find . . . in org.compass.core.lucene.engine.LuceneSearchEngine
| 188 | hits in org.compass.core.lucene.engine.LuceneSearchEngineQuery
| 199 | hits . . . in org.compass.core.impl.DefaultCompassQuery
| 104 | doInCompass in grails.plugin.searchable.internal.compass.search.DefaultSearchMethod$SearchCompassCallback
| 133 | execute . . in org.compass.core.CompassTemplate
| 57 | doInCompass in grails.plugin.searchable.internal.compass.support.AbstractSearchableMethod
| 66 | invoke . . in grails.plugin.searchable.internal.compass.search.DefaultSearchMethod
| 37 | search in grails.plugin.searchable.SearchableService
We could signal from one application to the other that the index has been rebuilt, so that some clean-up could be performed.
Did anybody have a similar problem with Grails and the Searchable plugin?
Is it possible to discard data cached by Compass/Lucene?
Is it possible to disable caching generally?
Clearing all caches before searching seems to solve the issue...
searchableService.compass.compass.searchEngineFactory.indexManager.clearCache()
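For example, the read-only application could clear the cache right before running a query. A rough Groovy sketch; the service class and wrapper method are placeholders, and only search() and the clearCache() chain above come from the plugin:
class StaleSafeSearchService {

    def searchableService

    def search(String query) {
        // Drop Compass's cached Lucene index readers so an index rebuilt by the
        // other application is re-opened instead of read with stale segment metadata.
        searchableService.compass.compass.searchEngineFactory.indexManager.clearCache()
        searchableService.search(query)
    }
}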