Dynamic video streaming using VLC

I wish to perform adaptive bitrate streaming using VLC. My setup captures video from a USB camera, encodes it, and creates an MPD file with different bitrates so that VLC on the client machine can switch quality based on the available bandwidth. However, VLC does not switch between bitrates when I play the MPD file via "http://IPaddress:80/mpd_file.mpd".
Here is the relevant part of the MPD file:
<Representation id="v1" mimeType="video/mp4" codecs="avc1.42C01E" width="3840" height="2160" frameRate="30" sar="1:1" startWithSAP="1" bandwidth="6000000.000000">
</Representation>
<Representation id="v2" mimeType="video/mp4" codecs="avc1.42C01E" width="1920" height="1080" frameRate="30" sar="1:1" startWithSAP="1" bandwidth="8000000.000000">
</Representation>
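Two things worth double-checking in that excerpt, independent of the switching problem: the DASH MPD schema defines @bandwidth as an unsigned integer (bits per second), so fractional values such as 6000000.000000 may not be accepted by every parser; and the 3840x2160 representation advertises a lower bandwidth than the 1920x1080 one, which inverts the quality ordering an adaptive client relies on. A cleaned-up sketch of the two Representation elements (all other attributes kept as in the excerpt, and assuming the 2160p rendition really is the higher-bitrate one) might look like:

```xml
<Representation id="v1" mimeType="video/mp4" codecs="avc1.42C01E" width="3840" height="2160" frameRate="30" sar="1:1" startWithSAP="1" bandwidth="8000000">
</Representation>
<Representation id="v2" mimeType="video/mp4" codecs="avc1.42C01E" width="1920" height="1080" frameRate="30" sar="1:1" startWithSAP="1" bandwidth="6000000">
</Representation>
```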
The following are the log messages I get in VLC when trying to switch from the 6 Mbps to the 8 Mbps representation:
main debug: using opengl module "egl_x11"
main debug: looking for glconv module matching "any": 3 candidates
glconv_vaapi_x11 error: vaInitialize: unknown libva error
main debug: using glconv module "glconv_vaapi_drm"
main debug: using vout display module "gl"
main debug: original format sz 3840x2160, of (0,0), vsz 3840x2160, 4cc VAOP, sar 1:1, msk r0x0 g0x0 b0x0
main debug: removing module "freetype"
main debug: looking for text renderer module matching "any": 3 candidates
freetype debug: Building font databases.
freetype debug: Took -11968 microseconds
main debug: using text renderer module "freetype"
main debug: looking for hw decoder module matching "any": 3 candidates
main debug: using hw decoder module "vaapi"
main debug: Received first picture
main debug: Decoder wait done in 296 ms
main debug: creating demux: access='' demux='mp4' location='' file='(null)'
main debug: looking for demux module matching "mp4": 56 candidates
adaptive debug: Retrieving http://10.75.28.48:80/Dash_Disp1-Zone1/v2_2_gpac.m4s #0
mp4 debug: dumping root Box "root"
mp4 debug: | + ftyp size 24 offset 0
mp4 debug: | + free size 66 offset 24
mp4 debug: | + moov size 677 offset 90
mp4 debug: | | + mvhd size 108 offset 98
mp4 debug: | | + mvex size 56 offset 206
mp4 debug: | | | + mehd size 16 offset 214
mp4 debug: | | | + trex size 32 offset 230
mp4 debug: | | + trak size 505 offset 262
mp4 debug: | | | + tkhd size 92 offset 270
mp4 debug: | | | + mdia size 405 offset 362
mp4 debug: | | | | + mdhd size 32 offset 370
mp4 debug: | | | | + hdlr size 55 offset 402
mp4 debug: | | | | + minf size 310 offset 457
mp4 debug: | | | | | + vmhd size 20 offset 465
mp4 debug: | | | | | + dinf size 36 offset 485
mp4 debug: | | | | | | + dref size 28 offset 493
mp4 debug: | | | | | | | + url size 12 offset 509
mp4 debug: | | | | | + stbl size 246 offset 521
mp4 debug: | | | | | | + stsd size 154 offset 529
mp4 debug: | | | | | | | + avc1 size 138 offset 545
mp4 debug: | | | | | | | | + avcC size 52 offset 631
mp4 debug: | | | | | | + stts size 16 offset 683
mp4 debug: | | | | | | + stss size 16 offset 699
mp4 debug: | | | | | | + stsc size 16 offset 715
mp4 debug: | | | | | | + stsz size 20 offset 731
mp4 debug: | | | | | | + stco size 16 offset 751
mp4 debug: | + styp size 28 offset 767
Any help is highly appreciated.

Related

Error when using foreach to cut out sample

I want to use foreach in Stata to cut the data into samples of the same size.
I have written the following code snippet:
foreach i of numlist 0/11 {
    preserve
    keep id projectno nickname
    gen start=`i'*30000+1
    gen end=(`i'+1)*30000
    outsheet using d:\profile\nickname_`i'.xls in `start'/`end'
    restore
}
However, I receive the error below despite having revised it many times:
'/' invalid observation number
How can I correct my code?
This isn't a complete answer -- and focuses on a side-issue to your question -- but it won't fit easily into a comment.
Together with changes explained elsewhere, I would change the order of your commands to
preserve
keep id projectno nickname
forval i = 0/11 {
    local start = `i' * 30000 + 1
    local end = (`i' + 1) * 30000
    outsheet using d:\profile\nickname_`i'.xls in `start'/`end'
}
restore
The in qualifier in the outsheet command is wrong because start and end are generated as variables, not as local macros. You need to initialize both start and end as follows:
local start = `i' * 30000 + 1
local end = (`i' + 1) * 30000
Consider the following toy example using Stata's auto dataset:
sysuse auto, clear
foreach i of numlist 0/11 {
    preserve
    keep price mpg make
    local start = (`i' * 3) + 1
    local end = (`i' + 1) * 3
    list in `start' / `end'
    restore
}
Results:
+---------------------------+
| make price mpg |
|---------------------------|
1. | AMC Concord 4,099 22 |
2. | AMC Pacer 4,749 17 |
3. | AMC Spirit 3,799 22 |
+---------------------------+
+-----------------------------+
| make price mpg |
|-----------------------------|
4. | Buick Century 4,816 20 |
5. | Buick Electra 7,827 15 |
6. | Buick LeSabre 5,788 18 |
+-----------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
7. | Buick Opel 4,453 26 |
8. | Buick Regal 5,189 20 |
9. | Buick Riviera 10,372 16 |
+------------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
10. | Buick Skylark 4,082 19 |
11. | Cad. Deville 11,385 14 |
12. | Cad. Eldorado 14,500 14 |
+------------------------------+
+-------------------------------+
| make price mpg |
|-------------------------------|
13. | Cad. Seville 15,906 21 |
14. | Chev. Chevette 3,299 29 |
15. | Chev. Impala 5,705 16 |
+-------------------------------+
+---------------------------------+
| make price mpg |
|---------------------------------|
16. | Chev. Malibu 4,504 22 |
17. | Chev. Monte Carlo 5,104 22 |
18. | Chev. Monza 3,667 24 |
+---------------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
19. | Chev. Nova 3,955 19 |
20. | Dodge Colt 3,984 30 |
21. | Dodge Diplomat 4,010 18 |
+------------------------------+
+-------------------------------+
| make price mpg |
|-------------------------------|
22. | Dodge Magnum 5,886 16 |
23. | Dodge St. Regis 6,342 17 |
24. | Ford Fiesta 4,389 28 |
+-------------------------------+
+----------------------------------+
| make price mpg |
|----------------------------------|
25. | Ford Mustang 4,187 21 |
26. | Linc. Continental 11,497 12 |
27. | Linc. Mark V 13,594 12 |
+----------------------------------+
+---------------------------------+
| make price mpg |
|---------------------------------|
28. | Linc. Versailles 13,466 14 |
29. | Merc. Bobcat 3,829 22 |
30. | Merc. Cougar 5,379 14 |
+---------------------------------+
+-----------------------------+
| make price mpg |
|-----------------------------|
31. | Merc. Marquis 6,165 15 |
32. | Merc. Monarch 4,516 18 |
33. | Merc. XR-7 6,303 14 |
+-----------------------------+
+------------------------------+
| make price mpg |
|------------------------------|
34. | Merc. Zephyr 3,291 20 |
35. | Olds 98 8,814 21 |
36. | Olds Cutl Supr 5,172 19 |
+------------------------------+
Note that the commands preserve, keep, and restore do not need to be inside your loop: they are one-time operations, and repeating them on every iteration is simply inefficient.

How to interpret and use Emokit data?

I am using EmoKit (https://github.com/openyou/emokit) to retrieve data. The sample data looks as follows:
+========================================================+
| Sensor | Value | Quality | Quality L1 | Quality L2 |
+--------+----------+----------+------------+------------+
| F3 | -768 | 5672 | None | Excellent |
| FC5 | 603 | 7296 | None | Excellent |
| AF3 | 311 | 7696 | None | Excellent |
| F7 | -21 | 296 | Nothing | Nothing |
| T7 | 433 | 104 | Nothing | Nothing |
| P7 | 581 | 7592 | None | Excellent |
| O1 | 812 | 7760 | None | Excellent |
| O2 | 137 | 6032 | None | Excellent |
| P8 | 211 | 5912 | None | Excellent |
| T8 | -51 | 6624 | None | Excellent |
| F8 | 402 | 7768 | None | Excellent |
| AF4 | -52 | 7024 | None | Excellent |
| FC6 | 249 | 6064 | None | Excellent |
| F4 | 509 | 5352 | None | Excellent |
| X | -2 | N/A | N/A | N/A |
| Y | 0 | N/A | N/A | N/A |
| Z | ? | N/A | N/A | N/A |
| Batt | 82 | N/A | N/A | N/A |
+--------+----------+----------+------------+------------+
|Packets Received: 3101 | Packets Processed: 3100 |
| Sampling Rate: 129 | Crypto Rate: 129 |
+========================================================+
Are these values in microvolts? If so, how can they be more than 200 microvolts? EEG data is typically in the range of 0-200 microvolts. Or does the data require some kind of processing? If so, what?
As described in the emokit frequently asked questions:
What unit is the data I'm getting back in? How do I get volts out of it?
One least-significant-bit of the fourteen-bit value you get back is 0.51 microvolts. See the specification for more details.
Looking for the details in the specification (via archive.org), we find the following for the "Emotiv EPOC Neuroheadset":
Resolution | 14 bits 1 LSB = 0.51μV (16 bit ADC,
| 2 bits instrumental noise floor discarded)
Dynamic range (input referred) | 8400μV (pp)
As a validation we can check that for a 14 bits linear ADC, the 8400 microvolts (peak-to-peak) would be divided in steps of 8400 / 16384 or approximately 0.5127 microvolts.
For the Epoc+, the comparison chart indicates a 14-bit and a 16-bit version (with a +/- 4.17mV dynamic range, or 8340 microvolts peak-to-peak). The 16-bit version would then have raw data steps of 8340 / 65536, or approximately 0.127 microvolts. If that is what you are using, then the largest value of 812 you listed would correspond to 812 × 0.127 ≈ 103 microvolts.
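To make the arithmetic concrete, here is a small sketch (the lsb_microvolts helper is hypothetical, not part of emokit) that reproduces both step sizes and the conversion of the raw value 812:

```python
def lsb_microvolts(dynamic_range_uv_pp, adc_bits):
    """Size of one least-significant bit in microvolts for a linear ADC."""
    return dynamic_range_uv_pp / (2 ** adc_bits)

# 14-bit EPOC: 8400 uV peak-to-peak split into 2^14 steps -> ~0.5127 uV/LSB,
# matching the 0.51 uV quoted in the FAQ.
print(lsb_microvolts(8400, 14))

# 16-bit EPOC+: 8340 uV peak-to-peak split into 2^16 steps -> ~0.1273 uV/LSB.
print(lsb_microvolts(8340, 16))

# Converting a raw sample to microvolts is then a single multiplication:
raw = 812
print(raw * lsb_microvolts(8340, 16))  # ~103 uV
```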

How to replace multiple values across different datasets in SPSS

I currently have two datasets, RTWANEW2016.sav and MERGED.sav.
RTWANEW2016:
+----+------------+--------+--------+--------+--------+--------+--------+
| id | date | value1 | value2 | value3 | value4 | value5 | value6 |
+----+------------+--------+--------+--------+--------+--------+--------+
| 1 | 01-03-2006 | 3 | 9 | 85 | 766 | 3 | 45 |
| 1 | 03-23-2010 | 56 | 34 | 23 | 33 | 556 | 43 |
| 2 | 12-04-2014 | 56 | 655 | 523 | 566 | 9 | 9 |
| 3 | 07-23-2011 | 34 | 56 | 661 | 23 | 22 | 11 |
| 4 | 03-05-2007 | 45 | 345 | 222 | 556 | 4566 | 4 |
+----+------------+--------+--------+--------+--------+--------+--------+
MERGED:
+----+------------+--------+--------+--------+
| id | date | value4 | value5 | value6 |
+----+------------+--------+--------+--------+
| 1 | 01-03-2006 | 345 | 44 | 5345 |
| 2 | 12-04-2014 | 522 | 55 | 5444 |
| 4 | 03-05-2007 | 234 | 88 | 9001 |
+----+------------+--------+--------+--------+
I want to update RTWANEW2016 with the values of "value4", "value5" and "value6" from MERGED.
Notice that RTWANEW2016 contains duplicate ids with different dates, so I would need to match by both id and date.
See the UPDATE command, which is designed to achieve exactly this.
Overview (UPDATE command)
UPDATE replaces values in a master file with updated values recorded
in one or more files called transaction files. Cases in the master
file and transaction file are matched according to a key variable.
The master file and the transaction files must be IBM® SPSS®
Statistics data files or datasets available in the current session,
including the active dataset. UPDATE replaces values and creates a new
active dataset, which replaces the original active dataset.
UPDATE is designed to update values of existing variables for existing
cases. Use MATCH FILES to add new variables to a data file and ADD
FILES to add new cases.
UPDATE FILE='/RTWANEW2016.sav'
  /FILE='/MERGED.sav'
  /BY=ID Date.
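Since UPDATE matches cases on the key variables, both files must be sorted by those keys first. A fuller sketch in SPSS syntax (file paths taken from the snippet above; the dataset name trans is arbitrary):

```
GET FILE='/MERGED.sav'.
SORT CASES BY ID Date.
DATASET NAME trans.

GET FILE='/RTWANEW2016.sav'.
SORT CASES BY ID Date.

UPDATE FILE=*
  /FILE=trans
  /BY=ID Date.
```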

Count repeated values in variable and add results to a new one in SPSS

I have an SPSS dataset with information on different household members, and I need to generate a new variable that counts the number of people in each household. The original dataset is something like:
ID | age | height
332 | 23 | 1.78
332 | 27 | 1.65
344 | 56 | 1.79
344 | 34 | 1.98
344 | 15 | 1.58
etc., and I need to generate a new variable 'n' that counts the id repetitions, as in:
ID | age | height | n
332 | 23 | 1.78 | 2
332 | 27 | 1.65 | 2
344 | 56 | 1.79 | 3
344 | 34 | 1.98 | 3
344 | 15 | 1.58 | 3
Is there any straightforward way to do it with window commands or do I need to use command language?
Look up the AGGREGATE command.
AGGREGATE OUTFILE=* MODE=ADDVARIABLES /BREAK=ID /Count=N.

Shared Compass/Lucene Index in JDBC Store

Using the searchable plugin in Grails (which uses Compass/Lucene under the hood), we're trying to share a search index between two web applications. One application accesses the data only in a read-only fashion; the other allows modifying the data and is in charge of updating the index on any change or doing a full re-index on demand.
To store the index we're using the JDBC store, with both applications pointing to the same database: http://www.compass-project.org/docs/latest/reference/html/core-connection.html.
Unfortunately, as soon as we rebuild the whole index in one application, the other application seems to have invalid data cached and an exception is thrown if a search is performed:
| Error 2012-05-30 09:22:07,560 [http-bio-8080-exec-8] ERROR errors.GrailsExceptionResolver - IndexOutOfBoundsException occurred when processing request: [POST] /search
Index: 45, Size: 13. Stacktrace follows:
Message: Index: 45, Size: 13
Line | Method
->> 547 | RangeCheck in java.util.ArrayList
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 322 | get in ''
| 265 | fieldInfo . in org.apache.lucene.index.FieldInfos
| 254 | fieldName in ''
| 86 | read . . . in org.apache.lucene.index.TermBuffer
| 127 | next in org.apache.lucene.index.SegmentTermEnum
| 158 | scanTo . . in ''
| 271 | get in org.apache.lucene.index.TermInfosReader
| 332 | terms . . . in ''
| 717 | terms in org.apache.lucene.index.SegmentReader
| 93 | generate . in org.apache.lucene.search.PrefixGenerator
| 58 | getDocIdSet in org.apache.lucene.search.PrefixFilter
| 116 | <init> . . in org.apache.lucene.search.ConstantScoreQuery$ConstantScorer
| 81 | scorer in org.apache.lucene.search.ConstantScoreQuery$ConstantWeight
| 230 | scorer . . in org.apache.lucene.search.BooleanQuery$BooleanWeight
| 131 | search in org.apache.lucene.search.IndexSearcher
| 112 | search . . in ''
| 204 | search in org.apache.lucene.search.MultiSearcher
| 113 | getMoreDocs in org.apache.lucene.search.Hits
| 90 | <init> in ''
| 61 | search . . in org.apache.lucene.search.Searcher
| 146 | findByQuery in org.compass.core.lucene.engine.transaction.support.AbstractTransactionProcessor
| 259 | doFind . . in org.compass.core.lucene.engine.transaction.readcommitted.ReadCommittedTransactionProcessor
| 246 | find in org.compass.core.lucene.engine.transaction.support.AbstractConcurrentTransactionProcessor
| 352 | find . . . in org.compass.core.lucene.engine.LuceneSearchEngine
| 188 | hits in org.compass.core.lucene.engine.LuceneSearchEngineQuery
| 199 | hits . . . in org.compass.core.impl.DefaultCompassQuery
| 104 | doInCompass in grails.plugin.searchable.internal.compass.search.DefaultSearchMethod$SearchCompassCallback
| 133 | execute . . in org.compass.core.CompassTemplate
| 57 | doInCompass in grails.plugin.searchable.internal.compass.support.AbstractSearchableMethod
| 66 | invoke . . in grails.plugin.searchable.internal.compass.search.DefaultSearchMethod
| 37 | search in grails.plugin.searchable.SearchableService
We could signal from one application to the other that the index has been rebuilt, so that some clean-up could be performed.
Did anybody have a similar problem with Grails and the Searchable plugin?
Is it possible to discard data cached by Compass/Lucene?
Is it possible to disable caching generally?
Clearing all caches before searching seems to solve the issue...
searchableService.compass.compass.searchEngineFactory.indexManager.clearCache()
