Capture video on iOS using PJSIP - iOS

I am not able to render or capture video on iOS.
After successful SDP negotiation I tried to add a video stream to an existing audio call, but in the on_call_media_state callback I observed that the media is not active for video.
When I hang up the call I get the following log, which shows #1 video H263-1998, inactive, peer=10.11.201.147:50858.
According to Siphon, some people are able to get the video stream on iOS devices.
Any help would be appreciated.
2013-11-15 14:59:39.075 ipjsua[220:6007] 14:59:39.075 pjsua_app.c .....Call 1 is DISCONNECTED [reason=200 (Normal call clearing)]
2013-11-15 14:59:39.093 ipjsua[220:6007] 14:59:39.093 pjsua_app.c .....
2013-11-15 14:59:39.097 ipjsua[220:6007] [DISCONNCTD] To: "102" <sip:102@10.11.201.147>;tag=bf76b652
Call time: 00h:04m:04s, 1st res in 121 ms, conn in 674ms
#0 audio speex @16kHz, sendrecv, peer=10.11.201.147:22268
SRTP status: Not active Crypto-suite: (null)
RX pt=100, last update:00h:00m:00.627s ago
total 12.1Kpkt 1.29MB (1.78MB +IP hdr) @avg=42.2Kbps/58.2Kbps
pkt loss=4 (0.0%), discrd=0 (0.0%), dup=0 (0.0%), reord=0 (0.0%)
(msec) min avg max last dev
loss period: 20.000 20.000 20.000 20.000 0.000
jitter : 0.000 6.258 32.187 14.500 3.978
TX pt=100, ptime=20, last update:00h:00m:02.013s ago
total 8.3Kpkt 249.0KB (581.7KB +IP hdr) @avg=8.1Kbps/19.0Kbps
pkt loss=152 (1.8%), dup=0 (0.0%), reorder=0 (0.0%)
(msec) min avg max last dev
loss period: 20.000 49.836 140.000 120.000 25.816
jitter : 5.500 20.248 75.750 33.250 14.885
RTT msec : 5.966 50.438 146.325 131.000 33.555
#1 video H263-1998, inactive, peer=10.11.201.147:50858
SRTP status: Not active Crypto-suite: (null)
RX last update:00h:01m:30.404s ago
total 12pkt 48B (528B +IP hdr) @avg=2bps/22bps
pkt loss=0 (0.0%), discrd=0 (0.0%), dup=0 (0.0%), reord=0 (0.0%)
(msec) min avg max last dev
loss period: 0.000 0.000 0.000 0.000 0.000
jitter : -0.001 0.000 0.000 0.000 0.000
TX last update:00h:01m:03.332s ago
total 0pkt 0B (0B +IP hdr) @avg=0bps/0bps
pkt loss=1 (100.0%), dup=0 (0.0%), reorder=0 (0.0%)
(msec) min avg max last dev
loss period: 0.000 0.000 0.000 0.000 0.000
jitter : 0.000 0.000 0.000 0.000 0.000
RTT msec : 0.000 0.000 0.000 0.000 0.000
#2 video H263-1998, inactive, peer=10.11.201.147:22264
SRTP status: Not active Crypto-suite: (null)
RX last update:00h:01m:39.059s ago
total 15pkt 60B (660B +IP hdr) @avg=2bps/27bps
pkt loss=0 (0.0%), discrd=2 (13.3%), dup=2 (13.3%), reord=0 (0.0%)
(msec) min avg max last dev
loss period: 0.000 0.000 0.000 0.000 0.000
jitter : -0.001 0.000 0.000 0.000 0.000
TX last update:00h:01m:08.467s ago
total 0pkt 0B (0B +IP hdr) @avg=0bps/0bps
pkt loss=1 (100.0%), dup=0 (0.0%), reorder=0 (0.0%)
(msec) min avg max last dev
loss period: 0.000 0.000 0.000 0.000 0.000
jitter : 0.000 0.000 0.000 0.000 0.000
RTT msec : 0.000 0.000 0.000 0.000 0.000
2013-11-15 14:59:39.214 ipjsua[220:6007] 14:59:39.214 pjsua_media.c .....Call 1: deinitializing media..
2013-11-15 14:59:39.231 ipjsua[220:6007] 14:59:39.231 pjsua_media.c .......Media stream call01:0 is destroyed
2013-11-15 14:59:39.253 ipjsua[220:6007] 14:59:39.253 pjsua_vid.c .......Stopping video stream..
2013-11-15 14:59:39.259 ipjsua[220:6007] 14:59:39.259 pjsua_media.c .......Media stream call01:1 is destroyed
2013-11-15 14:59:39.264 ipjsua[220:6007] 14:59:39.264 pjsua_vid.c .......Stopping video stream..
2013-11-15 14:59:39.276 ipjsua[220:6007] 14:59:39.276 pjsua_media.c .......Media stream call01:2 is destroyed
2013-11-15 14:59:40.232 ipjsua[220:6007] 14:59:40.231 pjsua_aud.c Closing sound device after idle for 1 second(s)
2013-11-15 14:59:40.234 ipjsua[220:6007] 14:59:40.234 pjsua_app.c .Turning sound device OFF
2013-11-15 14:59:40.253 ipjsua[220:6007] 14:59:40.252 pjsua_aud.c .Closing iPhone IO device sound playback device and iPhone IO device sound capture device
2013-11-15 14:59:40.415 ipjsua[220:6007] 14:59:40.415 coreaudio_dev. .core audio stream stopped
2013-11-15 15:00:33.609 ipjsua[220:6007] 15:00:33.608 pjsua_core.c .RX 719 bytes Request msg SUBSCRIBE/cseq=52 (rdata0xa41a14) from UDP 10.11.201.147:5060:
SUBSCRIBE sip:101@10.11.208.114:5060;ob SIP/2.0
2013-11-15 15:19:54.347 ipjsua[220:6007] 15:19:54.347 pjsua_app.c .....Call 2 is DISCONNECTED [reason=200 (Normal call clearing)]

PJSIP on iOS does not currently implement Video Media.
The data sheet states which OSs video is implemented for:
Video Media
Platforms:
Windows,
Linux,
Mac
Codecs:
H.263-1998 (ffmpeg),
H.264 (ffmpeg and x264)
Capture devices:
colorbar (all platforms)
DirectShow (Windows)
Video4Linux2 (Linux)
QuickTime (Mac OS X)
Rendering devices:
SDL (Windows, Linux, and Mac OS X)
DirectShow (Windows)
http://trac.pjsip.org/repos/wiki/PJSIP-Datasheet
Further, the video user guide states that mobile OSs are not yet supported:
Video is available in PJSIP version 2.0 and later. Only desktop platforms are supported; mobile devices such as iOS are not yet supported.
http://trac.pjsip.org/repos/wiki/Video_Users_Guide

Related

write: no buffer space available socket-can/linux-can

I'm running a program with two CAN channels (using TowerTech CAN Cape TT3201).
The two channels are can0 (500k) and can1 (125k). The can0 channel works perfectly, but can1 fails with a write: No buffer space available error.
I'm using ValueCAN3/VehicleSpy to check the messages.
This is before I run the program: can0 and can1 both seem to send, but only can0 shows up in VehicleSpy.
root@cantool:~# cansend can0 100#00
root@cantool:~# cansend can1 100#20
This is after I try running the program:
root@cantool:~# cansend can1 100#20
write: No buffer space available
root@cantool:~# cansend can0 111#10
While my program is running, I get this error for every message to be sent on can1:
2016-11-02 15:36:03,052 - can.socketcan.native.tx - WARNING - Failed to send: 0.000000 12f83018 010 1 00
2016-11-02 15:36:03,131 - can.socketcan.native.tx - WARNING - Failed to send: 0.000000 0af81118 010 6 00 00 00 00 00 00
2016-11-02 15:36:03,148 - can.socketcan.native.tx - WARNING - Failed to send: 0.000000 12f81018 010 6 00 00 00 00 00 00
2016-11-02 15:36:03,174 - can.socketcan.native.tx - WARNING - Failed to send: 0.000000 0af87018 010 3 00 00 00
2016-11-02 15:36:03,220 - can.socketcan.native.tx - WARNING - Failed to send: 0.000000 12f89018 010 4 00 00 00 00
2016-11-02 15:36:03,352 - can.socketcan.native.tx - WARNING - Failed to send: 0.000000 12f83018 010 1 00
However, sometimes the whole program works perfectly (after the module is rebooted, or at seemingly random instances).
How do I fix this?
root@cantool:~# uname -r
4.1.15-ti-rt-r43
After doing some digging, I found this:
root@cantool:~# ip -details link show can0
4: can0: <NOARP,UP,LOWER_UP, ECHO> mtu 16 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 10
link/can promiscuity 0
can state ERROR-ACTIVE (berr-counter tx 0 rx 0) restart-ms 100
bitrate 500000 sample-point 0.875
tq 125 prop-seg 6 phase-seg1 7 phase-seg2 2 sjw 1
c_can: tseg1 2..16 tseg2 1..8 sjw 1..4 brp 1..1024 brp-inc 1
clock 24000000
root@cantool:~# ip -details link show can1
5: can1: <NOARP,UP,LOWER_UP, ECHO> mtu 16 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 10
link/can promiscuity 0
can state STOPPED restart-ms 100
bitrate 125000 sample-point 0.875
tq 500 prop-seg 6 phase-seg1 7 phase-seg2 2 sjw 1
c_can: tseg1 2..16 tseg2 2..8 sjw 1..4 brp 1..64 brp-inc 1
clock 8000000
It turns out that can1 is STOPPED for some reason.
However, when I try:
ip link set can1 type can restart
RTNETLINK answers: Invalid argument
After you enable the can0 interface with sudo ifconfig can0 up, run:
sudo ifconfig can0 txqueuelen 1000
This will increase the number of frames allowed per kernel transmission queue for the queuing discipline.
... sometimes the whole program works perfectly (if the module is rebooted or some random instances).
The reason it works when you restart the SocketCAN interface is that restarting may clear just enough buffer space for sends to succeed again.
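On the application side, a brief back-off and retry can also ride out a momentarily full TX queue. This is only a minimal sketch of that idea, not part of the original answer; it assumes python-can is in use (as the log format above suggests), and the channel name and message ID are taken from those logs for illustration:

import time
import can  # python-can

# Assumed setup; adjust channel/interface settings to match your configuration.
bus = can.interface.Bus(channel='can1', bustype='socketcan')
msg = can.Message(arbitration_id=0x12F83018, data=[0x00])

for attempt in range(5):
    try:
        bus.send(msg)
        break
    except can.CanError:
        # "No buffer space available" (ENOBUFS) means the TX queue is full;
        # back off briefly and retry instead of failing immediately.
        time.sleep(0.01)

bus.shutdown()

This only masks short bursts; if the interface is actually STOPPED (as shown for can1 above), the queue never drains and the retries will keep failing.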

Test Plan with ApacheBench(AB) testing tool

I am trying to do load testing here. My backend is Ruby (2.2) on Rails (3).
I read many pages about how to work with ab.
Here is what I have tried:
ab -n 100 -c 30 url
Result:
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 52.74.130.35 (be patient).....done
Server Software: nginx/1.6.2
Server Hostname: 52.74.130.35
Server Port: 80
Document Path: url
Document Length: 1372 bytes
Concurrency Level: 3
Time taken for tests: 10.032 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 181600 bytes
HTML transferred: 137200 bytes
Requests per second: 9.97 [#/sec] (mean)
Time per request: 300.963 [ms] (mean)
Time per request: 100.321 [ms] (mean, across all concurrent requests)
Transfer rate: 17.68 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 2 9 25.0 5 227
Processing: 176 289 136.5 257 1134
Waiting: 175 275 77.9 256 600
Total: 180 298 139.2 264 1143
Percentage of the requests served within a certain time (ms)
50% 264
66% 285
75% 293
80% 312
90% 361
95% 587
98% 1043
99% 1143
This seems to be working perfectly. But my problem is that I want to test many APIs, not just one. So I have to write a script that lists all the APIs with particular probabilities (weights) and load tests them.
I know how this is possible with Locust, but Locust does not support nested JSON passed as parameters.
Can somebody help with this?
Also let me know if there is any problem or ambiguity in the question itself.
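For illustration only, here is a minimal sketch of the kind of weighted driver described above, assuming Python 3 with the requests library; the endpoint paths and weights are hypothetical placeholders, not from the question:

import random
import requests

# Hypothetical endpoints with relative weights (probabilities).
endpoints = [
    ("http://52.74.130.35/api/first", 0.7),
    ("http://52.74.130.35/api/second", 0.3),
]
urls = [u for u, _ in endpoints]
weights = [w for _, w in endpoints]

# Fire 100 requests, choosing an endpoint by weight each time.
for _ in range(100):
    url = random.choices(urls, weights=weights, k=1)[0]
    resp = requests.get(url, timeout=10)
    print(resp.status_code, url)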

memory leaks using matplotlib

This is not intended as a bug report. Even if these leaks are a result of mpl bugs, please interpret the question as asking for a way around them.
The problem is simple: plot a large chunk of data (using plot() or scatter()), clear/release everything, garbage collect, and still not nearly all the memory is released.
Line # Mem usage Increment Line Contents
================================================
391 122.312 MiB 0.000 MiB @profile
392 def plot_network_scatterplot(t_sim_stop, spikes_mat, n_cells_per_area, n_cells, basedir_output, condition_idx):
393
394 # make network scatterplot
395 122.312 MiB 0.000 MiB w, h = plt.figaspect(.1/(t_sim_stop/1E3))
396 122.324 MiB 0.012 MiB fig = mpl.figure.Figure(figsize=(10*w, 10*h))
397 122.328 MiB 0.004 MiB canvas = FigureCanvas(fig)
398 122.879 MiB 0.551 MiB ax = fig.add_axes([.01, .1, .98, .8])
399 134.879 MiB 12.000 MiB edgecolor_vec = np.array([(1., 0., 0.), (0., 0., 1.)])[1-((spikes_mat[:,3]+1)/2).astype(np.int)]
400 '''pathcoll = ax.scatter(spikes_mat[:,1],
401 spikes_mat[:,0] + n_cells_per_area * (spikes_mat[:,2]-1),
402 s=.5,
403 c=spikes_mat[:,3],
404 edgecolor=edgecolor_vec)'''
405 440.098 MiB 305.219 MiB pathcoll = ax.plot(np.random.rand(10000000), np.random.rand(10000000))
406 440.098 MiB 0.000 MiB ax.set_xlim([0., t_sim_stop])
407 440.098 MiB 0.000 MiB ax.set_ylim([1, n_cells])
408 440.098 MiB 0.000 MiB plt.xlabel('Time [ms]')
409 440.098 MiB 0.000 MiB plt.ylabel('Cell ID')
410 440.098 MiB 0.000 MiB plt.suptitle('Network activity scatterplot')
411 #plt.savefig(os.path.join(basedir_output, 'network_scatterplot-[cond=' + str(condition_idx) + '].png'))
412 931.898 MiB 491.801 MiB canvas.print_figure(os.path.join(basedir_output, 'network_scatterplot-[cond=' + str(condition_idx) + '].png'))
413 #fig.canvas.close()
414 #pathcoll.set_offsets([])
415 #pathcoll.remove()
416 931.898 MiB 0.000 MiB ax.cla()
417 931.898 MiB 0.000 MiB ax.clear()
418 931.898 MiB 0.000 MiB fig.clf()
419 931.898 MiB 0.000 MiB fig.clear()
420 931.898 MiB 0.000 MiB plt.clf()
421 932.352 MiB 0.453 MiB plt.cla()
422 932.352 MiB 0.000 MiB plt.close(fig)
423 932.352 MiB 0.000 MiB plt.close()
424 932.352 MiB 0.000 MiB del fig
425 932.352 MiB 0.000 MiB del ax
426 932.352 MiB 0.000 MiB del pathcoll
427 932.352 MiB 0.000 MiB del edgecolor_vec
428 932.352 MiB 0.000 MiB del canvas
429 505.094 MiB -427.258 MiB gc.collect()
430 505.094 MiB 0.000 MiB plt.close('all')
431 505.094 MiB 0.000 MiB gc.collect()
I have tried many combinations and different orders of all the clear/release calls, to no avail. I've also tried not creating the fig/canvas explicitly and just using mpl.pyplot, with the same results.
Is there any way to free this memory and go back out at the 122.312 MiB I came in with?
Cheers!
Alex Martelli explains:
"It's very hard, in general, for a process to 'give memory back to the OS' (until the process terminates and the OS gets back all the memory, of course) because (in most implementations) what malloc returns is carved out of big blocks for efficiency, but the whole block can't be given back if any part of it is still in use."
So what you think is a memory leak may just be a side effect of this. If so, fork can solve the problem.
Furthermore:
"The only really reliable way to ensure that a large but temporary use of memory DOES return all resources to the system when it's done is to have that use happen in a subprocess, which does the memory-hungry work then terminates."
Therefore, instead of trying to clear the figure and axes, delete references, and garbage-collect (none of which will work), you can use multiprocessing to run plot_network_scatterplot in a separate process:
import multiprocessing as mp
import os

# The remaining imports are assumed from the question's context.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas


def plot_network_scatterplot(
        t_sim_stop, spikes_mat, n_cells_per_area, n_cells, basedir_output,
        condition_idx):
    # make network scatterplot
    w, h = plt.figaspect(.1/(t_sim_stop/1E3))
    fig = mpl.figure.Figure(figsize=(10*w, 10*h))
    canvas = FigureCanvas(fig)
    ax = fig.add_axes([.01, .1, .98, .8])
    edgecolor_vec = np.array([(1., 0., 0.), (0., 0., 1.)])[1-((spikes_mat[:,3]+1)/2).astype(np.int)]
    '''pathcoll = ax.scatter(spikes_mat[:,1],
                             spikes_mat[:,0] + n_cells_per_area * (spikes_mat[:,2]-1),
                             s=.5,
                             c=spikes_mat[:,3],
                             edgecolor=edgecolor_vec)'''
    pathcoll = ax.plot(np.random.rand(10000000), np.random.rand(10000000))
    ax.set_xlim([0., t_sim_stop])
    ax.set_ylim([1, n_cells])
    plt.xlabel('Time [ms]')
    plt.ylabel('Cell ID')
    plt.suptitle('Network activity scatterplot')
    canvas.print_figure(os.path.join(basedir_output, 'network_scatterplot-[cond=' + str(condition_idx) + '].png'))


def spawn(func, *args):
    proc = mp.Process(target=func, args=args)
    proc.start()
    # Wait until proc terminates; all memory used for plotting is
    # returned to the OS when the child process exits.
    proc.join()


if __name__ == '__main__':
    spawn(plot_network_scatterplot, t_sim_stop, spikes_mat, n_cells_per_area,
          n_cells, basedir_output, condition_idx)

reducing jitter of serial ntp refclock

I am currently trying to connect my DIY DCF77 clock to ntpd (using Ubuntu). I followed the instructions here: http://wiki.ubuntuusers.de/Systemzeit.
With ntpq I can see the DCF77 clock:
~$ ntpq -c peers
remote refid st t when poll reach delay offset jitter
==============================================================================
+dispatch.mxjs.d 192.53.103.104 2 u 6 64 377 13.380 12.608 4.663
+main.macht.org 192.53.103.108 2 u 12 64 377 33.167 5.008 4.769
+alvo.fungus.at 91.195.238.4 3 u 15 64 377 16.949 7.454 28.075
-ns1.blazing.de 213.172.96.14 2 u - 64 377 10.072 14.170 2.335
*GENERIC(0) .DCFa. 0 l 31 64 377 0.000 5.362 4.621
LOCAL(0) .LOCL. 12 l 927 64 0 0.000 0.000 0.000
So far this looks OK. However I have two questions.
What exactly is the sign of the offset? Is .DCFa. ahead of the system clock or behind the system clock?
.DCFa. points to refclock-0 which is a DIY DCF77 clock emulating a Meinberg clock. It is connected to my Ubuntu Linux box with an FTDI usb-serial adapter running at 9600 7e2. I verified with a DSO that it emits the time with jitter significantly below 1ms. So I assume the jitter is introduced by either the FTDI adapter or the kernel. How would I find out and how can I reduce it?
Part One:
Positive offsets indicate time in the client is behind time on the server.
Negative offsets indicate that time in the client is ahead of time on the server.
I always remember this as "what needs to happen to my clock?"
+0.123 = Add 0.123 to me
-0.123 = Subtract 0.123 from me
Part Two:
Yes, USB serial converters add jitter. Get a real serial port. :) You can also use setserial and tell it that the serial port needs to be low_latency. Just apt-get install setserial.
Bonus Points:
Lose the unreferenced local clock entry. NO LOCL!!!!

Is this a good way to demonstrate the Node.js (Express.js) advantage over Rails/Django/etc.?

UPDATE
This was not supposed to be a benchmark or a Node vs Ruby thing (I should have made that clearer in the question, sorry). The point was to compare and demonstrate the difference between blocking and non-blocking and how easy it is to write non-blocking code. I could compare using EventMachine, for example, but Node has this built in, so it was the obvious choice.
I'm trying to demonstrate to some friends the advantage of Node.js (and its frameworks) over other technologies, in a way that is very simple to understand, mainly the non-blocking I/O part.
So I tried creating a (very small) Express.js app and a Rails one that would do an HTTP request to Google and count the resulting HTML length.
As expected (on my computer), Express.js was 10 times faster than Rails through ab (see below). My question is whether that is a "valid" way to demonstrate the main advantage that Node.js provides over other technologies (or is there some kind of caching going on in Express.js/Connect?).
Here is the code I used.
Expressjs
exports.index = function(req, res) {
  var http = require('http')
  var options = { host: 'www.google.com', port: 80, method: 'GET' }
  var html = ''
  var googleReq = http.request(options, function(googleRes) {
    googleRes.on('data', function(chunk) {
      html += chunk
    })
    googleRes.on('end', function() {
      res.render('index', { title: 'Express', html: html })
    })
  });
  googleReq.end();
};
Rails
require 'net/http'

class WelcomeController < ApplicationController
  def index
    @html = Net::HTTP.get(URI("http://www.google.com"))
    render layout: false
  end
end
These are the ab benchmark results:
Expressjs
Server Software:
Server Hostname: localhost
Server Port: 3000
Document Path: /
Document Length: 244 bytes
Concurrency Level: 20
Time taken for tests: 1.718 seconds
Complete requests: 50
Failed requests: 0
Write errors: 0
Total transferred: 25992 bytes
HTML transferred: 12200 bytes
Requests per second: 29.10 [#/sec] (mean)
Time per request: 687.315 [ms] (mean)
Time per request: 34.366 [ms] (mean, across all concurrent requests)
Transfer rate: 14.77 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 319 581 110.6 598 799
Waiting: 319 581 110.6 598 799
Total: 319 581 110.6 598 799
Percentage of the requests served within a certain time (ms)
50% 598
66% 608
75% 622
80% 625
90% 762
95% 778
98% 799
99% 799
100% 799 (longest request)
Rails
Server Software: WEBrick/1.3.1
Server Hostname: localhost
Server Port: 3001
Document Path: /
Document Length: 65 bytes
Concurrency Level: 20
Time taken for tests: 17.615 seconds
Complete requests: 50
Failed requests: 0
Write errors: 0
Total transferred: 21850 bytes
HTML transferred: 3250 bytes
Requests per second: 2.84 [#/sec] (mean)
Time per request: 7046.166 [ms] (mean)
Time per request: 352.308 [ms] (mean, across all concurrent requests)
Transfer rate: 1.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 180 387.8 0 999
Processing: 344 5161 2055.9 6380 7983
Waiting: 344 5160 2056.0 6380 7982
Total: 345 5341 2069.2 6386 7983
Percentage of the requests served within a certain time (ms)
50% 6386
66% 6399
75% 6402
80% 6408
90% 7710
95% 7766
98% 7983
99% 7983
100% 7983 (longest request)
To complement Sean's answer:
Benchmarks are useless. They show what you want to see; they don't show the real picture. If all your app does is proxy requests to Google, then an evented server is indeed a good choice (Node.js or an EventMachine-based server). But often you want to do something more than that, and this is where Rails is better: gems for every possible need, familiar sequential code (as opposed to callback spaghetti), rich tooling; I can go on.
When choosing one technology over another, assess all aspects, not just how fast it can proxy requests (unless, again, you're building a proxy server).
You're using WEBrick to do the test. Off the bat, the results are invalid because WEBrick can only process one request at a time. You should use something like Thin, which is built on top of EventMachine and can process multiple requests at a time. Your time per request across all concurrent requests, transfer rate, and connection times will improve dramatically with that change.
You should also keep in mind that request time is going to be different between each run because of network latency to Google. You should look at the numbers several times to get an average that you can compare.
In the end, you're probably not going to see a huge difference between Node and Rails in the benchmarks.
