How to use custom mode timings with NVIDIA on Linux? (In a multi-screen setup)

I am trying out the NREAL Air, and after extensive testing I've found that it is more comfortable for me to use at the 90 Hz mode, which is not available via EDID. I was able to get this working by setting the following parameters in the "Screen" section of my xorg.conf:
Option "UseEDID" "false"
Option "ModeDebug" "true”
Option "ExactModeTimingsDVI" "true"
Option "ModeValidation" "NoEdidModes, NoMaxPClkCheck, AllowNonEdidModes"
And then appropriately configuring the rest of my xorg.conf. See https://gist.github.com/cnlohr/9375d6fc0097b50f73c4da7b75a20e83
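For reference, here is a minimal sketch of how those options sit in a Screen section (the Identifier and Device names are placeholders; the gist has the full config):
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "UseEDID" "false"
    Option     "ModeDebug" "true"
    Option     "ExactModeTimingsDVI" "true"
    Option     "ModeValidation" "NoEdidModes, NoMaxPClkCheck, AllowNonEdidModes"
EndSection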
This has the added benefit of letting you use custom modelines with xrandr, which is awesome, by the way!
sudo xrandr --output DP-2 --newmode HS2 200 1920 1952 1968 2000 1080 1089 1094 1250 +hsync +vsync
sudo xrandr --addmode DP-2 HS2
sudo xrandr --output DP-2 --mode HS2
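If you need timings for other modes as a starting point, the cvt utility can generate a candidate modeline (here for 1920x1080 at 90 Hz) that you can then trim into a --newmode invocation like the one above:
cvt 1920 1080 90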
This does not work with display managers like lightdm, however: they switch into my desired mode, then immediately fall back to the default 1024x768 on both monitors, which my laptop display cannot handle.
It does work with startx, but I'd rather not use my system that way, as I like display managers.
Even with custom ModeValidation options, as soon as I leave UseEDID enabled, the NVIDIA driver seems to block xrandr from adding any particularly interesting modes, failing with the following error:
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 18 (RRAddOutputMode)
Serial number of failed request: 45
Current serial number in output stream: 46
So, I'm in a bit of a pickle. If I want to use the smoother, less headache-inducing mode for the NREAL Air, I can't use EDIDs, but what is a life without EDIDs?

Related

How can I tell if my laptop can do 4k resolution? (Using Linux Mint)

I am thinking about getting a new monitor. Short of plugging one in and seeing what happens, how can I tell if my laptop can output 4k resolution?
Is there a command I can run that will tell me this?
I thought maybe I could use xrandr, but I think that only tells me what each monitor is capable of (even if the controller is capable of more).
I also thought maybe I could look up the device from the lspci output and search for it on Google, but I couldn't find much.
My lspci -v says:
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA controller])
Subsystem: Lenovo Device 500d
Flags: bus master, fast devsel, latency 0, IRQ 45
Memory at f3000000 (64-bit, non-prefetchable) [size=4M]
Memory at d0000000 (64-bit, prefetchable) [size=256M]
I/O ports at 6000 [size=64]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: i915
Update: Mr. Llama's suggestion about xrandr made me think that this output could be useful to someone who knows more than I do. Here's my xrandr:
Screen 0: minimum 8 x 8, current 5206 x 1080, maximum 16384 x 16384
eDP-1-0 connected 1366x768+0+0 309mm x 173mm
1366x768 60.1*+
... other resolution options removed for brevity ...
320x240 60.1
HDMI-1-0 connected 1920x1080+3286+0 160mm x 90mm
1920x1080 60.0*+ 50.0 59.9
... other resolution options removed for brevity ...
640x480 60.0 59.9
DisplayPort-1-1 connected 1920x1080+1366+0 509mm x 286mm
1920x1080 60.0*+
... other resolution options removed for brevity ...
720x400 70.1
VGA-1-0 disconnected
HDMI-1-1 disconnected
DisplayPort-1-0 disconnected
Looks like the xrandr command might be of use to you. It lists the available and current monitor resolutions.
See this SuperUser question for more information.

Set mmc2 on BeagleBone Black

I am working with a BeagleBone Black and I would like to use the mmc2 interface.
According to the AM335x TRM, a BeagleBone Black should have 3 MMC interfaces available:
mmc0 (SD card);
mmc1 (2 GB flash);
mmc2.
I am trying to enable mmc2 through the device tree (and I am quite sure I have the right pin settings), but dmesg gives me:
/ocp/mmc#47810000: can't find DMA channel
omap_hsmmc mmc.11: unable to obtain RX DMA engine channel 65
Putting an oscilloscope probe on the header (e.g. on the mmc2 clk signal), I do not see any transitions. I already removed R160 to make mmc2 cmd accessible, but I do not see any transitions there either.
I tried to enable it both with
echo > /sys/devices/..../slots
and with the
capemgr.enable_partno
kernel parameter, with no success. I can see it in
/sys/devices/..../slots
(with the L meaning it is loaded), but there is no way to see any signal on the header.
I have already googled this, but the answers are not clear at all.
Any ideas?
My
uname -a
is:
Linux beaglebone 3.8.13 #1 SMP Tue Jun 18 02:11:09 EDT 2013 armv7l GNU/Linux
Thanks for your help.
You need to map the mmc2 DMA events to free DMA channels, since these events are not direct-mapped.
I was not able to do this successfully using device tree overlays, so I made the change directly in am335x-bone-common.dtsi (not sure this is the best way, though):
&edma {
ti,edma-xbar-event-map = <32 12>, /* gpevt2 -> 12 */
<30 20>, /* xdma_event_intr2 -> 20 */
+ <1 32>,
+ <2 33>;
};
In the example above, event 1 (SDTXEVT2) was mapped to channel 32 and event 2 (SDRXEVT2) to channel 33.
If you want to pick another open DMA channel, check Table 11-23 (Direct Mapped) and Table 11-24 (Crossbar Mapped) in the technical reference manual, Rev. J.
In your device tree overlay file, add these channels to the mmc3 node:
dmas = <&edma 32
&edma 33>;
dma-names = "tx", "rx";

Error in file.read() return above 2 GB on 64-bit Python

I have several ~50 GB text files that I need to parse for specific contents. My files' contents are organized in 4-line blocks. To perform this analysis I read in subsections of the file using file.read(chunk_size), split them into blocks of 4, and then analyze them.
Because I run this script often, I've been optimizing it and have tried varying the chunk size. I run 64-bit Python 2.7.1 on OS X Lion on a computer with 16 GB of RAM, and I noticed that when I load chunks >= 2^31, instead of the expected text I get large amounts of \x00 repeated. This continues, as far as my testing has shown, all the way up to and including 2^32, after which I once again get text. However, it seems that it's only returning as many characters as bytes have been added to the buffer above 4 GB.
My test code:
for i in range((2**31)-3, (2**31)+3) + range((2**32)-3, (2**32)+10):
    with open('mybigtextfile.txt', 'rU') as inf:
        print '%s\t%r' % (i, inf.read(i)[0:10])
My output:
2147483645 '#HWI-ST550'
2147483646 '#HWI-ST550'
2147483647 '#HWI-ST550'
2147483648 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483649 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483650 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967293 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967294 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967295 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967296 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967297 '#\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967298 '#H\x00\x00\x00\x00\x00\x00\x00\x00'
4294967299 '#HW\x00\x00\x00\x00\x00\x00\x00'
4294967300 '#HWI\x00\x00\x00\x00\x00\x00'
4294967301 '#HWI-\x00\x00\x00\x00\x00'
4294967302 '#HWI-S\x00\x00\x00\x00'
4294967303 '#HWI-ST\x00\x00\x00'
4294967304 '#HWI-ST5\x00\x00'
4294967305 '#HWI-ST55\x00'
What exactly is going on?
Yes, this is a known issue, according to a comment in CPython's source code. You can check it in Modules/_io/fileio.c. The code adds a workaround on 64-bit Microsoft Windows only.
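Until that is fixed on other platforms, a practical workaround is to cap each read below 2^31 bytes and accumulate the chunks yourself. A minimal sketch (the helper name and file name are placeholders):
def read_big(path, total, max_chunk=2**31 - 1):
    # Read `total` bytes from `path` without ever asking
    # file.read() for 2 GB or more in a single call.
    parts = []
    with open(path, 'rb') as f:
        remaining = total
        while remaining > 0:
            chunk = f.read(min(remaining, max_chunk))
            if not chunk:
                break  # hit EOF early
            parts.append(chunk)
            remaining -= len(chunk)
    return ''.join(parts)

data = read_big('mybigtextfile.txt', 2**32)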

How do I set the interlaced flag on an MKV file so that VLC can automatically play it back deinterlaced?

I've got an MKV file whose source is interlaced NTSC MPEG-2 video. I converted it to H.264 MKV with HandBrake, but this process did not set the "interlaced" flag in the MKV file. The content is interlaced—and I do want it to stay interlaced because it looks much better playing back as 60 fields-per-second content with on-the-fly deinterlacing than it does as 30 frames-per-second content that's been deinterlaced at encode-time.
I tried this...
mkvpropedit -e track:v1 -a interlaced=1 foo.mkv
which did indeed set the interlaced bit...
|+ Segment tracks
| + A track
| + Video track
| + Pixel width: 704
| + Pixel height: 480
| + Display width: 625
| + Display height: 480
| + Interlaced: 1
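(That track dump is the kind of output mkvinfo gives; re-running it after the edit is an easy way to confirm the flag was written:
mkvinfo foo.mkv
)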
But when I play the video with VLC with Deinterlace set to Automatic, it doesn't think the video is interlaced and therefore doesn't do the deinterlacing.
What am I doing wrong?
Software versions:
HandBrake 0.9.5
mkvpropedit v5.0.1
Mac OS X 10.7.3
To make HandBrake set the interlaced flag:
- use the H.264 (x264) video codec
- at the bottom of the Advanced tab, add :tff or :bff (depending on whether the source is top field first or bottom field first); see the example below
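For instance, if the Advanced tab already held the hypothetical option string ref=2:bframes=2, appending the flag for top-field-first material would give:
ref=2:bframes=2:tff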
I would recommend trying FFmpeg.
Documentation: http://ffmpeg.org/ffmpeg.html
‘-ilme’
Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
Use this option if your input file is interlaced and you want to keep
the interlaced format for minimum losses. The alternative is to
deinterlace the input stream with ‘-deinterlace’, but deinterlacing
introduces losses.
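As a sketch of how that might be used (file names are placeholders, and note that -ilme applies to the MPEG-2/MPEG-4 encoders quoted above, not to x264):
ffmpeg -i input.mkv -c:v mpeg2video -ilme -c:a copy output.mkv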
Since you mentioned you are on OSX 10.7 you can use MacPorts to install all dependencies + ffmpeg for you (once the deps are installed you can also build the latest from git).
http://www.macports.org/
(You must be comfortable with the command line for all these tools.)

Erlang/OTP - Timing Applications

I am interested in benchmarking different parts of my program for speed. I have tried using info(statistics) and erlang:now().
I need to know the average speed down to the microsecond. I don't know why I am having trouble with a script I wrote.
It should be able to start anywhere and end anywhere. I ran into a problem when I tried starting it on a process that may be running up to four times in parallel.
Is there anyone who already has a solution to this issue?
EDIT:
Willing to give a bounty if someone can provide a script to do it. It needs to work across multiple spawned processes. I cannot accept a function like timer:tc, at least in the implementations I have seen: it only traverses one process, and even then some major editing is necessary for a full test of a full program. Hope I made it clear enough.
Here's how to use eprof, likely the easiest solution for you:
First you need to start it, like most applications out there:
23> eprof:start().
{ok,<0.95.0>}
Eprof supports two profiling modes. You can call it and ask it to profile a certain function, but we can't use that here because other processes would mess everything up. We need to start the profiling manually and tell it when to stop (this is why you won't have an easy script, by the way).
24> eprof:start_profiling([self()]).
profiling
This tells eprof to profile everything that will be run and spawned from the shell. New processes will be included here. I will run some arbitrary multiprocessing function I have, which spawns about 4 processes communicating with each other for a few seconds:
25> trade_calls:main_ab().
Spawned Carl: <0.99.0>
Spawned Jim: <0.101.0>
<0.100.0>
Jim: asking user <0.99.0> for a trade
Carl: <0.101.0> asked for a trade negotiation
Carl: accepting negotiation
Jim: starting negotiation
... <snip> ...
We can now tell eprof to stop profiling once the function is done running.
26> eprof:stop_profiling().
profiling_stopped
And we want the logs. Eprof will print them to screen by default. You can ask it to also log to a file with eprof:log(File). Then you can tell it to analyze the results. We tell it to collapse the run time from all processes into a single table with the option total (see the manual for more options):
27> eprof:analyze(total).
FUNCTION CALLS % TIME [uS / CALLS]
-------- ----- --- ---- [----------]
io:o_request/3 46 0.00 0 [ 0.00]
io:columns/0 2 0.00 0 [ 0.00]
io:columns/1 2 0.00 0 [ 0.00]
io:format/1 4 0.00 0 [ 0.00]
io:format/2 46 0.00 0 [ 0.00]
io:request/2 48 0.00 0 [ 0.00]
...
erlang:atom_to_list/1 5 0.00 0 [ 0.00]
io:format/3 46 16.67 1000 [ 21.74]
erl_eval:bindings/1 4 16.67 1000 [ 250.00]
dict:store_bkt_val/3 400 16.67 1000 [ 2.50]
dict:store/3 114 50.00 3000 [ 26.32]
And you can see that most of the time (50%) is spent in dict:store/3. 16.67% is taken in outputting the result, and another 16.67% is taken by erl_eval (this is what you get by running short functions in the shell -- parsing them becomes longer than running them).
You can then start going from there. That's the basics of profiling run times with Erlang. Handle with care: eprof can be quite a load on a production system or for functions that run for too long.
You can use eprof or fprof.
The normal way to do this is with timer:tc. Here is a good explanation.
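For a sense of scale, timer:tc/3 returns the elapsed wall-clock time in microseconds together with the function's result; a minimal sketch with placeholder arguments:
{Time, Result} = timer:tc(lists, seq, [1, 100000]).
%% Time is in microseconds; Result is the value of lists:seq(1, 100000).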
I can recommend this tool: https://github.com/virtan/eep
You will get something like this as a result: https://raw.github.com/virtan/eep/master/doc/sshot1.png
Step-by-step instructions for profiling all processes on a running system:
On target system:
1> eep:start_file_tracing("file_name"), timer:sleep(20000), eep:stop_tracing().
$ scp -C $PWD/file_name.trace desktop:
On desktop:
1> eep:convert_tracing("file_name").
$ kcachegrind callgrind.out.file_name
