Changing default focused screen on awesome wm - lua

I have a bit of a hacky setup on my laptop where I use optimus-manager to configure my screen layout at X startup based on whether an external monitor is connected or not. If it is, I only want to use the external monitor and not the laptop monitor, but because of a limitation of NVIDIA drivers, I need to leave my laptop monitor on, and just lower the backlight brightness. See my /etc/optimus-manager/xsetup-hybrid.sh for how this works (eDP-1 is my laptop screen, and HDMI-1-0 is my external monitor):
#!/bin/sh
# Everything you write here will be executed by the display manager when setting up the login screen in "hybrid" mode.
xrandr --output eDP-1 --mode 1920x1080 --pos 3000x0 --output HDMI-1-0 --primary --mode 2560x1080 --pos 0x0
if [ "$?" -ne "0" ]; then
echo "Not connected to display, don't dim internal monitor"
exit 0
fi
echo 0 | tee /sys/class/backlight/intel_backlight/brightness
Notice that eDP-1 (laptop monitor) is positioned at 3000x0. I've done this to keep the screens far enough apart so that my mouse can't accidentally stray from my external monitor's screen into my laptop monitor's screen.
This works pretty well, but for some reason every time I log into awesome (or anytime I change the screen layout with xrandr for that matter), awesome focuses the laptop monitor screen by default, even though xrandr is specifying the external monitor screen as --primary.
How can I change this behaviour so that my external monitor screen is focused by default if that monitor is connected, falling back to my laptop screen if it isn't?

You could warp the mouse cursor to the primary screen in your config:
mouse.screen = screen.primary
The above can also be tested in an already running awesome session via awesome-client: awesome-client 'mouse.screen = screen.primary'.
If you want to decide the pointer coordinates more precisely (the above uses the upper left corner of the screen), there is mouse.coords({ x = 42, y = 21 }).
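If you want the pointer somewhere more useful than the corner, the two can be combined. For example, the following awesome-client call (a small sketch using the screen's geometry table, as above) centers the pointer on the primary screen:
awesome-client '
    local g = screen.primary.geometry
    mouse.coords({ x = g.x + math.floor(g.width / 2), y = g.y + math.floor(g.height / 2) })
'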

It's likely that apps are being displayed on whichever screen the mouse currently resides on. You could use xdotool to ensure the mouse is on the desired display to begin with.
#!/bin/sh
## executed by display manager during login
if xrandr --output eDP-1 --mode 1920x1080 --pos 3000x0 --output HDMI-1-0 --primary --mode 2560x1080 --pos 0x0 ; then
    ## external display is connected: dim the internal panel
    echo 0 | tee /sys/class/backlight/intel_backlight/brightness
    ## sudo apt install xdotool
    ## x, y coordinates: half of 2560x1080, so the middle of the main screen
    xdotool mousemove 1280 540
else
    ## external display not plugged in
    xrandr --output eDP-1 --mode 1920x1080 --primary --pos 0x0
fi
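To double-check where the pointer actually lands after login, xdotool can also report the current position:
## print the pointer position as shell-style variables (X=..., Y=..., SCREEN=...)
xdotool getmouselocation --shell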

Related

How can I access a memory-mapped device, synthesized on an FPGA, with a Lauterbach script?

On a ZCU106 board with a Zynq UltraScale+ MPSoC, I have developed a simple test application that performs reads and writes on a memory-mapped device register. The device in question is a GPIO peripheral, synthesized on the FPGA and mapped at address 0xA0010000, configured via the Vivado tool. The application does not use the MMU.

If I launch and run my application from the Vitis debugger, everything works correctly, with no problems. However, I need to launch the application from Lauterbach's Trace32. When I launch it with a PRACTICE script (Lauterbach's scripting language), the application loads correctly, but any read or write to the memory address where the synthesized device is mapped on the FPGA returns a "debug port fail" error.

I therefore analyzed the Tcl script automatically generated by Vitis to see whether it performs any hardware configuration that I am not reproducing in the PRACTICE script. My suspicion is that there are special instructions to enable the mapping of the memory addresses that the processor can access. I attach the Tcl script generated by Vitis; perhaps the "offending" instruction is the loadhw -hw? Thanks in advance to anyone who can help me.
# In Vitis IDE create a Single Application Debug launch configuration,
# change the debug type to 'Attach to running target' and provide this
# tcl script in 'Execute Script' option.
# Path of this script: /home/daniele/vitis_workspace/interr_measurement_test_system/_ide/scripts/debugger_interr_measurement_test-default.tcl
#
#
# Usage with xsct:
# To debug using xsct, launch xsct and run below command
# source /home/daniele/vitis_workspace/interr_measurement_test_system/_ide/scripts/debugger_interr_measurement_test-default.tcl
#
connect -url tcp:127.0.0.1:3121
source /tools/Xilinx/Vitis/2021.2/scripts/vitis/util/zynqmp_utils.tcl
targets -set -nocase -filter {name =~"APU*"}
rst -system
after 3000
targets -set -filter {jtag_cable_name =~ "Xilinx HW-FTDI-TEST FT232H 49619" && level==0 && jtag_device_ctx=="jsn-HW-FTDI-TEST FT232H-49619-14730093-0"}
fpga -file /home/daniele/vitis_workspace/interr_measurement_test/_ide/bitstream/zcu106_int_meas_plat_wrapper.bit
targets -set -nocase -filter {name =~"APU*"}
loadhw -hw /home/daniele/vitis_workspace/zcu106_int_meas_plat_wrapper/export/zcu106_int_meas_plat_wrapper/hw/zcu106_int_meas_plat_wrapper.xsa -mem-ranges [list {0x80000000 0xbfffffff} {0x400000000 0x5ffffffff} {0x1000000000 0x7fffffffff}] -regs
configparams force-mem-access 1
targets -set -nocase -filter {name =~"APU*"}
set mode [expr [mrd -value 0xFF5E0200] & 0xf]
targets -set -nocase -filter {name =~ "*A53*#0"}
rst -processor
dow /home/daniele/vitis_workspace/zcu106_int_meas_plat_wrapper/export/zcu106_int_meas_plat_wrapper/sw/zcu106_int_meas_plat_wrapper/boot/fsbl.elf
set bp_30_4_fsbl_bp [bpadd -addr &XFsbl_Exit]
con -block -timeout 60
bpremove $bp_30_4_fsbl_bp
targets -set -nocase -filter {name =~ "*A53*#0"}
rst -processor
dow /home/daniele/vitis_workspace/interr_measurement_test/Debug/interr_measurement_test.elf
configparams force-mem-access 0
bpadd -addr &main

How to remove zero-packet (empty stream) records in Wireshark

I am very new to Wireshark. In my day-to-day job I need to remove the zero-byte records (empty streams) from a captured PCAP file. Please help me with this process. The attached image is for reference.
(screenshot: wireshark packets zero.png)
Since you have 47 TCP Streams and 28 that you want to remove, it might be a bit faster to filter for all the TCP streams that you do want to keep since there are only 19 of those.
For the 19 streams you want:
Right-click on the first TCP conversation and choose "Prepare a Filter -> Selected -> A<-->B".
For the next 17 TCP conversations, right-click on each one and choose "Prepare a Filter -> ... And Selected -> A<-->B".
Finally, for the last TCP stream, right-click on the TCP conversation and choose "Apply as Filter -> ... And Selected -> A<-->B".
You may wish to export the resulting filtered packets to a new file via "File -> Export Specified Packets... -> All packets:Displayed" so you won't have to keep filtering for those streams anymore.
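The result of those clicks is just a display filter selecting the conversations you want to keep. An equivalent filter can also be written by hand in terms of TCP stream indices and applied on the command line with tshark (the stream numbers and file names below are only illustrative):
# keep only the listed TCP streams and write them to a new capture file
tshark -r capture.pcapng -Y "tcp.stream eq 0 or tcp.stream eq 3 or tcp.stream eq 7" -w filtered.pcapng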
If you have a large number of streams to filter, then you are better off scripting something. Here's a script you can use that seems to work well in my testing on my Linux machine. If you're using Windows, you will need to write an equivalent batch file, or you may be able to use it as is if you have Cygwin installed.
#!/bin/bash
# Check usage
if [ ${#} -lt 2 ] ; then
    echo "Usage: $0 <infile> <outfile>"
    exit 1
fi
infile=${1}
outfile=${2}
# TODO: Could also pass the filter on the command-line too.
filter="ip.dst eq 192.168.10.44 and tcp.len > 0"
stream_filter=
for stream in $(tshark -r ${infile} -Y "${filter}" -T fields -e tcp.stream | sort -u | tr -d '\r')
do
    if [[ -z ${stream_filter} ]] ; then
        stream_filter="tcp.stream eq ${stream}"
    else
        stream_filter+=" or tcp.stream eq ${stream}"
    fi
done
tshark -r ${infile} -Y "${stream_filter}" -w ${outfile}
echo "Wrote ${outfile}"

How to monitor resources during slurm job?

I'm running jobs on our university cluster (as a regular user, no admin rights), which uses the SLURM scheduling system, and I'm interested in plotting the CPU and memory usage over time, i.e. while the job is running. I know about sacct and sstat, and I was thinking of including these commands in my submission script, e.g. something along the lines of
#!/bin/bash
#SBATCH <options>
# Running the actual job in background
srun my_program input.in output.out &
# While loop that records resources
JobStatus="$(sacct -j $SLURM_JOB_ID | awk 'FNR == 3 {print $6}')"
FIRST=0
#sleep time in seconds
STIME=15
while [ "$JobStatus" != "COMPLETED" ]; do
#update job status
JobStatus="$(sacct -j $SLURM_JOB_ID | awk 'FNR == 3 {print $6}')"
if [ "$JobStatus" == "RUNNING" ]; then
if [ $FIRST -eq 0 ]; then
sstat --format=AveCPU,AveRSS,MaxRSS -P -j ${SLURM_JOB_ID} >> usage.txt
FIRST=1
else
sstat --format=AveCPU,AveRSS,MaxRSS -P --noheader -j ${SLURM_JOB_ID} >> usage.txt
fi
sleep $STIME
elif [ "$JobStatus" == "PENDING" ]; then
sleep $STIME
else
sacct -j ${SLURM_JOB_ID} --format=AllocCPUS,ReqMem,MaxRSS,AveRSS,AveDiskRead,AveDiskWrite,ReqCPUS,AllocCPUs,NTasks,Elapsed,State >> usage.txt
JobStatus="COMPLETED"
break
fi
done
However, I'm not really convinced by this solution:
- sstat unfortunately doesn't show how many CPUs are used at the moment (only an average)
- MaxRSS is also not helpful if I try to record memory usage over time
- there still seems to be some error (the script doesn't stop after the job finishes)
Does anyone have an idea how to do that properly? Maybe even with top or htop instead of sstat? Any help is much appreciated.
Slurm offers a plugin to record a profile of a job (CPU usage, memory usage, even disk/network I/O for some technologies) into an HDF5 file. The file contains a time series for each tracked measure, and you can choose the time resolution.
You can activate it with
#SBATCH --profile=<all|none|[energy[,|task[,|filesystem[,|network]]]]>
See the documentation here.
To check that this plugin is installed, run
scontrol show config | grep AcctGatherProfileType
It should output AcctGatherProfileType = acct_gather_profile/hdf5.
The files are created in the folder referred to by the ProfileHDF5Dir Slurm configuration parameter (in slurm.conf).
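A minimal job script using the plugin could look like this sketch (the sampling interval is illustrative and subject to site policy; my_program is the job from your example):
#!/bin/bash
#SBATCH --profile=task          # record per-task CPU and memory time series
#SBATCH --acctg-freq=task=15    # sample every 15 seconds, if the site allows it
srun my_program input.in output.out
Once the job has finished, the per-node profile files can be merged into a single HDF5 file with sh5util, e.g. sh5util -j <jobid>.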
As for your script, you could try replacing sstat with an SSH connection to the compute nodes to run ps. Assuming pdsh or clush is installed, you could run something like:
pdsh -j $SLURM_JOB_ID ps -u $USER -o pid,state,cputime,%cpu,rssize,command --columns 100 >> usage.txt
This will give you CPU and memory usage per process.
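If you want a time series rather than a single snapshot, that command can be wrapped in a simple polling loop inside the submission script. A sketch, reusing the 15-second interval and usage.txt from your script, and assuming pdsh was built with the Slurm rcmd module (which provides -j):
srun my_program input.in output.out &
srun_pid=$!
# poll the job's nodes while the srun step is still running
while kill -0 "$srun_pid" 2>/dev/null; do
    date >> usage.txt
    pdsh -j $SLURM_JOB_ID ps -u $USER -o pid,state,cputime,%cpu,rssize,command --columns 100 >> usage.txt
    sleep 15
done
wait "$srun_pid"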
As a final note, your job never terminates simply because it will terminate when the while loop terminates, and the while loop will only terminate when the job terminates... The condition "$JobStatus" == "COMPLETED" will never be observed from within the script: when the job is completed, the script is killed along with it.

BeagleBone Black: P8_5 is stuck at HIGH

I can control P8_3, P8_4, but P8_5 stays stuck at 3V3.
I've booted the BB with this configuration to disable HDMI:
# cat /proc/cmdline
optargs=quiet capemgr.disable_partno=BB-BONELT-HDMI,BB-BONELT-HDMIN
Then I enabled GPIO 34+38+39 (P8_{5,3,4}):
# echo 34 > /sys/class/gpio/export
# echo 38 > /sys/class/gpio/export
# echo 39 > /sys/class/gpio/export
And set them as outputs:
# echo out > /sys/class/gpio/gpio34/direction
# echo out > /sys/class/gpio/gpio38/direction
# echo out > /sys/class/gpio/gpio39/direction
Then I'm setting them to alternate between HIGH/LOW at 1 Hz using a script which writes like so:
# value is 0/1, pin is 34/38/39
# echo $value > /sys/class/gpio/gpio$pin/value
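For reference, a minimal version of such a toggle loop over the sysfs interface above could look like this sketch:
#!/bin/sh
# alternate GPIOs 34, 38 and 39 between LOW and HIGH once per second
value=0
while true; do
    for pin in 34 38 39; do
        echo $value > /sys/class/gpio/gpio$pin/value
    done
    value=$((1 - value))
    sleep 1
done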
I measure P8_{3,4} alternating between 0V and 3V3, but P8_5 is stuck at 3V3.
I've tried compiling the devicetree overlay for gpio-P8.5 from here and adding it, but with the same result.
I'm aware that these pins are used by HDMI, but I thought I'd disabled HDMI with that kernel command-line, and I ideally need to use those specific pins due to the board I'm connecting the BBB to.
A colleague using my SD image also had the same issue on a BB green (which I believe doesn't have HDMI).
It seems the device tree files provided with the BBB were trying to configure GPIO#2 instead of GPIO#34 (gpio0_2 instead of gpio1_2).
I generated a new device-tree overlay for P8_5 (gpio1_2 / gpio34) with this and it all works now.

Raspberry Pi MJPG-Streamer low latency

I've built a Raspberry Pi robot. Now I want to stream video from the Raspberry Pi's onboard camera. I followed this tutorial:
http://blog.miguelgrinberg.com/post/how-to-build-and-run-mjpg-streamer-on-the-raspberry-pi/page/2
So I finally got it working, but now I want to get the latency as low as possible. Low latency is important, because controlling a robot with such a lag is impossible.
Any advice?
Have a nice day!
You should probably ask this on https://raspberrypi.stackexchange.com/
All potent solutions that can be found as of now use raspivid. It encodes the video directly as H.264, which is much more efficient than capturing every single frame.
The one that works best for me so far is:
First, on your Raspberry Pi:
raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=<IP-OF-PI> port=5000
Then, on your PC/viewing device:
gst-launch-1.0 -v tcpclientsrc host=<IP-OF-PI> port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
Source: http://pi.gbaman.info/?p=150
I think I have found from experimentation that the camera board does most of the processing, relieving the Raspberry Pi of much of the load. You can see this by running top on the Pi as it captures and streams.
First I run the following on a linux client:
nc -l -p 5001 | mplayer -fps 31 -cache 512 -
Then I run the following on the raspi:
/opt/vc/bin/raspivid -t 999999 -w 1920 -h 1080 -o - | nc 192.168.1.__ 5001
This was done over an ethernet connection from raspi to linux desktop both connected to a common ethernet hub.
I have made the following observations:
- these settings give me a pretty low lag (<100 ms)
- increasing the cache size (on the client) only leads to a larger lag, since the client will buffer more of the stream before it starts
- decreasing the cache size below some lower limit (512 for me) leads to a player error: "Cannot seek backward in linear streams!"
- specifying dimensions less than the default 1920x1080 leads to longer delays for smaller dimensions, especially when they are less than 640x480
- specifying bitrates other than the default leads to longer delays (I'm not sure what the default bitrate is)
- for any of the scenarios that cause lag, the lag seems to decrease gradually over time, and most configurations I tried had practically no lag after a minute or so
It's unfortunate that very little technical information seems to be available on the board apart from what commands to run to make it operate. Any more input in the comments or edits to this answer would be appreciated.
I realise this is an old post, but I recently needed to do something similar, so I created a Node.js Raspberry Pi MJpeg Server where you can pass the compression quality and the timeout (the number of milliseconds between frames).
Start the server:
node raspberry-pi-mjpeg-server.js -p 8080 -w 1280 -l 1024 -q 65 -t 100
Options:
-p, --port port number (default 8080)
-w, --width image width (default 640)
-l, --height image height (default 480)
-q, --quality jpeg image quality from 0 to 100 (default 85)
-t, --timeout timeout in milliseconds between frames (default 500)
-h, --help display this help
-v, --version show version
Open sourced as I'm sure it will help others.
