How do I grep a command's output for a specific block?

I'm looking for a specific block in some command output with grep.
For example, I have this output from an Android device:
Stream volumes (device: index)
- STREAM_VOICE_CALL:
   Muted: false
   Min: 1
   Max: 5
   Current: 40000000 (default): 4
   Devices: earpiece
- STREAM_SYSTEM:
   Muted: false
   Min: 0
   Max: 7
   Current: 40000000 (default): 5
   Devices: speaker
- STREAM_RING:
   Muted: false
   Min: 0
   Max: 7
   Current: 40000000 (default): 5
   Devices: speaker
**- STREAM_MUSIC:
   Muted: false
   Min: 0
   Max: 15
   Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
   Devices: speaker**
- STREAM_ALARM:
   Muted: false
   Min: 0
   Max: 7
   Current: 40000000 (default): 6
   Devices: speaker
- STREAM_NOTIFICATION:
   Muted: false
   Min: 0
   Max: 7
   Current: 40000000 (default): 5
   Devices: speaker
- STREAM_BLUETOOTH_SCO:
   Muted: false
   Min: 0
   Max: 15
   Current: 40000000 (default): 7
   Devices: earpiece
- STREAM_SYSTEM_ENFORCED:
   Muted: false
   Min: 0
   Max: 7
   Current: 40000000 (default): 5
   Devices: speaker
- STREAM_DTMF:
   Muted: false
   Min: 0
   Max: 15
   Current: 40000000 (default): 11
   Devices: speaker
- STREAM_TTS:
   Muted: false
   Min: 0
   Max: 15
   Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
   Devices: speaker
- STREAM_ACCESSIBILITY:
   Muted: false
   Min: 0
   Max: 15
   Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
   Devices: speaker
I need to get the block marked within ** ** using grep. Which grep command do I need to find that specific block of output?
I've tried
adb shell dumpsys audio | grep {STREAM_MUSIC:,STREAM_ALARM}
which returns nothing, and
adb shell dumpsys audio | grep -w STREAM_MUSIC
which returns only the first line.

If you can use awk, you can do this:
awk '/- STREAM/ {f=0} /- STREAM_MUSIC:/ {f=1} f'
- STREAM_MUSIC:
   Muted: false
   Min: 0
   Max: 15
   Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
   Devices: speaker
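For context: the awk program keeps a flag f. Any line matching - STREAM clears it, the - STREAM_MUSIC: header then sets it, and the bare f at the end prints every line while the flag is set, so exactly one block is emitted. The full pipeline would be:
adb shell dumpsys audio | awk '/- STREAM/ {f=0} /- STREAM_MUSIC:/ {f=1} f'
If grep is a hard requirement and the block length is fixed (five lines after the header, as in the dump above), GNU grep's -A option is a rough equivalent; the -- stops grep from treating the leading dash in the pattern as an option:
adb shell dumpsys audio | grep -A5 -- '- STREAM_MUSIC:'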


YOLOv5 model not able to train

I'm making a model to detect potholes in an image. I've done everything right or so it seems to me, but I can't train the model for some reason. What might be the problem here?
!python train.py --img 640 --cfg yolov5m.yaml --hyp data/hyps/hyp.scratch-med.yaml --batch 20 --epochs 300 --data data/potholeData.yaml --weights yolov5m.pt --workers 4 --name yolo_pothole_det_m
This is the final line of the code, which outputs the following.
train: weights=yolov5m.pt, cfg=yolov5m.yaml, data=data/potholeData.yaml, hyp=data/hyps/hyp.scratch-med.yaml, epochs=300, batch_size=20, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=4, project=runs/train, name=yolo_pothole_det_m, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v7.0-23-g5dc1ce4 Python-3.9.13 torch-1.13.0 CPU
hyperparameters: lr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.3, cls_pw=1.0, obj=0.7, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.9, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.1, copy_paste=0.0
ClearML: run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=1
from n params module arguments
0 -1 1 5280 models.common.Conv [3, 48, 6, 2, 2]
1 -1 1 41664 models.common.Conv [48, 96, 3, 2]
2 -1 2 65280 models.common.C3 [96, 96, 2]
3 -1 1 166272 models.common.Conv [96, 192, 3, 2]
4 -1 4 444672 models.common.C3 [192, 192, 4]
5 -1 1 664320 models.common.Conv [192, 384, 3, 2]
6 -1 6 2512896 models.common.C3 [384, 384, 6]
7 -1 1 2655744 models.common.Conv [384, 768, 3, 2]
8 -1 2 4134912 models.common.C3 [768, 768, 2]
9 -1 1 1476864 models.common.SPPF [768, 768, 5]
10 -1 1 295680 models.common.Conv [768, 384, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 2 1182720 models.common.C3 [768, 384, 2, False]
14 -1 1 74112 models.common.Conv [384, 192, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 2 296448 models.common.C3 [384, 192, 2, False]
18 -1 1 332160 models.common.Conv [192, 192, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 2 1035264 models.common.C3 [384, 384, 2, False]
21 -1 1 1327872 models.common.Conv [384, 384, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 2 4134912 models.common.C3 [768, 768, 2, False]
24 [17, 20, 23] 1 24246 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [192, 384, 768]]
Isn't it supposed to train the model after that? What am I doing wrong for it to stop right here?
In the console output you can see that it didn't read any image dataset. Make sure that your potholeData.yaml file is located in the right place. In this file you have to write something like this:
train: ../train/images  # path to train images
val: ../valid/images  # path to validation images
nc: 1  # number of classes
names: ['pothole']  # names of the classes
After this you can run it again and your training will continue.
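For reference, here is a directory layout that would match those relative paths, assuming potholeData.yaml sits in the repository's data/ folder (the folder names are an illustrative assumption, not taken from the question):
data/
  potholeData.yaml
train/
  images/  # training images (.jpg/.png)
  labels/  # one YOLO-format .txt label file per image
valid/
  images/
  labels/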

Expanding a list contained in a column, so that each element of the list corresponds to its own column and is represented as a binary variable

I have a dataframe that looks like this:
skill_list name profile 561 904 468 875 737 402 882...
[561, 564, 632, 859] Aaron Weidele wordpress developer 0 0 0 0 0 0 0
[737, 399, 882, 1086, 5...] Abdelrady Tantawy full stack developer 0 0 0 0 0 0 0
[904, 468, 783, 1120, 8...] Abhijeet A Mulgund machine learning dev... 0 0 0 0 0 0 0
[468] Abhijeet Tiwari salesforce programmi... 0 0 0 0 0 0 0
[518, 466, 875, 445, 402..] Abhimanyu Veer A... machine learning devel... 0 0 0 0 0 0 0
The skill_list column contains a list of encoded skills, which correspond to a developer. I would like to expand each list contained within the skill_list column, so that each encoded skill is represented within its own column as a binary variable (1 for on and 0 for off). Expected output would be:
skill_list name profile 561 904 468 875 737 402 882...
[561, 564, 632, 859] Aaron Weidele wordpress developer 1 0 0 0 0 0 0
[737, 399, 882, 1086, 5...] Abdelrady Tantawy full stack developer 0 0 0 0 1 0 1
[904, 468, 783, 1120, 8...] Abhijeet A Mulgund machine learning dev... 0 1 1 0 0 0 0
[468] Abhijeet Tiwari salesforce programmi... 0 0 1 0 0 0 0
[518, 466, 875, 445, 402..] Abhimanyu Veer A... machine learning devel... 0 0 0 1 0 1 0
I've tried:
for index, row in df_vector_matrix["skill_list"].items():
    for item in row:
        for col in df_vector_matrix.columns:
            if item == col:
                df_vector_matrix.loc[item, col] = "1"
            else:
                0
I would really appreciate the help!
You can try MultiLabelBinarizer from scikit-learn.
The example below might help.
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

lb = MultiLabelBinarizer()
lb_res = lb.fit_transform(df_vector_matrix['skill_list'])
# convert the result into a dataframe, one column per skill
res = pd.DataFrame(lb_res, columns=lb.classes_)
# concatenate the result with the original dataframe
df_vector_matrix = pd.concat([df_vector_matrix, res], axis=1)
Below is a complete example with a toy dataframe, where the col column holds list values.
>>> import pandas as pd
>>> from sklearn.preprocessing import MultiLabelBinarizer
>>> d ={'col':[[1,2,3],[2,3,4,5],[2]],'name':['abc','vdf','rt']}
>>> df = pd.DataFrame(d)
>>> df
col name
0 [1, 2, 3] abc
1 [2, 3, 4, 5] vdf
2 [2] rt
>>> lb = MultiLabelBinarizer()
>>> lb_res = lb.fit_transform(df['col'])
>>> res = pd.DataFrame(lb_res,columns=lb.classes_)
>>> pd.concat([df,res],axis=1)
col name 1 2 3 4 5
0 [1, 2, 3] abc 1 1 1 0 0
1 [2, 3, 4, 5] vdf 0 1 1 1 1
2 [2] rt 0 1 0 0 0
>>>
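One pitfall worth noting (an assumption about how the data was loaded, not something stated in the question): if skill_list came from a CSV file, its cells may be strings like "[561, 564, 632, 859]" rather than actual lists, and MultiLabelBinarizer would then iterate over individual characters. Parsing the strings first avoids that:
import ast
# Hypothetical pre-processing step; only needed if the cells are strings.
df_vector_matrix['skill_list'] = df_vector_matrix['skill_list'].apply(
    lambda s: ast.literal_eval(s) if isinstance(s, str) else s)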

CIFilter/CIContext gives different results on Simulator and Device

I am using (relatively) simple filters on iOS using Core Image.
However, on the Simulator I get my expected results but the device gives slightly different output.
The filter causing the issue is CIEdgeWork, but it is combined with some other filters.
The code just adds a set of CIFilters to a CIImage (created by loading PNG data).
The filters are: CIColorClamp -> CIColorInvert -> composited over CIConstantColorGenerator (white) -> CIEdgeWork.
The resulting CIImage is rendered out using CIContext.pngRepresentation(…) and displayed in a UIImageView.
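For reference, a minimal sketch of that chain using the CIFilterBuiltins API (the function name and parameter values are illustrative assumptions, not the asker's actual code; the clamp bounds match the _colorClampAP line in the debug output below):
import CoreImage
import CoreImage.CIFilterBuiltins

// Hypothetical reconstruction of the described chain:
// CIColorClamp -> CIColorInvert -> composite over white -> CIEdgeWork.
func edgeWorked(_ input: CIImage) -> CIImage? {
    let clamp = CIFilter.colorClamp()
    clamp.inputImage = input
    clamp.minComponents = CIVector(x: 0, y: 0, z: 0, w: 0)
    clamp.maxComponents = CIVector(x: 0, y: 0, z: 0, w: 1)

    let invert = CIFilter.colorInvert()
    invert.inputImage = clamp.outputImage

    let white = CIFilter.constantColorGenerator()
    white.color = CIColor.white
    guard let inverted = invert.outputImage,
          let background = white.outputImage?.cropped(to: input.extent)
    else { return nil }

    let edgeWork = CIFilter.edgeWork()
    edgeWork.inputImage = inverted.composited(over: background)
    edgeWork.radius = 3 // assumed value
    return edgeWork.outputImage?.cropped(to: input.extent)
}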
Outputting the debug descriptions of the CIImage shows only one (minor?) difference.
Simulator
<CIImage: 0x600000ae4d20 extent [0 0 240 240]>
crop [0 0 240 240] extent=[0 0 240 240]
colorkernel _edgeWorkContrast(src,contrast=15.3272) extent=[infinite]
kernel _cubicUpsample10(src,scale=[0.25 0.25 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[1 4 0 1]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[4 1 1 0]) extent=[infinite]
colorkernel _edgeWork(src,blurred) extent=[infinite]
colorkernel _srcOver(src,dst) extent=[infinite] <0>
premultiply extent=[0 0 240 240]
colorkernel _colorClampAP(c,lo=[0 0 0 0],hi=[0 0 0 1]) extent=[0 0 240 240]
unpremultiply extent=[0 0 240 240]
affine [1 0 0 -1 0 240] extent=[0 0 240 240]
colormatch "sRGB IEC61966-2.1"_to_workingspace extent=[0 0 240 240]
IOSurface 0x6000033ec410(1) seed:0 RGBA8 alpha_unpremul extent=[0 0 240 240]
fill [1 1 1 1 devicergb] extent=[infinite][0 0 1 1] opaque
kernel _cubicUpsample10(src,scale=[0.25 0.25 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[1 4 0 1]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[4 1 1 0]) extent=[infinite]
<0>
Device
<CIImage: 0x280743ba0 extent [0 0 240 240]>
crop [0 0 240 240] extent=[0 0 240 240]
colorkernel _edgeWorkContrast(src,contrast=15.3272) extent=[infinite]
kernel _cubicUpsample10(src,scale=[0.25 0.25 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[1 4 0 1]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[4 1 1 0]) extent=[infinite]
colorkernel _edgeWork(src,blurred) extent=[infinite]
colorkernel _srcOver(src,dst) extent=[infinite] <0>
premultiply extent=[0 0 240 240]
colorkernel _colorClampAP(c,lo=[0 0 0 0],hi=[0 0 0 1]) extent=[0 0 240 240]
unpremultiply extent=[0 0 240 240]
affine [1 0 0 -1 0 240] extent=[0 0 240 240]
colormatch "sRGB IEC61966-2.1"_to_workingspace extent=[0 0 240 240]
IOSurface 0x28074d5c0(70) seed:1 RGBA8 extent=[0 0 240 240]
fill [1 1 1 1 devicergb] extent=[infinite][0 0 1 1] opaque
kernel _cubicUpsample10(src,scale=[0.25 0.25 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianBlur3(src,offset0=[0 0 0 0]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[1 4 0 1]) extent=[infinite]
kernel _gaussianReduce4(src,scale=[4 1 1 0]) extent=[infinite]
<0>
The lines that differ are:
Simulator
IOSurface 0x6000033ec410(1) seed:0 RGBA8 alpha_unpremul extent=[0 0 240 240]
Device
IOSurface 0x28074d5c0(70) seed:1 RGBA8 extent=[0 0 240 240]
I don't know enough about graphics to say if this will have an effect.
What did I try?
My first (uneducated) guess was that the Simulator was using the CPU and the device was using the GPU. So I used this code to try and force the device to use the CPU (it had no effect):
// ... Create CIImage with filters, set to `outputCIImage`
guard let colourSpace = CGColorSpace(name: CGColorSpace.sRGB) else { throw RenderError.failedToCreateColourSpace }
let context = CIContext(options: [CIContextOption.useSoftwareRenderer : true])
guard let png = context.pngRepresentation(of: outputCIImage, format: .RGBA8, colorSpace: colourSpace, options: [:]) else { throw RenderError.failedToCreatePNGData }
// ... return `png` as Data, then used in `UIImage(data: ...)`
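Given that the only difference is the alpha representation of the source IOSurface (alpha_unpremul on the Simulator, premultiplied on the device), the two environments appear to decode the PNG into different alpha representations before any filter runs. One experiment would be to normalize the input before filtering (an untested sketch, an assumption rather than a confirmed fix):
// Hypothetical: force the source into unpremultiplied alpha before
// the filter chain so both environments start from the same bitmap.
let source = CIImage(data: pngData)! // pngData: the loaded PNG from the question
let normalized = source.unpremultiplyingAlpha()
// ...then build the CIColorClamp -> CIColorInvert -> ... chain on `normalized`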

Highcharts polar spider with Min and Max for each y-axis

I want to create a spider chart where each axis has a different scale.
Example:
First axis Min : 0 and Max : 1
Second axis Min : 0 and Max : 100
Third axis Min : 0 and Max : 1
Fourth axis Min : 0 and Max : 5
Is it possible?
Set the min and the max for each yAxis and assign the correct axis to each series:
$('#container').highcharts({
    chart: {
        polar: true
    },
    yAxis: [{
        min: 0,
        max: 1,
        angle: 0
    }, {
        min: 0,
        max: 100,
        angle: 90
    }, {
        min: 0,
        max: 5,
        angle: 180
    }],
    series: [{
        data: [0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1],
        yAxis: 0
    }, {
        data: [11, 22, 33, 44, 55, 66, 77, 88],
        yAxis: 1
    }, {
        data: [1, 4, 2, 0.7, 3, 0.6, 4, 0.5],
        yAxis: 2
    }]
});
http://jsfiddle.net/z8kawx9m/1/
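Note that polar/spider charts require the highcharts-more module on top of the core library; if you reproduce this outside the fiddle, both scripts need to be loaded (standard Highcharts CDN paths):
<script src="https://code.highcharts.com/highcharts.js"></script>
<script src="https://code.highcharts.com/highcharts-more.js"></script>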

x-axis minimum value of 2 and tick interval of 0.60 using Highcharts

I would like the x-axis to start at 2.0 and end at 19.0 with a tick interval of 0.60. When I give a difference of 0.60 it starts from 1.8 and ends at 19.2, even if I set the minimum and maximum values. Please help me sort this out!
$(function () {
    $('#container').highcharts({
        chart: {
        },
        xAxis: {
            min: 2.0,
            step: 2,
            max: 19.0,
            startOnTick: true,
            endOnTick: true,
            tickInterval: 0.60
        },
        series: [{
            data: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
        }]
    });
});
http://jsfiddle.net/aparnaunny/6mHfw/1/
^ This is what I tried.
Thanks,
Aparna Unny
The problem is that you want a range of size 17 (19 - 2) with a tickInterval of 0.6. 17 doesn't divide evenly by 0.6 (17 / 0.6 ≈ 28.33), so the chart has to adjust the min/max.
Also, 2 isn't a multiple of 0.6, so startOnTick must be false if you want the axis to start at 2.
Either pick a different tick interval (e.g. 0.5, since 17 / 0.5 = 34) or choose a min/max range which divides evenly by 0.6, e.g. 2 to 19.4 (17.4 / 0.6 = 29):
xAxis: {
    min: 2.0,
    step: 2,
    max: 19.0,
    startOnTick: true,
    endOnTick: true,
    tickInterval: 0.50
},
or
xAxis: {
    step: 2,
    min: 2.0,
    max: 19.4,
    startOnTick: false,
    endOnTick: false,
    tickInterval: 0.60
},
