Using the modulo operator with integers

I'm trying to work out how the modulo operation 1 % 32 is equal to 1, and 2 % 32 is equal to 2, and so on and so forth.
I have this code:
uint8_t value;
for (uint16_t i = 0; i < 64; i++) {
  value = (i % 32);
  Serial.println(value);
}
I get a result of:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
I can see the logic when i equals 0 or 32. For instance, taking the remainder of the latter, 32 / 32, we get 0. But for all the numbers in the range 1 to 31, I can't seem to derive the same answer as the program. For example, if I divide the integer 1 by 32 in order to obtain the remainder, I just get 0. The same goes for 2, 3, 4, 5, 6, and so on.
There must be something I'm missing here.

The result of the modulo operation is the remainder:
0 divided by 32 equals 0 with a remainder of 0
1 divided by 32 equals 0 with a remainder of 1
2 divided by 32 equals 0 with a remainder of 2
3 divided by 32 equals 0 with a remainder of 3
4 divided by 32 equals 0 with a remainder of 4
5 divided by 32 equals 0 with a remainder of 5
...
29 divided by 32 equals 0 with a remainder of 29
30 divided by 32 equals 0 with a remainder of 30
31 divided by 32 equals 0 with a remainder of 31
32 divided by 32 equals 1 with a remainder of 0
33 divided by 32 equals 1 with a remainder of 1
34 divided by 32 equals 1 with a remainder of 2
35 divided by 32 equals 1 with a remainder of 3
36 divided by 32 equals 1 with a remainder of 4
37 divided by 32 equals 1 with a remainder of 5
38 divided by 32 equals 1 with a remainder of 6
39 divided by 32 equals 1 with a remainder of 7
...
So the results you see are what you should expect. Consider 2 % 32. "2 divided by 32 equals 0 with a remainder of 2". So 2 % 32 == 2.
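The quotient/remainder relationship described above can be checked directly. Here is a small Python sketch (Python used purely for illustration, not the Arduino code from the question) verifying that for every i, i equals 32 times the quotient plus the remainder, and that the remainder cycles 0..31:

```python
# For every non-negative i, integer division gives the quotient and
# the % operator gives the remainder, so i == 32*quotient + remainder.
for i in range(64):
    quotient, remainder = divmod(i, 32)
    assert i == 32 * quotient + remainder
    # remainder is always in 0..31, just like the Serial.println output
    assert 0 <= remainder < 32

print([i % 32 for i in range(5)])  # → [0, 1, 2, 3, 4]
```

So 2 % 32 == 2 because 2 == 32 * 0 + 2: the quotient is 0 and the remainder is the whole dividend.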


Calculating the most frequent pairs in a dataset

Is it possible to calculate the most frequent pairs from the combinations of pairs in a dataset with five columns?
I can do this with a macro in Excel; I'd be curious to see if there's a simple solution for this in Google Sheets.
I have a sample data and results page here :
Data:
B1 B2 B3 B4 B5
6 22 28 32 36
7 10 17 31 35
8 33 38 40 42
10 17 36 40 41
8 10 17 36 54
9 30 32 51 55
1 4 16 26 35
12 28 30 40 43
42 45 47 49 52
10 17 30 31 47
10 17 33 51 58
4 10 17 30 32
2 35 36 37 43
6 10 17 38 55
3 10 17 25 32
Results would be like:
Value1 Value2 Frequency
10 17 8
10 31 2
17 31 2
10 36 2
17 36 2
30 32 2
10 30 2
17 30 2
10 32 2
17 32 2
etc
Each row represents a data set. The pairs don't have to be adjoining. There can be numbers between them.
Create a combination of pairs for each row using the method mentioned here. Then REDUCE all the pairs to create a virtual 2D array. Then use QUERY to group and find the count:
=QUERY(
  REDUCE(
    {"",""},
    A2:A16,
    LAMBDA(acc,cur,
      {
        acc;
        QUERY(
          LAMBDA(mrg,
            REDUCE(
              {"",""},
              SEQUENCE(COLUMNS(mrg)-1,1,0),
              LAMBDA(a_,c_,
                {
                  a_;
                  LAMBDA(rg,
                    REDUCE(
                      {"",""},
                      OFFSET(rg,0,1,1,COLUMNS(rg)-1),
                      LAMBDA(a,c,{a;{INDEX(rg,1),c}})
                    )
                  )(OFFSET(mrg,0,c_,1,COLUMNS(mrg)-c_))
                }
              )
            )
          )(OFFSET(cur,0,0,1,5)),
          "where Col1 is not null",0
        )
      }
    )
  ),
  "Select Col1,Col2, count(Col1) group by Col1,Col2 order by count(Col1) desc "
)
Input:
B1(A1) B2 B3 B4 B5
6 22 28 32 36
7 10 17 31 35
8 33 38 40 42
10 17 36 40 41
8 10 17 36 54
9 30 32 51 55
1 4 16 26 35
12 28 30 40 43
42 45 47 49 52
10 17 30 31 47
10 17 33 51 58
4 10 17 30 32
2 35 36 37 43
6 10 17 38 55
3 10 17 25 32
Output(partial):
Value1 Value2 count
10 17 8
10 30 2
10 31 2
10 32 2
10 36 2
17 30 2
17 31 2
17 32 2
17 36 2
30 32 2
1 4 1
1 16 1
1 26 1
1 35 1
2 35 1
2 36 1
2 37 1
2 43 1
3 10 1
3 17 1
3 25 1
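The grouping logic the formula performs (all unordered pairs per row, counted globally) can also be sketched outside of Sheets. Here is a minimal Python version; the `rows` sample is a hypothetical subset of the question's data, not the full sheet:

```python
from itertools import combinations
from collections import Counter

# Hypothetical sample: each inner list is one row of five values,
# mirroring the B1..B5 columns from the question.
rows = [
    [6, 22, 28, 32, 36],
    [7, 10, 17, 31, 35],
    [10, 17, 36, 40, 41],
    [8, 10, 17, 36, 54],
]

# All unordered pairs within each row (the pairs need not be adjacent),
# then a global frequency count -- the same grouping QUERY performs.
counts = Counter(pair for row in rows for pair in combinations(sorted(row), 2))

print(counts[(10, 17)])  # → 3
for (a, b), n in counts.most_common(3):
    print(a, b, n)
```

`combinations` plays the role of the nested REDUCE/OFFSET pair builder, and `Counter` replaces the final `group by Col1,Col2` QUERY.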

Generated DXF file opens in AutoCAD but crashes BricsCAD

I am working on a DXF (AC1021 version) exporter in Delphi and I ran into some problems. I looked closely at ezdxf for the minimum file structure and was able to generate it successfully in Delphi.
The problem is that the generated file works fine in AutoCAD but crashes BricsCAD as soon as I click on an entity from the block.
Below is my generated file. Maybe somebody knows an analyzing tool, or has an idea what is wrong with my DXF exporter. Thanks for all the hints!
999
TFPDxfWriteBridge by wingdesigner
0
SECTION
2
HEADER
9
$ACADVER
1
AC1021
9
$HANDSEED
5
20000
0
ENDSEC
0
SECTION
2
CLASSES
0
ENDSEC
0
SECTION
2
TABLES
0
TABLE
2
VPORT
5
A
330
0
100
AcDbSymbolTable
70
1
0
VPORT
5
B
330
A
100
AcDbSymbolTableRecord
100
AcDbViewportTableRecord
2
*ACTIVE
70
0
10
0
20
0
11
1
21
1
12
209
22
86
13
0
23
0
14
10
24
10
15
1
25
1
16
0
26
0
36
1
17
0
27
0
37
0
40
319
41
2
42
50
43
0
44
0
50
0
51
0
71
0
72
100
73
1
74
3
75
0
76
0
77
0
78
0
281
0
65
1
110
0
120
0
130
0
111
1
121
0
131
0
112
0
122
1
132
0
79
0
146
0
348
10020
60
7
61
5
292
1
282
1
141
0
142
0
63
250
421
3358443
0
ENDTAB
0
TABLE
2
LTYPE
5
C
330
0
100
AcDbSymbolTable
70
4
0
LTYPE
5
D
330
C
100
AcDbSymbolTableRecord
100
AcDbLinetypeTableRecord
2
ByBlock
70
0
3
72
65
73
0
40
0.000
0
LTYPE
5
E
330
C
100
AcDbSymbolTableRecord
100
AcDbLinetypeTableRecord
2
ByLayer
70
0
3
72
65
73
0
40
0.000
0
LTYPE
5
F
330
C
100
AcDbSymbolTableRecord
100
AcDbLinetypeTableRecord
2
CONTINUOUS
70
0
3
Solid line
72
65
73
0
40
0.000
0
ENDTAB
0
TABLE
2
LAYER
5
10
330
0
100
AcDbSymbolTable
70
1
0
LAYER
5
11
330
10
100
AcDbSymbolTableRecord
100
AcDbLayerTableRecord
2
0
70
0
62
7
6
CONTINUOUS
370
-3
390
F
0
ENDTAB
0
TABLE
2
STYLE
5
12
330
0
100
AcDbSymbolTable
70
3
0
STYLE
5
13
330
12
100
AcDbSymbolTableRecord
100
AcDbTextStyleTableRecord
2
Standard
70
0
40
0.00
41
1.00
50
0.00
71
0
42
1.00
3
txt
4
0
ENDTAB
0
TABLE
2
VIEW
5
15
330
0
100
AcDbSymbolTable
70
0
0
ENDTAB
0
TABLE
2
UCS
5
17
330
0
100
AcDbSymbolTable
70
0
0
ENDTAB
0
TABLE
2
APPID
5
18
330
0
100
AcDbSymbolTable
70
1
0
APPID
5
19
330
18
100
AcDbSymbolTableRecord
100
AcDbRegAppTableRecord
2
ACAD
70
0
0
ENDTAB
0
TABLE
2
DIMSTYLE
5
1A
330
0
100
AcDbSymbolTable
70
1
100
AcDbDimStyleTable
71
1
0
DIMSTYLE
105
1B
330
1A
100
AcDbSymbolTableRecord
100
AcDbDimStyleTableRecord
2
Standard
70
0
40
1
41
0.18
42
0.0625
43
0.38
44
0.18
45
0
46
0.00
47
0.0
48
0.0
140
0.18
141
0.09
142
0.0
143
25.39999
144
1.0
145
0.0
146
1.0
147
0.09
148
0
71
0
72
0
73
0
74
1
75
0
76
0
77
0
78
0
79
0
170
0
171
2
172
0
173
0
174
0
175
0
176
0
177
0
178
0
179
0
271
4
272
4
273
2
274
2
275
0
276
0
277
2
278
46
279
0
280
0
281
0
282
0
283
1
284
0
285
0
286
0
288
0
289
3
340
Standard
341
371
-2
372
-2
0
ENDTAB
0
TABLE
2
BLOCK_RECORD
5
1C
330
0
100
AcDbSymbolTable
70
2
0
BLOCK_RECORD
5
1D
330
1C
100
AcDbSymbolTableRecord
100
AcDbBlockTableRecord
2
*Model_Space
70
0
280
1
281
0
0
BLOCK_RECORD
5
21
330
1C
100
AcDbSymbolTableRecord
100
AcDbBlockTableRecord
2
*Paper_Space
70
0
280
1
281
0
0
BLOCK_RECORD
5
25
330
1C
100
AcDbSymbolTableRecord
100
AcDbBlockTableRecord
2
TEST_BLOCK
70
0
280
1
281
0
0
ENDTAB
0
ENDSEC
0
SECTION
2
BLOCKS
0
BLOCK
5
1E
330
1D
100
AcDbEntity
8
0
100
AcDbBlockBegin
2
*Model_Space
70
0
10
0.00
20
0.00
30
0.0
3
*Model_Space
1
0
ENDBLK
5
20
330
1D
100
AcDbEntity
8
0
100
AcDbBlockEnd
0
BLOCK
5
22
330
21
100
AcDbEntity
8
0
100
AcDbBlockBegin
2
*Paper_Space
70
0
10
0.00
20
0.00
30
0.0
3
*Paper_Space
1
0
ENDBLK
5
24
330
21
100
AcDbEntity
8
0
100
AcDbBlockEnd
0
BLOCK
5
26
330
25
100
AcDbEntity
8
0
100
AcDbBlockBegin
2
TEST_BLOCK
70
0
10
0.00
20
0.00
30
0.0
3
TEST_BLOCK
1
0
LINE
5
27
330
25
100
AcDbEntity
8
0
100
AcDbLine
10
1688.00
20
1430.00
30
0.00
11
1185.00
21
1097.00
31
0.00
0
POINT
5
28
330
25
100
AcDbEntity
8
0
100
AcDbPoint
10
1715.00
20
1205.00
30
0.00
0
CIRCLE
5
29
330
25
100
AcDbEntity
8
0
100
AcDbCircle
10
847.31
20
1694.50
30
0.00
40
272.44
0
ARC
5
2A
330
25
100
AcDbEntity
8
0
100
AcDbCircle
10
595.07
20
875.17
30
0.00
40
384.38
100
AcDbArc
50
232.00
51
224.00
0
LWPOLYLINE
5
2B
330
25
100
AcDbEntity
8
0
100
AcDbPolyline
90
10
70
0
10
1783.00
20
113.00
10
1927.00
20
545.00
10
766.00
20
955.00
10
1583.00
20
1624.00
10
1057.00
20
959.00
10
1136.00
20
785.00
10
1851.00
20
1672.00
10
142.00
20
674.00
10
174.00
20
1296.00
10
40.00
20
736.00
0
SPLINE
5
2C
330
25
100
AcDbEntity
8
0
100
AcDbSpline
210
0.0
220
0.0
230
1.0
70
8
71
3
72
14
73
10
74
0
42
0.0000000001
43
0.0000000001
40
0.00000
40
0.00000
40
0.00000
40
0.00000
40
1.00000
40
2.00000
40
3.00000
40
4.00000
40
5.00000
40
5.00000
40
5.00000
40
5.00000
40
5.00000
40
5.00000
10
1783.00
20
113.00
30
0.0
10
1927.00
20
545.00
30
0.0
10
766.00
20
955.00
30
0.0
10
1583.00
20
1624.00
30
0.0
10
1057.00
20
959.00
30
0.0
10
1136.00
20
785.00
30
0.0
10
1851.00
20
1672.00
30
0.0
10
142.00
20
674.00
30
0.0
10
174.00
20
1296.00
30
0.0
10
40.00
20
736.00
30
0.0
0
ENDBLK
5
2D
330
25
100
AcDbEntity
8
0
100
AcDbBlockEnd
0
ENDSEC
0
SECTION
2
ENTITIES
0
INSERT
5
2E
330
25
100
AcDbEntity
8
0
100
AcDbBlockReference
2
TEST_BLOCK
10
0.00
20
0.00
30
0.0
0
ENDSEC
0
SECTION
2
OBJECTS
0
DICTIONARY
5
2F
330
0
100
AcDbDictionary
281
1
3
ACAD_GROUP
350
D
0
DICTIONARY
5
30
330
2F
100
AcDbDictionary
281
1
0
ENDSEC
0
EOF
EDIT
As it turns out, BricsCAD has a nice recover tool. According to that tool, the Hard Pointer/ID Handle of the PlotStyleName object (group code 390) is wrong:
Name: AcDbLayerTableRecord(17); Value: PlotStyleName Id (F); Validation: Invalid; Replaced by: Set to Null.
This narrows the possibilities a lot, but doesn't quite solve the problem, as I am not really sure what the PlotStyleName object is in my case.
I found out that BricsCAD can use the internal _RECOVER function to analyze the input file and warn the user of possible errors.
As it turns out, the self pointers of layers (390) were not correctly defined. Setting 390 to 0 instead of F is not the cleanest or most correct way to solve the problem, but it does the job.
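Since an ASCII DXF file is just alternating group-code/value line pairs, the offending 390 entries can be located with a short script. This is only a sketch of that tag-pair walk; the inline `dxf_text` fragment stands in for a real file, and reading the whole file instead is a one-line change:

```python
# Minimal sketch: walk an ASCII DXF as (group code, value) line pairs
# and report every 390 (PlotStyleName handle) entry, the field the
# BricsCAD recover tool flagged. The fragment below is a stand-in
# for reading the real exported file.
dxf_text = """0
LAYER
2
0
390
F
0
ENDTAB
"""

lines = dxf_text.splitlines()
pairs = list(zip(lines[0::2], lines[1::2]))  # (group code, value) tags

plot_style_handles = [value for code, value in pairs if code.strip() == "390"]
print(plot_style_handles)  # → ['F']
```

Each handle printed here is a candidate for the invalid pointer the recover log reported.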
I don't know about BricsCAD, but it could be that some of the AutoCAD-friendly DXF codes are not supported by BricsCAD.
Try this DXF file to see if it generates the same error in BricsCAD.
It contains one block only (Zoom-Extents to find it).
If it works, we can figure out what made your file crash.
0
SECTION
2
HEADER
9
$ACADVER
1
AC1009
9
$INSBASE
10
0.0
20
0.0
30
0.0
9
$REGENMODE
70
1
9
$FILLMODE
70
1
9
$QTEXTMODE
70
0
9
$MIRRTEXT
70
0
9
$DRAGMODE
70
2
9
$LTSCALE
40
1.0
9
$OSMODE
70
2215
9
$ATTMODE
70
1
9
$TEXTSIZE
40
0.15
9
$TRACEWID
40
1.0
9
$TEXTSTYLE
7
STANDARD
9
$CLAYER
8
DEFPOINTS
9
$CELTYPE
6
BYLAYER
9
$CECOLOR
62
256
9
$DIMSCALE
40
1.0
9
$DIMASZ
40
0.1
9
$DIMEXO
40
0.25
9
$DIMDLI
40
0.25
9
$DIMRND
40
0.0
9
$DIMDLE
40
0.0
9
$DIMEXE
40
0.1
9
$DIMTP
40
0.0
9
$DIMTM
40
0.0
9
$DIMTXT
40
0.15
9
$DIMCEN
40
0.1
9
$DIMTSZ
40
0.0
9
$DIMTOL
70
0
9
$DIMLIM
70
0
9
$DIMTIH
70
0
9
$DIMTOH
70
1
9
$DIMSE1
70
0
9
$DIMSE2
70
0
9
$DIMTAD
70
1
9
$DIMZIN
70
8
9
$DIMBLK
1
9
$DIMASO
70
1
9
$DIMSHO
70
1
9
$DIMPOST
1
9
$DIMAPOST
1
9
$DIMALT
70
0
9
$DIMALTD
70
3
9
$DIMALTF
40
0.03937007874016
9
$DIMLFAC
40
1.0
9
$DIMTOFL
70
1
9
$DIMTVP
40
0.0
9
$DIMTIX
70
0
9
$DIMSOXD
70
0
9
$DIMSAH
70
0
9
$DIMBLK1
1
9
$DIMBLK2
1
9
$DIMSTYLE
2
ISO-25
9
$DIMCLRD
70
2
9
$DIMCLRE
70
0
9
$DIMCLRT
70
7
9
$DIMTFAC
40
1.0
9
$DIMGAP
40
0.15
9
$LUNITS
70
2
9
$LUPREC
70
3
9
$SKETCHINC
40
1.0
9
$FILLETRAD
40
0.0
9
$AUNITS
70
1
9
$AUPREC
70
3
9
$MENU
1
.
9
$ELEVATION
40
0.0
9
$PELEVATION
40
0.0
9
$THICKNESS
40
0.0
9
$LIMCHECK
70
0
9
$BLIPMODE
70
0
9
$CHAMFERA
40
0.0
9
$CHAMFERB
40
0.0
9
$SKPOLY
70
0
9
$TDCREATE
40
2455559.7215111339
9
$TDUPDATE
40
2455601.6499361689
9
$TDINDWG
40
0.0182150694
9
$TDUSRTIMER
40
0.0182009375
9
$USRTIMER
70
1
9
$ANGBASE
50
0.0
9
$ANGDIR
70
0
9
$PDMODE
70
0
9
$PDSIZE
40
0.0
9
$PLINEWID
40
0.0
9
$COORDS
70
1
9
$SPLFRAME
70
0
9
$SPLINETYPE
70
6
9
$SPLINESEGS
70
8
9
$ATTDIA
70
0
9
$ATTREQ
70
1
9
$HANDLING
70
1
9
$HANDSEED
5
100006
9
$SURFTAB1
70
6
9
$SURFTAB2
70
6
9
$SURFTYPE
70
6
9
$SURFU
70
6
9
$SURFV
70
6
9
$UCSNAME
2
9
$UCSORG
10
0.0
20
0.0
30
0.0
9
$UCSXDIR
10
1.0
20
0.0
30
0.0
9
$UCSYDIR
10
0.0
20
1.0
30
0.0
9
$PUCSNAME
2
9
$PUCSORG
10
0.0
20
0.0
30
0.0
9
$PUCSXDIR
10
1.0
20
0.0
30
0.0
9
$PUCSYDIR
10
0.0
20
1.0
30
0.0
9
$USERI1
70
0
9
$USERI2
70
0
9
$USERI3
70
0
9
$USERI4
70
0
9
$USERI5
70
0
9
$USERR1
40
0.0
9
$USERR2
40
0.0
9
$USERR3
40
0.0
9
$USERR4
40
0.0
9
$USERR5
40
0.0
9
$WORLDVIEW
70
1
9
$SHADEDGE
70
3
9
$SHADEDIF
70
70
9
$TILEMODE
70
1
9
$MAXACTVP
70
64
9
$PLIMCHECK
70
0
9
$PEXTMIN
10
1.0000000000000000E+020
20
1.0000000000000000E+020
30
1.0000000000000000E+020
9
$PEXTMAX
10
-1.0000000000000000E+020
20
-1.0000000000000000E+020
30
-1.0000000000000000E+020
9
$PLIMMIN
10
0.0
20
0.0
9
$PLIMMAX
10
420.0
20
297.0
9
$UNITMODE
70
0
9
$VISRETAIN
70
1
9
$PLINEGEN
70
0
9
$PSLTSCALE
70
1
0
ENDSEC
0
SECTION
2
TABLES
0
TABLE
2
VPORT
70
1
0
ENDTAB
0
TABLE
2
LTYPE
70
3
0
LTYPE
2
CONTINUOUS
70
0
3
Solidline
72
65
73
0
40
0.0
0
LTYPE
2
ACAD_ISO04W100
70
0
3
ISOlong-dashdot____.____.____.____._
72
65
73
4
40
2.0
49
1.399999999999999
49
-0.3
49
0.0
49
-0.3
0
LTYPE
2
ACAD_ISO02W100
70
0
3
ISOdash__________________________
72
65
73
2
40
15.0
49
12.0
49
-3.0
0
ENDTAB
0
TABLE
2
LAYER
70
16
0
LAYER
2
0
70
0
62
7
6
CONTINUOUS
0
LAYER
2
DEFPOINTS
70
0
62
7
6
CONTINUOUS
0
LAYER
2
PIPE
70
0
62
6
6
CONTINUOUS
0
LAYER
2
GRID
70
0
62
8
6
CONTINUOUS
0
LAYER
2
GROUND
70
0
62
3
6
CONTINUOUS
0
LAYER
2
POINTID
70
0
62
1
6
CONTINUOUS
0
LAYER
2
ELEVATION
70
0
62
1
6
CONTINUOUS
0
LAYER
2
POINTS
70
0
62
6
6
CONTINUOUS
0
LAYER
2
X-Y-CORDS
70
0
62
6
6
CONTINUOUS
0
LAYER
2
NOTES
70
0
62
4
6
CONTINUOUS
0
LAYER
2
LATERAL
70
0
62
4
6
CONTINUOUS
0
LAYER
2
LATERALG
70
0
62
3
6
CONTINUOUS
0
LAYER
2
3DPOLY
70
0
62
5
6
CONTINUOUS
0
LAYER
2
HATCH
70
0
62
9
6
CONTINUOUS
0
LAYER
2
TEXT
70
0
62
7
6
CONTINUOUS
0
LAYER
2
DIMENSIONS
70
0
62
5
6
CONTINUOUS
0
LAYER
2
TABLES
70
0
62
7
6
CONTINUOUS
0
LAYER
2
MANHOLE
70
0
62
1
6
CONTINUOUS
0
LAYER
2
HIDDEN
70
0
62
7
6
ACAD_ISO02W100
0
LAYER
2
GV
70
0
62
5
6
CONTINUOUS
0
LAYER
2
FH
70
0
62
1
6
CONTINUOUS
0
LAYER
2
SL
70
0
62
5
6
CONTINUOUS
0
LAYER
2
PI
70
0
62
6
6
CONTINUOUS
0
LAYER
2
TR
70
0
62
1
6
CONTINUOUS
0
LAYER
2
HC
70
0
62
5
6
CONTINUOUS
0
LAYER
2
MH
70
0
62
1
6
CONTINUOUS
0
LAYER
2
Y
70
0
62
7
6
CONTINUOUS
0
ENDTAB
0
TABLE
2
STYLE
70
4
0
STYLE
2
STANDARD
70
0
40
0.15
41
1.0
50
0.0
71
0
42
0.15
3
txt.shx
4
0
STYLE
2
ANNOTATIVE
70
0
40
0.0
41
1.0
50
0.0
71
0
42
0.2
3
txt
4
0
STYLE
2
LOCAL
70
0
40
0.15
41
1.0
50
0.0
71
0
42
0.15
3
x-arab.shx
4
0
STYLE
2
70
1
40
0.0
41
1.0
50
0.0
71
0
42
2.5
3
ltypeshp.shx
4
0
ENDTAB
0
TABLE
2
VIEW
70
0
0
ENDTAB
0
TABLE
2
UCS
70
0
0
ENDTAB
0
TABLE
2
APPID
70
12
0
APPID
2
ACAD
70
0
0
APPID
2
ACADANNOTATIVE
70
0
0
APPID
2
ACAECLAYERSTANDARD
70
0
0
APPID
2
ACCMTRANSPARENCY
70
0
0
APPID
2
ACAD_EXEMPT_FROM_CAD_STANDARDS
70
0
0
APPID
2
ACAD_DSTYLE_DIMJAG
70
0
0
APPID
2
ACAD_DSTYLE_DIMBREAK
70
0
0
APPID
2
ACAD_DSTYLE_DIMTALN
70
0
0
APPID
2
ACADANNOPO
70
0
0
APPID
2
ACAD_DSTYLE_DIMJOGGED_JOGA
70
0
0
APPID
2
ACAD_DSTYLE_DIMTEXT_FILL
70
0
0
APPID
2
ACAD_MLEADERVER
70
0
0
ENDTAB
0
TABLE
2
DIMSTYLE
70
3
0
DIMSTYLE
2
STANDARD
70
0
3
4
5
6
7
40
1.0
41
0.18
42
0.0625
43
0.38
44
0.18
45
0.0
46
0.0
47
0.0
48
0.0
140
0.18
141
0.09
142
0.0
143
25.399999999999999
144
1.0
145
0.0
146
1.0
147
0.09
71
0
72
0
73
1
74
1
75
0
76
0
77
0
78
0
170
0
171
2
172
0
173
0
174
0
175
0
176
0
177
0
178
0
0
DIMSTYLE
2
ANNOTATIVE
70
0
3
4
5
6
7
40
1.0
41
0.18
42
0.0625
43
0.38
44
0.18
45
0.0
46
0.0
47
0.0
48
0.0
140
0.18
141
0.09
142
0.0
143
25.399999999999999
144
1.0
145
0.0
146
1.0
147
0.09
71
0
72
0
73
1
74
1
75
0
76
0
77
0
78
0
170
0
171
2
172
0
173
0
174
0
175
0
176
0
177
0
178
0
0
DIMSTYLE
2
ISO-25
70
0
3
4
5
6
7
40
1.0
41
0.1
42
0.25
43
0.25
44
0.1
45
0.0
46
0.0
47
0.0
48
0.0
140
0.15
141
0.1
142
0.0
143
0.03937007874016
144
1.0
145
0.0
146
1.0
147
0.15
71
0
72
0
73
0
74
1
75
0
76
0
77
1
78
8
170
0
171
3
172
1
173
0
174
0
175
0
176
2
177
0
178
7
0
ENDTAB
0
ENDSEC
0
SECTION
2
BLOCKS
0
BLOCK
8
POINTS
2
Block0
70
0
10
0
20
0
30
0
3
Block0
1
0
SOLID
5
100004
8
POINTS
10
678218.2191
20
2717042.676
30
0
11
678220.4691
21
2717042.676
31
0
12
678218.2191
22
2717040.426
32
0
13
678220.4691
23
2717040.426
33
0
39
1
210
0
220
0
230
1
0
TEXT
5
100005
8
POINTS
10
678221.5941
20
2717043.801
30
0
11
678221.5941
21
2717043.801
31
0
72
0
73
1
40
2.25
1
point
50
0
7
STANDARD
0
ENDBLK
5
100002
8
POINTS
0
BLOCK
8
0
2
$MODEL_SPACE
70
0
10
0.0
20
0.0
30
0.0
3
$MODEL_SPACE
1
0
ENDBLK
5
10
8
0
0
BLOCK
67
1
8
0
2
$PAPER_SPACE
70
0
10
0.0
20
0.0
30
0.0
3
$PAPER_SPACE
1
0
ENDBLK
5
11
67
1
8
0
0
ENDSEC
0
SECTION
2
ENTITIES
0
POINT
5
100003
8
POINTS
10
678219.3441
20
2717041.551
30
0
0
INSERT
5
100001
8
POINTS
2
Block0
10
0
20
0
30
0
0
ENDSEC
0
EOF

Convert 8bit Color image to Gray For VGA

I have an 8 bit color image. What is the method to convert this into a grayscale image?
For a normal 24 bit true color RGB image, we either perform averaging, (R + G + B) / 3,
or weighted averaging, where we calculate 0.21 R + 0.72 G + 0.07 B.
However, these formulas work for a 24 bit image (correct me if I'm wrong), where 8 bits are used to denote the R, G and B content each. Thus when we apply the above averaging methods, we get a resultant 8 bit grayscale image from a 24 bit true color image.
So how do we calculate a grayscale image for an 8 bit color image?
Please note :
Structure of an 8 bit color image is as follows :
Refer this link
Bit 7 6 5 4 3 2 1 0
Data R R R G G G B B
As we can see,
Bits 7,6,5 denote Red content
Bits 4,3,2 denote Green content
Bits 1,0 denote Blue content
So the above image will actually have 4 shades in total
(because, in grayscale, a white pixel is obtained when there is 100% contribution from each of the R, G, B components. And since the blue component has only 2 bits, effectively there are 2^2 combinations, i.e. 4 levels.)
Therefore, if i consider 2 bits of R ,G and B, i manage to obtain gray levels as follows :
R G B GrayLevel
00 00 00 Black
01 01 01 Gray 1
10 10 10 Gray 2
11 11 11 White
Which bits should I consider from the red and green components, and which should I ignore?
How do I quantify the gray levels for values of bits other than the ones mentioned above?
EDIT
I want to implement the above system upon an FPGA, hence memory is a keen aspect. Quality of the image doesn't matter much. Somehow is it possible to quantify all the values of the 8 bit color img into the respective gray shades ?
This approach gives an output gray range of 0..255 (not all gray levels are used):
b = rgb8 & 3;
g = (rgb8 >> 2) & 7;
r = rgb8 >> 5;
gray255 = 8 * b + 11 * r + 22 * g;
If you have 256 bytes available, you can fill LUT (Look-Up Table) once, and use it instead of calculations:
grayimage[i] = LUT[rgb8image[i]];
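The 256-entry LUT suggested above can be built once from the same formula. A small Python sketch (the weights 11/22/8 are the ones from the answer, chosen so that white, 0xFF, maps to exactly 255):

```python
# Sketch of the 256-entry RGB332 -> gray LUT described above.
# Weights 11 (R, 3 bits), 22 (G, 3 bits) and 8 (B, 2 bits) give
# 11*7 + 22*7 + 8*3 == 255, so full white maps to full gray.
LUT = []
for rgb8 in range(256):
    b = rgb8 & 3
    g = (rgb8 >> 2) & 7
    r = rgb8 >> 5
    LUT.append(11 * r + 22 * g + 8 * b)

print(LUT[0], LUT[255])  # → 0 255

# Converting an image is then a single indexed lookup per pixel:
rgb8image = [0b11100000, 0b00011100, 0b00000011]  # hypothetical pixels
grayimage = [LUT[p] for p in rgb8image]
```

On an FPGA this corresponds to a 256x8 ROM initialised with these values, which trades 256 bytes of memory for zero per-pixel arithmetic.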
If you really want to stick to 2 bits per gray pixel and you can afford simple multipliers, you can think of the formula
G = 5 x R + 9 x G + 4 x B
where R and G are taken with 3 bits and B with just 2 (the coefficients have been adapted). This will yield a 7 bit value, in the range [0, 110], of which you will keep the most significant 2 bits.
You may think to adapt the coefficients to occupy the four levels more evenly.
You essentially have a Rubik's cube of colours, which measures 8 x 8 x 4 if you can take a moment to imagine that. One side has 8 squares going from black to red, one side has 8 squares going from black to green and one side has 4 squares going from black to blue.
In essence, you can divide it up how you like since you don't care too much for quality. So, if you want 4 output grey levels, you can essentially make any two cuts you like and lump together everything inside each of the resulting shapes as a single grey level. Normally, you would aim to make the volumes of each lump the same - so you could cut the red side in half and the green side in half and ignore any differences in the blue channel as one option.
One way to do it might be to make equi-volumed lumps according to the distance from the origin, i.e. from black. I don't have an 8x8x4 cube available, but imagine the Earth was 8x8x4, then we would be making all pixels in the inner core black, those in the outer core dark grey, those in the mantle light grey and the crust white - such that the number of your original pixels in each lump was the same. It sounds complicated but isn't!
Let's run through all your possible red, green and blue values and calculate the distance of each one from black, using
d = R^2 + G^2 + B^2
then sort the values by that distance and number the lines:
#!/bin/bash
for r in 0 1 2 3 4 5 6 7; do
  for g in 0 1 2 3 4 5 6 7; do
    for b in 0 1 2 3; do
      # Calculate distance from black corner (r=g=b=0) - actually squared but it doesn't matter
      ((d2=(r*r)+(g*g)+(b*b)))
      echo $d2 $r $g $b
    done
  done
done | sort -n | nl
# sort numerically by distance from black, then number output lines sequentially
That gives this where the first column is the line number, the second column is the distance from black (and the values are sorted by this column), and then there follows R, G and B:
1 0 0 0 0 # From here onwards, pixels map to black
2 1 0 0 1
3 1 0 1 0
4 1 1 0 0
5 2 0 1 1
6 2 1 0 1
7 2 1 1 0
8 3 1 1 1
9 4 0 0 2
10 4 0 2 0
11 4 2 0 0
12 5 0 1 2
13 5 0 2 1
14 5 1 0 2
15 5 1 2 0
16 5 2 0 1
17 5 2 1 0
18 6 1 1 2
19 6 1 2 1
20 6 2 1 1
21 8 0 2 2
22 8 2 0 2
23 8 2 2 0
24 9 0 0 3
25 9 0 3 0
26 9 1 2 2
27 9 2 1 2
28 9 2 2 1
29 9 3 0 0
30 10 0 1 3
31 10 0 3 1
32 10 1 0 3
33 10 1 3 0
34 10 3 0 1
35 10 3 1 0
36 11 1 1 3
37 11 1 3 1
38 11 3 1 1
39 12 2 2 2
40 13 0 2 3
41 13 0 3 2
42 13 2 0 3
43 13 2 3 0
44 13 3 0 2
45 13 3 2 0
46 14 1 2 3
47 14 1 3 2
48 14 2 1 3
49 14 2 3 1
50 14 3 1 2
51 14 3 2 1
52 16 0 4 0
53 16 4 0 0
54 17 0 4 1
55 17 1 4 0
56 17 2 2 3
57 17 2 3 2
58 17 3 2 2
59 17 4 0 1
60 17 4 1 0
61 18 0 3 3
62 18 1 4 1
63 18 3 0 3
64 18 3 3 0 # From here onwards pixels map to dark grey
65 18 4 1 1
66 19 1 3 3
67 19 3 1 3
68 19 3 3 1
69 20 0 4 2
70 20 2 4 0
71 20 4 0 2
72 20 4 2 0
73 21 1 4 2
74 21 2 4 1
75 21 4 1 2
76 21 4 2 1
77 22 2 3 3
78 22 3 2 3
79 22 3 3 2
80 24 2 4 2
81 24 4 2 2
82 25 0 4 3
83 25 0 5 0
84 25 3 4 0
85 25 4 0 3
86 25 4 3 0
87 25 5 0 0
88 26 0 5 1
89 26 1 4 3
90 26 1 5 0
91 26 3 4 1
92 26 4 1 3
93 26 4 3 1
94 26 5 0 1
95 26 5 1 0
96 27 1 5 1
97 27 3 3 3
98 27 5 1 1
99 29 0 5 2
100 29 2 4 3
101 29 2 5 0
102 29 3 4 2
103 29 4 2 3
104 29 4 3 2
105 29 5 0 2
106 29 5 2 0
107 30 1 5 2
108 30 2 5 1
109 30 5 1 2
110 30 5 2 1
111 32 4 4 0
112 33 2 5 2
113 33 4 4 1
114 33 5 2 2
115 34 0 5 3
116 34 3 4 3
117 34 3 5 0
118 34 4 3 3
119 34 5 0 3
120 34 5 3 0
121 35 1 5 3
122 35 3 5 1
123 35 5 1 3
124 35 5 3 1
125 36 0 6 0
126 36 4 4 2
127 36 6 0 0
128 37 0 6 1
129 37 1 6 0 # From here onwards pixels map to light grey
130 37 6 0 1
131 37 6 1 0
132 38 1 6 1
133 38 2 5 3
134 38 3 5 2
135 38 5 2 3
136 38 5 3 2
137 38 6 1 1
138 40 0 6 2
139 40 2 6 0
140 40 6 0 2
141 40 6 2 0
142 41 1 6 2
143 41 2 6 1
144 41 4 4 3
145 41 4 5 0
146 41 5 4 0
147 41 6 1 2
148 41 6 2 1
149 42 4 5 1
150 42 5 4 1
151 43 3 5 3
152 43 5 3 3
153 44 2 6 2
154 44 6 2 2
155 45 0 6 3
156 45 3 6 0
157 45 4 5 2
158 45 5 4 2
159 45 6 0 3
160 45 6 3 0
161 46 1 6 3
162 46 3 6 1
163 46 6 1 3
164 46 6 3 1
165 49 0 7 0
166 49 2 6 3
167 49 3 6 2
168 49 6 2 3
169 49 6 3 2
170 49 7 0 0
171 50 0 7 1
172 50 1 7 0
173 50 4 5 3
174 50 5 4 3
175 50 5 5 0
176 50 7 0 1
177 50 7 1 0
178 51 1 7 1
179 51 5 5 1
180 51 7 1 1
181 52 4 6 0
182 52 6 4 0
183 53 0 7 2
184 53 2 7 0
185 53 4 6 1
186 53 6 4 1
187 53 7 0 2
188 53 7 2 0
189 54 1 7 2
190 54 2 7 1
191 54 3 6 3
192 54 5 5 2
193 54 6 3 3 # From here onwards pixels map to white
194 54 7 1 2
195 54 7 2 1
196 56 4 6 2
197 56 6 4 2
198 57 2 7 2
199 57 7 2 2
200 58 0 7 3
201 58 3 7 0
202 58 7 0 3
203 58 7 3 0
204 59 1 7 3
205 59 3 7 1
206 59 5 5 3
207 59 7 1 3
208 59 7 3 1
209 61 4 6 3
210 61 5 6 0
211 61 6 4 3
212 61 6 5 0
213 62 2 7 3
214 62 3 7 2
215 62 5 6 1
216 62 6 5 1
217 62 7 2 3
218 62 7 3 2
219 65 4 7 0
220 65 5 6 2
221 65 6 5 2
222 65 7 4 0
223 66 4 7 1
224 66 7 4 1
225 67 3 7 3
226 67 7 3 3
227 69 4 7 2
228 69 7 4 2
229 70 5 6 3
230 70 6 5 3
231 72 6 6 0
232 73 6 6 1
233 74 4 7 3
234 74 5 7 0
235 74 7 4 3
236 74 7 5 0
237 75 5 7 1
238 75 7 5 1
239 76 6 6 2
240 78 5 7 2
241 78 7 5 2
242 81 6 6 3
243 83 5 7 3
244 83 7 5 3
245 85 6 7 0
246 85 7 6 0
247 86 6 7 1
248 86 7 6 1
249 89 6 7 2
250 89 7 6 2
251 94 6 7 3
252 94 7 6 3
253 98 7 7 0
254 99 7 7 1
255 102 7 7 2
256 107 7 7 3
Obviously, the best way to do that is with a lookup table, which is exactly what this is.
Just for kicks, we can look at how it performs if we make some sample images with ImageMagick and process them with this lookup table:
# Make a sample
convert -size 100x100 xc: -sparse-color Bilinear '30,10 red 10,80 blue 70,60 lime 80,20 yellow' -resize 400x400! gradient.png
# Process with suggested LUT
convert gradient.png -fx "#lut.fx" result.png
lut.fx implements the LUT and looks like this:
dd=(49*r*r)+(49*g*g)+(16*b*b);
(dd < 19) ? 0.0 : ((dd < 38) ? 0.25 : ((dd < 54) ? 0.75 : 1.0))
By comparison, if you implement my initial suggestion from the start of my answer, by doing:
R < 0.5 && G < 0.5 => black result
R < 0.5 && G >= 0.5 => dark grey result
R >= 0.5 && G < 0.5 => light grey result
R >= 0.5 && G >= 0.5 => white result
You will get this output - which, as you can see, is better at differentiating red from green, but worse at reflecting the brightness of the original.

fscaret VarImp$matrixVarImp.MSE returns 0

I'm trying to use the fscaret package on ordinal data (predictors) and an ordinal response; the data is from a survey on a scale interval from 1-10.
I have managed to get my script to work; however, sometimes, depending on the data frame I feed into my script, calling VarImp$matrixVarImp.MSE returns 0. I have tried to figure out why but have not been able to find the root cause.
all_data is the data frame in MISO format. I have not attached the data due to confidentiality.
Here is my simple script:
library(fscaret)
set.seed(1234)
splitIndex <- createDataPartition(all_data$response, p = .75, list = FALSE, times = 1)
trainDF <- all_data[ splitIndex,]
testDF <- all_data[-splitIndex,]
fsModels <- c("glmnet","pls", "nnet")
start.time <- Sys.time()
myFS<-fscaret(trainDF, testDF, myTimeLimit = 40, preprocessData=TRUE,
Used.funcRegPred = fsModels, with.labels=TRUE,
supress.output=FALSE, no.cores=2)
end.time <- Sys.time()
total.time <- end.time - start.time
Output matrix:
myFS$VarImp$matrixVarImp.MSE
myFirstRES$VarImp$matrixVarImp.MSE
gbm glmnet lm nnet pcr pls SUM SUM% ImpGrad Input_no
1 0 0 0 0 0 0 0 NaN 0 1
2 0 0 0 0 0 0 0 NaN NaN 2
3 0 0 0 0 0 0 0 NaN NaN 3
4 0 0 0 0 0 0 0 NaN NaN 4
5 0 0 0 0 0 0 0 NaN NaN 5
6 0 0 0 0 0 0 0 NaN NaN 6
7 0 0 0 0 0 0 0 NaN NaN 7
8 0 0 0 0 0 0 0 NaN NaN 8
9 0 0 0 0 0 0 0 NaN NaN 9
10 0 0 0 0 0 0 0 NaN NaN 10
11 0 0 0 0 0 0 0 NaN NaN 11
12 0 0 0 0 0 0 0 NaN NaN 12
13 0 0 0 0 0 0 0 NaN NaN 13
14 0 0 0 0 0 0 0 NaN NaN 14
15 0 0 0 0 0 0 0 NaN NaN 15
16 0 0 0 0 0 0 0 NaN NaN 16
17 0 0 0 0 0 0 0 NaN NaN 17
18 0 0 0 0 0 0 0 NaN NaN 18
19 0 0 0 0 0 0 0 NaN NaN 19
20 0 0 0 0 0 0 0 NaN NaN 20
21 0 0 0 0 0 0 0 NaN NaN 21
22 0 0 0 0 0 0 0 NaN NaN 22
23 0 0 0 0 0 0 0 NaN NaN 23
24 0 0 0 0 0 0 0 NaN NaN 24
25 0 0 0 0 0 0 0 NaN NaN 25
26 0 0 0 0 0 0 0 NaN NaN 26
27 0 0 0 0 0 0 0 NaN NaN 27
28 0 0 0 0 0 0 0 NaN NaN 28
29 0 0 0 0 0 0 0 NaN NaN 29
30 0 0 0 0 0 0 0 NaN NaN 30
31 0 0 0 0 0 0 0 NaN NaN 31
32 0 0 0 0 0 0 0 NaN NaN 32
33 0 0 0 0 0 0 0 NaN NaN 33
34 0 0 0 0 0 0 0 NaN NaN 34
35 0 0 0 0 0 0 0 NaN NaN 35
36 0 0 0 0 0 0 0 NaN NaN 36
37 0 0 0 0 0 0 0 NaN NaN 37
38 0 0 0 0 0 0 0 NaN NaN 38
Any ideas?
Here is my actual data set. I have dropped the NAs to clean the data up before running fscaret.
> str(all_data)
'data.frame': 7288 obs. of 39 variables:
$ v1 : int 9 8 7 9 10 9 10 10 10 8 ...
$ v3 : int 9 8 9 10 8 8 10 10 8 9 ...
$ v4 : int 9 8 8 9 8 8 10 10 8 9 ...
$ v5 : int 8 8 7 10 8 7 10 5 10 10 ...
$ v6 : int 8 8 8 9 9 9 10 5 10 8 ...
$ v7 : int 8 8 7 8 9 8 10 5 10 8 ...
$ v8 : int 9 8 8 10 10 9 10 5 10 9 ...
$ v9 : int 9 8 8 7 8 6 8 8 10 5 ...
$ v10 : int 9 7 7 9 5 7 10 6 10 7 ...
$ v11 : int 8 8 6 9 5 9 10 8 10 7 ...
$ v12 : int 8 9 6 9 9 9 10 10 10 10 ...
$ v13 : int 8 9 7 9 8 8 10 10 10 10 ...
$ v14 : int 9 10 8 9 9 9 10 10 10 10 ...
$ v15 : int 10 8 8 10 10 7 10 10 10 10 ...
$ v16 : int 9 7 7 10 9 9 10 10 10 8 ...
$ v17 : int 9 10 7 10 5 7 10 10 10 8 ...
$ v18 : int 8 8 6 10 10 7 10 10 10 10 ...
$ v19 : int 9 9 8 9 10 9 10 10 10 10 ...
$ v20 : int 8 8 8 9 6 8 10 10 10 8 ...
$ v21 : int 8 8 8 10 5 7 10 10 10 10 ...
$ v22 : int 8 8 7 9 5 8 10 10 10 10 ...
$ v23 : int 8 8 6 10 5 8 10 10 10 10 ...
$ v24 : int 9 9 8 9 9 9 10 7 10 10 ...
$ v25 : int 9 10 7 9 8 9 10 10 10 8 ...
$ v26 : int 9 8 7 7 8 9 10 9 10 9 ...
$ v27 : int 8 8 7 9 9 9 10 9 10 9 ...
$ v28 : int 8 8 7 9 8 8 10 9 10 6 ...
$ v29 : int 9 9 8 9 8 8 10 9 10 8 ...
$ v30 : int 9 9 7 7 8 8 10 8 10 8 ...
$ v31 : int 9 10 6 9 9 9 10 7 10 8 ...
$ v32 : int 8 8 7 9 9 7 10 8 10 5 ...
$ v33 : int 8 10 8 9 8 8 10 7 10 8 ...
$ v34 : int 8 6 8 10 9 9 10 9 10 8 ...
$ v35 : int 9 8 8 9 10 7 10 9 10 8 ...
$ v36 : int 9 10 9 10 10 9 10 10 10 10 ...
$ v37 : int 9 8 8 10 10 9 10 5 10 10 ...
$ v38 : int 9 10 8 10 10 8 10 10 10 8 ...
$ v39 : int 8 10 8 9 10 9 10 9 10 8 ...
$ response: int 10 7 8 9 9 8 10 10 10 10 ...
- attr(*, "na.action")=Class 'omit' Named int [1:3307] 12 13 15 17 32 34 35 40 41 42 ...
.. ..- attr(*, "names")= chr [1:3307] "12" "13" "15" "17" ...
Update:
I tried to downsample the data frame, and then I got this error message:
Error in if (abs(x[i, j]) > cutoff) { :
missing value where TRUE/FALSE needed
Try using the following code with some changes:
myFS<-fscaret(trainDF, testDF, myTimeLimit = 40, preprocessData=FALSE,
Used.funcClassPred = fsModels, with.labels=TRUE,
supress.output=FALSE, no.cores=2)
Use Used.funcClassPred instead of Used.funcRegPred.
Secondly, preprocessData is used to remove predictors with high correlation; therefore, using preprocessData for nominal data doesn't make sense.

Maple: How to parse such CSV (Comma Separated Values) document?

So I have a large txt file (around 20 MB) with contents like this:
20 30 40 550 60 70 80 91
20 30 40 50 60 70 80 92
20 30 40 50 60 70 80 93
20 30 40 50 64 70 80 90
20 30 40 50 60 70 80 90
20 30 40 50 60 70 80 90
20 30 40 40 60 70 80 90
40 30 40 50 60 70 80 90
4 5 6 6
20 30 40 50 60 70 80 91
20 30 40 50 60 70 80 92
20 30 40 50 60 70 80 93
2 30 40 50 64 70 80 90
20 30 20 50 60 70 80 90
20 30 40 50 60 70 80 90
20 30 40 40 60 70 80 90
40 30 40 50 60 70 80 90
4 5 1 6
20 30 40 50 60 70 80 91
20 30 40 50 60 70 80 92
20 30 40 50 60 70 80 93
20 30 40 50 64 70 80 90
20 30 40 50 60 70 80 90
20 30 40 50 60 70 80 90
20 1 40 40 60 70 80 90
40 30 40 50 60 70 80 90
4 5 6 1
I want to get out of that document an array of 8x8 matrices and an array of 1x4 matrices. Is such a thing possible, and how do I do it?
The following produces a table M of 8x8 Matrices, and a table V of 1x4 row Vectors.
You could optionally create M and V up front as Arrays of size n. Just uncomment those lines. You can see that it is hard-coded for 100 Matrix-Vector pairs of scans. Increase n as you wish. It will stop anyway when it fails to scan the next item, by detecting the fscanf failures and breaking out of the loop.
My example used a plaintext data file that contained just three pairs of Matrix and Vector, and it did a break when failing on the fourth pair of scan attempts.
restart:
Z:="C://TEMP//mydata.txt":
fclose(Z);
#M:=Array(1..100):
#V:=Array(1..100):
for i from 1 to 100 do
try
M[i]:=fscanf(Z,"%{8,8}ldm")[1];
V[i]:=fscanf(Z,"%{4}ldr")[1];
catch "end of input encountered":
break;
end try;
end do;
M[2]; # returns the 2nd entry (a 8x8 Matrix) of M
V[2]; # returns the 2nd entry (a 1x4 row Vector) of V
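For comparison, the same scan is easy to express outside Maple: the file is just a repeating pattern of eight 8-number rows followed by one 4-number row. A Python sketch (the inline `sample` stands in for the real 20 MB file, which you would read with `open(...).read()` instead):

```python
# The file repeats: eight rows of 8 integers, then one row of 4.
# Slice the parsed rows into (8x8 matrix, 1x4 vector) pairs.
sample = """20 30 40 550 60 70 80 91
20 30 40 50 60 70 80 92
20 30 40 50 60 70 80 93
20 30 40 50 64 70 80 90
20 30 40 50 60 70 80 90
20 30 40 50 60 70 80 90
20 30 40 40 60 70 80 90
40 30 40 50 60 70 80 90
4 5 6 6
"""

rows = [list(map(int, line.split())) for line in sample.splitlines()]
matrices, vectors = [], []
for i in range(0, len(rows), 9):
    matrices.append(rows[i:i + 8])   # 8x8 block
    vectors.append(rows[i + 8])      # 1x4 row
print(len(matrices), vectors[0])  # → 1 [4, 5, 6, 6]
```

Unlike the hard-coded loop bound in the Maple version, this slicing naturally stops at the end of the file, however many pairs it contains.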
