Where can I find the sequence of optimizations used by clang according to -OX?
clang executes precisely the same sequence of passes as opt -ON. So you can do something like
llvm-as < /dev/null | opt -O3 -disable-output -debug-pass=Arguments
to derive the "full" set of passes which are run at O3.
The uci documentation says:
All "uci set", "uci add", "uci rename" and "uci delete" commands are staged into a temporary location and written to flash at once with "uci commit".
If I understand correctly, you first run some commands like the ones mentioned above, and then run uci commit to write the changes to the configuration files. For example, let's say I have made the following changes...
root@OpenWrt:~# uci changes
network.vlan15.ifname='eth1.15'
network.vlan15.type='bridge'
network.vlan15.proto='static'
network.vlan15.netmask='255.255.255.0'
network.vlan15.ipaddr='192.168.10.0'
...but I don't want to continue and commit them. Is there an easy way to revert all staged changes and avoid doing it one by one?
This should be possible with the following command:
root@firlefanz:~# rm -rf /tmp/.uci/
There is a command to revert staged changes:
revert <config>[.<section>[.<option>]] Revert the given option, section or configuration file.
So, in your case, it should be
uci revert network.vlan15
See https://openwrt.org/docs/guide-user/base-system/uci
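Following the syntax quoted above, the command works at any granularity:

uci revert network                 # revert the whole configuration file
uci revert network.vlan15          # revert one section
uci revert network.vlan15.proto    # revert a single option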
This one-liner should do the trick:
uci changes | sed -rn 's%^[+-]?([^=+-]*)([+-]?=.*|)$%\1%' | xargs -n 1 uci revert
tl;dr The sed command extracts the option names from the staged changes. The xargs command executes the revert command for every extracted option.
Now let's take a deep dive into everything:
uci changes prints the prepared changes which are then piped to the sed command.
The sed option -r enables extended regular expressions and -n suppresses automatic printing of the pattern space.
The sed command s is used to do a search and replace, and % is used as the delimiter between the search and replace terms.
The uci change lines have different formats.
Removed configuration options are prefixed with -.
Added configuration options are prefixed with +
Changed options don't have a prefix.
To match the prefixes, [+-]? is used. The question mark means that one of the characters in the square brackets is matched optionally (zero or one time).
The option name is matched with the pattern [^=+-]*. This regex means any number of characters, as long as none of them is one of =, + or -.
It is enclosed in parentheses to mark it as a group that can be referenced later.
The next pattern ([+-]?=.*|) is also a group. It contains two alternatives split by the pipe.
The second alternative is the easy one and matches no characters at all. This happens when a uci option is deleted.
The first alternative means that the character = can optionally be preceded by + or -. After the = there can be any number of characters, indicated by .*. =<value> appears for added or changed configuration. A preceding - or + indicates that the value is removed from or added to the list, if the option is a list.
In the replacement, the whole line is replaced with the first group via its backreference \1. In other words: only the option name is printed.
All the option names are then sent to xargs. With option -n 1, xargs executes uci revert <option_name> for every option name sent by sed.
Here are some examples of the different formats of the uci changes output:
-a
+b='create new option with this value'
c='change an existing option to this value'
d+='appended to list'
e-='removed from list'
The extracted option names will be the following:
a
b
c
d
e
xargs -n 1 will then execute the following commands:
uci revert a
uci revert b
uci revert c
uci revert d
uci revert e
This is the whole magic of the one-liner.
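If you want to check first what would be reverted, a harmless dry run is to let xargs call echo instead of uci (the sed part is unchanged):

uci changes | sed -rn 's%^[+-]?([^=+-]*)([+-]?=.*|)$%\1%' | xargs -n 1 echo uci revert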
I didn't find a uci command to revert all uncommitted changes, but you can probably parse the output of the uci changes command with some shell scripting to achieve the desired result. Here is an example script:
#!/bin/ash
# uci-revert-all.sh
# Revert all uncommitted uci changes
# Iterate over changed settings
# Each line has the form of an equation, e.g. parameter=value
for setting in $(uci changes); do
    # Extract parameter from equation
    parameter=$(echo ${setting} | grep -o '^\(\w\|[._-]\)\+')
    # Display a status message
    echo "Reverting: ${parameter}"
    # Revert the setting for the given parameter
    uci revert "${parameter}"
done
A simpler alternative might be to use the uci revert <config> syntax, e.g.:
#!/bin/ash
# uci-revert-all.sh
# Revert all uncommitted uci changes
for config in /etc/config/*; do
    uci revert $(basename ${config})
done
Both of these approaches worked well for me on a router running LEDE 4.
Here's another short one-liner to revert ALL staged (uncommitted) changes (as per the question):
for i in /etc/config/* ; do uci revert ${i##*/} ; done
(FYI, this uses POSIX parameter expansion's "Remove Largest Prefix Pattern".)
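For example, with i=/etc/config/network the expansion strips everything up to and including the last slash:

i=/etc/config/network
echo "${i##*/}"    # prints: network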
I'm using the go-clang library to parse the following C file: aac.c. For some reason when I run the file through clang and dump the AST, I don't get AST output for certain functions. For example, the C file contains a forward declaration of aac_ioctl_send_raw_srb and the actual definition later on in the file.
Given this, I was expecting to see two AST nodes in the output, but only one FuncDecl (the forward declaration) is dumped:
clang -Xclang -ast-dump -fsyntax-only aac.c | grep "aac_ioctl_send_raw_srb" | wc -l
aac.c:38:10: fatal error: 'opt_aac.h' file not found
#include "opt_aac.h"
^
1 error generated.
1 <--- wc output
(Ignoring the error)
I get the same result using the go-clang library to parse the C file from within my own application. Is there any explanation for why the definition is not dumped?
I got some help in #llvm IRC and someone suggested that the errors are actually causing the issue. Even though other nodes are being emitted, LLVM may just be ignoring ones that it thinks require information that resides in the missing #includes.
I fixed the include paths and sure enough the nodes I was looking for were emitted.
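For reference, such errors are typically resolved by adding the missing header's directory to the include search path with -I (the directory below is a placeholder for wherever opt_aac.h actually lives):

# Hypothetical include directory; grep -c counts the matching lines
clang -I /path/to/generated/headers -Xclang -ast-dump -fsyntax-only aac.c | grep -c "aac_ioctl_send_raw_srb"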
I'm trying to define a grammar for ninja build files with Xtext.
There are three tricky points that I can't answer.
Indentation by tabs:
How to handle indentation. A rule in a ninja build file might have several variable definitions with preceding tab indentation (similar to make files). This becomes a problem when the language has single-line comments, ignores whitespace, and does indentation by tabs (Python, make, ...).
cflags = -g
rule cc
    command = gcc $cflags -c $in -o $out
Cross-referencing a reserved set of variable names:
There is a set of reserved variables. Auto-complete should be able to reference both the reserved and the user-defined variables.
command = gcc $cflags -c $in -o $out
Autocompleting cross-referenced variable names which aren't separated by whitespace:
org.eclipse.xtext.common.Terminals hides WS tokens. ID tokens are separated by whitespace. But in a ninja script (similar to make files) the parsing should be done with the longest matching variable name.
some_var = some_value
command = $some_var.h
Any ideas are appreciated. Thanks.
Check out the Xtext 2.8.0 release: https://www.eclipse.org/Xtext/releasenotes.html
The Whitespace-Aware Languages section states:
Xtext 2.8 supports languages in which whitespace is used to specify the structure, e.g. using indentation to delimit code blocks as in Python. This is done through synthetic tokens defined in the grammar:
terminal BEGIN: 'synthetic:BEGIN';
terminal END: 'synthetic:END';
These tokens can be used like other terminals in grammar rules:
WhitespaceAwareBlock:
    BEGIN
        ...
    END;
The new example language Home Automation available in the Eclipse examples (File → New → Example → Xtext Examples) demonstrates this concept. It allows code like the following:
Rule 'Report error' when Heater.error then
    var String report
    do
        Thread.sleep(500)
        report = HeaterDiagnostic.readError
    while (report == null)
    println(report)
More details can be found in the documentation.
I have a pipe-delimited feed file which has several fields. Since I only need a few, I thought of using awk to capture them for my testing purposes. However, I noticed that printf changes the value if I use "%d". It works fine if I use "%s".
Feed File Sample:
[jaypal:~/Temp] cat temp
302610004125074|19769904399993903|30|15|2012-01-13 17:20:02.346000|2012-01-13 17:20:03.307000|E072AE4B|587244|316|13|GSM|1|SUCC|0|1|255|2|2|0|213|2|0|6|0|0|0|0|0|10|16473840051|30|302610|235|250|0|7|0|0|0|0|0|10|54320058002|906|722310|2|0||0|BELL MOBILITY CELLULAR, INC|BELL MOBILITY CELLULAR, INC|Bell Mobility|AMX ARGENTINA SA.|Claro aka CTI Movil|CAN|ARG|
I am interested in capturing the second column which is 19769904399993903.
Here are my tests:
[jaypal:~/Temp] awk -F"|" '{printf ("%d\n",$2)}' temp
19769904399993904 # Value is changed
However, the following two tests works fine -
[jaypal:~/Temp] awk -F"|" '{printf ("%s\n",$2)}' temp
19769904399993903 # Value remains same
[jaypal:~/Temp] awk -F"|" '{print $2}' temp
19769904399993903 # Value remains same
So is this a limitation of "%d" in handling long integers? If that's the case, why would it add one to the number instead of, say, truncating it?
I have tried this with BSD and GNU versions of awk.
Version Info:
[jaypal:~/Temp] gawk --version
GNU Awk 4.0.0
Copyright (C) 1989, 1991-2011 Free Software Foundation.
[jaypal:~/Temp] awk --version
awk version 20070501
Starting with GNU awk 4.1 you can use --bignum or -M:
$ awk 'BEGIN {print 19769904399993903}'
19769904399993904
$ awk --bignum 'BEGIN {print 19769904399993903}'
19769904399993903
§ Command-Line Options
I believe the underlying numeric format in this case is an IEEE double, so the changed value is a result of floating-point rounding: a double has a 53-bit significand, and integers above 2^53 (about 9.0e15) can no longer all be represented exactly. At the magnitude of 19769904399993903, representable doubles are 4 apart, so the value is rounded to the nearest one, 19769904399993904. If it is actually necessary to treat the large values as numbers and to maintain accurate precision, it might be better to use something like Perl, Ruby, or Python, which have the capabilities (maybe via extensions) to handle arbitrary-precision arithmetic.
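A quick way to see the 2^53 boundary with a stock awk (expected output noted in the comments, assuming IEEE doubles):

awk 'BEGIN { printf "%d\n", 2^53 }'       # 9007199254740992: still exact
awk 'BEGIN { printf "%d\n", 2^53 + 1 }'   # 9007199254740992: already rounded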
UPDATE: Recent versions of GNU awk support arbitrary precision arithmetic. See the GNU awk manual for more info.
ORIGINAL POST CONTENT:
XMLgawk supports arbitrary precision arithmetic on floating-point numbers.
So, if installing xgawk is an option:
zsh-4.3.11[drado]% awk --version |head -1; xgawk --version | head -1
GNU Awk 4.0.0
Extensible GNU Awk 3.1.6 (build 20080101) with dynamic loading, and with statically-linked extensions
zsh-4.3.11[drado]% awk 'BEGIN {
    x=665857
    y=470832
    print x^4 - 4 * y^4 - 4 * y^2
}'
11885568
zsh-4.3.11[drado]% xgawk -lmpfr 'BEGIN {
    MPFR_PRECISION = 80
    x=665857
    y=470832
    print mpfr_sub(mpfr_sub(mpfr_pow(x, 4), mpfr_mul(4, mpfr_pow(y, 4))), 4 * y^2)
}'
1.0000000000000000000000000
This question was already partially answered by @Mark Wilkins and @Dennis Williamson, but I found out that the largest 64-bit integer that can be handled without losing precision is 2^53.
E.g. see gawk's reference page:
http://www.gnu.org/software/gawk/manual/gawk.html#Integer-Programming
(sorry if my answer is too old. Figured I'd still share for the next person before they spend too much time on this like I did)
You're running into awk's floating-point representation issues. I don't think you can find a workaround within the awk framework to perform arithmetic on huge numbers accurately.
The only possible (and crude) way I can think of is to break the huge number into smaller chunks, perform your math, and join them again, or better yet use Perl/PHP/Tcl/bash etc., scripting languages that are more powerful than awk.
Using nawk on Solaris 11, I convert the number to a string by concatenating an empty string to the end, and then use %15s as the format string:
printf("%15s\n", bignum "")
Another caveat about precision: the errors pile up with extra operations:
echo 19769904399993903 | mawk2 '{ CONVFMT = "%.2000g";
    OFMT = "%.20g";
} {
    print;
    print +$0;
    print $0/1.0
    print $0^1.0;
    print exp(-log($0))^-1;
    print exp(1*log($0))
    print sqrt(exp(exp(log(20)-log(10))*log($0)))
    print (exp(exp(log(6)-log(3))*log($0)))^2^-1
}'
19769904399993903
19769904399993904
19769904399993904
19769904399993904
19769904399993912
19769904399993908
19769904399993628 <<<--- off by -275
19769904399993768 <<<--- off by -135
The first few are only off by less than 10; the last two expressions have triple-digit deltas.
For any of the versions that require calling helper math functions, simply passing the -M bignum flag is insufficient; one must also set the PREC variable. For this example, setting PREC=64 and OFMT="%.17g" should suffice.
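For instance (a sketch; with these settings the value should round-trip exactly):

echo 19769904399993903 | gawk -M -v PREC=64 '{ OFMT = "%.17g"; print $0 + 0 }'
# 19769904399993903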
Beware of setting OFMT too high relative to PREC, otherwise you'll see oddities like this:
gawk -M -v PREC=256 -e '{ CONVFMT="%.2000g"; OFMT="%.80g";... } '
19769904399993903
19769904399993903.000000000000000000000000000000000000000000000000000000000003734
19769904399993903.000000000000000000000000000000000000000000000000000000000003734
19769904399993903.000000000000000000000000000000000000000000000000000000000003734
19769904399993903.000000000000000000000000000000000000000000000000000000000003734
since 80 significant digits require a precision of at least 265.75, so basically 266 bits. But gawk is fast enough that you can probably safely pre-set PREC=4096 or 8192 instead of having to worry about it every time.
I am trying to clean up a legacy database by dropping all procedures that are not used by the application. Using grep, I have been able to determine that a single procedure does not occur in the source code. Is there a way to do this for all of the procedures at once?
UPDATE: While using -E "proc1|proc2" produces an output of all lines in all files which match either pattern, this is not very useful. The legacy database has 2000+ procedures.
I tried to use the -o option thinking that I could use its output as the pattern for an inverse search on the original pattern. However, I found that there is no output when you use the -o option with more than one pattern.
Any other ideas?
UPDATE: After further experimenting, I found that it is the combination of the -i and -o options which is preventing the output. Unfortunately, I need a case-insensitive search in this context.
Feed the list of stored procedures to egrep, separated by "|":
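For example, with the procedure names in a file, one per line, you can build the alternation on the fly (file names here are illustrative):

egrep -i "$(paste -sd'|' procedure_names.txt)" source/*.sql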
or:
for stored_proc in $stored_procs
do
    grep "$stored_proc" "$source_file"
done
I've had to do this in the past as well. Don't forget about any procs that may be called from other procs.
If you are using SQL Server you can use this:
SELECT name,
       text
FROM sysobjects A
JOIN syscomments B
  ON A.id = B.id
WHERE xtype = 'P'
  AND text LIKE '%< sproc name >%'
I get output under the circumstances described in your edit:
$ echo "aaaproc1bbb" | grep -Eo 'proc1|proc2'
proc1
$ echo $?
0
$ echo "aaabbb" | grep -Eo 'proc1|proc2'
$ echo $?
1
The exit code shows if there was no match.
You might also find these options to grep useful (-L may be specific to GNU grep):
-c, --count
    Suppress normal output; instead print a count of matching lines
    for each input file. With the -v, --invert-match option (see
    below), count non-matching lines. (-c is specified by POSIX.)

-L, --files-without-match
    Suppress normal output; instead print the name of each input
    file from which no output would normally have been printed. The
    scanning will stop on the first match.

-l, --files-with-matches
    Suppress normal output; instead print the name of each input
    file from which output would normally have been printed. The
    scanning will stop on the first match. (-l is specified by
    POSIX.)

-q, --quiet, --silent
    Quiet; do not write anything to standard output. Exit
    immediately with zero status if any match is found, even if an
    error was detected. Also see the -s or --no-messages option.
    (-q is specified by POSIX.)
Sorry for quoting the man page at you, but sometimes it helps to screen things a bit.
Edit:
For a list of filenames that do not contain any of the procedures (case insensitive):
grep -EiL 'proc1|proc2' *
For a list of filenames that contain any of the procedures (case insensitive):
grep -Eil 'proc1|proc2' *
To list the files and show the match (case insensitive):
grep -Eio 'proc1|proc2' *
Start with your list of procedure names. For easy re-use later, sort them and make them lowercase, like so:
tr "[:upper:]" "[:lower:]" < list_of_procedures | sort > sorted_list_o_procs
... now you have a sorted list of the procedure names. Sounds like you're already using GNU grep, so you've got the -o option.
fgrep -o -i -f sorted_list_o_procs source1 source2 ... > list_of_used_procs
Note the use of fgrep: these aren't regexps, really, so why treat them as such? Hopefully you will also find that this magically corrects your output issues ;). Now you have an ugly list of the used procedures. Let's clean it up as we did the original list above.
tr "[:upper:]" "[:lower:]" < list_of_used_procs | sort -u > short_list
Now you have a short list of the used procedures. Let's find the ones in the original list that aren't in the short list.
fgrep -v -f short_list sorted_list_o_procs
... and there they are.
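The same steps can be condensed into a single pipeline (a sketch; file names as above, and note that -f - for reading patterns from stdin is a GNU grep feature):

fgrep -o -i -f sorted_list_o_procs source1 source2 \
    | tr "[:upper:]" "[:lower:]" | sort -u \
    | fgrep -v -f - sorted_list_o_procs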