On Unix I am trying to use AProVE, which uses Z3. I downloaded and built the Z3 source (4.1.2, although z3 -version reports 4.2). AProVE invokes z3 with the -m option, but 4.2 does not support -m. According to the AProVE developers, -m was available in Z3 4.0.
How can I get source files for z3 that supports -m? Or, is there a simple fix to my problem?
Model generation is enabled by default in newer versions of Z3, so the -m option is no longer needed.
If you can't change AProVE, you can create a wrapper for Z3 that removes the option -m before invoking Z3. Another option is to hack the file shell\main.cpp in the Z3 source code.
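As an illustration, a minimal wrapper might look like this (a sketch, assuming you rename the real binary to z3-orig and install this script in its place as z3):
#!/bin/sh
# Hypothetical wrapper installed as "z3": drop every -m argument, then
# delegate to the real Z3 binary, assumed to have been renamed to z3-orig.
for arg do
  shift
  [ "$arg" = "-m" ] || set -- "$@" "$arg"
done
exec z3-orig "$@"
This way AProVE can keep calling z3 -m ... unchanged, while the real Z3 never sees the flag.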
It contains a function called
void parse_cmd_line_args(int argc, char ** argv)
To add a dummy -m option that doesn't do anything, you just have to include a new if-statement:
else if (strcmp(opt_name, "m") == 0) {
// do nothing
}
I'm trying to write a simple machine learning application in Ada, and also trying to find a good framework to use. My knowledge of one is extremely minimal, and of the other somewhat minimal.
There are several nifty machine learning frameworks out there, and I'd like to leverage one from an Ada program, but I guess I'm just... at a loss. Can I use an existing framework written in Python, for instance, and wrap (or, I guess, bind?) the API calls in Ada? Or should I just hand that part off to a script? I'm trying to figure it out.
Case in point: Scikit (sklearn)
https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html#
This does some neat stuff, and I'd like to be able to leverage this, but with an Ada program. Does anyone have advice from a similar experience?
I am still researching, and have found some information:
http://www.inspirel.com/articles/Ada_Python_Binding.html
The Inspirel solution is based on Python 2.7. If you're using anything from Python 3.5 onwards, a few modifications need to be made. On Linux, changing to, say, Python 3.7, you'd just change
--for Default_Switches ("Ada") use ("-lpython2.7");
for Default_Switches ("Ada") use ("-lpython3.7");
but on Windows the libraries aren't placed in a common lib directory, so GNAT doesn't know where to find them; every package keeps its own. A -L switch has to be added to tell the linker where to find the library (alternatively, you can use the project file's library directory attribute). In my case, I did a non-admin install of Python, so it looks something like
for Default_Switches ("Ada") use ("-L\Users\StdUser\AppData\Local\Programs\Python\Python37-32\libs", "-lpython37");
Note that on Windows the library is called python37, not python3.7. Use gprbuild instead of gnatmake -p, since gnatmake has been deprecated. If you make all the modifications correctly,
gprbuild ada_main.gpr
should give you an executable at obj\ada_main.exe. If a later version of Python is used, some edits need to be made:
python_module.py
#print 'Hello from Python module'
print('Hello from Python module')
#print 'Python adding:', a, '+', b
print('Python adding:', a, '+', b)
ada_main.adb
-- Python.Execute_String("print 'Hello from Python!'");
Python.Execute_String("print('Hello from Python!')");
Some routines from the Python 2 C API no longer exist in Python 3, so the linkage has to change:
python.adb
--pragma Import(C, PyInt_AsLong, "PyInt_AsLong");
pragma Import(C, PyInt_AsLong, "PyLong_AsLong");
--pragma Import(C, PyString_FromString, "PyString_FromString");
pragma Import(C, PyString_FromString, "PyUnicode_FromString");
Running the build and then the executable should give:
C:\Users\StdUser\My Documents\ada-python>gprbuild ada_main.gpr
Compile
[Ada] ada_main.adb
Bind
[gprbind] ada_main.bexch
[Ada] ada_main.ali
Link
[link] ada_main.adb
C:\Users\StdUser\My Documents\ada-python>obj\ada_main.exe
executing Python directly from Ada:
Hello from Python!
loading external Python module and calling functions from that module:
Hello from Python module!
asking Python to add two integers:
Python adding: 10 + 2
Ada got result from Python: 12
we can try other operations, too:
subtract: 8
multiply: 20
divide : 5
Remember to put the pythonXX.dll (python37.dll in this case) somewhere on your PATH, otherwise the program won't be able to find the library when it starts executing.
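For example, for the non-admin install above, something like this sets it for the current cmd session (note the DLL lives in the Python install root, not in the libs directory used at link time):
set PATH=%PATH%;C:\Users\StdUser\AppData\Local\Programs\Python\Python37-32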
I am stumbling my way through writing a dissector for our custom protocol in Lua. While I have basic field extraction working, many of our fields have scale factors associated with them. I'd like to present the scaled value in addition to the raw extracted value.
It seems to me tree_item:add_packet_field is tailor-made for this purpose. Except I can't get it to work.
I found Mika's blog incredibly helpful, and followed his pattern for breaking my dissector into different files, etc. That's all working.
Given a packet type "my_packet", I have a 14-bit signed integer "AOA" that I can extract just fine
local pref = "my_packet"
local m = {
aoa = ProtoField.new("AOA", pref .. ".aoa", ftypes.INT16, nil, base.DEC, 0x3FFF, "angle of arrival measurement"),
}
local option=2
local aoa_scale = 0.1
function m.parse(tree_arg, buffer)
if option == 1 then
-- basic field extraction. This works just fine. The field is extracted and added to the tree
tree_arg:add(m.aoa, buffer)
elseif option == 2 then
-- This parses and runs. The item is decoded and added to the tree,
-- but the value of 'v' is always nil
local c,v = tree_arg:add_packet_field(m.aoa, buffer, ENC_BIG_ENDIAN)
-- this results in an error, doing arithmetic on 'nil'
c:append_text(" (scaled= " .. tostring(v*aoa_scale) .. ")")
end
end
(I use ProtoField.new instead of any of the type-specific variants for consistency in declaring my fields)
The documentation for add_packet_field says that the encoding argument is mandatory.
There is a README in the source code that says ENC_BIG_ENDIAN should be specified for network byte-order data (mine is). I know that section is for proto_tree_add_item, but I traced the code far enough to see that add_packet_field ends up passing the encoding to proto_tree_add_item.
Basically, at this point, I'm lost. I did find this post from 2014 that suggested limited support for add_packet_field but surely by now something as basic as an integer value is supported?
Also, I do know how to declare a Field and extract the value after tree:add does the parsing; worst case I'll fall back to that, but surely there is a more expedient way to access the just-parsed value added to the tree?
Wireshark Version
3.2.4 (v3.2.4-0-g893b5a5e1e3e)
Compiled (64-bit) with Qt 5.12.8, with WinPcap SDK (WpdPack) 4.1.2, with GLib
2.52.3, with zlib 1.2.11, with SMI 0.4.8, with c-ares 1.15.0, with Lua 5.2.4,
with GnuTLS 3.6.3 and PKCS #11 support, with Gcrypt 1.8.3, with MIT Kerberos,
with MaxMind DB resolver, with nghttp2 1.39.2, with brotli, with LZ4, with
Zstandard, with Snappy, with libxml2 2.9.9, with QtMultimedia, with automatic
updates using WinSparkle 0.5.7, with AirPcap, with SpeexDSP (using bundled
resampler), with SBC, with SpanDSP, with bcg729.
Running on 64-bit Windows 10 (1803), build 17134, with Intel(R) Xeon(R) CPU
E3-1505M v6 @ 3.00GHz (with SSE4.2), with 32558 MB of physical memory, with
locale English_United States.1252, with light display mode, without HiDPI, with
Npcap version 0.9991, based on libpcap version 1.9.1, with GnuTLS 3.6.3, with
Gcrypt 1.8.3, with brotli 1.0.2, without AirPcap, binary plugins supported (19
loaded).
Built using Microsoft Visual Studio 2019 (VC++ 14.25, build 28614).
Looking at the try_add_packet_field() source code, only certain FT_ types are supported, namely:
FT_BYTES
FT_UINT_BYTES
FT_OID
FT_REL_OID
FT_SYSTEM_ID
FT_ABSOLUTE_TIME
FT_RELATIVE_TIME
None of the other FT_ types are supported [yet], including FT_INT16 (the ftypes.INT16 you're using here), i.e., anything else just needs to be done the old-fashioned way.
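For reference, the old-fashioned fallback might look roughly like this (a sketch reusing the field registration from the question):
-- Field extractors must be created at file scope, before dissection starts:
local aoa_field = Field.new(pref .. ".aoa")
-- then, inside m.parse, read the value back after adding it to the tree:
local c = tree_arg:add(m.aoa, buffer)
local fi = aoa_field()   -- most recent FieldInfo for this field, or nil
if fi then
    c:append_text(" (scaled= " .. tostring(fi.value * aoa_scale) .. ")")
end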
If you'd like this to be implemented, I'd suggest filing an enhancement request over at the Wireshark Bug Tracker.
I have the following flex source:
%{
#if !defined(__linux__) && !defined(__unix__)
/* Maybe on windows */
#endif
int num_chars = 0;
%}
%%
. ++num_chars;
%%
int main()
{
yylex();
printf("%d chars\n", num_chars);
return 0;
}
int yywrap()
{
return 1;
}
I generate a C file by the command flex flextest.l and compile the result with gcc -o fltest lex.yy.c
To my surprise, I get the following output:
flextest.l:2:37: error: operator "defined" requires an identifier
#if !defined(__linux__) && !defined(__unix__)
After further checking, the issue seems to be that flex has actually replaced __unix__ with the empty string, as shown by:
$ grep __linux_ lex.yy.c
#if !defined(__linux__) && !defined()
Why does this happen, and is it possible to avoid it?
It's actually m4 (the macro processor used by current versions of flex) which is expanding __unix__ to the empty string. The GNU implementation of m4 predefines certain symbols as empty macros so that they can be tested with ifdef.
Of course, it's (better said, it was) a bug in flex. Flex shouldn't allow m4 to expand macros within user content copied from the scanner definition file, and the current version of flex correctly arranges for the text included from the scanner description file to be quoted so that it will pass through m4 unmodified even if it happens to include a string which could be interpreted by m4 as a macro expansion.
The bug is certainly present in v2.5.39 and v2.6.1 of flex. I didn't test all previous versions, but I suppose it was introduced when flex was modified to use m4, which was v2.5.30 according to the NEWS file.
This particular quoting issue was fixed in v2.6.2 but the current version of flex (2.6.4) contains various other bug fixes, so I'd recommend you upgrade to the latest version.
If you really need a version which could work with both the buggy and the more recent versions of flex, you could use one of the two following hacks:
Find some other way to write __unix__. One possibility is the following
#define C(x,y) x##y
#define UNIX_ C(__un,ix__)
#if !defined(__linux__) && !UNIX_
That hack won't work with defined, since defined(UNIX_) tests whether UNIX_ itself is defined, not whether what it expands to is defined. But built-in symbols like __unix__ are normally defined to be 1 (when they are defined at all), and the #if directive treats any identifier which is not #define'd as though it were 0, which means you can usually use x instead of defined(x). (However, the two produce different results if a #define x 0 is in effect, so it's not quite a perfect substitute.)
Flex, like many m4 applications, redefines m4's quote marks to be [[ and ]]. Both the buggy and the corrected versions of flex substitute these quote marks with a rather elaborate sequence which effectively quotes the quote marks. However, the buggy version does not otherwise quote user-defined text, so macro substitutions are still performed in user text. (As mentioned, this is why __unix__ becomes the empty string.)
In flex versions in which user-defined text is not quoted, it is possible to invoke the m4 macro which redefines quote marks. These new quote marks can then be used to quote the #if line, preventing macro substitution of __unix__. However, the quote definition must be restored, or it will completely wreck macro processing of the rest of the file. That's a bit tricky because it is impossible to write [[. (Flex will substitute it with a different string.)
The following seems to do the trick. Note that the macro invocations are placed inside C comments. The changequote macros will expand to an empty string, if they are expanded. But in flex versions since v2.6.2, user-supplied text is quoted, so the changequote macros will not be expanded. Putting them inside comments hides them from the C compiler.
%{
/*m4_changequote(<<,>>)<<*/
#if !defined(__linux__) && !defined(__unix__)
/*>>m4_changequote(<<[>><<[>>,<<]>><<]>>)*/
/* Maybe on windows */
#endif
(The m4 macro which changes quote marks is changequote, but flex invokes m4 with the -P flag, which renames builtins like changequote to m4_changequote. In the second call to changequote, the two [ characters which make up the [[ sign are individually quoted with the temporary << quote marks, which hides them from the code in flex that rewrites uses of [[.)
I don't know how reliable this hack is, but it worked on the versions of flex which I had kicking around on my machine, including 2.5.4 (pre-m4), 2.5.39 (buggy), 2.6.1 (buggy), 2.6.2 (somewhat debugged) and 2.6.4 (more debugged).
I'm trying to use the TensorFlow audio recognition model (my_frozen_graph.pb, generated here: https://www.tensorflow.org/tutorials/audio_recognition) on iOS.
But the iOS code NSString* network_path = FilePathForResourceName(@"my_frozen_graph", @"pb"); in TensorFlow Mobile's tf_simple_example project outputs this error message: Could not create TensorFlow Graph: Not found: Op type not registered 'DecodeWav'.
Anyone knows how I can fix this? Thanks!
I believe you are using the pre-built TensorFlow from CocoaPods? It probably does not include that op type, so you should build the library yourself from the latest source.
From documentation:
While CocoaPods is the quickest and easiest way of getting started, you
sometimes need more flexibility to determine which parts of TensorFlow
your app should be shipped with. For such cases, you can build the iOS
libraries from the sources. This guide contains detailed instructions
on how to do that.
This might also be helpful: [iOS] Add optional Selective Registration of Ops #14421
Optimization
The build_all_ios.sh script can take optional
command-line arguments to selectively register only the operators
used in your graph.
tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb
Please note this
is an aggressive optimization of the operators and the resulting
library may not work with other graphs but will reduce the size of the
final library.
After the build is done you can check tensorflow/tensorflow/core/framework/ops_to_register.h for the operations that were registered (it is autogenerated during the build when the -g flag is given).
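If it helps to cross-check, you can list the distinct op types a frozen graph actually references with a few lines of Python (a sketch for TF 1.x; the graph path is a placeholder):
# Sketch: print every distinct op type referenced by a frozen graph,
# e.g. to compare against ops_to_register.h. The path is a placeholder.
import tensorflow as tf

graph_def = tf.GraphDef()
with open("my_frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for op_type in sorted({node.op for node in graph_def.node}):
    print(op_type)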
Some progress: having realized the unregistered DecodeWav error is similar to the old familiar DecodeJpeg issue (#2883), I ran strip_unused on the pb as follows:
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=/tf_files/speech_commands_graph.pb \
--output_graph=/tf_files/stripped_speech_commands_graph.pb \
--input_node_names=wav_data,decoded_sample_data \
--output_node_names=labels_softmax \
--input_binary=true
It does get rid of the DecodeWav op in the resulting graph. But running the new stripped graph on iOS now gives me an Op type not registered 'AudioSpectrogram' error.
Also there's no object file audio*.o generated after build_all_ios.sh is done, although AudioSpectrogramOp is specified in tensorflow/core/framework/ops_to_register.h:
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name decode*.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_wav_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name audio*_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$
Just verified that Pete's fix (https://github.com/tensorflow/tensorflow/issues/15921) is good:
add the line tensorflow/core/ops/audio_ops.cc to the file tensorflow/contrib/makefile/tf_op_files.txt and run tensorflow/contrib/makefile/build_all_ios.sh again (running compile_ios_tensorflow.sh "-O3" by itself used to work for me after adding a line to tf_op_files.txt, but not anymore with TF 1.4).
Also, use the original model file, not the stripped version; a note about this was added in the link above.
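In concrete terms, that fix amounts to something like this from the repository root:
echo "tensorflow/core/ops/audio_ops.cc" >> tensorflow/contrib/makefile/tf_op_files.txt
tensorflow/contrib/makefile/build_all_ios.sh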
I know that with NixOS, you can simply copy over the configuration.nix file to sync your OS state, including installed packages, between machines.
Is it possible, then, to do the same using Nix (the package manager) on a non-NixOS OS, to sync only the installed packages?
Please note that, at least since 30.03.2017 (corresponding to the 17.03 Nix/NixOS channel/release), as far as I understand, the official, modern, supported and suggested solution is to use the so-called overlays.
See the chapter titled "Overlays" in the nixpkgs manual for a nice guide on how to use the new approach.
As a short summary: you can put any number of files with a .nix extension in the $HOME/.config/nixpkgs/overlays/ directory. They will be processed in alphabetical order, and each one can modify the set of available Nix packages. Each of the files must be written according to the following pattern:
self: super:
{
boost = super.boost.override {
python = self.python3;
};
rr = super.callPackage ./pkgs/rr {
stdenv = self.stdenv_32bit;
};
}
The super set corresponds to the "old" set of packages, before the overlay was applied. If you want to refer to the old version of a package (as in boost above), or to callPackage, you should reference it via super.
The self set corresponds to the eventual "future" set of packages, representing the final result after all overlays are applied. (Note: don't be scared if using self sometimes gets rejected by Nix because it would result in infinite recursion; in such cases you should probably just use super instead.)
Note: with the above changes, the solution I mention below in the original answer seems "deprecated" now — I believe it should still work as of April 2017, but I have no idea for how long. It appears marked as "obsolete" in the nixpkgs repository.
Old answer, before 17.03:
Assuming you want to synchronize apps per-user (as non-NixOS Nix keeps apps visible on per-user basis, not system-wide, as far as I know), it is possible to do it declaratively. It's just not well advertised in the manual — though it seems quite popular among long-time Nixers!
You must create a text file at: $HOME/.nixpkgs/config.nix — e.g.:
$ mkdir -p ~/.nixpkgs
$ $EDITOR ~/.nixpkgs/config.nix
then enter the following contents:
{
packageOverrides = defaultPkgs: with defaultPkgs; {
home = with pkgs; buildEnv {
name = "home";
paths = [
nethack mc pstree #...your favourite pkgs here...
];
};
};
}
Then you should be able to install all listed packages with:
$ nix-env -i home
or:
$ nix-env -iA nixpkgs.home # *much* faster than above (the prefix is your channel name; use nixos.home if your channel is named nixos)
In paths you can put stuff in a similar way as in /etc/nixos/configuration.nix on NixOS. Also, home is actually a "fake package" here. You can add more custom package definitions beside it and then include them in your paths, as in the sketch below.
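For illustration, a hypothetical variant with a second bundle defined beside home and pulled into its paths (rec makes the sibling attribute visible; devTools and its contents are made-up examples):
{
  packageOverrides = defaultPkgs: with defaultPkgs; rec {
    # hypothetical extra bundle, defined beside "home"
    devTools = buildEnv {
      name = "devTools";
      paths = [ git gnumake ];
    };
    home = buildEnv {
      name = "home";
      paths = [ nethack mc pstree devTools ];
    };
  };
}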
(Side note: I'm hoping to write a blog post with what I learned on how exactly this works, and also showing how to extend it with more customizations. I'll try to remember to link it here if I succeed.)